By Admin
Jun 24, 2023

Exercise

[math] \require{textmacros} \def \bbeta {\bf \beta} \def\fat#1{\mbox{\boldmath$#1$}} \def\reminder#1{\marginpar{\rule[0pt]{1mm}{11pt}}\textbf{#1}} \def\SSigma{\bf \Sigma} \def\ttheta{\bf \theta} \def\aalpha{\bf \alpha} \def\ddelta{\bf \delta} \def\eeta{\bf \eta} \def\llambda{\bf \lambda} \def\ggamma{\bf \gamma} \def\nnu{\bf \nu} \def\vvarepsilon{\bf \varepsilon} \def\mmu{\bf \mu} \def\nnu{\bf \nu} \def\ttau{\bf \tau} \def\SSigma{\bf \Sigma} \def\TTheta{\bf \Theta} \def\XXi{\bf \Xi} \def\PPi{\bf \Pi} \def\GGamma{\bf \Gamma} \def\DDelta{\bf \Delta} \def\ssigma{\bf \sigma} \def\UUpsilon{\bf \Upsilon} \def\PPsi{\bf \Psi} \def\PPhi{\bf \Phi} \def\LLambda{\bf \Lambda} \def\OOmega{\bf \Omega} [/math]

Consider the linear regression model [math]\mathbf{Y} = \mathbf{X} \bbeta + \vvarepsilon[/math] with [math]\vvarepsilon \sim \mathcal{N} ( \mathbf{0}_n, \sigma^2 \mathbf{I}_{nn})[/math]. This model is fitted to a single observation, [math]\mathbf{X}_{1,\ast} = (4, -2)[/math] and [math]Y_1 = 10[/math], using the ridge regression estimator [math]\hat{\bbeta}(\lambda) = (\mathbf{X}^{\top}_{1,\ast} \mathbf{X}_{1,\ast} + \lambda \mathbf{I}_{22})^{-1} \mathbf{X}_{1,\ast}^{\top} Y_1[/math].
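
The estimator above can be checked numerically. Below is a minimal sketch, assuming Python with numpy (the function name `ridge_estimator` is illustrative, not from the source):

```python
import numpy as np

# Single observation: 1 x 2 design matrix and scalar response.
X = np.array([[4.0, -2.0]])
y = np.array([10.0])

def ridge_estimator(X, y, lam):
    """Evaluate (X'X + lam * I)^{-1} X'y."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

# For the first part: beta_hat(5) = (1.6, -0.8).
print(ridge_estimator(X, y, lam=5.0))
```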

  • Evaluate the ridge regression estimator for [math]\lambda=5[/math].
  • Suppose the true regression parameter is [math]\bbeta = (1,-1)^{\top}[/math]. Evaluate the bias of the ridge regression estimator.
  • Decompose the bias into a component due to the regularization and a component attributable to the high-dimensionality of the study (see the numerical sketch after this list).
  • Had [math]\bbeta[/math] equalled [math](2,-1)^{\top}[/math], the bias component due to the high-dimensionality would have vanished. Explain why.
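
For the bias questions, one standard route splits the bias via the projection [math]\mathbf{P} = \mathbf{X}^{\top}(\mathbf{X}\mathbf{X}^{\top})^{-1}\mathbf{X}[/math] onto the row space of [math]\mathbf{X}[/math]: the total bias [math][(\mathbf{X}^{\top}\mathbf{X} + \lambda \mathbf{I})^{-1}\mathbf{X}^{\top}\mathbf{X} - \mathbf{I}]\bbeta[/math] equals a regularization part [math][(\mathbf{X}^{\top}\mathbf{X} + \lambda \mathbf{I})^{-1}\mathbf{X}^{\top}\mathbf{X} - \mathbf{P}]\bbeta[/math] plus a high-dimensionality part [math](\mathbf{P} - \mathbf{I})\bbeta[/math]. The sketch below (again assuming Python with numpy; a verification aid, not the unique decomposition) evaluates both components for the two [math]\bbeta[/math] values in the exercise:

```python
import numpy as np

X = np.array([[4.0, -2.0]])  # single-observation design matrix
lam = 5.0

# A = (X'X + lam I)^{-1} X'X, so that E[beta_hat(lam)] = A beta.
A = np.linalg.solve(X.T @ X + lam * np.eye(2), X.T @ X)
# P projects onto the row space of X.
P = X.T @ np.linalg.solve(X @ X.T, X)

for beta in (np.array([1.0, -1.0]), np.array([2.0, -1.0])):
    total = A @ beta - beta        # E[beta_hat(lam)] - beta
    reg   = A @ beta - P @ beta    # component due to regularization
    dim   = P @ beta - beta        # component due to high-dimensionality
    print(beta, total, reg, dim)

# For beta = (2, -1)', dim is the zero vector: beta is proportional
# to X' = (4, -2)', hence lies in the row space of X and P beta = beta.
```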