
Exercise

[math] \require{textmacros} \def \bbeta {\bf \beta} \def\fat#1{\mbox{\boldmath$#1$}} \def\reminder#1{\marginpar{\rule[0pt]{1mm}{11pt}}\textbf{#1}} \def\SSigma{\bf \Sigma} \def\ttheta{\bf \theta} \def\aalpha{\bf \alpha} \def\ddelta{\bf \delta} \def\eeta{\bf \eta} \def\llambda{\bf \lambda} \def\ggamma{\bf \gamma} \def\nnu{\bf \nu} \def\vvarepsilon{\bf \varepsilon} \def\mmu{\bf \mu} \def\nnu{\bf \nu} \def\ttau{\bf \tau} \def\SSigma{\bf \Sigma} \def\TTheta{\bf \Theta} \def\XXi{\bf \Xi} \def\PPi{\bf \Pi} \def\GGamma{\bf \Gamma} \def\DDelta{\bf \Delta} \def\ssigma{\bf \sigma} \def\UUpsilon{\bf \Upsilon} \def\PPsi{\bf \Psi} \def\PPhi{\bf \Phi} \def\LLambda{\bf \Lambda} \def\OOmega{\bf \Omega} [/math]

Consider the linear regression model [math]\mathbf{Y} = \mathbf{X} \bbeta + \vvarepsilon[/math] with [math]\vvarepsilon \sim \mathcal{N} ( \mathbf{0}_n, \sigma_{\varepsilon}^2 \mathbf{I}_{nn})[/math]. This model (without intercept) is fitted to data using the elastic net estimator [math]\hat{\bbeta}(\lambda_1, \lambda_2) = \arg \min_{\bbeta} \| \mathbf{Y} - \mathbf{X} \bbeta \|_2^2 + \lambda_1 \| \bbeta \|_1 + \tfrac{1}{2} \lambda_2 \| \bbeta \|_2^2[/math]. The relevant summary statistics of the data are:

[[math]] \begin{eqnarray*} \mathbf{X} = \left( \begin{array}{r} 1 \\ -1 \\ -1 \end{array} \right), \, \mathbf{Y} = \left( \begin{array}{r} -5 \\ 4 \\ 1 \end{array} \right), \, \mathbf{X}^{\top} \mathbf{X} = \left( \begin{array}{r} 3 \end{array} \right), \mbox{ and } \, \mathbf{X}^{\top} \mathbf{Y} = \left( \begin{array}{r} -10 \end{array} \right). \end{eqnarray*} [[/math]]
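Because the design contains a single covariate, the objective above is a one-dimensional function of [math]\beta[/math], and its minimizer can be found numerically to check a hand calculation. The sketch below is only an illustration of such a check; the choice of Python with NumPy and SciPy and the use of a generic bounded scalar optimizer are assumptions, not part of the exercise.

<syntaxhighlight lang="python">
# Hypothetical verification sketch (not part of the exercise): minimize the
# one-covariate elastic net objective numerically for the given data.
import numpy as np
from scipy.optimize import minimize_scalar

X = np.array([1.0, -1.0, -1.0])   # single covariate, no intercept
Y = np.array([-5.0, 4.0, 1.0])    # response
lambda1, lambda2 = 3.0, 2.0       # penalty parameters from part a)

def elastic_net_objective(beta):
    # ||Y - X beta||_2^2 + lambda1 * |beta| + (1/2) * lambda2 * beta^2
    residual = Y - X * beta
    return residual @ residual + lambda1 * abs(beta) + 0.5 * lambda2 * beta ** 2

result = minimize_scalar(elastic_net_objective, bounds=(-10.0, 10.0), method="bounded")
print(result.x)  # numerical minimizer; compare with the hand-derived estimate
</syntaxhighlight>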

  • Evaluate the elastic net regression estimator of the linear regression model for [math](\lambda_1, \lambda_2) = (3,2)[/math].
  • Now consider the evaluation of the elastic net regression estimator of the linear regression model for the same penalty parameters, [math](\lambda_1, \lambda_2) = (3,2)[/math], but this time involving two covariates. The first covariate is as in part a); the second is orthogonal to the first. Do you expect the resulting elastic net estimate of the first regression coefficient [math]\hat{\beta}_1 ( \lambda_1, \lambda_2)[/math] to be larger than, equal to, or smaller than (in absolute value) your answer to part a)? Motivate your answer.
  • Now take the second covariate in part b) equal to the first one. Show that the first coefficient of the elastic net estimate, [math]\hat{\beta}_1 ( \lambda_1, 2 \lambda_2)[/math], is half that of part a). Note: there is no need to know the exact answer to part a). A numerical sanity check is sketched after this list.
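The halving claim in part c) can also be checked numerically before attempting the algebra. The sketch below is again only an illustration under the same assumptions (Python with NumPy and SciPy, here with a generic derivative-free optimizer): it fits the single-covariate model with penalties [math](\lambda_1, \lambda_2)[/math] and the duplicated-covariate model with penalties [math](\lambda_1, 2 \lambda_2)[/math], and compares the first coefficients.

<syntaxhighlight lang="python">
# Hypothetical check (not part of the exercise): duplicate the covariate,
# double the ridge penalty, and compare the first coefficient with half of
# the single-covariate elastic net estimate.
import numpy as np
from scipy.optimize import minimize

x = np.array([1.0, -1.0, -1.0])
Y = np.array([-5.0, 4.0, 1.0])
X2 = np.column_stack([x, x])          # second covariate equal to the first
lambda1, lambda2 = 3.0, 2.0

def objective(beta, X, l1, l2):
    # ||Y - X beta||_2^2 + l1 * ||beta||_1 + (1/2) * l2 * ||beta||_2^2
    residual = Y - X @ beta
    return residual @ residual + l1 * np.sum(np.abs(beta)) + 0.5 * l2 * (beta @ beta)

# Part a): single covariate, penalties (lambda1, lambda2)
fit_a = minimize(objective, x0=np.zeros(1), args=(x[:, None], lambda1, lambda2),
                 method="Nelder-Mead")
# Part c): duplicated covariate, penalties (lambda1, 2 * lambda2)
fit_c = minimize(objective, x0=np.zeros(2), args=(X2, lambda1, 2 * lambda2),
                 method="Nelder-Mead")

print(fit_a.x[0], fit_c.x[0])   # the second should be roughly half the first
</syntaxhighlight>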