Revision as of 01:56, 25 June 2023 by Admin

Exercise

[math] \require{textmacros} \def \bbeta {\bf \beta} \def\fat#1{\mbox{\boldmath$#1$}} \def\reminder#1{\marginpar{\rule[0pt]{1mm}{11pt}}\textbf{#1}} \def\SSigma{\bf \Sigma} \def\ttheta{\bf \theta} \def\aalpha{\bf \alpha} \def\ddelta{\bf \delta} \def\eeta{\bf \eta} \def\llambda{\bf \lambda} \def\ggamma{\bf \gamma} \def\nnu{\bf \nu} \def\vvarepsilon{\bf \varepsilon} \def\mmu{\bf \mu} \def\nnu{\bf \nu} \def\ttau{\bf \tau} \def\SSigma{\bf \Sigma} \def\TTheta{\bf \Theta} \def\XXi{\bf \Xi} \def\PPi{\bf \Pi} \def\GGamma{\bf \Gamma} \def\DDelta{\bf \Delta} \def\ssigma{\bf \sigma} \def\UUpsilon{\bf \Upsilon} \def\PPsi{\bf \Psi} \def\PPhi{\bf \Phi} \def\LLambda{\bf \Lambda} \def\OOmega{\bf \Omega} [/math]

Consider the standard linear regression model [math]Y_i = \mathbf{X}_{i,\ast} \bbeta + \varepsilon_i[/math] for [math]i=1, \ldots, n[/math] with [math]\varepsilon_i \sim_{i.i.d.} \mathcal{N}(0, \sigma^2)[/math]. Each row of the design matrix [math]\mathbf{X}[/math] has two elements; neither column represents the intercept. Relevant summary statistics from the data on the response [math]\mathbf{Y}[/math] and the covariates are:

[[math]] \begin{eqnarray*} \mathbf{X}^{\top} \mathbf{X} & = & \left( \begin{array}{rr} 40 & -20 \\ -20 & 10 \end{array} \right) \qquad \mbox{ and } \qquad \mathbf{X}^{\top} \mathbf{Y} \, \, \, = \, \, \, \left( \begin{array}{r} 26 \\ -13 \end{array} \right). \end{eqnarray*} [[/math]]
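As a quick numerical sanity check (not part of the exercise itself), the given summary statistics already show that [math]\mathbf{X}^{\top}\mathbf{X}[/math] is singular, so the ordinary least-squares estimator is not unique and some form of regularization is needed. A minimal sketch, assuming NumPy is available:

```python
import numpy as np

# Summary statistics given in the exercise.
XtX = np.array([[40.0, -20.0],
                [-20.0, 10.0]])
XtY = np.array([26.0, -13.0])

# X^T X is singular (40*10 - (-20)*(-20) = 0): the two covariates are
# perfectly collinear, so (X^T X)^{-1} X^T Y does not exist.
det = np.linalg.det(XtX)
rank = np.linalg.matrix_rank(XtX)
print(det, rank)  # det = 0, rank = 1
```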

  • Use lasso regression to regress, without intercept, the response on the first covariate. Draw (i.e., do not merely sketch!) the regularization path of the lasso regression estimator.
  • The two covariates are perfectly collinear. Nonetheless, their regularization paths do not coincide. Why?
  • Fit the linear regression model with both covariates (and still without intercept) by means of the fused lasso, i.e. [math]\hat{\bbeta} (\lambda_1, \lambda_f) = \arg \min_{\bbeta} \| \mathbf{Y} - \mathbf{X} \bbeta \|_2^2 + \lambda_1 \| \bbeta \|_1 + \lambda_f | \beta_1 - \beta_2 |[/math] with [math]\lambda_1 = 10[/math] and [math]\lambda_f = \infty[/math]. Hint: at some point in your answer you may wish to write [math]\mathbf{X} = ( \mathbf{X}_{\ast,1} \, \, \, c \mathbf{X}_{\ast,1})[/math] and deduce [math]c[/math] from [math]\mathbf{X}^{\top} \mathbf{X}[/math].
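For the univariate case in the first item, the lasso estimator has a closed form via soft-thresholding: with loss [math]\| \mathbf{Y} - \mathbf{X}_{\ast,1} \beta \|_2^2 + \lambda | \beta |[/math] (note: this assumes no factor [math]\tfrac{1}{2}[/math] on the loss; other parameterizations rescale [math]\lambda[/math]), the estimate is [math]\hat{\beta}(\lambda) = \mathrm{sign}(\mathbf{X}_{\ast,1}^{\top}\mathbf{Y}) \max(|\mathbf{X}_{\ast,1}^{\top}\mathbf{Y}| - \lambda/2, 0) / \mathbf{X}_{\ast,1}^{\top}\mathbf{X}_{\ast,1}[/math]. A short sketch evaluating this path from the exercise's summary statistics:

```python
import numpy as np

# Summary statistics for the first covariate, read off the exercise:
xtx = 40.0   # (X_{*,1})^T X_{*,1}
xty = 26.0   # (X_{*,1})^T Y

def lasso_univariate(lam):
    # Soft-thresholding solution of
    #   min_beta ||Y - X_{*,1} beta||_2^2 + lam * |beta|
    return np.sign(xty) * max(abs(xty) - lam / 2.0, 0.0) / xtx

print(lasso_univariate(0.0))   # lambda = 0 recovers the OLS estimate 26/40 = 0.65
print(lasso_univariate(52.0))  # path hits zero at lambda = 2 |x'y| = 52
```

Between these endpoints the path is linear in [math]\lambda[/math], which is what the requested drawing should show.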