
Exercise

This exercise is inspired by one from [1].

Consider the simple linear regression model [math]Y_i = \beta_0 + \beta_1 X_i + \varepsilon_i[/math] with [math]\varepsilon_i \sim \mathcal{N}(0, \sigma^2)[/math]. The data on the covariate and response, with elements paired by index, are: [math]\mathbf{X} = (X_1, X_2, \ldots, X_{8})^{\top} = (-2, -1, -1, -1, 0, 1, 2, 2)^{\top}[/math] and [math]\mathbf{Y} = (Y_1, Y_2, \ldots, Y_{8})^{\top} = (35, 40, 36, 38, 40, 43, 45, 43)^{\top}[/math].
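
Throughout, the ridge regression estimator is understood in the standard penalized least-squares sense: writing [math]\mathbf{Z}[/math] for the [math]8 \times 2[/math] design matrix whose columns are the all-ones vector and [math]\mathbf{X}[/math] (a notation introduced here for convenience, not used in the original exercise), it is

[math]\widehat{\beta}(\lambda) = \arg\min_{\beta} \, \| \mathbf{Y} - \mathbf{Z} \beta \|_2^2 + \lambda \| \beta \|_2^2 = (\mathbf{Z}^{\top} \mathbf{Z} + \lambda \mathbf{I}_{2})^{-1} \mathbf{Z}^{\top} \mathbf{Y}.[/math]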

  a) Find the ridge regression estimator for the data above for a general value of [math]\lambda[/math] (a numerical sketch checking all three parts follows this list).
  b) Evaluate the fit, i.e. [math]\widehat{Y}_i(\lambda)[/math] for [math]\lambda=10[/math]. Would you judge the fit as good? If not, what is the most striking feature that you find unsatisfactory?
  c) Now zero-center the covariate and response data, denote them by [math]\tilde{X}_i[/math] and [math]\tilde{Y}_i[/math], and evaluate the ridge estimator of [math]\tilde{Y}_i = \beta_1 \tilde{X}_i + \varepsilon_i[/math] at [math]\lambda=4[/math]. Verify that, in terms of the original data, the resulting predictor is now: [math]\widehat{Y}_i(\lambda) = 40 + 1.75 X_i[/math].
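
The closed-form answers can be checked numerically. Below is a minimal Python sketch (assuming numpy is available; the code and variable names are illustrative and not part of the original exercise) that reproduces the results for parts a) through c).

<syntaxhighlight lang="python">
import numpy as np

X = np.array([-2, -1, -1, -1, 0, 1, 2, 2], dtype=float)
Y = np.array([35, 40, 36, 38, 40, 43, 45, 43], dtype=float)

def ridge(design, y, lam):
    # Closed-form ridge estimator: (D^T D + lam * I)^{-1} D^T y.
    p = design.shape[1]
    return np.linalg.solve(design.T @ design + lam * np.eye(p), design.T @ y)

# Parts a)/b): intercept and slope both penalized; evaluate at lambda = 10.
D = np.column_stack([np.ones_like(X), X])   # design matrix with columns (1, X_i)
beta_10 = ridge(D, Y, 10.0)                 # (320/18, 35/26) ~ (17.78, 1.35)
print("fit at lambda=10:", D @ beta_10)     # fitted values lie far below the Y_i

# Part c): center covariate and response, penalize the slope only, lambda = 4.
Xc = X - X.mean()                           # the covariate mean is already 0
Yc = Y - Y.mean()                           # mean response is 40
slope = ridge(Xc[:, None], Yc, 4.0)[0]      # 35 / (16 + 4) = 1.75
print("predictor:", Y.mean() + slope * X)   # 40 + 1.75 * X_i
</syntaxhighlight>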

Note that the estimate employed in the predictor found in part c) is effectively a combination of a maximum likelihood estimate (for the intercept) and a ridge regression estimate (for the slope). Put differently, only the slope has been regularized/penalized.
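
Explicitly, since the covariate mean is zero, the intercept of the part c) predictor is the unpenalized sample mean [math]\widehat{\beta}_0 = \bar{Y} = 40[/math] (the maximum likelihood estimate), while the slope is the ridge estimate

[math]\widehat{\beta}_1(\lambda) = \frac{\sum_{i=1}^{8} \tilde{X}_i \tilde{Y}_i}{\sum_{i=1}^{8} \tilde{X}_i^2 + \lambda} = \frac{35}{16 + 4} = 1.75.[/math]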

  1. Draper, N. R. and Smith, H. (1998). Applied Regression Analysis (3rd edition). John Wiley & Sons.