exercise:Ec005243b1

From Stochiki
Revisit [[exercise:F58709b674|question]]. From a Bayesian perspective, is the suggestion of a negative ridge penalty parameter sensible?
This exercise is freely adapted from <ref name="Hast2009">Hastie, T., Tibshirani, R., and Friedman, J. (2009). ''The Elements of Statistical Learning''. Springer.</ref>, but it appears in many other places; the original source is unknown to the author.
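As a pointer for the Bayesian reading, here is a sketch of the standard Gaussian formulation under which the ridge estimator arises as a posterior summary (this assumes the conjugate normal prior; notation follows the exercise):

```latex
% Sketch: Gaussian model with conjugate Gaussian prior on the coefficients.
\mathbf{Y} \mid \boldsymbol{\beta} \, \sim \, \mathcal{N}(\mathbf{X} \boldsymbol{\beta}, \sigma^2 \mathbf{I}_{nn}),
\qquad
\boldsymbol{\beta} \, \sim \, \mathcal{N}(\mathbf{0}_{p}, \sigma^2 \lambda^{-1} \mathbf{I}_{pp}).
% The posterior mean (and mode) of beta is then
\mathbb{E}(\boldsymbol{\beta} \mid \mathbf{Y})
 \, = \, (\mathbf{X}^{\top} \mathbf{X} + \lambda \mathbf{I}_{pp})^{-1} \mathbf{X}^{\top} \mathbf{Y},
% i.e. the ridge estimator.
```

Note that <math>\lambda</math> enters the prior through the variance <math>\sigma^2 \lambda^{-1}</math>; consider what constraint a variance parameter places on the sign of <math>\lambda</math>.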
 
Show that the ridge regression estimator can be obtained by ordinary least squares regression on an augmented data set. To this end, augment the matrix <math>\mathbf{X}</math> with <math>p</math> additional rows, <math>\sqrt{\lambda} \mathbf{I}_{pp}</math>, and augment the response vector <math>\mathbf{Y}</math> with <math>p</math> zeros.
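The identity to be shown can be checked numerically. Below is a minimal sketch (simulated data; dimensions and the value of <math>\lambda</math> are arbitrary choices for illustration) comparing the closed-form ridge estimator with OLS on the augmented data:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, lam = 50, 4, 2.0
X = rng.normal(size=(n, p))
y = rng.normal(size=n)

# Closed-form ridge estimator: (X'X + lambda I)^{-1} X'y
beta_ridge = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

# Augmented OLS: stack sqrt(lambda) * I_pp under X, and p zeros under y
X_aug = np.vstack([X, np.sqrt(lam) * np.eye(p)])
y_aug = np.concatenate([y, np.zeros(p)])
beta_ols = np.linalg.lstsq(X_aug, y_aug, rcond=None)[0]

print(np.allclose(beta_ridge, beta_ols))  # True
```

The equality holds because the augmented residual sum of squares is exactly the ridge loss: the extra <math>p</math> rows contribute <math>\lambda \| \boldsymbol{\beta} \|_2^2</math> to the sum of squares.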

Latest revision as of 22:47, 24 June 2023

