
Exercise

[math] \require{textmacros} \def \bbeta {\bf \beta} \def\fat#1{\mbox{\boldmath$#1$}} \def\reminder#1{\marginpar{\rule[0pt]{1mm}{11pt}}\textbf{#1}} \def\SSigma{\bf \Sigma} \def\ttheta{\bf \theta} \def\aalpha{\bf \alpha} \def\ddelta{\bf \delta} \def\eeta{\bf \eta} \def\llambda{\bf \lambda} \def\ggamma{\bf \gamma} \def\nnu{\bf \nu} \def\vvarepsilon{\bf \varepsilon} \def\mmu{\bf \mu} \def\nnu{\bf \nu} \def\ttau{\bf \tau} \def\SSigma{\bf \Sigma} \def\TTheta{\bf \Theta} \def\XXi{\bf \Xi} \def\PPi{\bf \Pi} \def\GGamma{\bf \Gamma} \def\DDelta{\bf \Delta} \def\ssigma{\bf \sigma} \def\UUpsilon{\bf \Upsilon} \def\PPsi{\bf \Psi} \def\PPhi{\bf \Phi} \def\LLambda{\bf \Lambda} \def\OOmega{\bf \Omega} [/math]

The variation in a binary response [math]Y_i \in \{ 0, 1 \}[/math] due to two covariates [math]\mathbf{X}_{i,\ast} = (X_{i,1}, X_{i,2})[/math] is described by the logistic regression model: [math]P(Y_i = 1) = \exp(\mathbf{X}_{i,\ast} \bbeta) \, / \, [1 + \exp(\mathbf{X}_{i,\ast} \bbeta)][/math]. The study design and the observed response are given by:

[[math]] \begin{eqnarray*} \mathbf{X} & = & \left( \begin{array}{rr} 1 & -1 \\ -1 & 1 \end{array} \right) \quad \mbox{and} \quad \mathbf{Y} \, \, \, = \, \, \, \left( \begin{array}{rr} 1 \\ 0 \end{array} \right). \end{eqnarray*} [[/math]]
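Written out for the two rows of this design (an illustration added for concreteness, not part of the exercise statement itself), the success probabilities depend on [math]\bbeta[/math] only through the contrast [math]\beta_1 - \beta_2[/math]:

[[math]] \begin{eqnarray*} P(Y_1 = 1) & = & \frac{\exp(\beta_1 - \beta_2)}{1 + \exp(\beta_1 - \beta_2)} \quad \mbox{and} \quad P(Y_2 = 1) \, \, \, = \, \, \, \frac{\exp(\beta_2 - \beta_1)}{1 + \exp(\beta_2 - \beta_1)}. \end{eqnarray*} [[/math]]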

  • Write down the log-likelihood and show that [math]\hat{\bbeta} \in \{ (\beta_1, \beta_2)^{\top} \, : \, \beta_1 - \beta_2 = \infty \}[/math], i.e. the likelihood is maximized only in the limit [math]\beta_1 - \beta_2 \rightarrow \infty[/math].
  • Augment the log-likelihood with the ridge penalty [math]\tfrac{1}{2} \lambda \| \bbeta \|_2^2[/math] and show that [math]| \hat{\beta}_j (\lambda) | \lt \lambda^{-1}[/math] for [math]j=1, 2[/math]. (A numerical check of both claims is sketched below.)
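Neither claim requires numerical work, but the following minimal Python sketch (added here as an illustration, not part of the original exercise; it assumes NumPy and SciPy are available) can be used to check both: the unpenalized log-likelihood keeps increasing along the direction [math]\beta_1 - \beta_2[/math], while the ridge-penalized estimate is finite and satisfies the stated bound.

```python
# Minimal numerical check of both claims for this 2x2 design (illustration only).
import numpy as np
from scipy.optimize import minimize

X = np.array([[1.0, -1.0],
              [-1.0, 1.0]])
y = np.array([1.0, 0.0])

def neg_log_lik(beta):
    """Negative log-likelihood of the logistic model for this design and response."""
    eta = X @ beta                                   # linear predictors X_{i,*} beta
    # log P(Y_i = y_i) = y_i * eta_i - log(1 + exp(eta_i))
    return -np.sum(y * eta - np.logaddexp(0.0, eta))

# Claim 1: the unpenalized log-likelihood increases along beta_1 - beta_2
# (it approaches 0 from below, so the maximum is attained only in the limit).
for t in [1.0, 5.0, 10.0, 20.0]:
    print(t, -neg_log_lik(np.array([t, -t])))

# Claim 2: with the ridge penalty (1/2) * lam * ||beta||_2^2 the estimate is
# finite and satisfies |beta_hat_j(lam)| < 1 / lam for j = 1, 2.
for lam in [0.1, 1.0, 10.0]:
    obj = lambda b: neg_log_lik(b) + 0.5 * lam * np.sum(b ** 2)
    fit = minimize(obj, x0=np.zeros(2), method="BFGS")
    print(lam, fit.x, np.abs(fit.x) < 1.0 / lam)
```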