<div class="d-none"> | |||
<math> | |||
\newcommand{\DS}{\displaystyle} | |||
\newcommand{\intbeta}{\lfloor \beta \rfloor} | |||
\newcommand{\cA}{\mathcal{A}} | |||
\newcommand{\cB}{\mathcal{B}} | |||
\newcommand{\cC}{\mathcal{C}} | |||
\newcommand{\cD}{\mathcal{D}} | |||
\newcommand{\cE}{\mathcal{E}} | |||
\newcommand{\cF}{\mathcal{F}} | |||
\newcommand{\cG}{\mathcal{G}} | |||
\newcommand{\cH}{\mathcal{H}} | |||
\newcommand{\cI}{\mathcal{I}} | |||
\newcommand{\cJ}{\mathcal{J}} | |||
\newcommand{\cK}{\mathcal{K}} | |||
\newcommand{\cL}{\mathcal{L}} | |||
\newcommand{\cM}{\mathcal{M}} | |||
\newcommand{\cN}{\mathcal{N}} | |||
\newcommand{\cO}{\mathcal{O}} | |||
\newcommand{\cP}{\mathcal{P}} | |||
\newcommand{\cQ}{\mathcal{Q}} | |||
\newcommand{\cS}{\mathcal{S}} | |||
\newcommand{\cT}{\mathcal{T}} | |||
\newcommand{\cU}{\mathcal{U}} | |||
\newcommand{\cV}{\mathcal{V}} | |||
\newcommand{\cX}{\mathcal{X}} | |||
\newcommand{\cY}{\mathcal{Y}} | |||
\newcommand{\cZ}{\mathcal{Z}} | |||
\newcommand{\sg}{\mathsf{subG}} | |||
\newcommand{\subE}{\mathsf{subE}} | |||
\newcommand{\bA}{\mathbf{A}} | |||
\newcommand{\bB}{\mathbf{B}} | |||
\newcommand{\bC}{\mathbf{C}} | |||
\newcommand{\bD}{\mathbf{D}} | |||
\newcommand{\bE}{\mathbf{E}} | |||
\newcommand{\bF}{\mathbf{F}} | |||
\newcommand{\bG}{\mathbf{G}} | |||
\newcommand{\bH}{\mathbf{H}} | |||
\newcommand{\bI}{\mathbf{I}} | |||
\newcommand{\bJ}{\mathbf{J}} | |||
\newcommand{\bK}{\mathbf{K}} | |||
\newcommand{\bM}{\mathbf{M}} | |||
\newcommand{\bN}{\mathbf{N}} | |||
\newcommand{\bO}{\mathbf{O}} | |||
\newcommand{\bP}{\mathbf{P}} | |||
\newcommand{\bp}{\mathbf{p}} | |||
\newcommand{\bQ}{\mathbf{Q}} | |||
\newcommand{\bS}{\mathbf{S}} | |||
\newcommand{\bT}{\mathbf{T}} | |||
\newcommand{\bU}{\mathbf{U}} | |||
\newcommand{\bV}{\mathbf{V}} | |||
\newcommand{\bX}{\mathbf{X}} | |||
\newcommand{\bY}{\mathbf{Y}} | |||
\newcommand{\bZ}{\mathbf{Z}} | |||
\newcommand{\bflambda}{\boldsymbol{\lambda}} | |||
\newcommand{\bftheta}{\boldsymbol{\theta}} | |||
\newcommand{\bfg}{\boldsymbol{g}} | |||
\newcommand{\bfy}{\boldsymbol{y}} | |||
\def\thetaphat{\hat{\bftheta}_\bp} | |||
\def\bflam{\boldsymbol{\lambda}} | |||
\def\Lam{\Lambda} | |||
\def\lam{\lambda} | |||
\def\bfpi{\boldsymbol{\pi}} | |||
\def\bfz{\boldsymbol{z}} | |||
\def\bfw{\boldsymbol{w}} | |||
\def\bfeta{\boldsymbol{\eta}} | |||
\newcommand{\R}{\mathrm{ I}\kern-0.18em\mathrm{ R}} | |||
\newcommand{\h}{\mathrm{ I}\kern-0.18em\mathrm{ H}} | |||
\newcommand{\K}{\mathrm{ I}\kern-0.18em\mathrm{ K}} | |||
\newcommand{\p}{\mathrm{ I}\kern-0.18em\mathrm{ P}} | |||
\newcommand{\E}{\mathrm{ I}\kern-0.18em\mathrm{ E}} | |||
%\newcommand{\Z}{\mathrm{ Z}\kern-0.26em\mathrm{ Z}} | |||
\newcommand{\1}{\mathrm{ 1}\kern-0.24em\mathrm{ I}} | |||
\newcommand{\N}{\mathrm{ I}\kern-0.18em\mathrm{ N}} | |||
\newcommand{\field}[1]{\mathbb{#1}} | |||
\newcommand{\q}{\field{Q}} | |||
\newcommand{\Z}{\field{Z}} | |||
\newcommand{\X}{\field{X}} | |||
\newcommand{\Y}{\field{Y}} | |||
\newcommand{\bbS}{\field{S}} | |||
\newcommand{\n}{\mathcal{N}} | |||
\newcommand{\x}{\mathcal{X}} | |||
\newcommand{\pp}{{\sf p}} | |||
\newcommand{\hh}{{\sf h}} | |||
\newcommand{\ff}{{\sf f}} | |||
\newcommand{\Bern}{\mathsf{Ber}} | |||
\newcommand{\Bin}{\mathsf{Bin}} | |||
\newcommand{\Lap}{\mathsf{Lap}} | |||
\newcommand{\tr}{\mathsf{Tr}} | |||
\newcommand{\phin}{\varphi_n} | |||
\newcommand{\phinb}{\overline \varphi_n(t)} | |||
\newcommand{\pn}{\p_{\kern-0.25em n}} | |||
\newcommand{\pnm}{\p_{\kern-0.25em n,m}} | |||
\newcommand{\psubm}{\p_{\kern-0.25em m}} | |||
\newcommand{\psubp}{\p_{\kern-0.25em p}} | |||
\newcommand{\cfi}{\cF_{\kern-0.25em \infty}} | |||
\newcommand{\e}{\mathrm{e}} | |||
\newcommand{\ic}{\mathrm{i}} | |||
\newcommand{\Leb}{\mathrm{Leb}_d} | |||
\newcommand{\Var}{\mathrm{Var}} | |||
\newcommand{\ddelta}{d_{\symdiffsmall}} | |||
\newcommand{\dsubh}{d_H} | |||
\newcommand{\indep}{\perp\kern-0.95em{\perp}} | |||
\newcommand{\supp}{\mathop{\mathrm{supp}}} | |||
\newcommand{\rank}{\mathop{\mathrm{rank}}} | |||
\newcommand{\vol}{\mathop{\mathrm{vol}}} | |||
\newcommand{\conv}{\mathop{\mathrm{conv}}} | |||
\newcommand{\card}{\mathop{\mathrm{card}}} | |||
\newcommand{\argmin}{\mathop{\mathrm{argmin}}} | |||
\newcommand{\argmax}{\mathop{\mathrm{argmax}}} | |||
\newcommand{\ud}{\mathrm{d}} | |||
\newcommand{\var}{\mathrm{var}} | |||
\newcommand{\re}{\mathrm{Re}} | |||
\newcommand{\MSE}{\mathsf{MSE}} | |||
\newcommand{\im}{\mathrm{Im}} | |||
\newcommand{\epr}{\hfill\hbox{\hskip 4pt\vrule width 5pt | |||
height 6pt depth 1.5pt}\vspace{0.5cm}\par} | |||
\newcommand{\bi}[1]{^{(#1)}} | |||
\newcommand{\eps}{\varepsilon} | |||
\newcommand{\Deq}{\stackrel{\mathcal{D}}{=}} | |||
\newcommand{\ubar}{\underbar} | |||
\newcommand{\Kbeta}{K_{\hspace{-0.3mm} \beta}} | |||
\newcommand{\crzero}[1]{\overset{r_0}{\underset{#1}{\longleftrightarrow}}} | |||
\newcommand{\hint}[1]{\texttt{[Hint:#1]}} | |||
\newcommand{\vp}{\vspace{.25cm}} | |||
\newcommand{\vm}{\vspace{.5cm}} | |||
\newcommand{\vg}{\vspace{1cm}} | |||
\newcommand{\vgneg}{\vspace{-1cm}} | |||
\newcommand{\vneg}{\vspace{-.5cm}} | |||
\newcommand{\vpneg}{\vspace{-.25cm}} | |||
\newcommand{\tp}{\ptsize{10}} | |||
\newcommand{\douzp}{\ptsize{12}} | |||
\newcommand{\np}{\ptsize{9}} | |||
\newcommand{\hp}{\ptsize{8}} | |||
\newcommand{\red}{\color{red}} | |||
\newcommand{\ndpr}[1]{{\textsf{\red[#1]}}} | |||
\newcommand\iid{i.i.d\@ifnextchar.{}{.\@\xspace} } | |||
\newcommand\MoveEqLeft[1][2]{\kern #1em & \kern -#1em} | |||
\newcommand{\leadeq}[2][4]{\MoveEqLeft[#1] #2 \nonumber} | |||
\newcommand{\leadeqnum}[2][4]{\MoveEqLeft[#1] #2} | |||
\newcommand\independent{\protect\mathpalette{\protect\independenT}{\perp}} | |||
\def\independenT#1#2{\mathrel{\rlap{$#1#2$}\mkern2mu{#1#2}}} | |||
\newcommand{\MIT}[1]{{\color{MITred} #1}} | |||
\newcommand{\dHyp}{\{-1,1\}^d} | |||
\newcommand{\thetahard}{\hat \theta^{hrd}} | |||
\newcommand{\thetasoft}{\hat \theta^{sft}} | |||
\newcommand{\thetabic}{\hat \theta^{bic}} | |||
\newcommand{\thetalasso}{\hat \theta^{\cL}} | |||
\newcommand{\thetaslope}{\hat \theta^{\cS}} | |||
\newcommand{\thetals}{\hat \theta^{ls}} | |||
\newcommand{\thetalsm}{\tilde \theta^{ls_X}} | |||
\newcommand{\thetaridge}{\hat\theta^{\mathrm{ridge}}_\tau} | |||
\newcommand{\thetalsK}{\hat \theta^{ls}_K} | |||
\newcommand{\muls}{\hat \mu^{ls}} | |||
</math> | |||
</div> | |||
\label{chap:minimax} | |||
\newcommand{\KL}{\mathsf{KL}} | |||
\newcommand{\TV}{\mathsf{TV}} | |||
In the previous chapters, we have proved several upper bounds and the goal of this chapter is to assess their optimality. Specifically, our goal is to answer the following questions: | |||
<ul><li> Can our analysis be improved? In other words: do the estimators that we have studied actually satisfy better bounds? | |||
</li> | |||
<li> Can any estimator improve upon these bounds? | |||
</li> | |||
</ul> | |||
Both questions ask about some form of ''optimality''. The first one is about optimality of an estimator, whereas the second one is about optimality of a bound. | |||
The difficulty of these questions varies depending on whether we are looking for a positive or a negative answer. Indeed, a positive answer to these questions simply consists in finding a better proof for the estimator we have studied (question 1.) or in finding a better estimator, together with a proof that it performs better (question 2.). A negative answer is much more arduous. For example, in question 2., it is a statement about ''all estimators''. How can this be done? The answer lies in information theory (see~<ref name="CovTho06">{{cite journal|last=Birg{{\'e}}|first=Lucien|journal=Z. Wahrsch. Verw. Gebiete|year=1983|title=Approximation dans les espaces m{\'e}triques et th{\'e}orie de l'estimation|volume=65|number=2}}</ref> for a nice introduction).
In this chapter, we will see how to give a negative answer to question 2. It will imply a negative answer to question 1. | |||
==Optimality in a minimax sense== | |||
Consider the Gaussian Sequence Model (GSM) where we observe <math>\bY=(Y_1, \ldots, Y_d)^\top</math>, defined by | |||
<math display="block"> | |||
\begin{equation} | |||
\label{EQ:GSMminimax} | |||
Y_i=\theta^*_i+ \varepsilon_i\,, \qquad i=1, \ldots, d\,, | |||
\end{equation} | |||
</math> | |||
where <math>\varepsilon=(\eps_1, \ldots, \eps_d)^\top \sim \cN_d(0,\frac{\sigma^2}{n}I_d)</math>, <math>\theta^*=(\theta^*_1, \ldots, \theta^*_d)^\top \in \Theta </math> is the parameter of interest and <math>\Theta \subset \R^d</math> is a given set of parameters. We will need a more precise notation for probabilities and expectations throughout this chapter. Denote by <math>\p_{\theta^*}</math> and <math>\E_{\theta^*}</math> the probability measure and corresponding expectation that are associated to the distribution of <math>\bY</math> from the GSM~\eqref{EQ:GSMminimax}. | |||
Recall that GSM is a special case of the linear regression model when the design matrix satisfies the ORT condition. In this case, we have proved several performance guarantees (''upper bounds'') for various choices of <math>\Theta</math> that can be expressed either in the form | |||
<math display="block"> | |||
\begin{equation} | |||
\label{EQ:minimaxUB_E} | |||
\E\big[|\hat \theta_n -\theta^*|_2^2\big] \le C\phi(\Theta) | |||
\end{equation} | |||
</math> | |||
or the form | |||
<math display="block"> | |||
\begin{equation} | |||
\label{EQ:minimaxUB_P} | |||
|\hat \theta_n -\theta^*|_2^2 \le C\phi(\Theta)\,, \quad \text{with prob.} \ 1-d^{-2} | |||
\end{equation} | |||
</math> | |||
for some constant <math>C</math>. The rates <math>\phi(\Theta)</math> for different choices of <math>\Theta</math> that we have obtained are gathered in Table~[[#TAB:minimaxUB |Minimax Lower Bounds]] together with the estimator (and the corresponding result from Chapter~[[guide:090050c501#chap:GSM |Linear Regression Model]]) that was employed to obtain this rate.
<span id="TAB:minimaxUB"/> | |||
{|class="table" | |||
|+ Rate <math>\phi(\Theta)</math> obtained for different choices of <math>\Theta</math>. | |||
|- | |||
!<math>\Theta</math> !! <math>\phi(\Theta)</math> !! Estimator !! Result
|- | |||
|<math>\R^d</math> || <math>\DS \frac{\sigma^2 d}{n}</math> || <math>\thetals</math> || Theorem~[[guide:090050c501#TH:lsOI |Linear Regression Model]] | |||
|- | |||
|<math>\cB_1</math> || <math>\DS\sigma \sqrt{\frac{ \log d}{n}}</math> || <math>\thetals_{\cB_1}</math> || Theorem~[[guide:090050c501#TH:ell1const |Linear Regression Model]] | |||
|- | |||
|<math>\cB_0(k)</math> || <math>\DS\frac{ \sigma^2 k}{n}\log (ed/k)</math> || <math>\thetals_{\cB_0(k)}</math> || Corollaries~[[guide:090050c501#COR:bss1 |Linear Regression Model]]-[[guide:090050c501#COR:bss2 |Linear Regression Model]] | |||
|} | |||
Can any of these results be improved? In other words, does there exist another estimator <math>\tilde \theta</math> such that <math>\sup_{\theta^* \in \Theta}\E|\tilde \theta -\theta^*|_2^2\ll\phi(\Theta)</math>?
A first step in this direction is the Cramér-Rao lower bound <ref name="Sha03">{{cite journal|last=Birg{{\'e}}|first=Lucien|journal=Z. Wahrsch. Verw. Gebiete|year=1983|title=Approximation dans les espaces m{\'e}triques et th{\'e}orie de l'estimation|volume=65|number=2}}</ref> that allows us to prove lower bounds in terms of the Fisher information. Nevertheless, this notion of optimality is too stringent and often leads to nonexistence of optimal estimators. Rather, we prefer here the notion of ''minimax optimality'' that characterizes how fast <math>\theta^*</math> can be estimated ''uniformly'' over <math>\Theta</math>.
{{defncard|label=id=|We say that an estimator <math>\hat \theta_n</math> is ''minimax optimal over <math>\Theta</math>'' if it satisfies \eqref{EQ:minimaxUB_E} and there exists <math>C' > 0</math> such that | |||
<math display="block"> | |||
\begin{equation} | |||
\label{EQ:minimaxLB1} | |||
\inf_{\hat \theta}\sup_{\theta \in \Theta}\E_\theta|\hat \theta- \theta|_2^2 \ge C'\phi(\Theta) | |||
\end{equation} | |||
</math> | |||
where the infimum is taken over all estimators (i.e., measurable functions of <math>\bY</math>). Moreover, <math>\phi(\Theta)</math> is called ''minimax rate of estimation over <math>\Theta</math>''.}}
Note that minimax rates of convergence <math>\phi</math> are defined up to multiplicative constants. We may then choose this constant such that the minimax rate has a simple form such as <math>\sigma^2d/n</math> as opposed to <math>7\sigma^2d/n</math> for example. | |||
This definition can be adapted to rates that hold with high probability. As we saw in Chapter~[[guide:090050c501#chap:GSM |Linear Regression Model]] (Cf. Table~[[#TAB:minimaxUB |Minimax Lower Bounds]]), the upper bounds in expectation and those with high probability are of the same order of magnitude. It is also the case for lower bounds. Indeed, observe that it follows from the Markov inequality that for any <math>A > 0</math>, | |||
<math display="block"> | |||
\begin{equation} | |||
\label{EQ:markov_minimax} | |||
\E_\theta\big[\phi^{-1}(\Theta)|\hat \theta- \theta|_2^2\big] \ge A \p_\theta\big[\phi^{-1}(\Theta)|\hat \theta- \theta|_2^2 > A\big] | |||
\end{equation} | |||
</math> | |||
Therefore,~\eqref{EQ:minimaxLB1} follows if we prove that | |||
<math display="block"> | |||
\inf_{\hat \theta}\sup_{\theta \in \Theta}\p_\theta\big[|\hat \theta- \theta|_2^2 > A\phi(\Theta)\big]\ge C | |||
</math> | |||
for some positive constants <math>A</math> and <math>C</math>. The above inequality also implies a lower bound with high probability. We can therefore employ the following alternate definition for minimax optimality.
{{defncard|label=id=|We say that an estimator <math>\hat \theta</math> is ''minimax optimal over <math>\Theta</math>'' if it satisfies either \eqref{EQ:minimaxUB_E} or \eqref{EQ:minimaxUB_P} and there exists <math>C' > 0</math> such that | |||
<math display="block"> | |||
\begin{equation} | |||
\label{EQ:minimaxLB} | |||
\inf_{\hat \theta}\sup_{\theta \in \Theta}\p_\theta\big[|\hat \theta-\theta|_2^2 > \phi(\Theta)\big] \ge C' | |||
\end{equation} | |||
</math> | |||
where the infimum is taken over all estimators (i.e., measurable functions of <math>\bY</math>). Moreover, <math>\phi(\Theta)</math> is called ''minimax rate of estimation over <math>\Theta</math>''.}} | |||
==<span id="SEC:reduc"></span>Reduction to finite hypothesis testing== | |||
Minimax lower bounds rely on information theory and follow from a simple principle: if the number of observations is too small, it may be hard to distinguish between two probability distributions that are close to each other. For example, given <math>n</math> i.i.d. observations, it is impossible to reliably decide whether they are drawn from <math>\cN(0,1)</math> or <math>\cN(\frac1n,1)</math>. This simple argument can be made precise using the formalism of ''statistical hypothesis testing''. To do so, we reduce our estimation problem to a testing problem. The reduction consists of two steps. | |||
<ul><li> '''Reduction to a finite number of parameters.''' In this step the goal is to find the largest possible number of parameters <math>\theta_1, \ldots, \theta_M \in \Theta</math> under the constraint that | |||
<math display="block"> | |||
\begin{equation} | |||
\label{EQ:dist_constraint} | |||
|\theta_j -\theta_k|_2^2 \ge 4 \phi(\Theta)\,. | |||
\end{equation} | |||
</math> | |||
This problem boils down to a ''packing'' of the set <math>\Theta</math>. | |||
Then we can use the following trivial observations: | |||
<math display="block"> | |||
\inf_{\hat \theta}\sup_{\theta \in \Theta}\p_\theta\big[|\hat \theta- \theta|_2^2 > \phi(\Theta)\big] \ge \inf_{\hat \theta}\max_{1\le j \le M}\p_{\theta_j}\big[|\hat \theta- \theta_j|_2^2 > \phi(\Theta)\big]\,. | |||
</math> | |||
</li> | |||
<li> '''Reduction to a hypothesis testing problem.''' In this second step, the necessity of the constraint~\eqref{EQ:dist_constraint} becomes apparent. | |||
For any estimator <math>\hat \theta</math>, define the minimum distance test <math>\psi(\hat \theta)</math> that is associated to it by | |||
<math display="block"> | |||
\psi(\hat \theta)=\argmin_{1\le j \le M} |\hat \theta -\theta_j|_2\,, | |||
</math> | |||
with ties broken arbitrarily. | |||
Next observe that if, for some <math>j=1, \ldots, M</math>, <math>\psi(\hat \theta)\neq j</math>, then there exists <math>k \neq j</math> such that <math>|\hat \theta -\theta_k|_2 \le |\hat \theta -\theta_j|_2</math>. Together with the reverse triangle inequality it yields | |||
<math display="block"> | |||
|\hat \theta - \theta_j|_2 \ge |\theta_j -\theta_k|_2- |\hat \theta - \theta_k|_2 \ge |\theta_j -\theta_k|_2- |\hat \theta - \theta_j|_2 | |||
</math> | |||
so that | |||
<math display="block"> | |||
|\hat \theta - \theta_j|_2 \ge \frac12 |\theta_j -\theta_k|_2 | |||
</math> | |||
Together with constraint~\eqref{EQ:dist_constraint}, it yields | |||
<math display="block"> | |||
|\hat \theta - \theta_j|_2^2 \ge \phi(\Theta) | |||
</math> | |||
As a result, | |||
<math display="block"> | |||
\begin{align*} | |||
\inf_{\hat \theta}\max_{1\le j \le M}\p_{\theta_j}\big[|\hat \theta- \theta_j|_2^2 > \phi(\Theta)\big]&\ge \inf_{\hat \theta}\max_{1\le j \le M}\p_{\theta_j}\big[\psi(\hat \theta)\neq j\big]\\ | |||
&\ge \inf_{\psi}\max_{1\le j \le M}\p_{\theta_j}\big[\psi\neq j\big] | |||
\end{align*} | |||
</math> | |||
where the infimum is taken over all tests <math>\psi</math> based on <math>\bY</math> and that take values in <math>\{1, \ldots, M\}</math>. | |||
</li> | |||
</ul> | |||
'''Conclusion:''' it is sufficient for proving lower bounds to find <math>\theta_1, \ldots, \theta_M \in \Theta</math> such that <math>|\theta_j -\theta_k|_2^2 \ge 4 \phi(\Theta)</math> and
<math display="block"> | |||
\inf_{\psi}\max_{1\le j \le M}\p_{\theta_j}\big[\psi\neq j\big] \ge C'\,. | |||
</math> | |||
The above quantity is called ''minimax probability of error''. In the next sections, we show how it can be bounded from below using arguments from information theory. For the purpose of illustration, we begin with the simple case where <math>M=2</math> in the next section. | |||
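The following minimal numerical sketch (not part of the original text; all dimensions and parameter values are illustrative) implements the minimum distance test <math>\psi(\hat \theta)=\argmin_{1\le j \le M} |\hat \theta -\theta_j|_2</math> used in the reduction above.
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup (values are arbitrary): M candidate parameters theta_1, ..., theta_M in R^d
# and one draw from the Gaussian sequence model around theta_j.
d, M, sigma, n = 5, 3, 1.0, 50
theta = rng.normal(size=(M, d))          # stand-ins for the packing theta_1, ..., theta_M

def min_distance_test(theta_hat, theta):
    """Minimum distance test: return the index j minimizing |theta_hat - theta_j|_2."""
    return int(np.argmin(np.linalg.norm(theta - theta_hat, axis=1)))

j = 1
Y = theta[j] + sigma / np.sqrt(n) * rng.normal(size=d)   # observation from the GSM with theta^* = theta_j
theta_hat = Y                                            # e.g., the least squares estimator over R^d
# If psi(theta_hat) != j, the triangle-inequality argument above gives |theta_hat - theta_j|_2^2 >= phi(Theta).
print(min_distance_test(theta_hat, theta) == j)
</syntaxhighlight>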
==Lower bounds based on two hypotheses== | |||
===The Neyman-Pearson Lemma and the total variation distance=== | |||
Consider two probability measures <math>\p_0</math> and <math>\p_1</math> and observations <math>Z</math> drawn from either <math>\p_0</math> or <math>\p_1</math>. We want to know which distribution <math>Z</math> comes from. It corresponds to the following statistical hypothesis problem:
<math display="block"> | |||
\begin{eqnarray*} | |||
H_0&:& Z\sim \p_0\\ | |||
H_1&:& Z \sim \p_1 | |||
\end{eqnarray*} | |||
</math> | |||
A test <math>\psi=\psi(Z) \in \{0,1\}</math> indicates which hypothesis should be true. Any test <math>\psi</math> can make two types of errors. It can commit either an error of type I (<math>\psi=1</math> whereas <math>Z \sim \p_0</math>) or an error of type II (<math>\psi=0</math> whereas <math>Z \sim \p_1</math>). Of course, the test may also be correct. The following fundamental result, called the ''Neyman-Pearson Lemma'', indicates that any test <math>\psi</math> is bound to commit one of these two types of error with positive probability unless <math>\p_0</math> and <math>\p_1</math> have essentially disjoint support.
Let <math>\nu</math> be a sigma-finite measure satisfying <math>\p_0 \ll \nu</math> and <math>\p_1\ll \nu</math>. For example, we can take <math>\nu = \p_0+\p_1</math>. It follows from the Radon-Nikodym theorem <ref name="Bil95">{{cite journal|last=Birg{{\'e}}|first=Lucien|journal=Z. Wahrsch. Verw. Gebiete|year=1983|title=Approximation dans les espaces m{\'e}triques et th{\'e}orie de l'estimation|volume=65|number=2}}</ref> that both <math>\p_0</math> and <math>\p_1</math> admit probability densities with respect to <math>\nu</math>. We denote them by <math>p_0</math> and <math>p_1</math> respectively. For any function <math>f</math>, we write for simplicity
<math display="block"> | |||
\int f = \int f(x)\nu (\ud x) | |||
</math> | |||
{{proofcard|Lemma (Neyman-Pearson)|LEM:Neyman|Let <math>\p_0</math> and <math>\p_1</math> be two probability measures. Then for any test <math>\psi</math>, it holds | |||
<math display="block"> | |||
\p_0(\psi=1)+\p_1(\psi=0) \ge \int \min (p_0, p_1) | |||
</math> | |||
Moreover, equality holds for the ''Likelihood Ratio test'' <math>\psi^\star=\1(p_1\ge p_0)</math>. | |||
|Observe first that | |||
<math display="block"> | |||
\begin{align*} | |||
\p_0(\psi^\star=1)+\p_1(\psi^\star=0) &=\int_{\psi^*=1}p_0 + \int_{\psi^*=0}p_1\\ | |||
&=\int_{p_1\ge p_0}p_0 + \int_{p_1 < p_0}p_1\\ | |||
&=\int_{p_1\ge p_0}\min(p_0,p_1)+ \int_{p_1 < p_0}\min(p_0,p_1)\\ | |||
&=\int\min(p_0,p_1)\,. | |||
\end{align*} | |||
</math> | |||
Next for any test <math>\psi</math>, define its rejection region <math>R=\{\psi=1\}</math>. Let <math>R^\star=\{p_1\ge p_0\}</math> denote the rejection region of the likelihood ratio test <math>\psi^\star</math>. It holds | |||
<math display="block"> | |||
\begin{align*} | |||
\p_0(\psi=1)+\p_1(\psi=0) &=1+\p_0(R)-\p_1(R) \\ | |||
&=1+\int_Rp_0-p_1\\ | |||
&=1+\int_{R\cap R^\star}p_0-p_1 +\int_{R\cap (R^\star)^c}p_0-p_1\\ | |||
&=1-\int_{R\cap R^\star}|p_0-p_1|+\int_{R\cap (R^\star)^c}|p_0-p_1|\\ | |||
&=1+\int|p_0-p_1|\big[\1(R\cap (R^\star)^c)-\1(R\cap R^\star)\big] | |||
\end{align*} | |||
</math> | |||
The above quantity is clearly minimized for <math>R=R^\star</math>.}} | |||
The lower bound in the Neyman-Pearson lemma is related to a well-known quantity: the total variation distance. | |||
{{proofcard|Definition-Proposition|PROP:TV|The ''total variation distance'' between two probability measures <math>\p_0</math> and <math>\p_1</math> on a measurable space <math>(\cX, \cA)</math> is defined by
<math display="block">
\begin{align*}
\TV(\p_0,\p_1)&=\sup_{R \in \cA}|\p_0(R)-\p_1(R)|&(i)\\
&=\sup_{R \in \cA}\Big|\int_Rp_0-p_1\Big|&(ii)\\
&=\frac{1}{2}\int|p_0-p_1|&(iii)\\
&=1-\int\min(p_0, p_1)&(iv)\\
&=1-\inf_{\psi}\big[\p_0(\psi=1)+\p_1(\psi=0)\big]&(v)
\end{align*}
</math>
where the infimum above is taken over all tests.
|Clearly <math>(i)=(ii)</math> and the Neyman-Pearson Lemma gives <math>(iv)=(v)</math>. Moreover, by identifying a test <math>\psi</math> with its rejection region, it is not hard to see that <math>(i)=(v)</math>. Therefore it remains only to show that <math>(iii)</math> is equal to any of the other expressions. Hereafter, we show that <math>(iii)=(iv)</math>. To that end, observe that
<math display="block">
\begin{align*}
\int|p_0-p_1|&=\int_{p_1\ge p_0}p_1 -p_0+\int_{p_1 < p_0}p_0 -p_1\\
&=\int_{p_1\ge p_0}p_1+\int_{p_1 < p_0}p_0-\int\min(p_0,p_1)\\
&=1-\int_{p_1 < p_0}p_1+1-\int_{p_1\ge p_0}p_0-\int\min(p_0,p_1)\\
&=2-2\int\min(p_0,p_1)
\end{align*}
</math>}}
In view of the Neyman-Pearson lemma, it is clear that if we want to prove large lower bounds, we need to find probability distributions that are close in total variation. Yet, this conflicts with constraint~\eqref{EQ:dist_constraint} and a tradeoff needs to be achieved. To that end, in the Gaussian sequence model, we need to be able to compute the total variation distance between <math>\cN(\theta_0, \frac{\sigma^2}{n}I_d)</math> and <math>\cN(\theta_1, \frac{\sigma^2}{n}I_d)</math>. None of the expressions in Definition-Proposition~[[#PROP:TV |Minimax Lower Bounds]] gives an easy way to do so. The Kullback-Leibler divergence is much more convenient.
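For concreteness, here is a small numerical sketch (not from the text; the values of <math>\theta_0,\theta_1,\sigma,n</math> are illustrative) that evaluates expression <math>(iv)</math>, <math>\TV=1-\int\min(p_0,p_1)</math>, by quadrature in dimension one and compares it with the closed form <math>2\Phi(|\theta_0-\theta_1|/(2s))-1</math> for two Gaussians with common standard deviation <math>s</math>; the closed form is a standard fact stated here as an assumption.
<syntaxhighlight lang="python">
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

# One-dimensional illustration (values are arbitrary): P_j = N(theta_j, sigma^2/n).
theta0, theta1, sigma, n = 0.0, 0.3, 1.0, 25
s = sigma / np.sqrt(n)
p0 = lambda x: norm.pdf(x, loc=theta0, scale=s)
p1 = lambda x: norm.pdf(x, loc=theta1, scale=s)

# TV = 1 - int min(p0, p1) = int (p0 - min(p0, p1)), computed by quadrature.
tv_quad, _ = quad(lambda x: p0(x) - min(p0(x), p1(x)), -10, 10)
# Closed form for two Gaussians with the same variance: TV = 2*Phi(|theta0 - theta1|/(2s)) - 1.
tv_closed = 2 * norm.cdf(abs(theta0 - theta1) / (2 * s)) - 1
print(tv_quad, tv_closed)   # the two values agree up to quadrature error
</syntaxhighlight>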
===The Kullback-Leibler divergence=== | |||
{{defncard|label=id=|The Kullback-Leibler divergence between probability measures <math>\p_1</math> and <math>\p_0</math> is given by | |||
<math display="block"> | |||
\KL(\p_1, \p_0)=\left\{ | |||
\begin{array}{ll} | |||
\DS \int \log\Big(\frac{\ud \p_1}{\ud \p_0}\Big)\ud \p_1\,,& \text{if}\ \p_1 \ll \p_0\\ | |||
\infty\,, & \text{otherwise}\,. | |||
\end{array}\right. | |||
</math> | |||
}} | |||
It can be shown <ref name="Tsy09">{{cite journal|last=LeCam|first=L.|journal=Ann. Statist.|year=1973|title=Convergence of estimates under dimensionality restrictions|volume=1}}</ref> that the integral is always well defined when <math>\p_1 \ll \p_0</math> (though it can be equal to <math>\infty</math> even in this case). Unlike the total variation distance, the Kullback-Leibler divergence is not a distance. Actually, it is not even symmetric. Nevertheless, it enjoys properties that are very useful for our purposes. | |||
{{proofcard|Proposition|PROP:KL|Let <math>\p</math> and <math>\q</math> be two probability measures. Then | |||
<ul><li> <math>\KL(\p, \q)\ge 0</math>. | |||
</li> | |||
<li> The function <math>(\p, \q) \mapsto \KL(\p,\q)</math> is convex. | |||
</li> | |||
<li> If <math>\p</math> and <math>\q</math> are product measures, i.e., | |||
<math display="block"> | |||
\p=\bigotimes_{i=1}^n \p_i\quad \text{and} \quad \q=\bigotimes_{i=1}^n \q_i | |||
</math> | |||
then | |||
<math display="block"> | |||
\KL(\p,\q)=\sum_{i=1}^n \KL(\p_i,\q_i)\,. | |||
</math> | |||
</li> | |||
</ul> | |||
|If <math>\p</math> is not absolutely continuous with respect to <math>\q</math>, then the result is trivial. Next, assume that <math>\p \ll \q</math> and let <math>X \sim \p</math>.
1. Observe that by Jensen's inequality, | |||
<math display="block"> | |||
\KL(\p,\q)=-\E\log\Big(\frac{\ud \q}{\ud \p}(X)\Big) \ge -\log\E\Big(\frac{\ud \q}{\ud \p}(X)\Big)=-\log(1)=0\,. | |||
</math> | |||
2. Consider the function <math>f:(x,y) \mapsto x\log(x/y)</math> and compute its Hessian: | |||
<math display="block"> | |||
\frac{\partial^2f}{\partial x^2}=\frac{1}{x}\,, \qquad \frac{\partial^2f}{\partial y^2}=\frac{x}{y^2}\,, \qquad \frac{\partial^2f}{\partial x\partial y}=-\frac{1}{y}\,, | |||
</math> | |||
We check that the matrix <math>\DS H= \left(\begin{array}{cc}\frac{1}{x} & -\frac{1}{y} \\-\frac{1}{y} & \frac{x}{y^2}\end{array}\right)</math> is positive semidefinite for all <math>x,y > 0</math>. To that end, compute its determinant and trace: | |||
<math display="block"> | |||
\det(H)=\frac{1}{y^2}-\frac{1}{y^2}=0\,, \qquad \mathsf{Tr}(H)=\frac{1}{x}+\frac{x}{y^2} > 0\,.
</math> | |||
Therefore, <math>H</math> has one null eigenvalue and one positive one and must therefore be positive semidefinite so that the function <math>f</math> is convex on <math>(0,\infty)^2</math>. Since the KL divergence is a sum of such functions, it is also convex on the space of positive measures. | |||
3. Note that if <math>X=(X_1, \ldots, X_n)</math>, | |||
<math display="block"> | |||
\begin{align*} | |||
\KL(\p,\q)&= \E\log\Big(\frac{\ud \p}{\ud \q}(X)\Big)\\ | |||
&=\sum_{i=1}^n \int \log\Big(\frac{\ud \p_i}{\ud \q_i}(X_i)\Big)\ud \p_1(X_1)\cdots \ud \p_n(X_n)\\ | |||
&=\sum_{i=1}^n \int \log\Big(\frac{\ud \p_i}{\ud \q_i}(X_i)\Big)\ud \p_i(X_i)\\ | |||
&=\sum_{i=1}^n \KL(\p_i,\q_i) | |||
\end{align*} | |||
</math>}} | |||
Point 3 in Proposition~[[#PROP:KL |Minimax Lower Bounds]] is particularly useful in statistics where observations typically consist of <math>n</math> independent random variables.
<span id="ex:KL_Gauss"/>
'''Example'''
For any <math>\theta \in \R^d</math>, let <math>P_\theta</math> denote the distribution of <math>\bY \sim \cN(\theta, \sigma^2I_d)</math>. Then | |||
<math display="block"> | |||
\KL(P_\theta, P_{\theta'})=\sum_{i=1}^d \frac{(\theta_i -\theta'_i)^2}{2\sigma^2}=\frac{|\theta-\theta'|_2^2}{2\sigma^2}\,. | |||
</math> | |||
The proof is left as an exercise (see Problem~[[#EXO:KL_Gauss |Minimax Lower Bounds]]). | |||
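As a quick sanity check (not from the text), the following Monte Carlo sketch estimates <math>\KL(P_\theta, P_{\theta'})</math> and compares it with the closed form above; the values of <math>d</math>, <math>\sigma</math> and the parameters are illustrative.
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)

# Illustrative parameters for P_theta = N(theta, sigma^2 I_d).
d, sigma = 4, 2.0
theta = rng.normal(size=d)
theta_prime = rng.normal(size=d)

X = theta + sigma * rng.normal(size=(200_000, d))        # X ~ P_theta
# log(dP_theta/dP_theta')(X) for isotropic Gaussians with equal covariance
log_ratio = (((X - theta_prime) ** 2).sum(axis=1) - ((X - theta) ** 2).sum(axis=1)) / (2 * sigma**2)
kl_mc = log_ratio.mean()                                  # Monte Carlo estimate of KL(P_theta, P_theta')
kl_formula = ((theta - theta_prime) ** 2).sum() / (2 * sigma**2)
print(kl_mc, kl_formula)                                  # should agree up to Monte Carlo error
</syntaxhighlight>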
The Kullback-Leibler divergence is easier to manipulate than the total variation distance but only the latter is related to the minimax probability of error. Fortunately, these two quantities can be compared using Pinsker's inequality. We prove here a slightly weaker version of Pinsker's inequality that will be sufficient for our purpose. For a stronger statement, see <ref name="Tsy09">{{cite journal|last=LeCam|first=L.|journal=Ann. Statist.|year=1973|title=Convergence of estimates under dimensionality restrictions|volume=1}}</ref>, Lemma~2.5. | |||
{{proofcard|Lemma (Pinsker's inequality.)|LEM:Pinsker|Let <math>\p</math> and <math>\q</math> be two probability measures such that <math>\p \ll \q</math>. Then | |||
<math display="block"> | |||
\TV(\p,\q) \le \sqrt{\KL(\p,\q)}\,. | |||
</math> | |||
|Note that | |||
<math display="block"> | |||
\begin{align*} | |||
\KL(\p,\q)&=\int_{pq > 0}p\log\Big(\frac{p}{q}\Big)\\ | |||
&=-2\int_{pq > 0}p\log\Big(\sqrt{\frac{q}{p}}\Big)\\ | |||
&=-2\int_{pq > 0}p\log\Big(\Big[\sqrt{\frac{q}{p}}-1\Big]+1\Big)\\ | |||
&\ge -2\int_{pq > 0}p\Big[\sqrt{\frac{q}{p}}-1\Big]\qquad \text{ (by Jensen)}\\ | |||
&=2-2\int\sqrt{pq}\\ | |||
\end{align*} | |||
</math> | |||
Next, note that | |||
<math display="block"> | |||
\begin{align*} | |||
\Big(\int\sqrt{pq}\Big)^2&=\Big(\int\sqrt{\max(p,q)\min(p,q)}\Big)^2\\ | |||
&\le \int\max(p,q)\int\min(p,q) \qquad \text{(by Cauchy-Schwarz)}\\ | |||
&=\big[2-\int\min(p,q)\big]\int\min(p,q)\\ | |||
&=\big(1+ \TV(\p,\q)\big)\big(1- \TV(\p,\q)\big)\\ | |||
&=1- \TV(\p,\q)^2 | |||
\end{align*} | |||
</math> | |||
The two displays yield | |||
<math display="block"> | |||
\KL(\p,\q)\ge 2-2\sqrt{1- \TV(\p,\q)^2}\ge\TV(\p,\q)^2\,, | |||
</math> | |||
where we used the fact that <math>0\le \TV(\p,\q) \le 1</math> and <math>\sqrt{1-x} \le 1-x/2</math> for <math>x \in [0,1]</math>.}} | |||
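A small numerical check (not from the text) of the inequality <math>\TV(\p,\q)\le \sqrt{\KL(\p,\q)}</math> on a grid of Bernoulli pairs, a toy choice made only for illustration.
<syntaxhighlight lang="python">
import numpy as np

# Grid of Bernoulli parameters (a toy choice): P = Ber(p), Q = Ber(q).
ps = np.linspace(0.01, 0.99, 99)
P, Q = np.meshgrid(ps, ps)
tv = np.abs(P - Q)                                               # TV(Ber(p), Ber(q)) = |p - q|
kl = P * np.log(P / Q) + (1 - P) * np.log((1 - P) / (1 - Q))     # KL(Ber(p), Ber(q))
print(bool(np.all(tv <= np.sqrt(kl) + 1e-12)))                   # True: TV <= sqrt(KL) on the grid
</syntaxhighlight>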
Pinsker's inequality yields the following theorem for the GSM. | |||
{{proofcard|Theorem|TH:LB2hyp|Assume that <math>\Theta</math> contains two hypotheses <math>\theta_0</math> and <math>\theta_1</math> such that <math>|\theta_0-\theta_1|_2^2 = 8\alpha^2\sigma^2/n</math> for some <math>\alpha \in (0,1/2)</math>. Then | |||
<math display="block"> | |||
\inf_{\hat \theta} \sup_{\theta \in \Theta}\p_\theta(|\hat \theta-\theta|_2^2 \ge \frac{2\alpha^2\sigma^2}{n})\ge \frac{1}{2}-\alpha\,.
</math> | |||
|Write for simplicity <math>\p_j=\p_{\theta_j}</math>, <math>j=0,1</math>. | |||
Recall that it follows from the reduction to hypothesis testing that | |||
<math display="block"> | |||
\begin{align*} | |||
\inf_{\hat \theta} \sup_{\theta \in \Theta}\p_\theta(|\hat \theta-\theta|_2^2 \ge \frac{2\alpha^2\sigma^2}{n})&\ge \inf_{\psi}\max_{j=0,1}\p_j(\psi\neq j)&\\
&\ge \frac12\inf_{\psi}\Big(\p_0(\psi=1)+\p_1(\psi=0)\Big)&\\ | |||
&=\frac12\Big[1-\TV(\p_0,\p_1)\Big]&\hfill \text{(Prop.-def.~[[#PROP:TV |Minimax Lower Bounds]])}\\ | |||
&\ge \frac12\Big[1-\sqrt{\KL(\p_1, \p_0)}\Big]& \text{(Lemma~[[#LEM:Pinsker |Minimax Lower Bounds]])}\\ | |||
&= \frac12\Big[1-\sqrt{\frac{n|\theta_1-\theta_0|_2^2}{2\sigma^2}}\Big]&\text{(Example~[[#ex:KL_Gauss |Minimax Lower Bounds]])}\\ | |||
&= \frac12\Big[1-2\alpha\Big]& \qedhere | |||
\end{align*} | |||
</math>}} | |||
Clearly, the result of Theorem~[[#TH:LB2hyp |Minimax Lower Bounds]] matches the upper bound for <math>\Theta=\R^d</math> only for <math>d=1</math>. How about larger <math>d</math>? A quick inspection of our proof shows that our technique, in its present state, cannot yield better results. Indeed, there are only two known candidates for the choice of <math>\theta^*</math>. With this knowledge, one can obtain upper bounds that do not depend on <math>d</math> by simply projecting <math>Y</math> onto the linear span of <math>\theta_0, \theta_1</math> and then solving the GSM in two dimensions. To obtain larger lower bounds, we need to use more than two hypotheses. In particular, in view of the above discussion, we need a set of hypotheses that spans a linear space of dimension proportional to <math>d</math>. In principle, we should need at least order <math>d</math> hypotheses but we will actually need many more.
==Lower bounds based on many hypotheses== | |||
The reduction to hypothesis testing from Section~[[#SEC:reduc |Reduction to finite hypothesis testing]] allows us to use more than two hypotheses. Specifically, we should find <math>\theta_1, \ldots, \theta_M</math> such that | |||
<math display="block"> | |||
\inf_{\psi}\max_{1\le j \le M}\p_{\theta_j}\big[\psi\neq j\big] \ge C'\,, | |||
</math> | |||
for some positive constant <math>C'</math>. Unfortunately, there is no analog of the Neyman-Pearson Lemma for more than two hypotheses. Nevertheless, it is possible to relate the minimax probability of error directly to the Kullback-Leibler divergence, without involving the total variation distance. This is done using a well-known result from information theory called ''Fano's inequality''.
{{proofcard|Theorem (Fano's inequality)|thm:fano-ineq|Let <math>P_1, \ldots, P_M, M \ge 2</math> be probability distributions such that <math>P_j \ll P_k</math>, <math>\forall \, j ,k</math>. Then | |||
<math display="block"> | |||
\inf_{\psi}\max_{1\le j \le M}P_{j}\big[\psi(X)\neq j\big] \ge 1-\frac{\frac{1}{M^2}\sum_{j,k=1}^M\KL(P_j,P_k)+\log 2}{\log M} | |||
</math> | |||
where the infimum is taken over all tests with values in <math>\{1, \ldots, M\}</math>. | |||
|Define | |||
<math display="block"> | |||
p_j=P_j(\psi=j)\qquad \text{and} \qquad q_j=\frac{1}{M}\sum_{k=1}^M P_k(\psi=j) | |||
</math> | |||
so that | |||
<math display="block"> | |||
\bar p =\frac{1}{M}\sum_{j=1}^M p_j \in (0,1)\,,\qquad \bar q = \frac1M\sum_{j=1}^M q_j\,. | |||
</math> | |||
Moreover, define the KL divergence between two Bernoulli distributions: | |||
<math display="block"> | |||
\mathsf{kl}(p, q)= p \log \big(\frac{p}{ q}\big)+(1- p) \log \big(\frac{1- p}{1- q}\big)\,. | |||
</math> | |||
Note that <math>\bar q=1/M</math> and that <math>\bar p \log \bar p + (1-\bar p) \log(1-\bar p) \ge -\log 2</math>. Since <math>(1-\bar p)\log\big(\frac{1}{1-\bar q}\big)\ge 0</math>, it follows that <math>\mathsf{kl}(\bar p, \bar q) \ge \bar p \log M - \log 2</math>, which yields
<math display="block"> | |||
\bar p \le \frac{\mathsf{kl}(\bar p, \bar q)+ \log 2}{\log M}\,. | |||
</math> | |||
Next, observe that by convexity (Proposition~[[#PROP:KL |Minimax Lower Bounds]], 2.), it holds | |||
<math display="block"> | |||
\mathsf{kl}(\bar p, \bar q) \le \frac{1}{M}\sum_{j=1}^M \mathsf{kl}( p_j, q_j) \le \frac{1}{M^2}\sum_{j,k=1}^M \mathsf{kl}( P_j(\psi=j), P_k(\psi=j)) \,.
</math> | |||
It remains to show that | |||
<math display="block"> | |||
\mathsf{kl}( P_j(\psi=j), P_k(\psi=j)) \le \KL(P_j,P_k)\,. | |||
</math> | |||
This result can be seen to follow directly from a well-known inequality often referred to as ''data processing inequality'' but we are going to prove it directly. | |||
Denote by <math>\ud P_k^{=j}</math> (resp. <math>\ud P_k^{\neq j}</math>) the conditional density of <math>P_k</math> given <math>\psi(X)=j</math> (resp. <math>\psi(X)\neq j</math>) and recall that | |||
<math display="block"> | |||
\begin{align*} | |||
&\KL(P_j,P_k)=\int\log\Big(\frac{\ud P_j}{\ud P_k}\Big) \ud P_j\\ | |||
&=\int_{\psi=j}\log\Big(\frac{\ud P_j}{\ud P_k}\Big) \ud P_j+\int_{\psi\neq j}\log\Big(\frac{\ud P_j}{\ud P_k}\Big) \ud P_j\\ | |||
&=P_j(\psi=j)\int\log\Big(\frac{\ud P_j^{=j}}{\ud P_k^{=j}}\frac{P_j(\psi=j)}{P_k(\psi=j)}\Big) \ud P_j^{=j}\\ | |||
&\qquad \qquad +P_j(\psi\neq j)\int\log\Big(\frac{\ud P_j^{\neq j}}{\ud P_k^{\neq j}}\frac{P_j(\psi\neq j)}{P_k(\psi\neq j)}\Big) \ud P_j^{\neq j}\\
&=P_j(\psi=j)\Big(\log\Big(\frac{P_j(\psi=j)}{P_k(\psi=j)} \Big)+ \KL(P_j^{=j},P_k^{=j})\Big)\\
&\qquad \qquad +P_j(\psi\neq j)\Big(\log\Big(\frac{P_j(\psi\neq j)}{P_k(\psi\neq j)} \Big)+ \KL(P_j^{\neq j},P_k^{\neq j})\Big)\\
& \ge \mathsf{kl}( P_j(\psi=j), P_k(\psi=j))\,. | |||
\end{align*} | |||
</math> | |||
where we used Proposition~[[#PROP:KL |Minimax Lower Bounds]], 1. | |||
We have proved that | |||
<math display="block"> | |||
\frac{1}{M}\sum_{j=1}^M P_j(\psi=j) \le \frac{\frac{1}{M^2}\sum_{j,k=1}^M\KL(P_j,P_k)+\log 2}{\log M}\,. | |||
</math> | |||
Passing to the complementary sets completes the proof of Fano's inequality.}}
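The sketch below (not from the text) evaluates the right-hand side of Fano's inequality for Gaussian hypotheses <math>P_j=\cN(\theta_j,\frac{\sigma^2}{n}I_d)</math>, using the KL formula from Example~[[#ex:KL_Gauss |Minimax Lower Bounds]]; the <math>\theta_j</math> are illustrative stand-ins.
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(2)

# Illustrative stand-ins for theta_1, ..., theta_M; P_j = N(theta_j, sigma^2/n I_d),
# so that KL(P_j, P_k) = n |theta_j - theta_k|_2^2 / (2 sigma^2) as in the example above.
d, M, sigma, n = 20, 16, 1.0, 100
theta = rng.normal(scale=0.02, size=(M, d))
sq_dists = ((theta[:, None, :] - theta[None, :, :]) ** 2).sum(axis=2)
kl = n * sq_dists / (2 * sigma**2)
# Right-hand side of Fano's inequality; kl.mean() equals (1/M^2) * sum_{j,k} KL(P_j, P_k).
fano_lb = 1 - (kl.mean() + np.log(2)) / np.log(M)
print(fano_lb)   # lower bound on the minimax probability of error (vacuous if the theta_j are too spread out)
</syntaxhighlight>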
Fano's inequality leads to the following useful theorem. | |||
{{proofcard|Theorem|TH:mainLB|Assume that <math>\Theta</math> contains <math>M \ge 5</math> hypotheses <math>\theta_1, \ldots, \theta_M</math> such that for some constant <math>0 < \alpha < 1/4</math>, it holds | |||
<ul style{{=}}"list-style-type:lower-alpha"><li><math>\DS|\theta_j-\theta_k|_2^2 \ge 4\phi</math></li> | |||
<li><math>\DS |\theta_j-\theta_k|_2^2 \le \frac{2\alpha\sigma^2 }{n}\log(M)</math></li> | |||
</ul> | |||
Then | |||
<math display="block"> | |||
\inf_{\hat \theta} \sup_{\theta \in \Theta}\p_\theta\big(|\hat \theta-\theta|_2^2 \ge \phi\big)\ge \frac{1}{2}-2\alpha\,. | |||
</math> | |||
|In view of (''i''), it follows from the reduction to hypothesis testing that it is sufficient to prove that
<math display="block"> | |||
\inf_{\psi}\max_{1\le j \le M}\p_{\theta_j}\big[\psi\neq j\big] \ge \frac{1}{2}-2\alpha. | |||
</math> | |||
It follows from (''ii'') and Example~[[#ex:KL_Gauss |Minimax Lower Bounds]] that
<math display="block"> | |||
\KL(\p_j,\p_k)=\frac{n|\theta_j-\theta_k|_2^2}{2\sigma^2}\le \alpha\log(M)\,. | |||
</math> | |||
Moreover, since <math>M \ge 5</math>, | |||
<math display="block"> | |||
\frac{\frac{1}{M^2}\sum_{j,k=1}^M\KL(\p_j,\p_k)+\log 2}{\log(M-1)}\le \frac{\alpha\log(M)+\log 2}{\log(M-1)}\le 2\alpha+\frac12\,. | |||
</math> | |||
The proof then follows from Fano's inequality.}} | |||
Theorem~[[#TH:mainLB |Minimax Lower Bounds]] indicates that we must take <math>\phi\le \frac{\alpha\sigma^2 }{2n}\log(M)</math>. Therefore, the larger <math>M</math>, the larger the lower bound can be. However, <math>M</math> cannot be arbitrarily large because of the constraint <math>(i)</math>. We are therefore facing a ''packing'' problem where the goal is to “pack” as many Euclidean balls of radius proportional to <math>\sigma\sqrt{\log(M)/n}</math> in <math>\Theta</math> under the constraint that their centers remain close together (constraint <math>(ii)</math>). If <math>\Theta=\R^d</math>, the goal is to pack the Euclidean ball of radius <math>R=\sigma\sqrt{2\alpha\log(M)/n}</math> with Euclidean balls of radius <math>R\sqrt{2\alpha/\gamma}</math>. This can be done using a volume argument (see Problem~[[#EXO:packing |Minimax Lower Bounds]]). However, we will use the more versatile lemma below. It gives a lower bound on the size of a packing of the discrete hypercube <math>\{0,1\}^d</math> with respect to the ''Hamming distance'' defined by
<math display="block"> | |||
\rho(\omega,\omega')=\sum_{i=1}^d\1(\omega_i \neq \omega'_i)\,, \qquad \forall\, \omega, \omega' \in \{0,1\}^d.
</math> | |||
{{proofcard|Lemma (Varshamov-Gilbert)|LEM:VG|For any <math>\gamma \in (0,1/2)</math>, there exist binary vectors <math>\omega_{1}, \ldots \omega_{M} \in \{0,1\}^d</math> such that | |||
<ul style{{=}}"list-style-type:lower-alpha"><li><math>\DS\rho(\omega_{j}, \omega_{k})\ge \big(\frac12-\gamma\big)d</math> for all <math>j \neq k</math>\,,</li> | |||
<li><math>\DS M =\lfloor e^{\gamma^2d}\rfloor\ge e^{\frac{\gamma^2d}{2}}</math>\,.</li> | |||
</ul> | |||
|Let <math>\omega_{j,i}</math>, <math>1\le i \le d, 1\le j \le M</math> be \iid Bernoulli random variables with parameter <math>1/2</math> and observe that | |||
<math display="block"> | |||
d-\rho(\omega_{j}, \omega_{k})=X\sim \Bin(d,1/2)\,. | |||
</math> | |||
Therefore it follows from a union bound that | |||
<math display="block"> | |||
\p\big[\exists j\neq k\,, \rho(\omega_{j}, \omega_{k}) < \big(\frac12-\gamma\big)d\big] \le \frac{M(M-1)}{2}\p\big(X-\frac{d}2 > \gamma d\big)\,. | |||
</math> | |||
Hoeffding's inequality then yields | |||
<math display="block"> | |||
\frac{M(M-1)}{2}\p\big(X-\frac{d}2 > \gamma d\big)\le \exp\Big(-2\gamma^2d+\log\big( \frac{M(M-1)}{2}\big)\Big) < 1 | |||
</math> | |||
as soon as | |||
<math display="block"> | |||
M(M-1) < 2\exp\Big(2\gamma^2d\Big). | |||
</math> | |||
A sufficient condition for the above inequality to hold is to take <math>M=\lfloor e^{\gamma^2d}\rfloor \ge e^{\frac{\gamma^2d}{2}}</math>. For this value of <math>M</math>, we have | |||
<math display="block"> | |||
\p\big(\forall j\neq k\,, \rho(\omega_{j}, \omega_{k})\ge \big(\frac12-\gamma\big)d\big) > 0 | |||
</math> | |||
and by virtue of the probabilistic method, there exist <math>\omega_{1}, \ldots \omega_{M} \in \{0,1\}^d</math> that satisfy <math>(i)</math> and <math>(ii)</math>.}}
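The proof is constructive up to the probabilistic method, and the following sketch (not from the text; <math>d</math> and <math>\gamma</math> are illustrative) implements it directly: draw <math>M=\lfloor e^{\gamma^2d}\rfloor</math> i.i.d. uniform binary vectors and redraw until all pairwise Hamming distances are at least <math>(\frac12-\gamma)d</math>.
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(3)

def varshamov_gilbert(d, gamma, max_tries=100):
    """Draw M = floor(exp(gamma^2 d)) i.i.d. uniform binary vectors and keep a draw
    whose pairwise Hamming distances are all at least (1/2 - gamma) d."""
    M = int(np.floor(np.exp(gamma**2 * d)))
    for _ in range(max_tries):
        omega = rng.integers(0, 2, size=(M, d))
        rho = (omega[:, None, :] != omega[None, :, :]).sum(axis=2)   # pairwise Hamming distances
        rho[np.diag_indices(M)] = d                                  # ignore the diagonal
        if rho.min() >= (0.5 - gamma) * d:
            return omega
    raise RuntimeError("no valid packing found; increase max_tries or d")

omega = varshamov_gilbert(d=64, gamma=0.25)
print(omega.shape)   # (M, d) with M = floor(e^{gamma^2 d})
</syntaxhighlight>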
==<span id="sec:lb-gsm"></span>Application to the Gaussian sequence model== | |||
We are now in a position to apply Theorem~[[#TH:mainLB |Minimax Lower Bounds]] by choosing <math>\theta_1, \ldots, \theta_M</math> based on <math>\omega_{1}, \ldots, \omega_{M}</math> from the Varshamov-Gilbert Lemma. | |||
===Lower bounds for estimation=== | |||
Take <math>\gamma=1/4</math> and apply the Varshamov-Gilbert Lemma to obtain <math>\omega_{1}, \ldots, \omega_{M}</math> with <math>M=\lfloor e^{d/16}\rfloor\ge e^{d/32} </math> and such that | |||
<math>\rho(\omega_{j}, \omega_{k})\ge d/4</math> for all <math>j \neq k</math>. Let <math>\theta_{1},\ldots, \theta_M</math> be such that | |||
<math display="block"> | |||
\theta_j=\omega_j\frac{\beta\sigma}{\sqrt{n}}\,, | |||
</math> | |||
for some <math>\beta > 0</math> to be chosen later. We can check the conditions of Theorem~[[#TH:mainLB |Minimax Lower Bounds]]: | |||
<ul style{{=}}"list-style-type:lower-alpha"><li><math>\DS|\theta_j-\theta_k|_2^2=\frac{\beta^2\sigma^2 }{n}\rho(\omega_{j}, \omega_{k}) \ge 4\frac{\beta^2\sigma^2 d}{16n}</math>;</li> | |||
<li><math>\DS |\theta_j-\theta_k|_2^2 =\frac{\beta^2\sigma^2 }{n}\rho(\omega_{j}, \omega_{k}) \le \frac{\beta^2\sigma^2 d}{n}\le \frac{32\beta^2\sigma^2 }{n}\log(M)=\frac{2\alpha \sigma^2}{n}\log(M)</math>\,,</li> | |||
</ul> | |||
for <math>\beta=\frac{\sqrt{\alpha}}{4}</math>. Applying now Theorem~[[#TH:mainLB |Minimax Lower Bounds]] yields | |||
<math display="block"> | |||
\inf_{\hat \theta} \sup_{\theta \in \R^d}\p_\theta\big(|\hat \theta-\theta|_2^2 \ge \frac{\alpha}{256}\frac{\sigma^2 d}{n}\big)\ge \frac{1}{2}-2\alpha\,. | |||
</math> | |||
It implies the following corollary. | |||
{{proofcard|Corollary|cor-1|The minimax rate of estimation over <math>\R^d</math> in the Gaussian sequence model is <math>\phi(\R^d)=\sigma^2d/n</math>. Moreover, it is attained by the least squares estimator <math>\thetals=\bY</math>.|}}
Note that this rate is minimax over sets <math>\Theta</math> that are strictly smaller than <math>\R^d</math> (see Problem~[[#EXO:minimax_other |Minimax Lower Bounds]]). Indeed, it is minimax over any subset of <math>\R^d</math> that contains <math>\theta_1, \ldots, \theta_M</math>. | |||
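The construction above is easy to check numerically. The sketch below (not from the text; the values of <math>d,n,\sigma,\alpha</math> are illustrative) builds <math>\theta_j=\omega_j\frac{\beta\sigma}{\sqrt{n}}</math> from random binary vectors and tests conditions (a) and (b) of Theorem~[[#TH:mainLB |Minimax Lower Bounds]]; condition (a) holds with high probability for a random draw, so the first check may occasionally require a redraw.
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(4)

# Illustrative values; gamma = 1/4 and beta = sqrt(alpha)/4 as in the construction above.
d, n, sigma, alpha = 64, 100, 1.0, 0.2
gamma, beta = 0.25, np.sqrt(alpha) / 4
M = int(np.floor(np.exp(gamma**2 * d)))
omega = rng.integers(0, 2, size=(M, d))                     # random binary vectors as in Varshamov-Gilbert
theta = omega * beta * sigma / np.sqrt(n)                   # hypotheses theta_j = omega_j * beta * sigma / sqrt(n)

iu = np.triu_indices(M, k=1)
sq_dist = ((theta[:, None, :] - theta[None, :, :]) ** 2).sum(axis=2)[iu]   # |theta_j - theta_k|_2^2

phi = beta**2 * sigma**2 * d / (16 * n)                     # candidate rate, of order alpha * sigma^2 d / (256 n)
print(bool(np.all(sq_dist >= 4 * phi)))                                # condition (a); holds when rho >= d/4
print(bool(np.all(sq_dist <= 2 * alpha * sigma**2 / n * np.log(M))))   # condition (b)
</syntaxhighlight>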
===Lower bounds for sparse estimation=== | |||
It appears from Table~[[#TAB:minimaxUB |Minimax Lower Bounds]] that when estimating sparse vectors, we have to pay for an extra logarithmic term <math>\log(ed/k)</math> for not knowing the sparsity pattern of the unknown <math>\theta^*</math>. In this section, we show that this term is unavoidable as it appears in the minimax optimal rate of estimation of sparse vectors. | |||
Note that the vectors <math>\theta_1, \ldots, \theta_M</math> employed in the previous subsection are not guaranteed to be sparse because the vectors <math>\omega_1, \ldots, \omega_M</math> obtained from the Varshamov-Gilbert Lemma may themselves not be sparse. To overcome this limitation, we need a sparse version of the Varshamov-Gilbert lemma.
{{proofcard|Lemma (Sparse Varshamov-Gilbert)|LEM:sVG|There exist positive constants <math>C_1</math> and <math>C_2</math> such that the following holds for any two integers <math>k</math> and <math>d</math> such that <math>1\le k \le d/8</math>. | |||
There exist binary vectors <math>\omega_{1}, \ldots \omega_{M} \in \{0,1\}^d</math> such that | |||
<ul style{{=}}"list-style-type:lower-alpha"><li><math>\DS\rho(\omega_{i}, \omega_{j})\ge \frac{k}2</math> for all <math>i \neq j</math>\,,</li> | |||
<li><math>\DS \log(M) \ge \frac{k}{8}\log(1+\frac{d}{2k})</math>\,.</li> | |||
<li><math>\DS |\omega_j|_0=k</math> for all <math>j</math>\,.</li> | |||
</ul> | |||
|Take <math>\omega_1, \ldots, \omega_M</math> independently and uniformly at random from the set | |||
<math display="block"> | |||
C_0(k)=\{\omega\in \{0,1\}^d\,:\, |\omega|_0=k\} | |||
</math> | |||
of <math>k</math>-sparse binary random vectors. Note that <math>C_0(k)</math> has cardinality <math>\binom{d}{k}</math>. To choose <math>\omega_j</math> uniformly from <math>C_0(k)</math>, we proceed as follows. Let <math>U_1, \ldots, U_k \in \{1, \ldots, d\}</math> be <math>k</math> random variables such that <math>U_1</math> is drawn uniformly at random from <math>\{1, \ldots, d\}</math> and for any <math>i=2, \ldots, k</math>, conditionally on <math>U_1, \ldots, U_{i-1}</math>, the random variable <math>U_i</math> is drawn uniformly at random from <math>\{1, \ldots, d\}\setminus \{U_1, \ldots, U_{i-1}\}</math>. Then define | |||
<math display="block"> | |||
\omega_i=\left\{
\begin{array}{ll} | |||
1 & \text{if}\ i \in \{U_1, \ldots, U_k\} \\ | |||
0 & \text{otherwise}\,. | |||
\end{array}\right. | |||
</math> | |||
Clearly, all outcomes in <math>C_0(k)</math> are equally likely under this distribution and therefore, <math>\omega</math> is uniformly distributed on <math>C_0(k)</math>. Observe that | |||
<math display="block"> | |||
\begin{align*} | |||
\p\big( \exists\, \omega_j \neq \omega_k\,:\, \rho(\omega_j, \omega_k) < \frac{k}{2}\big)&=\frac{1}{\binom{d}{k}} \sum_{\substack{x \in \{0,1\}^d\\ |x|_0=k}}\p\big( \exists\, \omega_j \neq x\,:\, \rho(\omega_j,x) < \frac{k}{2}\big)\\
&\le \frac{1}{\binom{d}{k}} \sum_{\substack{x \in \{0,1\}^d\\ |x|_0=k}}\sum_{j=1}^M\p\big(\omega_j \neq x\,:\, \rho(\omega_j,x) < \frac{k}{2}\big)\\ | |||
&=M\p\big(\omega \neq x_0\,:\, \rho(\omega,x_0) < \frac{k}{2}\big), | |||
\end{align*} | |||
</math> | |||
where <math>\omega</math> has the same distribution as <math>\omega_1</math> and <math>x_0</math> is any <math>k</math>-sparse vector in <math>\{0,1\}^d</math>. | |||
The last equality holds by symmetry since (i) all the <math>\omega_j</math>s have the same distribution and (ii) all the outcomes of <math>\omega_j</math> are equally likely. | |||
Note that | |||
<math display="block"> | |||
\rho(\omega, x_0) \ge k-\sum_{i=1}^k Z_i\,, | |||
</math> | |||
where <math>Z_i=\1(U_i \in \supp(x_0))</math>. Indeed the left hand side is the number of coordinates on which the vectors <math>\omega, x_0</math> disagree and the right hand side is the number of coordinates in <math>\supp(x_0)</math> on which the two vectors disagree. In particular, we have that <math>Z_1 \sim\Bern(k/d)</math> and for any <math>i=2, \ldots, k</math>, conditionally on <math>Z_1, \ldots, Z_{i-1}</math>, we have <math>Z_i\sim \Bern(Q_i)</math>, where
<math display="block"> | |||
Q_i=\frac{k-\sum_{l=1}^{i-1}Z_l}{d-(i-1)}\le \frac{k}{d-k}\le \frac{2k}{d},
</math> | |||
since <math>k \le d/2</math>. | |||
Next we apply a Chernoff bound to get that for any <math>s > 0</math>, | |||
<math display="block"> | |||
\p\big(\omega \neq x_0\,:\, \rho(\omega,x_0) < \frac{k}{2}\big)\le \p\big(\sum_{i=1}^kZ_i > \frac{k}{2}\big)\le \E\Big[\exp\big(s\sum_{i=1}^kZ_i\big)\Big]e^{-\frac{sk}{2}}
</math> | |||
The above MGF can be controlled by induction on <math>k</math> as follows: | |||
<math display="block"> | |||
\begin{align*} | |||
\E\Big[\exp\big(s\sum_{i=1}^kZ_i\big)\Big]&=\E\Big[\exp\big(s\sum_{i=1}^{k-1}Z_i\big)\E\big[\exp\big(sZ_k\big)\big| Z_1,\ldots, Z_{k-1}\big]\Big]\\
&=\E\Big[\exp\big(s\sum_{i=1}^{k-1}Z_i\big)(Q_k(e^s-1)+1)\Big]\\ | |||
&\le \E\Big[\exp\big(s\sum_{i=1}^{k-1}Z_i\big)\Big]\big(\frac{2k}{d}(e^s-1)+1\big)\\ | |||
&\quad \vdots\\ | |||
&\le \big(\frac{2k}{d}(e^s-1)+1\big)^k\\ | |||
&=2^k | |||
\end{align*} | |||
</math> | |||
for <math>s=\log(1+\frac{d}{2k})</math>. Putting everything together, we get
<math display="block"> | |||
\begin{align*} | |||
\p\big( \exists\, \omega_j \neq \omega_k\,:\, \rho(\omega_j, \omega_k) < \frac{k}{2}\big)&\le\exp\Big(\log M+k\log 2-\frac{sk}{2}\Big)\\
&=\exp\Big(\log M+k\log 2-\frac{k}{2}\log(1+\frac{d}{2k})\Big)\\
&\le \exp\Big(\log M-\frac{k}{4}\log(1+\frac{d}{2k})\Big)\qquad \text{(for $d\ge 8k$)}\\ | |||
& < 1\,, | |||
\end{align*} | |||
</math> | |||
if we take <math>M</math> such that | |||
<math display="block"> | |||
\begin{equation*} | |||
\log M < \frac{k}{4}\log(1+\frac{d}{2k}). \qedhere | |||
\end{equation*} | |||
</math>}} | |||
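The uniform sampling from <math>C_0(k)</math> used in the proof can be sketched as follows (not from the text; <math>d</math> and <math>k</math> are illustrative and satisfy <math>k \le d/8</math>).
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(5)

# Illustrative values with k <= d/8; log M ~ (k/8) log(1 + d/(2k)) as in the lemma.
d, k = 200, 10
M = int(np.floor(np.exp(k / 8 * np.log(1 + d / (2 * k)))))

def sample_sparse(d, k):
    """Uniform draw from C_0(k): choose k distinct coordinates and set them to 1."""
    omega = np.zeros(d, dtype=int)
    omega[rng.choice(d, size=k, replace=False)] = 1
    return omega

omega = np.array([sample_sparse(d, k) for _ in range(M)])
rho = (omega[:, None, :] != omega[None, :, :]).sum(axis=2)   # pairwise Hamming distances
rho[np.diag_indices(M)] = 2 * k                              # ignore the diagonal
print(M, bool(rho.min() >= k / 2))   # the separation k/2 holds for this draw with overwhelming probability
</syntaxhighlight>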
Apply the sparse Varshamov-Gilbert lemma to obtain <math>\omega_{1}, \ldots, \omega_{M}</math> with <math>\log(M)\ge \frac{k}{8}\log(1+\frac{d}{2k})</math> and such that | |||
<math>\rho(\omega_{j}, \omega_{k})\ge k/2</math> for all <math>j \neq k</math>. Let <math>\theta_{1},\ldots, \theta_M</math> be such that | |||
<math display="block"> | |||
\theta_j=\omega_j\frac{\beta\sigma}{\sqrt{n}}\sqrt{\log(1+\frac{d}{2k})}\,, | |||
</math> | |||
for some <math>\beta > 0</math> to be chosen later. We can check the conditions of Theorem~[[#TH:mainLB |Minimax Lower Bounds]]: | |||
<ul style{{=}}"list-style-type:lower-alpha"><li><math>\DS|\theta_j-\theta_k|_2^2=\frac{\beta^2\sigma^2 }{n}\rho(\omega_{j}, \omega_{k})\log(1+\frac{d}{2k}) \ge 4\frac{\beta^2\sigma^2 }{8n}k\log(1+\frac{d}{2k})</math>;</li> | |||
<li><math>\DS |\theta_j-\theta_k|_2^2 =\frac{\beta^2\sigma^2 }{n}\rho(\omega_{j}, \omega_{k})\log(1+\frac{d}{2k}) \le \frac{2k\beta^2\sigma^2 }{n}\log(1+\frac{d}{2k})\le \frac{2\alpha \sigma^2}{n}\log(M)</math>\,,</li> | |||
</ul> | |||
for <math>\beta=\sqrt{\frac{\alpha}{8}}</math>. Applying now Theorem~[[#TH:mainLB |Minimax Lower Bounds]] yields | |||
<math display="block"> | |||
\inf_{\hat \theta} \sup_{\substack{\theta \in \R^d\\ |\theta|_0\le k}}\p_\theta\big(|\hat \theta-\theta|_2^2 \ge \frac{\alpha\sigma^2 }{64n}k\log(1+\frac{d}{2k})\big)\ge \frac{1}{2}-2\alpha\,.
</math> | |||
It implies the following corollary. | |||
{{proofcard|Corollary|cor:minimax-sparse|Recall that <math>\cB_{0}(k) \subset \R^d</math> denotes the set of all <math>k</math>-sparse vectors of <math>\R^d</math>. | |||
The minimax rate of estimation over <math>\cB_{0}(k)</math> in the Gaussian sequence model is <math>\phi(\cB_{0}(k))=\frac{\sigma^2k}{n}\log(ed/k)</math>. Moreover, it is attained by the constrained least squares estimator <math>\thetals_{\cB_0(k)}</math>.|}} | |||
Note that the modified BIC estimator of Problem~[[guide:090050c501#EXO:sparse:betterbic |Linear Regression Model]] and the Slope [[guide:090050c501#eq:ad |eq:ad]] are also minimax optimal over <math>\cB_{0}(k)</math>. | |||
However, unlike <math> \thetals_{\cB_0(k)} </math> and the modified BIC, the Slope is also adaptive to <math> k </math>. | |||
For any <math>\eps > 0</math>, the Lasso estimator and the BIC estimator are minimax optimal for sets of parameters such that <math>k\le d^{1-\eps}</math>. | |||
===Lower bounds for estimating vectors in <math>\ell_1</math> balls=== | |||
Recall that in Maurey's argument, we approximated a vector <math>\theta</math> such that <math>|\theta|_1=R</math> by a vector <math>\theta'</math> such that <math>|\theta'|_0=\frac{R}{\sigma}\sqrt{\frac{n}{\log d}}</math>\,. We can essentially do the same for the lower bound. | |||
Assume that <math>d \ge \sqrt{n}</math> and let <math>\beta \in (0,1)</math> be a parameter to be chosen later and define <math>k</math> to be the smallest integer such that | |||
<math display="block"> | |||
k\ge \frac{R}{\beta\sigma}\sqrt{\frac{n}{\log (ed/\sqrt{n})}}\,. | |||
</math> | |||
Let <math>\omega_1, \ldots, \omega_M</math> be obtained from the sparse Varshamov-Gilbert Lemma~[[#LEM:sVG |Minimax Lower Bounds]] with this choice of <math>k</math> and define | |||
<math display="block"> | |||
\theta_j=\omega_j\frac{R}{k}\,. | |||
</math> | |||
Observe that <math>|\theta_j|_1=R</math> for <math>j=1,\ldots, M</math>. | |||
We can check the conditions of Theorem~[[#TH:mainLB |Minimax Lower Bounds]]: | |||
<ul style{{=}}"list-style-type:lower-alpha"><li><math>\DS|\theta_j-\theta_k|_2^2=\frac{R^2}{k^2}\rho(\omega_{j}, \omega_{k})\ge \frac{R^2}{2k}\ge 4R\min\big(\frac{R}8, \frac{\beta\sigma}{16}\sqrt{\frac{\log (ed/\sqrt{n})}{n}}\big)</math>\,.</li>
<li><math>\DS |\theta_j-\theta_k|_2^2 \le \frac{2R^2}{k} \le 4R\beta\sigma\sqrt{\frac{\log(ed/\sqrt{n})}{n}}\le \frac{2\alpha \sigma^2}{n}\log(M)</math>\,,</li> | |||
</ul> | |||
for <math>\beta</math> small enough if <math>d \ge Ck</math> for some constant <math>C > 0</math> chosen large enough. Applying now Theorem~[[#TH:mainLB |Minimax Lower Bounds]] yields | |||
<math display="block"> | |||
\inf_{\hat \theta} \sup_{\substack{\theta \in \R^d\\ |\theta|_1\le R}}\p_\theta\big(|\hat \theta-\theta|_2^2 \ge R\min\big(\frac{R}8, \frac{\beta\sigma}{16}\sqrt{\frac{\log (ed/\sqrt{n})}{n}}\big)\big)\ge \frac{1}{2}-2\alpha\,.
</math> | |||
It implies the following corollary. | |||
{{proofcard|Corollary|cor-2|Recall that <math>\cB_{1}(R) \subset \R^d</math> denotes the set of vectors <math>\theta \in \R^d</math> such that <math>|\theta|_1 \le R</math>. Then there exists a constant <math>C > 0</math> such that if <math>d \ge n^{1/2+\eps}</math>, <math>\eps > 0</math>, the minimax rate of estimation over <math>\cB_{1}(R)</math> in the Gaussian sequence model is
<math display="block">
\phi(\cB_{1}(R))=\min\Big(R^2,R\sigma\sqrt{\frac{\log d}{n}}\Big) \,.
</math>
Moreover, it is attained by the constrained least squares estimator <math>\thetals_{\cB_1(R)}</math> if <math>R\ge \sigma\sqrt{\frac{\log d}{n}}</math> and by the trivial estimator <math>\hat \theta =0</math> otherwise.
|To complete the proof of the statement, we need to study the risk of the trivial estimator equal to zero for small <math>R</math>. Note that if <math>|\theta^*|_1\le R</math>, we have | |||
<math display="block"> | |||
\begin{equation*} | |||
|0 -\theta^*|_2^2=|\theta^*|_2^2\le |\theta^*|_1^2=R^2\,. \qedhere | |||
\end{equation*} | |||
</math>}} | |||
{{alert-info | | |||
Note that the inequality <math>|\theta^*|_2^2\le |\theta^*|_1^2</math> appears to be quite loose. Nevertheless, it is tight up to a multiplicative constant for the vectors of the form <math>\theta_j=\omega_j\frac{R}{k}</math> that are employed in the lower bound. Indeed, if <math>R\le \sigma\sqrt{\frac{\log d}{n}}</math>, we have <math>k\le 2/\beta</math>, so that
<math display="block"> | |||
|\theta_j|_2^2=\frac{R^2}{k}\ge \frac{\beta}{2}|\theta_j|_1^2\,. | |||
</math> | |||
}} | |||
==Lower bounds for sparse estimation via <math>\chi^2</math> divergence==
In this section, we will show how to derive lower bounds by directly controlling the distance between a simple and a composite hypothesis, which is useful when investigating lower rates for decision problems instead of estimation problems. | |||
Define the <math> \chi^2 </math> divergence between two probability distributions <math> \p, \q </math> as | |||
<math display="block"> | |||
\begin{equation*} | |||
\chi^2( \p, \q) = \left\{ | |||
\begin{aligned} | |||
\int \left( \frac{d\p}{d\q}-1 \right)^2 d\q \quad &\text{if } \p \ll \q,\\ | |||
\infty \quad &\text{otherwise.} | |||
\end{aligned} | |||
\right. | |||
\end{equation*} | |||
</math> | |||
Comparing the expressions in the case where <math> \p \ll \q </math>, we see that both the KL divergence and the <math> \chi^2 </math> divergence can be written as <math> \int f \left( \frac{d\p}{d\q} \right) d\q </math> with <math> f(x) = x \log x </math> and <math> f(x) = (x-1)^2 </math>, respectively.
Also note that both of these functions are convex and satisfy <math> f(1) = 0 </math>, which by Jensen’s inequality shows that the resulting divergences are non-negative and equal to <math> 0 </math> if and only if <math> \p = \q </math>.
Firstly, let us derive another useful expression for the <math> \chi^2 </math> divergence. | |||
By expanding the square, we have | |||
<math display="block"> | |||
\begin{equation*} | |||
\chi^2(\p, \q) = \int \frac{(d\p)^2}{d\q} - 2 \int d\p + \int d\q = \int \left( \frac{d\p}{d\q} \right)^2 d\q - 1 | |||
\end{equation*} | |||
</math> | |||
Secondly, we note that we can bound the KL divergence from above by the <math> \chi^2 </math> divergence, via Jensen’s inequality, as | |||
<math display="block"> | |||
\begin{equation} | |||
\label{eq:ag} | |||
\KL(\p, \q) = \int \log \left( \frac{d\p}{d\q} \right) d\p \leq \log \int \frac{(d\p)^2}{d\q} = \log(1+\chi^2(\p, \q)) \leq \chi^2(\p, \q), | |||
\end{equation} | |||
</math> | |||
where we used the concavity inequality <math> \log(1+x) \leq x </math> for <math> x > -1 </math>. | |||
Note that this estimate is a priori very rough and that we are transitioning from something at <math> \log </math> scale to something on a regular scale. | |||
However, since we are mostly interested in showing that these distances are bounded by a small constant after adjusting the parameters of our models appropriately, it will suffice for our purposes. | |||
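For instance (an illustration that is not part of the original argument), in the Gaussian sequence model both quantities are explicit for a mean shift: if <math> \p_\theta </math> denotes the distribution of <math> Y \sim \cN(\theta, \frac{\sigma^2}{n} I_d) </math>, a direct computation with Gaussian densities gives
<math display="block">
\begin{equation*}
\KL(\p_\theta, \p_0) = \frac{n|\theta|_2^2}{2\sigma^2}\,, \qquad \chi^2(\p_\theta, \p_0) = \e^{n|\theta|_2^2/\sigma^2} - 1\,,
\end{equation*}
</math>
so that <math> \KL(\p_\theta, \p_0) = \frac{1}{2}\log(1+\chi^2(\p_\theta, \p_0)) </math>: the first inequality in \eqref{eq:ag} is tight up to a factor <math> 2 </math> in this case, while the second one can be exponentially loose when the means are far apart.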
In particular, we can combine \eqref{eq:ag} with the Neyman-Pearson Lemma [[#LEM:Neyman |Minimax Lower Bounds]] and Pinsker’s inequality, Lemma [[#LEM:Pinsker |Minimax Lower Bounds]], to get | |||
<math display="block"> | |||
\begin{equation*} | |||
\inf_{\psi} \p_0(\psi = 1) + \p_1(\psi = 0) \geq \frac{1}{2} (1 - \sqrt{\varepsilon}) | |||
\end{equation*} | |||
</math> | |||
for distinguishing between two hypotheses with a decision rule <math> \psi </math> if <math> \chi^2(\p_0, \p_1) \leq \varepsilon </math>.
In the following, we want to apply this observation to derive lower rates for distinguishing between | |||
<math display="block"> | |||
\begin{equation*} | |||
H_0 : Y \sim \p_0, \quad H_1 : Y \sim \p_u, \text{ for some } u \in \cB_0(k),
\end{equation*} | |||
</math> | |||
where <math> \p_u </math> denotes the probability distribution of a Gaussian sequence model <math> Y = u + \frac{\sigma}{\sqrt{n}} \xi </math>, with <math> \xi \sim \cN(0, I_d) </math>.
{{proofcard|Theorem|thm-1|Consider the detection problem | |||
<math display="block"> | |||
\begin{equation*} | |||
H_0: \theta^\ast = 0, \quad H_v : \theta^\ast = \mu v, \; \text{for some } v \in \cB_0(k), \, \|v\|_2 = 1, | |||
\end{equation*} | |||
</math> | |||
with data given by <math> Y = \theta^\ast + \frac{\sigma}{\sqrt{n}}\xi </math>, <math> \xi \sim \cN(0, I_d) </math>. | |||
There exist <math> \varepsilon > 0 </math> and <math> c_\varepsilon > 0 </math> such that if | |||
<math display="block"> | |||
\begin{equation*} | |||
\mu \leq \frac{\sigma}{2} \sqrt{\frac{k}{n}} \sqrt{\log \left( 1 + \frac{d \varepsilon}{k^2} \right)} | |||
\end{equation*} | |||
</math> | |||
then, | |||
<math display="block"> | |||
\begin{equation*} | |||
\inf_{\psi} \{ \p_0 (\psi = 1) \vee \max_{\substack{v \in \cB_0(k)\\\|v\|_2 = 1}} \p_v(\psi = 0) \} \geq \frac{1}{2}(1 - \sqrt{\varepsilon}). | |||
\end{equation*} | |||
</math> | |||
|We introduce the mixture hypothesis | |||
<math display="block"> | |||
\begin{equation*} | |||
\bar{\p} = \frac{1}{\binom{d}{k}} \sum_{\substack{S \subseteq [d]\\|S| = k}} \p_S, | |||
\end{equation*} | |||
</math> | |||
with <math> \p_S </math> being the distribution of <math> Y = \mu \1_S/\sqrt{k} + \frac{\sigma}{\sqrt{n}} \xi </math> and give lower bounds on distinguishing | |||
<math display="block"> | |||
\begin{equation*} | |||
H_0 : Y \sim \p_0, \quad H_1 : Y \sim \bar{\p}, | |||
\end{equation*} | |||
</math> | |||
by computing their <math> \chi^2 </math> distance, | |||
<math display="block"> | |||
\begin{align*} | |||
\chi^2(\bar{\p}, \p_0) = {} & \int \left( \frac{d \bar{\p}}{d \p_0} \right)^2 d \p_0 - 1\\
= {} & \frac{1}{\binom{d}{k}^2} \sum_{S, T} \int \left( \frac{d \p_S}{d \p_0} \frac{d \p_T}{d \p_0} \right) d \p_0 - 1.
\end{align*} | |||
</math> | |||
The first step is to compute <math> d \p_S / (d \p_0) </math>. | |||
Writing out the corresponding Gaussian densities yields | |||
<math display="block"> | |||
\begin{align*} | |||
\frac{d \p_S}{d \p_0}(X) = {} & \frac{\left(\frac{n}{2 \pi \sigma^2}\right)^{d/2} \exp(-\frac{n}{2 \sigma^2} \| X - \mu \1_S /\sqrt{k}\|_2^2)}{\left(\frac{n}{2 \pi \sigma^2}\right)^{d/2} \exp(-\frac{n}{2 \sigma^2} \| X \|_2^2)}\\
= {} & \exp \left( \frac{n}{2 \sigma^2} \left( \frac{2 \mu}{\sqrt{k}} \langle X , \1_S \rangle - \mu^2 \right) \right) | |||
\end{align*} | |||
</math> | |||
For convenience, we introduce the notation <math> X = \frac{\sigma}{\sqrt{n}} Z </math> for a standard normal <math> Z \sim \cN(0, I_d) </math>, <math> \nu = \mu \sqrt{n}/(\sigma \sqrt{k}) </math>, and write <math> Z_S = \sum_{i \in S} Z_i </math> for the sum of the coordinates of <math> Z </math> over the set <math> S </math>.
Multiplying two of these densities and integrating with respect to <math> \p_0 </math> in turn then reduces to computing the mgf of a Gaussian and gives | |||
<math display="block"> | |||
\begin{align*} | |||
\leadeq{\E_{X \sim \p_0} \left[ \exp \left( \frac{n}{2 \sigma^2} \left( 2 \mu \frac{\sigma}{\sqrt{kn}} (Z_S + Z_T) - 2 \mu^2 \right) \right) \right]}\\ | |||
= {} & \E_{Z \sim \cN(0, I_d)} \exp \left( \nu (Z_S + Z_T) - \nu^2 k \right). | |||
\end{align*} | |||
</math> | |||
Decomposing <math> Z_S + Z_T = 2 \sum_{i \in S \cap T} Z_i + \sum_{i \in S \Delta T} Z_i </math> and noting that <math> | S \Delta T | \leq 2 k </math>, we see that | |||
<math display="block"> | |||
\begin{align*} | |||
\E_{\p_0} \left( \frac{d \p_S}{d \p_0} \frac{d \p_T}{d \p_0} \right) | |||
= {} & \exp \left( 2 \nu^2 | S \cap T | + \frac{\nu^2}{2} | S \Delta T | - \nu^2 k \right)\\
\leq {} & \exp \left( 2 \nu^2 | S \cap T | \right). | |||
\end{align*} | |||
</math> | |||
Now, we need to take the expectation over two uniform draws of support sets <math> S </math> and <math> T </math>, which reduces via conditioning and exploiting the independence of the two draws to | |||
<math display="block"> | |||
\begin{align*} | |||
\chi^2( \bar{\p}, \p_0) \leq {} & \E_{S, T} \left[ \exp \left(2 \nu^2 | S \cap T |\right) \right] -1\\
= {} & \E_{T} \left[ \E_{S} \left[ \exp \left(2 \nu^2 | S \cap T |\right) \,\middle|\, T \right] \right] - 1\\
= {} & \E_S \left[ \exp \left(2 \nu^2 | S \cap [k] | \right) \right] - 1.
\end{align*} | |||
</math> | |||
Similar to the proof of the sparse Varshamov-Gilbert bound, Lemma [[#LEM:sVG |Minimax Lower Bounds]], the distribution of <math> | S \cap [k] | </math> is stochastically dominated by a binomial distribution <math> \Bin(k, k/d) </math>, so that | |||
<math display="block"> | |||
\begin{align*} | |||
\chi^2( \bar{\p}, \p_0) | |||
\leq {} & \E\left[ \exp\left(2 \nu^2 \Bin\left(k, \frac{k}{d}\right)\right)\right] - 1\\
= {} & \left( \E[ \exp (2 \nu^2 \Bern(k/d))] \right)^k - 1\\ | |||
= {} & \left( \e^{2 \nu^2} \frac{k}{d} + \left( 1 - \frac{k}{d} \right) \right)^k - 1\\ | |||
= {} & \left( 1 + \frac{k}{d} (\e^{2 \nu^2} - 1) \right)^k - 1 | |||
\end{align*} | |||
</math> | |||
Since <math> (1 + x)^{k} - 1 \approx k x </math> for <math> x = o(1/k) </math>, we can take <math> \nu^2 = \frac{1}{2} \log(1 + \frac{d}{k^2} \varepsilon) </math> to get a bound of the order <math> \varepsilon </math>. | |||
Plugging this back into the definition of <math> \nu </math> yields
<math display="block">
\begin{equation*}
\mu = \frac{\sigma}{\sqrt{2}} \sqrt{\frac{k}{n}} \sqrt{\log \left( 1 + \frac{d \varepsilon}{k^2} \right)}.
\end{equation*}
</math>
Since the bound on <math> \chi^2(\bar{\p}, \p_0) </math> is increasing in <math> \mu </math>, the slightly smaller threshold <math> \frac{\sigma}{2} \sqrt{\frac{k}{n}} \sqrt{\log ( 1 + \frac{d \varepsilon}{k^2} )} </math> assumed in the statement also keeps <math> \chi^2(\bar{\p}, \p_0) </math> of order <math> \varepsilon </math>.
The rate for detection for arbitrary <math> v </math> now follows from | |||
<math display="block"> | |||
\begin{align*} | |||
\p_0 (\psi = 1) \vee \max_{\substack{v \in \cB_0(k)\\\|v\|_2 = 1}} \p_v(\psi = 0) | |||
\geq {} & \p_0 (\psi = 1) \vee \bar{\p}(\psi = 0) \\ | |||
\geq {} & \frac{1}{2}(1 - \sqrt{\varepsilon}). \qedhere | |||
\end{align*} | |||
</math>}} | |||
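The following short Python snippet is an illustrative numerical check that is not part of the original text; the values of <math> d </math>, <math> k </math> and <math> \varepsilon </math> are arbitrary choices. It compares the exact expectation of <math> \exp(2\nu^2|S\cap[k]|) </math> under a uniform draw of the support <math> S </math> (a hypergeometric law) with the binomial bound used in the proof.
<syntaxhighlight lang="python">
import math

# Illustrative parameters (not taken from the text).
d, k, eps = 20, 3, 0.5
nu2 = 0.5 * math.log(1 + d * eps / k**2)  # nu^2 = (1/2) log(1 + d*eps/k^2)

# Exact law of |S ∩ [k]| for a uniform k-subset S of {1,...,d}:
# P(|S ∩ [k]| = j) = C(k, j) * C(d-k, k-j) / C(d, k).
exact = sum(
    math.comb(k, j) * math.comb(d - k, k - j) / math.comb(d, k) * math.exp(2 * nu2 * j)
    for j in range(k + 1)
)

# Binomial bound from the proof: E[exp(2 nu^2 Bin(k, k/d))] = (1 + (k/d)(e^{2 nu^2} - 1))^k.
bound = (1 + (k / d) * (math.exp(2 * nu2) - 1)) ** k

print(f"exact = {exact:.4f}, binomial bound = {bound:.4f}, exact <= bound: {exact <= bound}")
</syntaxhighlight>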
{{alert-info | | |||
If instead of the <math> \chi^2 </math> divergence, we try to compare the <math> \KL </math> divergence between <math> \p_0 </math> and <math> \bar{\p} </math>, we are tempted to exploit its convexity properties to get | |||
<math display="block"> | |||
\begin{equation*} | |||
\KL(\bar{\p}, \p_0) \leq \frac{1}{\binom{d}{k} } \sum_{S} \KL(\p_S, \p_0) = \KL(\p_{[k]}, \p_0), | |||
\end{equation*} | |||
</math> | |||
which is just the distance between two Gaussian distributions. | |||
In turn, we do not see any increase in complexity brought about by increasing the number of competing hypotheses as we did in Section [[#sec:lb-gsm |Application to the Gaussian sequence model]]. | |||
By using the convexity estimate, we have lost the granularity of controlling how much the different probability distributions overlap. | |||
This is where the <math> \chi^2 </math> divergence helped. | |||
}} | |||
{{alert-info | | |||
This rate for detection implies lower rates for estimation of both <math> \theta^\ast </math> and <math> \| \theta^\ast \|_2 </math>: | |||
If we are given an estimator <math> \widehat{T}(X) </math> for <math> T(\theta) = \| \theta \|_2 </math> whose error is less than <math> \mu/2 </math> for a given observation <math> X </math>, then we obtain a correct decision for that <math> X </math> by setting
<math display="block"> | |||
\begin{equation*} | |||
\psi(X) = \left\{ | |||
\begin{aligned} | |||
0, & \quad \text{if } \widehat{T}(X) < \frac{\mu}{2},\\ | |||
1, & \quad \text{if } \widehat{T}(X) \geq \frac{\mu}{2}. | |||
\end{aligned} | |||
\right. | |||
\end{equation*} | |||
</math> | |||
Conversely, the lower bound above means that such an error rate cannot be achieved with vanishing error probability for <math> \mu </math> smaller than the critical value.
Moreover, by the triangle inequality, the same lower bound also holds for the estimation of <math> \theta^\ast </math> itself.
We note that the rate that we obtained is slightly worse in the scaling of the log factor (<math> d/k^2 </math> instead of <math> d/k </math>) than the one obtained in Corollary [[#cor:minimax-sparse |Minimax Lower Bounds]]. | |||
This is because it applies to estimating the <math> \ell_2 </math> norm as well, which is a strictly easier problem and for which estimators can be constructed that achieve the faster rate of <math> \frac{k}{n} \log(d/k^2) </math>. | |||
}} | |||
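{{alert-info |
To spell out the implied estimation rate (a short computation that is not in the original text), the reduction above rules out a uniform squared error much smaller than <math> \mu^2 </math> whenever <math> \mu </math> is below the critical value of the theorem, so that for some constant <math> c_\varepsilon > 0 </math>,
<math display="block">
\begin{equation*}
\inf_{\hat \theta} \sup_{\theta^\ast \in \cB_0(k)} \E |\hat \theta - \theta^\ast|_2^2 \;\geq\; c_\varepsilon\, \frac{\sigma^2 k}{n} \log\Big(1 + \frac{d}{k^2}\Big)\,,
\end{equation*}
</math>
and the same bound holds for the estimation of <math> \|\theta^\ast\|_2 </math>; this is to be compared with the rate <math> \frac{\sigma^2 k}{n}\log(ed/k) </math> obtained from the sparse Varshamov-Gilbert construction.
}}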
==Lower bounds for estimating the <math> \ell_1 </math> norm via moment matching==
So far, we have seen many examples of rates that scale with <math> n^{-1} </math> for the squared error. | |||
In this section, we are going to see an example for estimating a functional that can only be estimated at a much slower rate of <math> 1/\log n </math>, following <ref name="CaiLow11">{{cite journal|last1=Cai|first1=T. Tony|last2=Low|first2=Mark G.|journal=Ann. Statist.|year=2011|title=Testing composite hypotheses, Hermite polynomials and optimal estimation of a nonsmooth functional|volume=39|number=2|publisher=The Institute of Mathematical Statistics}}</ref>. | |||
Consider the normalized <math> \ell_1 </math> norm, | |||
<math display="block"> | |||
\begin{equation} | |||
\label{eq:aw} | |||
T(\theta) = \frac{1}{n} \sum_{i = 1}^{n} | \theta_i |, | |||
\end{equation} | |||
</math> | |||
as the target functional to be estimated, and measurements | |||
<math display="block"> | |||
\begin{equation} | |||
\label{eq:at} | |||
Y \sim N(\theta, I_n). | |||
\end{equation} | |||
</math> | |||
We are going to show lower bounds for this estimation problem both when <math> \theta </math> is restricted to an <math> \ell_\infty </math> ball, <math> \Theta_n(M) = \{ \theta \in \R^n : | \theta_i | \leq M \} </math>, and for a general <math> \theta \in \R^n </math>.
===A constrained risk inequality=== | |||
We start with a general principle that is in the same spirit as the Neyman-Pearson lemma, (Lemma [[#LEM:Neyman |Minimax Lower Bounds]]), but deals with expectations rather than probabilities and uses the <math> \chi^2 </math> distance to estimate the closeness of distributions. | |||
Moreover, contrary to what we have seen so far, it allows us to compare two mixture distributions over candidate hypotheses, also known as priors. | |||
In the following, we write <math> X </math> for a random observation coming from a distribution indexed by <math> \theta \in \Theta </math>, <math> \widehat{T} = \widehat{T}(X) </math> an estimator for a function <math> T(\theta) </math>, and write <math> B(\theta) = \E_\theta \widehat{T} - T(\theta) </math> for the bias of <math> \widehat{T} </math>. | |||
Let <math> \mu_0 </math> and <math> \mu_1 </math> be two prior distributions over <math> \Theta </math> and denote by <math> m_i </math>, <math> v_i^2 </math> the means and variances of <math> T(\theta) </math> under the priors <math> \mu_i </math>, <math> i \in \{0, 1\} </math>,
<math display="block"> | |||
\begin{equation*} | |||
\label{eq:au} | |||
m_i = \int T(\theta) d \mu_i(\theta), \quad v_i^2 = \int (T(\theta) - m_i)^2 d \mu_i (\theta).
\end{equation*} | |||
</math> | |||
Moreover, write <math> F_i </math> for the marginal distributions of <math> X </math> under the priors <math> \mu_i </math> and denote their densities with respect to the average of the two probability distributions by <math> f_i </math>.
With this, we write the <math> \chi^2 </math> distance between <math> f_0 </math> and <math> f_1 </math> as | |||
<math display="block"> | |||
\begin{equation*} | |||
\label{eq:av} | |||
I^2 = \chi^2(\p_{f_1}, \p_{f_0}) = \E_{f_0} \left( \frac{f_1(X)}{f_0(X)} - 1 \right)^2 = \E_{f_0} \left( \frac{f_1(X)}{f_0(X)} \right)^2 - 1.
\end{equation*} | |||
</math> | |||
{{proofcard|Theorem|thm:constr_risk|(i) If <math> \int \E_\theta (\widehat{T}(X) - T(\theta))^2 d\mu_0(\theta) \leq \varepsilon^2 </math>, | |||
<math display="block"> | |||
\begin{equation} | |||
\label{eq:constr_risk} | |||
\left| \int B(\theta) d \mu_1(\theta) - \int B(\theta) d \mu_0(\theta) \right| \geq | m_1 - m_0| - (\varepsilon + v_0) I. | |||
\end{equation} | |||
</math> | |||
(ii) If <math> | m_1 - m_0 | > v_0 I</math> and <math> 0 \leq \lambda \leq 1 </math>, | |||
<math display="block"> | |||
\begin{equation} | |||
\label{eq:mixture_risk} | |||
\int \E_\theta( \widehat{T}(X) - T(\theta))^2 d (\lambda \mu_0 + (1- \lambda) \mu_1)(\theta) | |||
\geq \frac{\lambda(1-\lambda)(|m_1 - m_0| - v_0 I)^2}{\lambda + (1-\lambda)(I + 1)^2}, | |||
\end{equation} | |||
</math> | |||
in particular | |||
<math display="block"> | |||
\begin{equation} | |||
\label{eq:minimax_risk_mixture} | |||
\max_{i \in \{0, 1\}} \int \E_\theta( \widehat{T}(X) - T(\theta))^2 d \mu_i(\theta) \geq \frac{(|m_1 - m_0| - v_0 I)^2}{(I + 2)^2}, | |||
\end{equation} | |||
</math> | |||
and | |||
<math display="block"> | |||
\begin{equation} | |||
\label{eq:minimax_risk} | |||
\sup_{\theta \in \Theta} \E_\theta( \widehat{T}(X) - T(\theta))^2 \geq \frac{(|m_1 - m_0| - v_0 I)^2}{(I + 2)^2}. | |||
\end{equation} | |||
</math> | |||
|Without loss of generality, assume <math> m_1 \geq m_0 </math>. | |||
We start by considering the term | |||
<math display="block"> | |||
\begin{align*} | |||
\leadeq{\E_{f_0} \left[ ( \widehat{T}(X) - m_0 ) \left( \frac{f_1(X) - f_0(X)}{f_0(X)} \right) \right]}\\ | |||
= {} & \E \left[ \widehat{T}(X) \left( \frac{f_1(X) - f_0(X)}{f_0(X)} \right) \right]\\ | |||
= {} & \E_{f_1} \left[ m_1 + \widehat{T}(X) - m_1 \right] - \E_{f_0} \left[ m_0 + \widehat{T}(X) - m_0 \right]\\ | |||
= {} & m_1 + \int B(\theta) d \mu_1(\theta) - \left( m_0 + \int B(\theta) d \mu_0 (\theta) \right). | |||
\end{align*} | |||
</math> | |||
Moreover, | |||
<math display="block"> | |||
\begin{align*} | |||
\E_{f_0} (\widehat{T}(X) - m_0)^2 = {} & \int \E_\theta (\widehat{T}(X) -m_0)^2 d \mu_0(\theta)\\ | |||
= {} & \int \E_\theta (\widehat{T}(X) - T(\theta) + T(\theta) - m_0)^2 d \mu_0 (\theta)\\ | |||
= {} & \int \E_\theta (\widehat{T}(X) - T(\theta))^2 d \mu_0 ( \theta )\\ | |||
{} & + 2 \int B(\theta) (T(\theta) - m_0) d \mu_0 (\theta)\\ | |||
{} & + \int (T(\theta) - m_0)^2 d \mu_0 ( \theta )\\ | |||
\leq {} & \varepsilon^2 + 2 \varepsilon v_0 + v_0^2 = (\varepsilon + v_0)^2. | |||
\end{align*} | |||
</math> | |||
Therefore, by Cauchy-Schwarz, | |||
<math display="block"> | |||
\begin{align*} | |||
\E_{f_0} \left[ ( \widehat{T}(X) - m_0 ) \left( \frac{f_1(X) - f_0(X)}{f_0(X)} \right) \right] | |||
\leq {} & (\varepsilon + v_0) I, | |||
\end{align*} | |||
</math> | |||
and hence | |||
<math display="block"> | |||
\begin{equation*} | |||
m_1 + \int B(\theta) d \mu_1(\theta) - \left( m_0 + \int B(\theta) d \mu_0 (\theta) \right) \leq (\varepsilon + v_0) I, | |||
\end{equation*} | |||
</math> | |||
whence | |||
<math display="block"> | |||
\begin{equation*} | |||
\label{eq:ac} | |||
\int B(\theta) d \mu_0(\theta) - \int B(\theta) d \mu_1 (\theta) \geq m_1 - m_0 - (\varepsilon + v_0) I, | |||
\end{equation*} | |||
</math> | |||
which gives us \eqref{eq:constr_risk}. | |||
To estimate the risk under a mixture of <math> \mu_0 </math> and <math> \mu_1 </math>, we note that from \eqref{eq:constr_risk}, we have an estimate for the risk under <math> \mu_1 </math> given an upper bound on the risk under <math> \mu_0 </math>, which means we can reduce this problem to estimating a quadratic from below. | |||
Consider | |||
<math display="block"> | |||
\begin{equation*} | |||
J(x) = \lambda x^2 + (1-\lambda) (a - bx)^2, | |||
\end{equation*} | |||
</math> | |||
with <math> 0 < \lambda < 1 </math>, <math> a > 0 </math> and <math> b > 0 </math>. | |||
<math> J </math> is minimized by <math> x = x_{\mathrm{min}} = \frac{a b (1- \lambda)}{\lambda + b^2 (1 - \lambda)} </math> with <math> a - b x_{\mathrm{min}} > 0 </math> and <math> J(x_\mathrm{min}) = \frac{a^2 \lambda (1-\lambda)}{\lambda + b^2(1 - \lambda)} </math>. | |||
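For completeness (a one-line verification that is not spelled out in the original), setting the derivative of <math> J </math> to zero recovers this minimizer:
<math display="block">
\begin{equation*}
J'(x) = 2 \lambda x - 2 b (1-\lambda)(a - b x) = 0 \quad \Longleftrightarrow \quad x \big( \lambda + b^2 (1-\lambda) \big) = a b (1 - \lambda),
\end{equation*}
</math>
and substituting <math> x_{\mathrm{min}} </math> back into <math> J </math> gives <math> a - b x_{\mathrm{min}} = \frac{a \lambda}{\lambda + b^2(1-\lambda)} > 0 </math> and the stated value of <math> J(x_{\mathrm{min}}) </math>.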
Hence, if we add a cut-off at zero in the second term, we do not change the minimal value and obtain that | |||
<math display="block"> | |||
\begin{equation*} | |||
\lambda x^2 + (1-\lambda)((a - bx)_{+})^2 | |||
\end{equation*} | |||
</math> | |||
has the same minimum. | |||
From \eqref{eq:constr_risk} and Jensen’s inequality, setting <math> \varepsilon^2 = \int B(\theta)^2 d \mu_0(\theta) </math>, we get | |||
<math display="block"> | |||
\begin{align*} | |||
\int B(\theta)^2 d \mu_1 ( \theta) | |||
\geq {} & \left( \int B(\theta) d \mu_1 (\theta) \right)^2\\ | |||
\geq {} & ((m_1 - m_0 - (\varepsilon + v_0) I - \varepsilon)_{+})^2. | |||
\end{align*} | |||
</math> | |||
Combining this with the estimate for the quadratic above yields | |||
<math display="block"> | |||
\begin{align*} | |||
\leadeq{\int \E_\theta( \widehat{T}(X) - T(\theta))^2 d (\lambda \mu_0 + (1- \lambda) \mu_1)(\theta)} \\ | |||
\geq {} & \lambda \varepsilon^2 + (1-\lambda)((m_1 - m_0 - (\varepsilon + v_0) I - \varepsilon)_{+})^2\\ | |||
\geq {} & \frac{\lambda(1-\lambda) (|m_1 - m_0 | - v_0 I)^2}{\lambda + (1-\lambda) (I+1)^2}, | |||
\end{align*} | |||
</math> | |||
which is \eqref{eq:mixture_risk}. | |||
Finally, since the minimax risk is bigger than any mixture risk, we can bound it from below by the maximum value of this bound, which is obtained at <math> \lambda = (I+1)/(I + 2) </math> to get \eqref{eq:minimax_risk_mixture}, and by the same argument \eqref{eq:minimax_risk}.}} | |||
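As a quick sanity check (not part of the original statement), specializing to point masses <math> \mu_i = \delta_{\theta_i} </math> gives <math> m_i = T(\theta_i) </math> and <math> v_0 = 0 </math>, and \eqref{eq:minimax_risk_mixture} reduces to a two-point bound in the spirit of the Neyman-Pearson approach:
<math display="block">
\begin{equation*}
\max_{i \in \{0,1\}} \E_{\theta_i} \big( \widehat{T}(X) - T(\theta_i) \big)^2 \geq \frac{(T(\theta_1) - T(\theta_0))^2}{(I + 2)^2}, \qquad I^2 = \chi^2(\p_{\theta_1}, \p_{\theta_0}).
\end{equation*}
</math>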
===Unfavorable priors via polynomial approximation=== | |||
In order to construct unfavorable priors for the <math> \ell_1 </math> norm \eqref{eq:aw}, we resort to polynomial approximation theory. | |||
For a continuous function <math> f </math> on <math> [-1, 1] </math>, write <math> \mathcal{P}_k </math> for the set of polynomials of degree at most <math> k </math> and denote the best uniform approximation error of <math> f </math> by polynomials in <math> \mathcal{P}_k </math> by
<math display="block"> | |||
\begin{equation*} | |||
\delta_k(f) = \inf_{G \in \mathcal{P}_k} \max_{x \in [-1, 1]} | f(x) - G(x) |, | |||
\end{equation*} | |||
</math> | |||
and the polynomial attaining this error by <math> G_k^\ast </math>. | |||
For <math> f(t) = |t| </math>, it is known <ref name="Riv90">{{cite journal|last1=Cai|first1=T. Tony|last2=Low|first2=Mark G.|first3={others}|journal=The Annals of Statistics|year=2011|title=Testing Composite Hypotheses, {{Hermite}} Polynomials and Optimal Estimation of a Nonsmooth Functional|volume=39|number=2}}</ref> that | |||
<math display="block"> | |||
\begin{equation*} | |||
\beta_\ast = \lim_{k \to \infty} 2 k \delta_{2k}(f) \in (0, \infty), | |||
\end{equation*} | |||
</math> | |||
which is a very slow rate of convergence of the approximation, compared to smooth functions for which we would expect geometric convergence. | |||
We can now construct our priors using some abstract measure theoretical results. | |||
{{proofcard|Lemma|lem:minimax_priors|For an integer <math> k > 0 </math>, there are two probability measures <math> \nu_0 </math> and <math> \nu_1 </math> on <math> [-1, 1] </math> that fulfill the following: | |||
<ul><li> <math> \nu_0 </math> and <math> \nu_1 </math> are symmetric about 0; | |||
</li> | |||
<li> <math> \int t^l d \nu_1 (t) = \int t^l d \nu_0(t) </math> for all <math> l \in \{0, 1, \dots, k \} </math>; | |||
</li> | |||
<li> <math>\int |t| d \nu_1(t) - \int | t | d \nu_0(t) = 2 \delta_k </math>. | |||
</li> | |||
</ul> | |||
|The idea is to construct the measures via the Hahn-Banach theorem and the Riesz representation theorem. | |||
First, consider <math> f(t) = |t| </math> as an element of the space <math> C([-1, 1]) </math> equipped with the supremum norm and define <math> \mathcal{P}_k </math> as the space of polynomials of degree up to <math> k </math>, and <math> \mathcal{F}_k := \operatorname{span}(\mathcal{P}_k \cup \{ f \}) </math>. | |||
On <math> \mathcal{F}_k </math>, define the functional <math> T(c f + p_k) = c \delta_k </math>, which is well-defined because <math> f </math> is not a polynomial. | |||
'''Claim: ''' <math> \| T \| = \sup \{ T(g) : g \in \mathcal{F}_k, \| g \|_\infty \leq 1\} = 1 </math>. | |||
<math> \| T \| \geq 1 </math>: Let <math> G_k^\ast </math> be the best-approximating polynomial to <math> f </math> in <math> \mathcal{P}_k </math>. | |||
Then, <math> \| f - G_k^\ast \|_\infty = \delta_k </math>, <math> \| (f - G^\ast_k)/\delta_k \|_\infty = 1 </math>, and <math> T((f - G^\ast_k)/\delta_k) = 1 </math>. | |||
<math> \| T \| \leq 1 </math>: Suppose <math> g = cf + p_k </math> with <math> p_k \in \mathcal{P}_k </math>, <math> \| g \|_\infty = 1 </math> and <math> T(g) > 1 </math>. | |||
Then <math> c > 1/\delta_k </math> and | |||
<math display="block"> | |||
\begin{equation*} | |||
\| f - (-p_k/c) \|_\infty = \frac{1}{c} < \delta_k, | |||
\end{equation*} | |||
</math> | |||
contradicting the definition of <math> \delta_k </math>. | |||
Now, by the Hahn-Banach theorem, there is a norm-preserving extension of <math> T </math> to <math> C([-1, 1]) </math> which we again denote by <math> T </math>. | |||
By the Riesz representation theorem, there is a Borel signed measure <math> \tau </math> with variation equal to <math> 1 </math> such that | |||
<math display="block"> | |||
\begin{equation*} | |||
T(g) = \int_{-1}^{1} g(t) d \tau(t), \quad \text{for all } g \in C([-1, 1]). | |||
\end{equation*} | |||
</math> | |||
The Hahn-Jordan decomposition gives two positive measures <math> \tau_+ </math> and <math> \tau_- </math> such that <math> \tau = \tau_+ - \tau_- </math> and | |||
<math display="block"> | |||
\begin{align*} | |||
\int_{-1}^{1} |t| d ( \tau_+ - \tau_-)(t) = {} & \delta_k \text{ and}\\ | |||
\int_{-1}^{1} t^l d \tau_+ (t) = {} & \int_{-1}^1 t^l d \tau_-(t), \quad l \in \{0, \dots, k\}. | |||
\end{align*} | |||
</math> | |||
Since <math> f </math> is a symmetric function, we can symmetrize these measures and hence can assume that they are symmetric about <math> 0 </math>. | |||
Finally, to obtain probability measures with the desired properties, note that the moment condition with <math> l = 0 </math> gives <math> \tau_+([-1,1]) = \tau_-([-1,1]) </math>, while the total variation of <math> \tau </math> being <math> 1 </math> gives <math> \tau_+([-1,1]) + \tau_-([-1,1]) = 1 </math>, so each mass equals <math> 1/2 </math>. Hence <math> \nu_1 = 2 \tau_+ </math> and <math> \nu_0 = 2 \tau_- </math> are probability measures with the desired properties.}}
===Minimax lower bounds=== | |||
{{proofcard|Theorem|thm-2|We have the following bounds on the minimax rate for <math> T(\theta) = \frac{1}{n} \sum_{i = 1}^{n} | \theta_i| </math>: | |||
For <math> \theta \in \Theta_n(M) = \{ \theta \in \R^n : | \theta_i | \leq M \} </math>, | |||
<math display="block"> | |||
\begin{equation} | |||
\label{eq:ai} | |||
\inf_{\widehat{T}} \sup_{\theta \in \Theta_n(M)} \E_\theta( \widehat{T}(X) - T(\theta))^2 \geq \beta_\ast^2 M^2 \left( \frac{\log \log n}{\log n} \right)^2(1 + o(1)). | |||
\end{equation} | |||
</math> | |||
For <math> \theta \in \R^n </math>, | |||
<math display="block"> | |||
\begin{equation} | |||
\label{eq:aj} | |||
\inf_{\widehat{T}} \sup_{\theta \in \R^n} \E_\theta( \widehat{T}(X) - T(\theta))^2 \geq \frac{\beta_\ast^2}{16 e^2 \log n}(1 + o(1)). | |||
\end{equation} | |||
</math> | |||
The last remaining ingredient for the proof is the family of ''Hermite polynomials'', which are orthogonal with respect to the Gaussian density
<math display="block"> | |||
\begin{equation*} | |||
\label{eq:ae} | |||
\varphi(y) = \frac{1}{\sqrt{2 \pi}} \e^{-y^2/2}. | |||
\end{equation*} | |||
</math> | |||
For our purposes, it is enough to define them by the derivatives of the density, | |||
<math display="block"> | |||
\begin{equation*} | |||
\frac{d^k}{dy^k} \varphi(y) = (-1)^k H_k(y) \varphi(y), | |||
\end{equation*} | |||
</math> | |||
and to observe that they are orthogonal, | |||
<math display="block"> | |||
\begin{equation*} | |||
\int H_k^2(y) \varphi(y) dy = k!, \quad \int H_k(y) H_j(y) \varphi(y) dy = 0, \quad \text{for } k \neq j. | |||
\end{equation*} | |||
</math> | |||
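The following short Python snippet is an illustrative check that is not part of the original text; it relies on the fact that NumPy's probabilists' Hermite polynomials (module numpy.polynomial.hermite_e) coincide with the <math> H_k </math> defined above, and it verifies the orthogonality relations numerically.
<syntaxhighlight lang="python">
import math
import numpy as np
from numpy.polynomial import hermite_e as He

# Gauss quadrature nodes/weights for the weight exp(-x^2/2); 60 nodes is an
# arbitrary choice, exact for polynomials of degree up to 119.
x, w = He.hermegauss(60)
norm = math.sqrt(2 * math.pi)  # turns the weight exp(-x^2/2) into the density phi

def inner(k, j):
    """Approximate the integral of H_k(y) H_j(y) phi(y) dy."""
    Hk = He.hermeval(x, [0] * k + [1])  # coefficient vector selects the k-th basis polynomial
    Hj = He.hermeval(x, [0] * j + [1])
    return float(np.sum(w * Hk * Hj)) / norm

print(inner(5, 5), math.factorial(5))  # both approximately 120
print(inner(5, 3))                     # approximately 0
</syntaxhighlight>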
|We want to use Theorem [[#thm:constr_risk |Minimax Lower Bounds]]. | |||
To construct the prior measures, we scale the measures from Lemma [[#lem:minimax_priors |Minimax Lower Bounds]] appropriately: | |||
Let <math> k_n </math> be an even integer that is to be determined, and <math> \nu_0 </math>, <math> \nu_1 </math> the two measures given by Lemma [[#lem:minimax_priors |Minimax Lower Bounds]]. | |||
Define <math> g(x) = Mx </math> and define the measures <math> \mu_i </math> by dilating <math> \nu_i </math>, <math> \mu_i(A) = \nu_i(g^{-1}(A)) </math> for every Borel set <math> A \subseteq [-M, M] </math> and <math> i \in \{0, 1\} </math>. | |||
Hence, | |||
<ul><li> <math> \mu_0 </math> and <math> \mu_1 </math> are symmetric about 0; | |||
</li> | |||
<li> <math> \int t^l d \mu_1 (t) = \int t^l d \mu_0(t) </math> for all <math> l \in \{0, 1, \dots, k_n \} </math>; | |||
</li> | |||
<li> <math>\int |t| d \mu_1(t) - \int | t | d \mu_0(t) = 2 M \delta_{k_n} </math>. | |||
</li> | |||
</ul> | |||
To get priors for <math> n </math> i.i.d. samples, consider the product priors <math> \mu_i^n = \prod_{j=1}^n \mu_i </math>.
With this, we have | |||
<math display="block"> | |||
\begin{equation*} | |||
\E_{\mu_1^n} T(\theta) - \E_{\mu_0^n} T(\theta) = \E_{\mu_1} | \theta_1 | - \E_{\mu_0} | \theta_1 | = 2 M \delta_{k_n}, | |||
\end{equation*} | |||
</math> | |||
and | |||
<math display="block"> | |||
\begin{equation*} | |||
\E_{\mu_0^n} (T(\theta) - \E_{\mu_0^n} T(\theta))^2 = \frac{1}{n} \E_{\mu_0} \big(|\theta_1| - \E_{\mu_0} |\theta_1|\big)^2 \leq \frac{M^2}{n},
\end{equation*} | |||
</math> | |||
since each <math> \theta_i \in [-M, M] </math>. | |||
It remains to control the <math> \chi^2 </math> distance between the two marginal distributions. | |||
To this end, we will expand the Gaussian density in terms of Hermite polynomials. | |||
First, set <math> f_i(y) = \int \varphi(y-t) d \mu_i(t) </math>. | |||
Since <math> x \mapsto \exp(-x) </math> is convex and <math> \mu_0 </math> is symmetric,
<math display="block"> | |||
\begin{align*} | |||
f_0(y) | |||
\geq {} & \frac{1}{\sqrt{2 \pi}} \exp \left( - \int \frac{(y-t)^2}{2} d \mu_0 (t) \right)\\ | |||
= {} & \varphi(y) \exp \left( -\frac{1}{2} M^2 \int t^2 d\nu_0 (t) \right)\\ | |||
\geq {} & \varphi(y) \exp \left( -\frac{1}{2}M^2 \right). | |||
\end{align*} | |||
</math> | |||
Expand <math> \varphi </math> as | |||
<math display="block"> | |||
\begin{equation*} | |||
\varphi(y - t) = \sum_{k = 0}^{\infty} H_k(y) \varphi(y) \frac{t^k}{k!}. | |||
\end{equation*} | |||
</math> | |||
Then, | |||
<math display="block"> | |||
\begin{equation*} | |||
\int \frac{(f_1(y) - f_0(y))^2}{f_0(y)} dy \leq \e^{M^2/2} \sum_{k = k_n+1}^{\infty} \frac{1}{k!} M^{2k}. | |||
\end{equation*} | |||
</math> | |||
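In more detail (this computation is only implicit in the original text), since the moments of <math> \mu_0 </math> and <math> \mu_1 </math> agree up to order <math> k_n </math> and the odd moments vanish by symmetry,
<math display="block">
\begin{equation*}
f_1(y) - f_0(y) = \varphi(y) \sum_{k > k_n} \frac{H_k(y)}{k!} \int t^k \, d(\mu_1 - \mu_0)(t),
\end{equation*}
</math>
so that the orthogonality of the Hermite polynomials and the bound <math> f_0 \geq \varphi \, \e^{-M^2/2} </math> yield
<math display="block">
\begin{equation*}
\int \frac{(f_1(y) - f_0(y))^2}{f_0(y)} \, dy \leq \e^{M^2/2} \sum_{k > k_n} \frac{1}{k!} \Big( \int t^k \, d(\mu_1 - \mu_0)(t) \Big)^2 \leq \e^{M^2/2} \sum_{k > k_n} \frac{1}{k!} M^{2k},
\end{equation*}
</math>
where the last inequality uses that for even <math> k </math> both moments lie in <math> [0, M^k] </math>.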
The <math> \chi^2 </math> distance for <math> n </math> i.i.d. observations can now be easily bounded by
<math display="block"> | |||
\begin{align} | |||
I_n^2 = {} & \int \frac{(\prod_{i=1}^n f_1(y_i) - \prod_{i=1}^n f_0(y_i))^2}{\prod_{i=1}^n f_0(y_i)} d y_1 \dotsi d y_n \nonumber \\ | |||
= {} & \int \frac{(\prod_{i=1}^n f_1(y_i))^2}{\prod_{i=1}^n f_0(y_i)} d y_1 \dotsi d y_n - 1 \nonumber \\ | |||
= {} & \left(\prod_{i=1}^n \int \frac{(f_1(y_i))^2}{f_0(y_i)} d y_i\right) - 1 \nonumber \\ | |||
\leq {} & \left( 1 + \e^{M^2/2} \sum_{k = k_n+1}^{\infty} \frac{1}{k!} M^{2k}\right)^n - 1 \nonumber \\ | |||
\leq {} & \left( 1 + \e^{3M^2/2} \frac{1}{k_n!} M^{2k_n}\right)^n - 1 \nonumber \\ | |||
\leq {} & \left( 1 + \e^{3M^2/2} \left( \frac{\e M^2}{k_n} \right)^{k_n}\right)^n - 1, \label{eq:ax} | |||
\end{align} | |||
</math> | |||
where the last step used a Stirling type estimate, <math> k! > (k/e)^k </math>. | |||
Note that if <math> x = o(1/n) </math>, then <math> (1+x)^n - 1 \leq \e^{nx} - 1 = o(1) </math>, so we can choose <math> k_n \geq \log n / (\log \log n) </math> to guarantee that <math> I_n \to 0 </math> as <math> n \to \infty </math>.
With Theorem [[#thm:constr_risk |Minimax Lower Bounds]], we have | |||
<math display="block"> | |||
\begin{align*} | |||
\inf_{\widehat{T}} \sup_{\theta \in \Theta_n(M)} \E ( \widehat{T} - T(\theta) )^2 | |||
\geq {} & \frac{(2 M \delta_{k_n} - (M/\sqrt{n}) I_n)^2}{(I_n + 2)^2}\\ | |||
= {} & \beta_\ast^2 M^2 \left( \frac{\log \log n}{\log n} \right)^2 (1 + o(1)). | |||
\end{align*} | |||
</math> | |||
To prove the lower bound over <math> \R^n </math>, take <math> M = \sqrt{\log n} </math> and <math> k_n </math> to be the smallest integer such that <math> k_n \geq 2 \e \log n </math> and plug this into \eqref{eq:ax}, yielding | |||
<math display="block"> | |||
\begin{align*} | |||
I_n^2 \leq \left( 1 + n^{3/2} \left(\frac{\e \log n}{2 \e \log n}\right)^{k_n} \right)^n - 1, | |||
\end{align*} | |||
</math> | |||
which goes to zero, since <math> \left(\frac{\e \log n}{2 \e \log n}\right)^{k_n} = 2^{-k_n} \leq 2^{-2\e\log n} </math> and <math> n^{3/2}\, 2^{-2 \e \log n} = o(1/n) </math>.
Hence, we can conclude just as before.}} | |||
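To get a feel for how slow these rates are, the following Python snippet (an illustration that is not part of the original text; constants and the factors <math> \beta_\ast^2 </math>, <math> M^2 </math> are dropped) tabulates the two rate functions appearing in the theorem.
<syntaxhighlight lang="python">
import math

# Compare (log log n / log n)^2 (bounded case) with 1 / log n (unbounded case).
for n in (10**3, 10**6, 10**9, 10**12):
    bounded = (math.log(math.log(n)) / math.log(n)) ** 2
    unbounded = 1.0 / math.log(n)
    print(f"n = {n:>14d}   (loglog n / log n)^2 = {bounded:.2e}   1 / log n = {unbounded:.2e}")
</syntaxhighlight>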
Note that <ref name="CaiLow11">{{cite journal|last1=Cai|first1=T. Tony|last2=Low|first2=Mark G.|journal=Ann. Statist.|year=2011|title=Testing composite hypotheses, Hermite polynomials and optimal estimation of a nonsmooth functional|volume=39|number=2|publisher=The Institute of Mathematical Statistics}}</ref> also complement these lower bounds with upper bounds that are based on polynomial approximation, completing the picture of estimating <math> T(\theta) </math>. | |||
==Problem Set== | |||
==General references== | |||
{{cite arXiv|last1=Rigollet|first1=Philippe|last2=Hütter|first2=Jan-Christian|year=2023|title=High-Dimensional Statistics|eprint=2310.19244|class=math.ST}} | |||
==References== | |||
{{reflist}} |
\label{chap:minimax} \newcommand{\KL}{\mathsf{KL}} \newcommand{\TV}{\mathsf{TV}} In the previous chapters, we have proved several upper bounds and the goal of this chapter is to assess their optimality. Specifically, our goal is to answer the following questions:
- Can our analysis be improved? In other words: do the estimators that we have studied actually satisfy better bounds?
- Can any estimator improve upon these bounds?
Both questions ask about some form of optimality. The first one is about optimality of an estimator, whereas the second one is about optimality of a bound. The difficulty of these questions varies depending on whether we are looking for a positive or a negative answer. Indeed, a positive answer to these questions simply consists in finding a better proof for the estimator we have studied (question 1.) or simply finding a better estimator, together with a proof that it performs better (question 2.). A negative answer is much more arduous. For example, in question 2., it is a statement about all estimators. How can this be done? The answer lies in information theory (see~[1] for a nice introduction). In this chapter, we will see how to give a negative answer to question 2. It will imply a negative answer to question 1.
Optimality in a minimax sense
Consider the Gaussian Sequence Model (GSM) where we observe [math]\bY=(Y_1, \ldots, Y_d)^\top[/math], defined by
where [math]\varepsilon=(\eps_1, \ldots, \eps_d)^\top \sim \cN_d(0,\frac{\sigma^2}{n}I_d)[/math], [math]\theta^*=(\theta^*_1, \ldots, \theta^*_d)^\top \in \Theta [/math] is the parameter of interest and [math]\Theta \subset \R^d[/math] is a given set of parameters. We will need a more precise notation for probabilities and expectations throughout this chapter. Denote by [math]\p_{\theta^*}[/math] and [math]\E_{\theta^*}[/math] the probability measure and corresponding expectation that are associated to the distribution of [math]\bY[/math] from the GSM~\eqref{EQ:GSMminimax}. Recall that GSM is a special case of the linear regression model when the design matrix satisfies the ORT condition. In this case, we have proved several performance guarantees (upper bounds) for various choices of [math]\Theta[/math] that can be expressed either in the form
or the form
For some constant [math]C[/math]. The rates [math]\phi(\Theta)[/math] for different choices of [math]\Theta[/math] that we have obtained are gathered in Table~Minimax Lower Bounds together with the estimator (and the corresponding result from Chapter~Linear Regression Model) that was employed to obtain this rate. \renewcommand{\arraystretch}{2.5}
[math]\Theta[/math] | [math]\phi(\Theta)[/math] | Estimator | Result |
[math]\R^d[/math] | [math]\DS \frac{\sigma^2 d}{n}[/math] | [math]\thetals[/math] | Theorem~Linear Regression Model |
[math]\cB_1[/math] | [math]\DS\sigma \sqrt{\frac{ \log d}{n}}[/math] | [math]\thetals_{\cB_1}[/math] | Theorem~Linear Regression Model |
[math]\cB_0(k)[/math] | [math]\DS\frac{ \sigma^2 k}{n}\log (ed/k)[/math] | [math]\thetals_{\cB_0(k)}[/math] | Corollaries~Linear Regression Model-Linear Regression Model |
\renewcommand{\arraystretch}{1} Can any of these results be improved? In other words, does there exists another estimator [math]\tilde \theta[/math] such that [math]\sup_{\theta^* \in \Theta}\E|\tilde \theta -\theta^*|_2^2\ll\phi(\Theta)[/math]? A first step in this direction is the Cram\'er-Rao lower bound [2] that allows us to prove lower bounds in terms of the Fisher information. Nevertheless, this notion of optimality is too stringent and often leads to nonexistence of optimal estimators. Rather, we prefer here the notion of minimax optimality that characterizes how fast [math]\theta^*[/math] can be estimated uniformly over [math]\Theta[/math].
We say that an estimator [math]\hat \theta_n[/math] is minimax optimal over [math]\Theta[/math] if it satisfies \eqref{EQ:minimaxUB_E} and there exists [math]C' \gt 0[/math] such that
Note that minimax rates of convergence [math]\phi[/math] are defined up to multiplicative constants. We may then choose this constant such that the minimax rate has a simple form such as [math]\sigma^2d/n[/math] as opposed to [math]7\sigma^2d/n[/math] for example. This definition can be adapted to rates that hold with high probability. As we saw in Chapter~Linear Regression Model (Cf. Table~Minimax Lower Bounds), the upper bounds in expectation and those with high probability are of the same order of magnitude. It is also the case for lower bounds. Indeed, observe that it follows from the Markov inequality that for any [math]A \gt 0[/math],
We say that an estimator [math]\hat \theta[/math] is minimax optimal over [math]\Theta[/math] if it satisfies either \eqref{EQ:minimaxUB_E} or \eqref{EQ:minimaxUB_P} and there exists [math]C' \gt 0[/math] such that
Reduction to finite hypothesis testing
Minimax lower bounds rely on information theory and follow from a simple principle: if the number of observations is too small, it may be hard to distinguish between two probability distributions that are close to each other. For example, given [math]n[/math] i.i.d. observations, it is impossible to reliably decide whether they are drawn from [math]\cN(0,1)[/math] or [math]\cN(\frac1n,1)[/math]. This simple argument can be made precise using the formalism of statistical hypothesis testing. To do so, we reduce our estimation problem to a testing problem. The reduction consists of two steps.
- Reduction to a finite number of parameters. In this step the goal is to find the largest possible number of parameters [math]\theta_1, \ldots, \theta_M \in \Theta[/math] under the constraint that
[[math]] \begin{equation} \label{EQ:dist_constraint} |\theta_j -\theta_k|_2^2 \ge 4 \phi(\Theta)\,. \end{equation} [[/math]]This problem boils down to a packing of the set [math]\Theta[/math]. Then we can use the following trivial observations:[[math]] \inf_{\hat \theta}\sup_{\theta \in \Theta}\p_\theta\big[|\hat \theta- \theta|_2^2 \gt \phi(\Theta)\big] \ge \inf_{\hat \theta}\max_{1\le j \le M}\p_{\theta_j}\big[|\hat \theta- \theta_j|_2^2 \gt \phi(\Theta)\big]\,. [[/math]]
- Reduction to a hypothesis testing problem. In this second step, the necessity of the constraint~\eqref{EQ:dist_constraint} becomes apparent.
For any estimator [math]\hat \theta[/math], define the minimum distance test [math]\psi(\hat \theta)[/math] that is associated to it by
[[math]] \psi(\hat \theta)=\argmin_{1\le j \le M} |\hat \theta -\theta_j|_2\,, [[/math]]with ties broken arbitrarily. Next observe that if, for some [math]j=1, \ldots, M[/math], [math]\psi(\hat \theta)\neq j[/math], then there exists [math]k \neq j[/math] such that [math]|\hat \theta -\theta_k|_2 \le |\hat \theta -\theta_j|_2[/math]. Together with the reverse triangle inequality it yields[[math]] |\hat \theta - \theta_j|_2 \ge |\theta_j -\theta_k|_2- |\hat \theta - \theta_k|_2 \ge |\theta_j -\theta_k|_2- |\hat \theta - \theta_j|_2 [[/math]]so that[[math]] |\hat \theta - \theta_j|_2 \ge \frac12 |\theta_j -\theta_k|_2 [[/math]]Together with constraint~\eqref{EQ:dist_constraint}, it yields[[math]] |\hat \theta - \theta_j|_2^2 \ge \phi(\Theta) [[/math]]As a result,[[math]] \begin{align*} \inf_{\hat \theta}\max_{1\le j \le M}\p_{\theta_j}\big[|\hat \theta- \theta_j|_2^2 \gt \phi(\Theta)\big]&\ge \inf_{\hat \theta}\max_{1\le j \le M}\p_{\theta_j}\big[\psi(\hat \theta)\neq j\big]\\ &\ge \inf_{\psi}\max_{1\le j \le M}\p_{\theta_j}\big[\psi\neq j\big] \end{align*} [[/math]]where the infimum is taken over all tests [math]\psi[/math] based on [math]\bY[/math] and that take values in [math]\{1, \ldots, M\}[/math].
\textsc{Conclusion:} it is sufficient for proving lower bounds to find [math]\theta_1, \ldots, \theta_M \in \Theta[/math] such that [math]|\theta_j -\theta_k|_2^2 \ge 4 \phi(\Theta)[/math] and
Lower bounds based on two hypotheses
The Neyman-Pearson Lemma and the total variation distance
Consider two probability measures [math]\p_0[/math] and [math]\p_1[/math] and observations [math]X[/math] drawn from either [math]\p_0[/math] or [math]\p_1[/math]. We want to know which distribution [math]X[/math] comes from. It corresponds to the following statistical hypothesis problem:
Let [math]\p_0[/math] and [math]\p_1[/math] be two probability measures. Then for any test [math]\psi[/math], it holds
Observe first that
The lower bound in the Neyman-Pearson lemma is related to a well-known quantity: the total variation distance. \begin{propdef} \label{PROP:TV} The total variation distance between two probability measures [math]\p_0[/math] and [math]\p_1[/math] on a measurable space [math](\cX, \cA)[/math] is defined by
The Kullback-Leibler divergence
The Kullback-Leibler divergence between probability measures [math]\p_1[/math] and [math]\p_0[/math] is given by
It can be shown [4] that the integral is always well defined when [math]\p_1 \ll \p_0[/math] (though it can be equal to [math]\infty[/math] even in this case). Unlike the total variation distance, the Kullback-Leibler divergence is not a distance. Actually, it is not even symmetric. Nevertheless, it enjoys properties that are very useful for our purposes.
Let [math]\p[/math] and [math]\q[/math] be two probability measures. Then
- [math]\KL(\p, \q)\ge 0[/math].
- The function [math](\p, \q) \mapsto \KL(\p,\q)[/math] is convex.
- If [math]\p[/math] and [math]\q[/math] are product measures, i.e.,
[[math]] \p=\bigotimes_{i=1}^n \p_i\quad \text{and} \quad \q=\bigotimes_{i=1}^n \q_i [[/math]]then[[math]] \KL(\p,\q)=\sum_{i=1}^n \KL(\p_i,\q_i)\,. [[/math]]
If [math]\p[/math] is not absolutely continuous then the result is trivial. Next, assume that [math]\p \ll \q[/math] and let [math]X \sim \p[/math]. 1. Observe that by Jensen's inequality,
2. Consider the function [math]f:(x,y) \mapsto x\log(x/y)[/math] and compute its Hessian:
Point 2. in Proposition~Minimax Lower Bounds is particularly useful in statistics where observations typically consist of [math]n[/math] independent random variables.Example \label{ex:KL_Gauss} For any [math]\theta \in \R^d[/math], let [math]P_\theta[/math] denote the distribution of [math]\bY \sim \cN(\theta, \sigma^2I_d)[/math]. Then
The Kullback-Leibler divergence is easier to manipulate than the total variation distance but only the latter is related to the minimax probability of error. Fortunately, these two quantities can be compared using Pinsker's inequality. We prove here a slightly weaker version of Pinsker's inequality that will be sufficient for our purpose. For a stronger statement, see [4], Lemma~2.5.
Let [math]\p[/math] and [math]\q[/math] be two probability measures such that [math]\p \ll \q[/math]. Then
Note that
Pinsker's inequality yields the following theorem for the GSM.
Assume that [math]\Theta[/math] contains two hypotheses [math]\theta_0[/math] and [math]\theta_1[/math] such that [math]|\theta_0-\theta_1|_2^2 = 8\alpha^2\sigma^2/n[/math] for some [math]\alpha \in (0,1/2)[/math]. Then
Write for simplicity [math]\p_j=\p_{\theta_j}[/math], [math]j=0,1[/math]. Recall that it follows from the reduction to hypothesis testing that
Clearly, the result of Theorem~Minimax Lower Bounds matches the upper bound for [math]\Theta=\R^d[/math] only for [math]d=1[/math]. How about larger [math]d[/math]? A quick inspection of our proof shows that our technique, in its present state, cannot yield better results. Indeed, there are only two known candidates for the choice of [math]\theta^*[/math]. With this knowledge, one can obtain upper bounds that do not depend on [math]d[/math] by simply projecting [math]Y[/math] onto the linear span of [math]\theta_0, \theta_1[/math] and then solving the GSM in two dimensions. To obtain larger lower bounds, we need to use more than two hypotheses. In particular, in view of the above discussion, we need a set of hypotheses that spans a linear space of dimension proportional to [math]d[/math]. In principle, we should need at least order [math]d[/math] hypotheses but we will actually need much more.
Lower bounds based on many hypotheses
The reduction to hypothesis testing from Section~Reduction to finite hypothesis testing allows us to use more than two hypotheses. Specifically, we should find [math]\theta_1, \ldots, \theta_M[/math] such that
Let [math]P_1, \ldots, P_M, M \ge 2[/math] be probability distributions such that [math]P_j \ll P_k[/math], [math]\forall \, j ,k[/math]. Then
Define
Fano's inequality leads to the following useful theorem.
Assume that [math]\Theta[/math] contains [math]M \ge 5[/math] hypotheses [math]\theta_1, \ldots, \theta_M[/math] such that for some constant [math]0 \lt \alpha \lt 1/4[/math], it holds
- [math]\DS|\theta_j-\theta_k|_2^2 \ge 4\phi[/math]
- [math]\DS |\theta_j-\theta_k|_2^2 \le \frac{2\alpha\sigma^2 }{n}\log(M)[/math]
Then
in view of (i), it follows from the reduction to hypothesis testing that it is sufficient to prove that
Theorem~Minimax Lower Bounds indicates that we must take [math]\phi\le \frac{\alpha\sigma^2 }{2n}\log(M)[/math]. Therefore, the larger the [math]M[/math], the larger the lower bound can be. However, [math]M[/math] cannot be arbitrary larger because of the constraint [math](i)[/math]. We are therefore facing a packing problem where the goal is to ‘`pack" as many Euclidean balls of radius proportional to [math]\sigma\sqrt{\log(M)/n}[/math] in [math]\Theta[/math] under the constraint that their centers remain close together (constraint [math](ii)[/math]). If [math]\Theta=\R^d[/math], this the goal is to pack the Euclidean ball of radius [math]R=\sigma\sqrt{2\alpha\log(M)/n}[/math] with Euclidean balls of radius [math]R\sqrt{2\alpha/\gamma}[/math]. This can be done using a volume argument (see Problem~Minimax Lower Bounds). However, we will use the more versatile lemma below. It gives a a lower bound on the size of a packing of the discrete hypercube [math]\{0,1\}^d[/math] with respect to the ’'Hamming distance defined by
For any [math]\gamma \in (0,1/2)[/math], there exist binary vectors [math]\omega_{1}, \ldots \omega_{M} \in \{0,1\}^d[/math] such that
- [math]\DS\rho(\omega_{j}, \omega_{k})\ge \big(\frac12-\gamma\big)d[/math] for all [math]j \neq k[/math]\,,
- [math]\DS M =\lfloor e^{\gamma^2d}\rfloor\ge e^{\frac{\gamma^2d}{2}}[/math]\,.
Let [math]\omega_{j,i}[/math], [math]1\le i \le d, 1\le j \le M[/math] be \iid Bernoulli random variables with parameter [math]1/2[/math] and observe that
Application to the Gaussian sequence model
We are now in a position to apply Theorem~Minimax Lower Bounds by choosing [math]\theta_1, \ldots, \theta_M[/math] based on [math]\omega_{1}, \ldots, \omega_{M}[/math] from the Varshamov-Gilbert Lemma.
Lower bounds for estimation
Take [math]\gamma=1/4[/math] and apply the Varshamov-Gilbert Lemma to obtain [math]\omega_{1}, \ldots, \omega_{M}[/math] with [math]M=\lfloor e^{d/16}\rfloor\ge e^{d/32} [/math] and such that [math]\rho(\omega_{j}, \omega_{k})\ge d/4[/math] for all [math]j \neq k[/math]. Let [math]\theta_{1},\ldots, \theta_M[/math] be such that
- [math]\DS|\theta_j-\theta_k|_2^2=\frac{\beta^2\sigma^2 }{n}\rho(\omega_{j}, \omega_{k}) \ge 4\frac{\beta^2\sigma^2 d}{16n}[/math];
- [math]\DS |\theta_j-\theta_k|_2^2 =\frac{\beta^2\sigma^2 }{n}\rho(\omega_{j}, \omega_{k}) \le \frac{\beta^2\sigma^2 d}{n}\le \frac{32\beta^2\sigma^2 }{n}\log(M)=\frac{2\alpha \sigma^2}{n}\log(M)[/math]\,,
for [math]\beta=\frac{\sqrt{\alpha}}{4}[/math]. Applying now Theorem~Minimax Lower Bounds yields
The minimax rate of estimation of over [math]\R^d[/math] in the Gaussian sequence model is [math]\phi(\R^d)=\sigma^2d/n[/math]. Moreover, it is attained by the least squares estimator [math]\thetals=\bY[/math].
Note that this rate is minimax over sets [math]\Theta[/math] that are strictly smaller than [math]\R^d[/math] (see Problem~Minimax Lower Bounds). Indeed, it is minimax over any subset of [math]\R^d[/math] that contains [math]\theta_1, \ldots, \theta_M[/math].
Lower bounds for sparse estimation
It appears from Table~Minimax Lower Bounds that when estimating sparse vectors, we have to pay for an extra logarithmic term [math]\log(ed/k)[/math] for not knowing the sparsity pattern of the unknown [math]\theta^*[/math]. In this section, we show that this term is unavoidable as it appears in the minimax optimal rate of estimation of sparse vectors. Note that the vectors [math]\theta_1, \ldots, \theta_M[/math] employed in the previous subsection are not guaranteed to be sparse because the vectors [math]\omega_1, \ldots, \omega_M[/math] obtained from the Varshamov-Gilbert Lemma may themselves not be sparse. To overcome this limitation, we need a sparse version of the Varhsamov-Gilbert lemma.
There exist positive constants [math]C_1[/math] and [math]C_2[/math] such that the following holds for any two integers [math]k[/math] and [math]d[/math] such that [math]1\le k \le d/8[/math]. There exist binary vectors [math]\omega_{1}, \ldots \omega_{M} \in \{0,1\}^d[/math] such that
- [math]\DS\rho(\omega_{i}, \omega_{j})\ge \frac{k}2[/math] for all [math]i \neq j[/math]\,,
- [math]\DS \log(M) \ge \frac{k}{8}\log(1+\frac{d}{2k})[/math]\,.
- [math]\DS |\omega_j|_0=k[/math] for all [math]j[/math]\,.
Take [math]\omega_1, \ldots, \omega_M[/math] independently and uniformly at random from the set
Apply the sparse Varshamov-Gilbert lemma to obtain [math]\omega_{1}, \ldots, \omega_{M}[/math] with [math]\log(M)\ge \frac{k}{8}\log(1+\frac{d}{2k})[/math] and such that [math]\rho(\omega_{j}, \omega_{k})\ge k/2[/math] for all [math]j \neq k[/math]. Let [math]\theta_{1},\ldots, \theta_M[/math] be such that
- [math]\DS|\theta_j-\theta_k|_2^2=\frac{\beta^2\sigma^2 }{n}\rho(\omega_{j}, \omega_{k})\log(1+\frac{d}{2k}) \ge 4\frac{\beta^2\sigma^2 }{8n}k\log(1+\frac{d}{2k})[/math];
- [math]\DS |\theta_j-\theta_k|_2^2 =\frac{\beta^2\sigma^2 }{n}\rho(\omega_{j}, \omega_{k})\log(1+\frac{d}{2k}) \le \frac{2k\beta^2\sigma^2 }{n}\log(1+\frac{d}{2k})\le \frac{2\alpha \sigma^2}{n}\log(M)[/math]\,,
for [math]\beta=\sqrt{\frac{\alpha}{8}}[/math]. Applying now Theorem~Minimax Lower Bounds yields
Recall that [math]\cB_{0}(k) \subset \R^d[/math] denotes the set of all [math]k[/math]-sparse vectors of [math]\R^d[/math]. The minimax rate of estimation over [math]\cB_{0}(k)[/math] in the Gaussian sequence model is [math]\phi(\cB_{0}(k))=\frac{\sigma^2k}{n}\log(ed/k)[/math]. Moreover, it is attained by the constrained least squares estimator [math]\thetals_{\cB_0(k)}[/math].
Note that the modified BIC estimator of Problem~Linear Regression Model and the Slope eq:ad are also minimax optimal over [math]\cB_{0}(k)[/math]. However, unlike [math] \thetals_{\cB_0(k)} [/math] and the modified BIC, the Slope is also adaptive to [math] k [/math]. For any [math]\eps \gt 0[/math], the Lasso estimator and the BIC estimator are minimax optimal for sets of parameters such that [math]k\le d^{1-\eps}[/math].
Lower bounds for estimating vectors in [math]\ell_1[/math] balls
Recall that in Maurey's argument, we approximated a vector [math]\theta[/math] such that [math]|\theta|_1=R[/math] by a vector [math]\theta'[/math] such that [math]|\theta'|_0=\frac{R}{\sigma}\sqrt{\frac{n}{\log d}}[/math]\,. We can essentially do the same for the lower bound. Assume that [math]d \ge \sqrt{n}[/math] and let [math]\beta \in (0,1)[/math] be a parameter to be chosen later and define [math]k[/math] to be the smallest integer such that
- [math]\DS|\theta_j-\theta_k|_2^2=\frac{R^2}{k^2}\rho(\omega_{j}, \omega_{k})\ge \frac{R^2}{2k}\ge 4R\min\big(\frac{R}8, \beta^2\sigma\frac{\log (ed/\sqrt{n})}{8n}\big)[/math]\,.
- [math]\DS |\theta_j-\theta_k|_2^2 \le \frac{2R^2}{k} \le 4R\beta\sigma\sqrt{\frac{\log(ed/\sqrt{n})}{n}}\le \frac{2\alpha \sigma^2}{n}\log(M)[/math]\,,
for [math]\beta[/math] small enough if [math]d \ge Ck[/math] for some constant [math]C \gt 0[/math] chosen large enough. Applying now Theorem~Minimax Lower Bounds yields
Recall that [math]\cB_{1}(R) \subset \R^d[/math] denotes the set vectors [math]\theta \in \R^d[/math] such that [math]|\theta|_1 \le R[/math]. Then there exist a constant [math]C \gt 0[/math] such that if [math]d \ge n^{1/2+\eps}[/math], [math]\eps \gt 0[/math], the minimax rate of estimation over [math]\cB_{1}(R)[/math] in the Gaussian sequence model is
Moreover, it is attained by the constrained least squares estimator [math]\thetals_{\cB_1(R)}[/math] if [math]R\ge \sigma\frac{\log d}{n}[/math] and by the trivial estimator [math]\hat \theta =0[/math] otherwise.
To complete the proof of the statement, we need to study the risk of the trivial estimator equal to zero for small [math]R[/math]. Note that if [math]|\theta^*|_1\le R[/math], we have
Note that the inequality [math]|\theta^*|_2^2\le |\theta^*|_1^2[/math] appears to be quite loose. Nevertheless, it is tight up to a multiplicative constant for the vectors of the form [math]\theta_j=\omega_j\frac{R}{k}[/math] that are employed in the lower bound. Indeed, if [math]R\le \sigma\frac{\log d}{n}[/math], we have [math]k\le 2/\beta[/math]
Lower bounds for sparse estimation via [math] \chi^2 [/math] divergence
In this section, we will show how to derive lower bounds by directly controlling the distance between a simple and a composite hypothesis, which is useful when investigating lower rates for decision problems instead of estimation problems. Define the [math] \chi^2 [/math] divergence between two probability distributions [math] \p, \q [/math] as
Secondly, we note that we can bound the KL divergence from above by the [math] \chi^2 [/math] divergence, via Jensen’s inequality, as
Consider the detection problem
We introduce the mixture hypothesis
Now, we need to take the expectation over two uniform draws of support sets [math] S [/math] and [math] T [/math], which reduces via conditioning and exploiting the independence of the two draws to
If, instead of the [math] \chi^2 [/math] divergence, we try to bound the [math] \KL [/math] divergence between [math] \p_0 [/math] and [math] \bar{\p} [/math] directly, we are tempted to exploit its convexity properties to get
This rate for detection implies lower rates for estimation of both [math] \theta^\ast [/math] and [math] \| \theta^\ast \|_2 [/math]: if we are given an estimator [math] \widehat{T}(X) [/math] for [math] T(\theta) = \| \theta \|_2 [/math] that achieves an error less than [math] \mu/2 [/math] for some [math] X [/math], then we obtain a correct decision rule for those [math] X [/math] by setting
Lower bounds for estimating the [math] \ell_1 [/math] norm via moment matching
So far, we have seen many examples of rates that scale with [math] n^{-1} [/math] for the squared error. In this section, we are going to see an example of a functional that can only be estimated at the much slower rate of [math] 1/\log n [/math], following [5]. Consider the normalized [math] \ell_1 [/math] norm,
A constrained risk inequality
We start with a general principle that is in the same spirit as the Neyman-Pearson lemma (Lemma Minimax Lower Bounds), but deals with expectations rather than probabilities and uses the [math] \chi^2 [/math] distance to measure the closeness of distributions. Moreover, contrary to what we have seen so far, it allows us to compare two mixture distributions over candidate hypotheses, also known as priors. In the following, we write [math] X [/math] for a random observation coming from a distribution indexed by [math] \theta \in \Theta [/math], write [math] \widehat{T} = \widehat{T}(X) [/math] for an estimator of a function [math] T(\theta) [/math], and write [math] B(\theta) = \E_\theta \widehat{T} - T(\theta) [/math] for the bias of [math] \widehat{T} [/math]. Let [math] \mu_0 [/math] and [math] \mu_1 [/math] be two prior distributions over [math] \Theta [/math] and denote by [math] m_i [/math] and [math] v_i^2 [/math] the mean and variance of [math] T(\theta) [/math] under the prior [math] \mu_i [/math], [math] i \in \{0, 1\} [/math],
(i) If [math] \int \E_\theta (\widehat{T}(X) - T(\theta))^2 d\mu_0(\theta) \leq \varepsilon^2 [/math],
(ii) If [math] | m_1 - m_0 | \gt v_0 I[/math] and [math] 0 \leq \lambda \leq 1 [/math],
Without loss of generality, assume [math] m_1 \geq m_0 [/math]. We start by considering the term
Finally, since the minimax risk is at least as large as any mixture risk, we can bound it from below by the maximum value of this bound, which is attained at [math] \lambda = (I+1)/(I + 2) [/math]; this yields \eqref{eq:minimax_risk_mixture}, and the same argument gives \eqref{eq:minimax_risk}.
Unfavorable priors via polynomial approximation
In order to construct unfavorable priors for the [math] \ell_1 [/math] norm \eqref{eq:aw}, we resort to polynomial approximation theory. For a continuous function [math] f [/math] on [math] [-1, 1] [/math], denote the error of best uniform approximation by polynomials of degree at most [math] k [/math], whose class we write as [math] \mathcal{P}_k [/math], by
For an integer [math] k \gt 0 [/math], there are two probability measures [math] \nu_0 [/math] and [math] \nu_1 [/math] on [math] [-1, 1] [/math] that fulfill the following:
- [math] \nu_0 [/math] and [math] \nu_1 [/math] are symmetric about 0;
- [math] \int t^l d \nu_1 (t) = \int t^l d \nu_0(t) [/math] for all [math] l \in \{0, 1, \dots, k \} [/math];
- [math]\int |t| d \nu_1(t) - \int | t | d \nu_0(t) = 2 \delta_k [/math].
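As a concrete illustration added here (it is not part of the source lemma): for [math] k = 1 [/math], the best uniform approximation of [math] |t| [/math] on [math] [-1,1] [/math] by polynomials of degree at most one is the constant [math] 1/2 [/math], so [math] \delta_1 = 1/2 [/math]. Taking [math] \nu_0 [/math] to be the point mass at [math] 0 [/math] and [math] \nu_1 [/math] the uniform distribution on [math] \{-1,1\} [/math], both measures are symmetric about 0, they match the moments [math] \int t^0 d\nu_1 = \int t^0 d\nu_0 = 1 [/math] and [math] \int t\, d\nu_1 = \int t\, d\nu_0 = 0 [/math], and [math] \int |t|\, d\nu_1(t) - \int |t|\, d\nu_0(t) = 1 - 0 = 2\delta_1 [/math].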
The idea is to construct the measures via the Hahn-Banach theorem and the Riesz representation theorem. First, consider [math] f(t) = |t| [/math] as an element of the space [math] C([-1, 1]) [/math] equipped with the supremum norm, recall that [math] \mathcal{P}_k [/math] denotes the space of polynomials of degree up to [math] k [/math], and set [math] \mathcal{F}_k := \operatorname{span}(\mathcal{P}_k \cup \{ f \}) [/math]. On [math] \mathcal{F}_k [/math], define the functional [math] T(c f + p_k) = c \delta_k [/math], which is well-defined because [math] f [/math] is not a polynomial. We claim that [math] \| T \| = \sup \{ T(g) : g \in \mathcal{F}_k, \| g \|_\infty \leq 1\} = 1 [/math]. To see that [math] \| T \| \geq 1 [/math], let [math] G_k^\ast [/math] be the best-approximating polynomial to [math] f [/math] in [math] \mathcal{P}_k [/math]. Then [math] \| f - G_k^\ast \|_\infty = \delta_k [/math], so [math] \| (f - G^\ast_k)/\delta_k \|_\infty = 1 [/math] and [math] T((f - G^\ast_k)/\delta_k) = 1 [/math]. To see that [math] \| T \| \leq 1 [/math], suppose [math] g = cf + p_k [/math] with [math] p_k \in \mathcal{P}_k [/math], [math] \| g \|_\infty = 1 [/math] and [math] T(g) \gt 1 [/math]. Then [math] c \gt 1/\delta_k [/math] and
Minimax lower bounds
We have the following bounds on the minimax rate for [math] T(\theta) = \frac{1}{n} \sum_{i = 1}^{n} | \theta_i| [/math]: For [math] \theta \in \Theta_n(M) = \{ \theta \in \R^n : | \theta_i | \leq M, \ i = 1, \dots, n \} [/math],
We want to use Theorem Minimax Lower Bounds. To construct the prior measures, we scale the measures from Lemma Minimax Lower Bounds appropriately: Let [math] k_n [/math] be an even integer that is to be determined, and [math] \nu_0 [/math], [math] \nu_1 [/math] the two measures given by Lemma Minimax Lower Bounds. Define [math] g(x) = Mx [/math] and define the measures [math] \mu_i [/math] by dilating [math] \nu_i [/math], [math] \mu_i(A) = \nu_i(g^{-1}(A)) [/math] for every Borel set [math] A \subseteq [-M, M] [/math] and [math] i \in \{0, 1\} [/math]. Hence,
- [math] \mu_0 [/math] and [math] \mu_1 [/math] are symmetric about 0;
- [math] \int t^l d \mu_1 (t) = \int t^l d \mu_0(t) [/math] for all [math] l \in \{0, 1, \dots, k_n \} [/math];
- [math]\int |t| d \mu_1(t) - \int | t | d \mu_0(t) = 2 M \delta_{k_n} [/math].
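A short verification added here of why the dilated measures inherit these properties from [math] \nu_0, \nu_1 [/math]: since [math] g(x) = Mx [/math] is linear, a change of variables gives, for [math] i \in \{0,1\} [/math] and [math] 0 \le l \le k_n [/math], [math] \int t^l\, d\mu_i(t) = \int (Ms)^l\, d\nu_i(s) = M^l \int s^l\, d\nu_i(s) [/math], so the matching moments of [math] \nu_0 [/math] and [math] \nu_1 [/math] up to degree [math] k_n [/math] carry over to [math] \mu_0 [/math] and [math] \mu_1 [/math]; in the same way, [math] \int |t|\, d\mu_1(t) - \int |t|\, d\mu_0(t) = M \big( \int |s|\, d\nu_1(s) - \int |s|\, d\nu_0(s) \big) = 2 M \delta_{k_n} [/math].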
To get priors for [math] n [/math] i.i.d. samples, consider the product priors [math] \mu_i^n = \prod_{j=1}^n \mu_i [/math]. With this, we have
To prove the lower bound over [math] \R^n [/math], take [math] M = \sqrt{\log n} [/math] and [math] k_n [/math] to be the smallest integer such that [math] k_n \geq 2 e \log n [/math] and plug this into \eqref{eq:ax}, yielding
Note that the authors of [5] also complement these lower bounds with upper bounds based on polynomial approximation, completing the picture of estimating [math] T(\theta) [/math].
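As a piece of background added here (the source only points to the citation), the reason polynomial approximation is also useful for the upper bounds is that monomials of the mean admit unbiased estimators under Gaussian noise: for [math] X \sim \cN(\theta, 1) [/math], the Hermite polynomials [math] H_l [/math] in the probabilists' convention (e.g. [math] H_2(x) = x^2 - 1 [/math]) satisfy [math] \E_\theta H_l(X) = \theta^l [/math], so a good polynomial approximation of [math] |t| [/math] on the relevant range can be turned into a nearly unbiased estimator of [math] T(\theta) [/math].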
Problem Set
General references
Rigollet, Philippe; Hütter, Jan-Christian (2023). "High-Dimensional Statistics". arXiv:2310.19244 [math.ST].
References
- Birgé, Lucien (1983). "Approximation dans les espaces métriques et théorie de l'estimation". Z. Wahrsch. Verw. Gebiete 65.
- Birgé, Lucien (1983). "Approximation dans les espaces métriques et théorie de l'estimation". Z. Wahrsch. Verw. Gebiete 65.
- Birgé, Lucien (1983). "Approximation dans les espaces métriques et théorie de l'estimation". Z. Wahrsch. Verw. Gebiete 65.
- Le Cam, L. (1973). "Convergence of estimates under dimensionality restrictions". Ann. Statist. 1.
- 5.0 5.1 "Testing composite hypotheses, Hermite polynomials and optimal estimation of a nonsmooth functional" (2011). Ann. Statist. 39. The Institute of Mathematical Statistics.
- "Testing Composite Hypotheses, Template:Hermite Polynomials and Optimal Estimation of a Nonsmooth Functional" (2011). The Annals of Statistics 39.