By Bot
May 31'24

Exercise

[math] \newcommand{\smallfrac}[2]{\frac{#1}{#2}} \newcommand{\medfrac}[2]{\frac{#1}{#2}} \newcommand{\textfrac}[2]{\frac{#1}{#2}} \newcommand{\tr}{\operatorname{tr}} \newcommand{\e}{\operatorname{e}} \newcommand{\B}{\operatorname{B}} \newcommand{\Bbar}{\overline{\operatorname{B}}} \newcommand{\pr}{\operatorname{pr}} \newcommand{\dd}{\operatorname{d}\hspace{-1pt}} \newcommand{\E}{\operatorname{E}} \newcommand{\V}{\operatorname{V}} \newcommand{\Cov}{\operatorname{Cov}} \newcommand{\Bigsum}[2]{\mathop{\textstyle\sum}_{#1}^{#2}} \newcommand{\ran}{\operatorname{ran}} \newcommand{\card}{\#} \renewcommand{\P}{\operatorname{P}} \renewcommand{\L}{\operatorname{L}} \newcommand{\mathds}{\mathbb}[/math]

(Classic example for a Chernoff bound) Let [math]Y_1,\dots,Y_n[/math] be independent Bernoulli random variables with [math]\P[Y_i=1]=p\in[0,1][/math] and [math]Y=Y_1+\cdots+Y_n[/math]. Let [math]\delta \gt 0[/math].

  • Show that [math]\E(\exp(tY_i))\leqslant\exp(p(\exp(t)-1))[/math] holds for every [math]t \gt 0[/math].
  • Use the Lemma to conclude the following classic Chernoff bound
    [[math]] \P\bigl[Y\geqslant(1+\delta)np\bigr]\leqslant\Bigl(\smallfrac{\e^{\delta}}{(1+\delta)^{1+\delta}}\Bigr)^{np}. [[/math]]
    Hint: It is often not necessary to compute the infimum in the Lemma explicitly. Here, one can, for example, simply choose [math]t=\log(1+\delta)[/math].
  • Assume you are rolling a fair die [math]n[/math] times. Apply (ii) to estimate the probability of rolling a six in at least 70% of the rolls.
  • Compare the estimate of (ii) with what you get when instead applying the Markov bound and the Chebyshev bound, respectively. Run a simulation of the experiment to test how tight the predictions of the three bounds are (see the sketches after this list).
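
A minimal Python sketch (not part of the original exercise) that evaluates the three bounds in the die-rolling setting of (iii) and (iv); the sample size [math]n=100[/math] is an assumption, since the exercise leaves [math]n[/math] unspecified. With [math]p=1/6[/math] and the event [math]Y\geqslant 0.7n[/math] one has [math]\delta=0.7/p-1=3.2[/math].

<syntaxhighlight lang="python">
import math

# Assumed parameters: n = 100 rolls of a fair die, p = 1/6, event Y >= 0.7*n.
n = 100
p = 1 / 6
a = 0.7 * n                  # threshold (1 + delta) * n * p
mu = n * p                   # E[Y]
var = n * p * (1 - p)        # V[Y]
delta = a / mu - 1           # delta = 3.2 here

# Chernoff bound from (ii): ( e^delta / (1+delta)^(1+delta) )^(n*p)
chernoff = (math.exp(delta) / (1 + delta) ** (1 + delta)) ** mu

# Markov bound: P[Y >= a] <= E[Y] / a
markov = mu / a

# Chebyshev bound: P[Y >= a] <= P[|Y - E[Y]| >= a - E[Y]] <= V[Y] / (a - E[Y])^2
chebyshev = var / (a - mu) ** 2

print(f"Chernoff  bound: {chernoff:.3e}")
print(f"Markov    bound: {markov:.3e}")
print(f"Chebyshev bound: {chebyshev:.3e}")
</syntaxhighlight>

For [math]n=100[/math] the Markov bound is about [math]0.24[/math] and the Chebyshev bound about [math]5\cdot 10^{-3}[/math], while the Chernoff bound is of order [math]10^{-21}[/math].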
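
A Monte Carlo sketch for the simulation requested in (iv), again assuming [math]n=100[/math]; the number of repetitions is likewise an arbitrary choice. Since the true probability is of order [math]10^{-21}[/math], the empirical frequency will almost certainly come out as [math]0[/math], which is consistent with the Chernoff bound and shows how loose the Markov and Chebyshev bounds are here.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
n, trials = 100, 10**6       # assumed: 100 rolls per experiment, 10^6 experiments
a = 0.7 * n

# Y ~ Binomial(n, 1/6) counts the sixes in one experiment; sample it `trials` times.
samples = rng.binomial(n, 1 / 6, size=trials)
freq = np.mean(samples >= a)

print(f"empirical P[Y >= {a:.0f}] over {trials} experiments: {freq:.2e}")
</syntaxhighlight>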