[math]
\newcommand{\smallfrac}[2]{\frac{#1}{#2}}
\newcommand{\medfrac}[2]{\frac{#1}{#2}}
\newcommand{\textfrac}[2]{\frac{#1}{#2}}
\newcommand{\tr}{\operatorname{tr}}
\newcommand{\e}{\operatorname{e}}
\newcommand{\B}{\operatorname{B}}
\newcommand{\Bbar}{\overline{\operatorname{B}}}
\newcommand{\pr}{\operatorname{pr}}
\newcommand{\dd}{\operatorname{d}\hspace{-1pt}}
\newcommand{\E}{\operatorname{E}}
\newcommand{\V}{\operatorname{V}}
\newcommand{\Cov}{\operatorname{Cov}}
\newcommand{\Bigsum}[2]{\mathop{\textstyle\sum}_{#1}^{#2}}
\newcommand{\ran}{\operatorname{ran}}
\newcommand{\card}{\#}
\renewcommand{\P}{\operatorname{P}}
\renewcommand{\L}{\operatorname{L}}
\newcommand{\mathds}{\mathbb}[/math]
(Classic example for a Chernoff bound) Let [math]Y_1,\dots,Y_n[/math] be independent Bernoulli random variables with [math]\P[Y_i=1]=p\in[0,1][/math] and [math]Y=Y_1+\cdots+Y_n[/math]. Let [math]\delta \gt 0[/math].
- (i) Show that [math]\E(\exp(tY_i))\leqslant\exp(p(\exp(t)-1))[/math] holds for every [math]t \gt 0[/math].
- (ii) Use the Lemma to conclude the following classic Chernoff bound:
[[math]] \P\bigl[Y\geqslant(1+\delta)np\bigr]\leqslant\Bigl(\smallfrac{\e^{\delta}}{(1+\delta)^{1+\delta}}\Bigr)^{np}. [[/math]]Hint: It is often not necessary to compute the infimum in the Lemma explicitly. Here, one can for example simply choose [math]t=\log(1+\delta)[/math]; a sketch of this substitution follows the exercise.
- (iii) Assume you are rolling a fair die [math]n[/math] times. Apply (ii) to estimate the probability of rolling a six in at least 70% of the rolls.
- (iv) Compare the estimate of (ii) with what you get when applying the Markov bound and the Chebyshev bound instead. Run a simulation of the experiment to test how tight the predictions of the three bounds are; a minimal simulation sketch is given below.
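For orientation, a brief sketch of how the hint to (ii) plays out; it uses only (i) and the exponential Markov argument, so read it as a sanity check rather than as part of the exercise. For every [math]t \gt 0[/math],
[[math]] \P\bigl[Y\geqslant(1+\delta)np\bigr]\leqslant\e^{-t(1+\delta)np}\,\E(\exp(tY))=\e^{-t(1+\delta)np}\prod_{i=1}^n\E(\exp(tY_i))\leqslant\e^{-t(1+\delta)np}\,\e^{np(\e^t-1)}, [[/math]]
and the choice [math]t=\log(1+\delta)[/math] turns [math]\e^t-1[/math] into [math]\delta[/math] and [math]\e^{-t(1+\delta)np}[/math] into [math](1+\delta)^{-(1+\delta)np}[/math], which is exactly the claimed bound.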
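For part (iv), the following is a minimal simulation sketch, assuming Python with NumPy; the choices [math]n=10[/math], the seed, and the Monte Carlo sample size are illustrative and not part of the exercise. It estimates the tail probability empirically and prints the three bounds next to it.

```python
# Compare Markov, Chebyshev, and Chernoff bounds for Y = number of
# sixes in n rolls of a fair die, against a Monte Carlo estimate of
# P[Y >= 0.7 n]. (Illustrative sketch; n, seed, trials are arbitrary.)
import numpy as np

rng = np.random.default_rng(0)

n = 10                      # number of rolls (small, so the event is observable)
p = 1.0 / 6.0               # probability of rolling a six
a = int(np.ceil(0.7 * n))   # threshold: a six in at least 70% of the rolls
delta = a / (n * p) - 1.0   # so that a = (1 + delta) * n * p

# Monte Carlo estimate of the tail probability
trials = 10**6
Y = rng.binomial(n, p, size=trials)
empirical = np.mean(Y >= a)

# Markov: P[Y >= a] <= E(Y) / a
markov = n * p / a
# Chebyshev (one-sided use): P[Y >= a] <= V(Y) / (a - np)^2
chebyshev = n * p * (1 - p) / (a - n * p) ** 2
# Chernoff bound from part (ii)
chernoff = (np.exp(delta) / (1 + delta) ** (1 + delta)) ** (n * p)

print(f"empirical  {empirical:.3e}")
print(f"Markov     {markov:.3e}")
print(f"Chebyshev  {chebyshev:.3e}")
print(f"Chernoff   {chernoff:.3e}")
```

With these illustrative parameters one expects the ordering empirical < Chernoff < Chebyshev < Markov, showing how much tighter the exponential bound is.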
(‘naive exponential estimate’) Let [math]Y_1,\dots,Y_d[/math] be independent random variables and assume that [math]|\E(Y_i^k)|\leqslant k![/math] holds for all [math]k\geqslant0[/math] and [math]i=1,\dots,d[/math].
- (i) Use the series expansion of [math]\exp(\cdot)[/math] and the assumption to get
[[math]] \E(\exp(tY_i))\leqslant \Bigsum{k=0}{\infty}|t|^k=\left\{\begin{array}{cl}\textfrac{1}{1-|t|} &\text{ for } t\in(-1,1),\\\infty & \text{otherwise.}\end{array}\right. [[/math]]
- (ii) Show by means of calculus that
[[math]] \inf_{t\in(0,1)}\exp(-ta)\prod_{i=1}^d\smallfrac{1}{1-t}=\left\{\begin{array}{cl}(\textfrac{a}{d})^d\exp(d-a) &\text{ if } a \gt d,\\ 1 & \text{otherwise.}\end{array}\right. [[/math]]A numerical sanity check of this closed form is sketched after the exercise.
- (iii) Derive an estimate for [math]\P\bigl[|Y_1+\cdots+Y_d|\geqslant a\bigr][/math] from the above.
- (iv) Compare the bound in (iii) with the bound of the Theorem.
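The following is a small numerical sanity check for the closed form in (ii); a sketch assuming Python with NumPy, where the parameter pairs [math](a,d)[/math] and the grid resolution are arbitrary illustrations. It compares a grid-search minimum of [math]\exp(-ta)(1-t)^{-d}[/math] over [math]t\in(0,1)[/math] with the claimed formula.

```python
# Check numerically that
#   inf over t in (0,1) of exp(-t a) / (1 - t)^d
# equals (a/d)^d * exp(d - a) when a > d, and 1 otherwise.
import numpy as np

def objective(t, a, d):
    return np.exp(-t * a) / (1.0 - t) ** d

def closed_form(a, d):
    return (a / d) ** d * np.exp(d - a) if a > d else 1.0

ts = np.linspace(1e-6, 1 - 1e-6, 200_000)  # dense grid on (0, 1)
for a, d in [(12.0, 5), (5.0, 5), (3.0, 5), (40.0, 10)]:
    grid_min = objective(ts, a, d).min()
    # For a <= d the infimum 1 is only approached as t -> 0+, so the
    # grid value sits slightly above the formula value.
    print(f"a={a:5.1f} d={d:3d}  grid {grid_min:.6e}  formula {closed_form(a, d):.6e}")
```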