[math]
\newcommand{\R}{\mathbb{R}}
\newcommand{\A}{\mathcal{A}}
\newcommand{\B}{\mathcal{B}}
\newcommand{\N}{\mathbb{N}}
\newcommand{\C}{\mathbb{C}}
\newcommand{\Rbar}{\overline{\mathbb{R}}}
\newcommand{\Bbar}{\overline{\mathcal{B}}}
\newcommand{\Q}{\mathbb{Q}}
\newcommand{\E}{\mathbb{E}}
\newcommand{\p}{\mathbb{P}}
\newcommand{\one}{\mathds{1}}
\newcommand{\0}{\mathcal{O}}
\newcommand{\mat}{\textnormal{Mat}}
\newcommand{\sign}{\textnormal{sign}}
\newcommand{\CP}{\mathcal{P}}
\newcommand{\CT}{\mathcal{T}}
\newcommand{\CY}{\mathcal{Y}}
\newcommand{\F}{\mathcal{F}}
\newcommand{\mathds}{\mathbb}[/math]
Backward Martingales and the law of large numbers
Let [math](\Omega,\F,\p)[/math] be a probability space. A backward filtration is a family [math](\F_n)_{n\in-\N}[/math] of [math]\sigma[/math]-algebras indexed by the nonpositive integers, which we will denote by [math](\F_n)_{n\leq 0}[/math], such that for all [math]n\leq m\leq 0[/math] we have
[[math]]
\F_n\subset\F_m.
[[/math]]
We will write
[[math]]\F_{-\infty}:=\bigcap_{n\leq 0}\F_n.
[[/math]]
It is clear that [math]\F_{-\infty}[/math] is also a [math]\sigma[/math]-algebra included in [math]\F[/math]. A stochastic process [math](X_n)_{n\leq 0}[/math], indexed by the nonpositive integers, is called a backward martingale (resp. backward sub- or supermartingale) if for all [math]n\leq 0[/math], [math]X_n[/math] is [math]\F_n[/math]-measurable, [math]\E[\vert X_n\vert] \lt \infty[/math] and for all [math]n\leq m\leq 0[/math] we have
[[math]]
\E[X_m\mid \F_n]=X_n\quad\text{(resp. $\E[X_m\mid \F_n]\geq X_n$ or $\E[X_m\mid \F_n]\leq X_n$).}
[[/math]]
Let [math](\Omega,\F,(\F_n)_{n\leq 0},\p)[/math] be a backward filtered probability space. Let [math](X_n)_{n\leq 0}[/math] be a backward supermartingale. Assume that
[[math]]
\begin{equation}
\sup_{n\leq 0}\E[\vert X_n\vert] \lt \infty.
\end{equation}
[[/math]]
Then [math](X_n)_{n\leq 0}[/math] is u.i. and converges a.s. and in [math]L^1[/math] to a limit [math]X_{-\infty}[/math] as [math]n\to-\infty[/math]. Moreover, for all [math]n\leq 0[/math], we have
[[math]]
\E[X_n\mid \F_{-\infty}]\leq X_{-\infty}\quad\text{a.s.}
[[/math]]
Show Proof
First we show a.s. convergence. Fix an integer [math]k\geq 1[/math]. For [math]n\in\{1,...,k\}[/math], let [math]Y_n^k=X_{n-k}[/math] and [math]\mathcal{G}_n^k=\F_{n-k}[/math]. For [math]n \gt k[/math], we take [math]Y_n^k=X_0[/math] and [math]\mathcal{G}_n^k=\F_0[/math]. Then [math](Y_n^k)_{n\geq 1}[/math] is a supermartingale with respect to [math](\mathcal{G}_n^k)_{n\geq 1}[/math]. We now apply Doob's upcrossing inequality to the submartingale [math](-Y_n^k)_{n\geq 1}[/math] to obtain that for [math]a \lt b[/math]
[[math]]
(b-a)\E[N_k([a,b],-Y^k)]\leq \E[(-Y_k^k-a)^+]=\E[(-X_0-a)^+]\leq \vert a\vert +\E[\vert X_0\vert].
[[/math]]
We note that, as [math]k\uparrow \infty[/math], [math]N_k([a,b],-Y^k)[/math] increases to [math]N([a,b],-X)[/math], where
[[math]]
\begin{multline*}
N([a,b],-X):=\sup\{k\in\N\mid \exists m_1 \lt n_1 \lt \dotsm \lt m_k \lt n_k\leq 0;\\
-X_{m_1}\leq a,-X_{n_1}\geq b,...,-X_{m_k}\leq a,-X_{n_k}\geq b\}.
\end{multline*}
[[/math]]
With monotone convergence we get
[[math]]
(b-a)\E[N([a,b],-X)]\leq \vert a\vert +\E[\vert X_0\vert] \lt \infty.
[[/math]]
One can now easily show that [math](X_n)_{n\leq 0}[/math] converges a.s. to a limit [math]X_{-\infty}[/math] as [math]n\to-\infty[/math], and Fatou's lemma then implies that [math]\E[\vert X_{-\infty}\vert] \lt \infty[/math]. We want to show that [math](X_n)_{n\leq 0}[/math] is u.i. Thus, let [math]\varepsilon \gt 0[/math]. The sequence [math](\E[X_n])_{n\leq 0}[/math] increases as [math]n[/math] decreases and is bounded, hence converges as [math]n\to-\infty[/math]; we can therefore take [math]k\leq 0[/math] small enough to get for [math]n\leq k[/math],
[[math]]
\E[X_n]\leq \E[X_k]+\frac{\varepsilon}{2}.
[[/math]]
Moreover, the finite family
[math]\{X_k,X_{k+1},...,X_{-1},X_0\}[/math] is u.i. and one can then choose
[math]\alpha \gt 0[/math] large enough such that for all
[math]k\leq n\leq 0[/math]
[[math]]
\E[\vert X_n\vert\one_{\{\vert X_n\vert \gt \alpha\}}] \lt \varepsilon.
[[/math]]
We can also choose [math]\delta \gt 0[/math] sufficiently small such that for all [math]A\in\F[/math], [math]\p[A] \lt \delta[/math] implies that [math]\E[\vert X_k\vert \one_A] \lt \frac{\varepsilon}{2}[/math]. Now if [math]n \lt k[/math], we get
[[math]]
\begin{align*}
\E[\vert X_n\vert \one_{\{ \vert X_n\vert \gt \alpha\}}]&=\E[-X_n\one_{\{ X_n \lt -\alpha\}}]+\E[X_n\one_{\{ X_n \gt \alpha\}}]=-\E[X_n\one_{\{ X_n \lt -\alpha\}}]+\E[X_n]-\E[X_n\one_{\{X_n\leq \alpha\}}]\\
&\leq -\E[\E[X_k\mid \F_n]\one_{\{ X_n \lt -\alpha\}}]+\E[X_k]+\frac{\varepsilon}{2}-\E[\E[X_k\mid \F_n]\one_{\{X_n\leq \alpha\}}]\\
&=-\E[X_k\one_{\{ X_n \lt -\alpha\}}]+\E[X_k]+\frac{\varepsilon}{2}-\E[X_k\one_{\{ X_n\leq \alpha\}}]\\
&=-\E[X_k\one_{\{ X_n \lt -\alpha\}}]+\E[X_k\one_{\{X_n \gt \alpha\}}]+\frac{\varepsilon}{2}\leq \E[\vert X_k\vert \one_{\{ \vert X_n\vert \gt \alpha\}}]+\frac{\varepsilon}{2}.
\end{align*}
[[/math]]
Next, we observe that
[[math]]
\p[\vert X_n\vert \gt \alpha]\leq \frac{1}{\alpha}\E[\vert X_n\vert]\leq \frac{C}{\alpha},
[[/math]]
where [math]C=\sup_{n\leq 0}\E[\vert X_n\vert] \lt \infty[/math]. Enlarging [math]\alpha[/math] if necessary, we may assume that [math]\frac{C}{\alpha} \lt \delta[/math]. Consequently, we get
[[math]]
\E[\vert X_k\vert\one_{\{\vert X_n\vert \gt \alpha\}}] \lt \frac{\varepsilon}{2}.
[[/math]]
Hence, for all
[math]n \lt k[/math],
[math]\E[\vert X_n\vert \one_{\{ \vert X_n\vert \gt \alpha\}}] \lt \varepsilon[/math]. This inequality is also true for
[math]k\leq n\leq 0[/math] and thus we have that
[math](X_n)_{n\leq 0}[/math] is u.i. To conclude, we note that u.i. and a.s. convergence implies
[math]L^1[/math] convergence. Then, for
[math]m\leq n[/math] and
[math]A\in\F_{-\infty}\subset \F_m[/math], we have
[[math]]
\E[X_n\one_A]\leq \E[X_m\one_A]\xrightarrow{m\to-\infty}\E[X_{-\infty}\one_A].
[[/math]]
Therefore,
[math]\E[\E[X_n\mid \F_{-\infty}]\one_A]\leq \E[X_{-\infty}\one_A][/math] and hence
[[math]]
\E[X_n\mid \F_{-\infty}]\leq X_{-\infty}\quad\text{a.s.}
[[/math]]
■
Equation [math](1)[/math] is always satisfied for backward martingales. Indeed, for all [math]n\leq 0[/math] we get
[[math]]
\E[X_0\mid \F_n]=X_n,
[[/math]]
which implies that
[math]\E[\vert X_n\vert]\leq \E[\vert X_0\vert][/math] and thus
[[math]]
\sup_{n\leq 0}\E[\vert X_n\vert] \lt \infty.
[[/math]]
Backward martingales are therefore always u.i.
Let [math](\Omega,\F,\p)[/math] be a probability space. Let [math]Z[/math] be a r.v. in [math]L^1(\Omega,\F,\p)[/math] and let [math](\mathcal{G}_n)_{n\geq 0}[/math] be a decreasing family of [math]\sigma[/math]-algebras. Then
[[math]]
\E[Z\mid \mathcal{G}_n]\xrightarrow{n\to\infty\atop\text{a.s. and $L^1$}}\E[Z\mid \mathcal{G}_\infty],
[[/math]]
where
[[math]]
\mathcal{G}_\infty:=\bigcap_{n\geq 0}\mathcal{G}_n.
[[/math]]
Show Proof
For [math]n\geq 0[/math] define [math]X_{-n}:=\E[Z\mid \mathcal{G}_n][/math] and [math]\F_{-n}:=\mathcal{G}_n[/math]. Then [math](X_n)_{n\leq 0}[/math] is a backward martingale with respect to [math](\F_n)_{n\leq 0}[/math]. Hence theorem 14.1. implies that [math](X_n)_{n\leq 0}[/math] converges a.s. and in [math]L^1[/math] as [math]n\to-\infty[/math]. Moreover[a],
[[math]]
X_{-\infty}=\E[X_0\mid \F_{-\infty}]=\E[\E[Z\mid \F_0]\mid \F_{-\infty}]=\E[Z\mid \F_{-\infty}]=\E[Z\mid \mathcal{G}_\infty].
[[/math]]
■
Let [math](\Omega,\F,\p)[/math] be a probability space. Let [math](X_n)_{n\geq 1}[/math] be a sequence of independent r.v.'s with values in arbitrary measurable spaces. For [math]n\geq 1[/math], define the [math]\sigma[/math]-algebra
[[math]]
\B_n:=\sigma(\{X_k\mid k\geq n\}).
[[/math]]
The tail [math]\sigma[/math]-Algebra [math]\B_\infty[/math] is defined as
[[math]]
\B_\infty:=\bigcap_{n=1}^\infty\B_n.
[[/math]]
Then [math]\B_\infty[/math] is trivial in the sense that for all [math]B\in \B_\infty[/math] we have [math]\p[B]\in\{0,1\}[/math].
Show Proof
This proof can be found in the stochastics I notes.
■
Let [math](\Omega,\F,\p)[/math] be a probability space. Let [math]Z\in L^1(\Omega,\F,\p)[/math] and let [math]\mathcal{H}_1[/math] and [math]\mathcal{H}_2[/math] be two [math]\sigma[/math]-algebras included in [math]\F[/math]. Assume that [math]\mathcal{H}_2[/math] is independent of [math]\sigma(Z)\lor \mathcal{H}_1[/math]. Then
[[math]]
\E[Z\mid \mathcal{H}_1\lor \mathcal{H}_2]=\E[Z\mid \mathcal{H}_1].
[[/math]]
Show Proof
Let [math]A\in \mathcal{H}_1\lor \mathcal{H}_2[/math] such that [math]A=B\cap C[/math], where [math]B\in \mathcal{H}_1[/math] and [math]C\in\mathcal{H}_2[/math]. Then
[[math]]
\begin{multline*}
\E[Z\one_A]=\E[Z\one_B\one_C]=\E[Z\one_B]\E[\one_C]=\E[\one_C]\E[\E[Z\mid \mathcal{H}_1]\one_B]\\=\E[\E[Z\mid \mathcal{H}_1]\one_B\one_C]=\E[\E[Z\mid \mathcal{H}_1]\one_A].
\end{multline*}
[[/math]]
Now we note that
[[math]]
\sigma(W)=\mathcal{H}_1\lor \mathcal{H}_2,\quad\text{where } W=\{ B\cap C\mid B\in\mathcal{H}_1,C\in \mathcal{H}_2\},
[[/math]]
and [math]W[/math] is stable under finite intersections. Thus the monotone class theorem implies that for all [math]A\in \mathcal{H}_1\lor \mathcal{H}_2[/math] we have
[[math]]
\E[Z\one_A]=\E[\E[Z\mid\mathcal{H}_1]\one_A],
[[/math]]
which proves the claim.
■
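This identity can be verified by direct enumeration on a small finite product space; the following sketch (the finite example, the variable names, and the use of Python are my own, not part of the notes) computes conditional expectations exactly as averages over atoms.

```python
from itertools import product
from fractions import Fraction

# Omega = {0,1}^3 with the uniform product measure.
# H1 = sigma(w[1]), H2 = sigma(w[2]); Z depends only on (w[0], w[1]),
# so H2 is independent of sigma(Z) v H1.
omegas = list(product([0, 1], repeat=3))

def Z(w):
    # Z = w0 + w1: not H1-measurable, and independent of H2
    return Fraction(w[0] + w[1])

def cond_exp(event):
    # E[Z | atom]: average of Z over the atom selected by `event`
    atom = [w for w in omegas if event(w)]
    return sum(Z(w) for w in atom) / len(atom)

# E[Z | H1 v H2] on the atom {w1 = b, w2 = c} equals E[Z | H1] on {w1 = b}
for b in (0, 1):
    rhs = cond_exp(lambda w: w[1] == b)          # = 1/2 + b
    for c in (0, 1):
        lhs = cond_exp(lambda w: w[1] == b and w[2] == c)
        assert lhs == rhs == Fraction(1, 2) + b
```

Using `Fraction` makes the check exact rather than approximate, so the equality of conditional expectations holds literally, not up to floating-point error.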
Let [math](\Omega,\F,\p)[/math] be a probability space. Let [math](\xi_n)_{n\geq 1}[/math] be a sequence of iid r.v.'s such that for all [math]n\geq 1[/math] we have [math]\E[\vert \xi_n\vert] \lt \infty[/math]. Moreover, let [math]S_0=0[/math] and [math]S_n=\sum_{j=1}^n\xi_j[/math]. Then
[[math]]
\frac{S_n}{n}\xrightarrow{n\to\infty\atop\text{a.s. and $L^1$}}\E[\xi_1].
[[/math]]
Show Proof
At first, we want to show that [math]\E[\xi_1\mid S_n]=\frac{S_n}{n}[/math]. Indeed, we know that there is a measurable map [math]g[/math] such that
[[math]]
\E[\xi_1\mid S_n]=g(S_n).
[[/math]]
Moreover, for [math]k\in\{1,...,n\}[/math], the pairs [math](\xi_1,S_n)[/math] and [math](\xi_k,S_n)[/math] have the same law. Hence for all bounded and Borel measurable maps [math]h[/math] we have
[[math]]
\E[\xi_k h(S_n)]=\E[\xi_1 h(S_n)]=\E[g(S_n)h(S_n)].
[[/math]]
Thus
[math]\E[\xi_k\mid S_n]=g(S_n)[/math]. We have
[[math]]
\sum_{j=1}^n\E[\xi_j\mid S_n]=\E\left[\sum_{j=1}^n \xi_j\mid S_n\right]=S_n,
[[/math]]
but on the other hand we have that
[[math]]
\sum_{j=1}^n\E[\xi_j\mid S_n]=ng(S_n).
[[/math]]
Hence [math]g(S_n)=\frac{S_n}{n}[/math]. Now take [math]\mathcal{H}_1=\sigma(S_n)[/math] and [math]\mathcal{H}_2=\sigma(\xi_{n+1},\xi_{n+2},...)[/math]. Then [math]\mathcal{H}_2[/math] is independent of [math]\sigma(\xi_1)\lor\mathcal{H}_1[/math], so by the previous lemma, we get
[[math]]
\E[\xi_1\mid S_n,\xi_{n+1},\xi_{n+2},...]=\E[\xi_1\mid S_n].
[[/math]]
Now define [math]\mathcal{G}_n:=\sigma(S_n,\xi_{n+1},\xi_{n+2},...)[/math]. Then we have [math]\mathcal{G}_{n+1}\subset\mathcal{G}_n[/math] because [math]S_{n+1}=S_n+\xi_{n+1}[/math]. Hence, it follows that [math]\E[\xi_1\mid \mathcal{G}_n]=\E[\xi_1\mid S_n]=\frac{S_n}{n}[/math] converges a.s. and in [math]L^1[/math] to some r.v., and Kolmogorov's 0-1 law implies that this limit is a.s. constant. Since [math]\E\left[\frac{S_n}{n}\right]=\E[\xi_1][/math] for every [math]n[/math] and [math]\frac{S_n}{n}[/math] converges in [math]L^1[/math], the constant limit must be [math]\E[\xi_1][/math].
■
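As a quick numerical illustration of the strong law (a sketch, not part of the notes; it assumes NumPy is available, and the uniform distribution is my choice of example):

```python
import numpy as np

# SLLN: S_n / n -> E[xi_1] a.s.; here xi_i ~ Uniform[0,1], so E[xi_1] = 1/2.
rng = np.random.default_rng(0)
xi = rng.uniform(size=1_000_000)
running_mean = np.cumsum(xi) / np.arange(1, xi.size + 1)

# The running averages settle near 1/2; fluctuations decay roughly like n^{-1/2}.
print(running_mean[-1])
```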
\begin{exer}[Hewitt-Savage 0-1 law]
Let [math](\xi_n)_{n\geq 1}[/math] be iid r.v.'s with values in some measurable space [math](E,\mathcal{E})[/math]. The map [math]\omega\mapsto (\xi_1(\omega),\xi_2(\omega),...)[/math] defines a r.v. with values in [math]E^{\N^\times}[/math]. A measurable map [math]F[/math] defined on [math]E^{\N^\times}[/math] is said to be symmetric if
[[math]]
F(x_1,x_2,...)=F(x_{\pi(1)},x_{\pi(2)},...)
[[/math]]
for all permutations [math]\pi[/math] of [math]\N^\times[/math] with finite support.
Prove that if [math]F[/math] is a symmetric function on [math]E^{\N^\times}[/math], then [math]F(\xi_1,\xi_2,...)[/math] is a.s. constant.
Hint: Consider [math]\F_n=\sigma(\xi_1,\xi_2,...,\xi_n)[/math], [math]\mathcal{G}_n=\sigma(\xi_{n+1},\xi_{n+2},...)[/math], [math]Y=F(\xi_1,\xi_2,...)[/math], [math]X_n=\E[Y\mid \F_n][/math] and [math]Z_n=\E[Y\mid \mathcal{G}_n][/math].
\end{exer}
Martingales bounded in [math]L^2[/math] and random series
Let [math](\Omega,\F,(\F_n)_{n\geq 0},\p)[/math] be a filtered probability space. Let [math](M_n)_{n\geq 0}[/math] be a martingale in [math]L^2(\Omega,\F,(\F_n)_{n\geq 0},\p)[/math], i.e. [math]\E[M_n^2] \lt \infty[/math] for all [math]n\geq 0[/math]. We say that [math](M_n)_{n\geq 0}[/math] is bounded in [math]L^2[/math] if [math]\sup_{n\geq 0}\E[M_n^2] \lt \infty[/math]. For [math]n\leq \nu[/math], the martingale property [math]\E[M_\nu\mid \F_n]=M_n[/math] implies that [math]M_\nu-M_n[/math] is orthogonal to [math]L^2(\Omega,\F_n,\p)[/math]. Hence, for all [math]s\leq t\leq n\leq \nu[/math], [math]M_\nu-M_n[/math] is orthogonal to [math]M_t-M_s[/math]:
[[math]]
\left\langle M_\nu-M_n,M_t-M_s\right\rangle=0\Longleftrightarrow \E[(M_\nu-M_n)(M_t-M_s)]=0.
[[/math]]
Now write [math]M_n=M_0+\sum_{k=1}^n(M_k-M_{k-1})[/math]. Then [math]M_n[/math] is a sum of pairwise orthogonal terms and therefore
[[math]]
\E[M_n^2]=\E[M_0^2]+\sum_{k=1}^n\E[(M_k-M_{k-1})^2].
[[/math]]
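This Pythagoras-type identity can be checked empirically; a sketch assuming NumPy, with the martingale [math]M_n=\sum_{k\leq n}\xi_k/k[/math] (iid signs) as my choice of example:

```python
import numpy as np

# Martingale M_n = sum_{k<=n} xi_k / k with iid signs xi_k = +-1.
# Orthogonal increments give E[M_n^2] = sum_{k<=n} 1/k^2 exactly.
rng = np.random.default_rng(1)
n, samples = 50, 50_000
signs = rng.choice([-1.0, 1.0], size=(samples, n))
M_n = (signs / np.arange(1, n + 1)).sum(axis=1)

empirical = np.mean(M_n**2)                       # Monte Carlo estimate
theoretical = np.sum(1.0 / np.arange(1, n + 1) ** 2)
print(empirical, theoretical)
```

The Monte Carlo estimate matches the exact sum up to sampling error of order a few times [math]10^{-2}[/math].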
Let [math](M_n)_{n\geq 0}[/math] be a martingale in [math]L^2(\Omega,\F,(\F_n)_{n\geq 0},\p)[/math]. Then [math](M_n)_{n\geq 0}[/math] is bounded in [math]L^2(\Omega,\F,(\F_n)_{n\geq 0},\p)[/math] if and only if [math]\sum_{k\geq 1}\E[(M_k-M_{k-1})^2] \lt \infty[/math] and in this case
[[math]]
M_n\xrightarrow{n\to\infty\atop \text{a.s. and $L^2$}}M_\infty.
[[/math]]
Suppose that [math](X_n)_{n\geq 1}[/math] is a sequence of independent r.v.'s such that for all [math]k\geq 1[/math], [math]\E[X_k]=0[/math] and [math]\sigma_k^2=Var(X_k) \lt \infty[/math]. Then
- [math]\sum_{k\geq1}\sigma^2_k \lt \infty[/math] implies that [math]\sum_{k\geq 1}X_k[/math] converges a.s.
- If there is a [math]C \gt 0[/math] such that for all [math]\omega\in\Omega[/math] and [math]k\geq1[/math], [math]\vert X_k(\omega)\vert\leq C[/math], then a.s. convergence of [math]\sum_{k\geq 1}X_k[/math] implies that [math]\sum_{k\geq 1}\sigma_k^2 \lt \infty[/math].
Show Proof
Consider [math]\F_n=\sigma(X_1,...,X_n)[/math] with [math]\F_0=\{\varnothing,\Omega\}[/math], [math]S_n=\sum_{j=1}^nX_j[/math] with [math]S_0=0[/math] and [math]A_n=\sum_{k=1}^n\sigma_k^2[/math] with [math]A_0=0[/math]. Moreover, set [math]M_n=S_n^2-A_n[/math]. Then [math](S_n)_{n\geq 0}[/math] is a martingale and
[[math]]
\E[(S_n-S_{n-1})^2]=\E[X_n^2]=\sigma_n^2.
[[/math]]
Thus [math]\sum_{n\geq 1}\sigma_n^2 \lt \infty[/math] implies [math]\sum_{n\geq 1}\E[(S_n-S_{n-1})^2] \lt \infty[/math] and hence [math](S_n)_{n\geq 0}[/math] is bounded in [math]L^2(\Omega,\F,(\F_n)_{n\geq 0},\p)[/math], which means that [math]S_n[/math] converges a.s. Next we show that
[math](M_n)_{n\geq 0}[/math] is a martingale. We have
[[math]]
\E[(S_n-S_{n-1})^2\mid \F_{n-1}]=\E[X_n^2\mid \F_{n-1}]=\E[X_n^2]=\sigma_n^2.
[[/math]]
Hence we get
[[math]]
\begin{multline*}
\sigma_n^2=\E[(S_n-S_{n-1})^2\mid \F_{n-1}]=\E[S_n^2-2S_{n-1}S_n+S_{n-1}^2\mid \F_{n-1}]\\
=\E[S_n^2\mid\F_{n-1}]-2S_{n-1}^2+S_{n-1}^2=\E[S_n^2\mid \F_{n-1}]-S_{n-1}^2,
\end{multline*}
[[/math]]
which implies that [math](M_n)_{n\geq 0}[/math] is a martingale. Let [math]T:=\inf\{n\in\N\mid \vert S_n\vert \gt \alpha\}[/math] for some constant [math]\alpha \gt 0[/math]. Then [math]T[/math] is a stopping time, [math](M_{n\land T})_{n\geq 0}[/math] is a martingale and hence
[[math]]
\E[M_{n\land T}]=\E[S_{n\land T}^2]-\E[A_{n\land T}]=0.
[[/math]]
Therefore [math]\E[S_{n\land T}^2]=\E[A_{n\land T}][/math]. If [math]T[/math] is finite, then [math]\vert S_T-S_{T-1}\vert =\vert X_T\vert\leq C[/math], so [math]\vert S_{n\land T}\vert \leq C+\alpha[/math] and hence [math]\E[A_{n\land T}]\leq (C+\alpha)^2[/math] for all [math]n\geq 0[/math]. Since [math]A_{n\land T}[/math] increases to [math]A_T[/math], monotone convergence gives [math]\E[A_T]\leq (C+\alpha)^2 \lt \infty[/math]. Now since [math]\sum_{n\geq 1}X_n[/math] converges a.s., its partial sums are a.s. bounded, so there exists [math]\alpha \gt 0[/math] with [math]\p[T=\infty] \gt 0[/math]. With this choice[b] of [math]\alpha[/math] we conclude that [math]\sum_{k\geq 1}\sigma_k^2 \lt \infty[/math].
■
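A simulation sketch of the first part (not part of the notes; assumes NumPy, and the series [math]\sum_{k\geq 1}\xi_k/k[/math] with [math]\sigma_k^2=1/k^2[/math] summable is my choice of example):

```python
import numpy as np

# X_k = xi_k / k with xi_k = +-1: sum Var(X_k) = sum 1/k^2 < infinity,
# so sum X_k converges a.s.; past index 10^4 the partial sums barely move.
rng = np.random.default_rng(2)
N = 100_000
x = rng.choice([-1.0, 1.0], size=N) / np.arange(1, N + 1)
partial = np.cumsum(x)

# spread of the partial sums beyond index 10^4: tiny with high probability,
# by Kolmogorov's maximal inequality applied to the tail variance
tail_spread = np.max(np.abs(partial[10_000:] - partial[10_000]))
print(partial[-1], tail_spread)
```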
Example
Let [math](a_n)_{n\geq 1}[/math] be a sequence of real numbers and let [math](\xi_n)_{n\geq 1}[/math] be iid r.v.'s with [math]\p[\xi_n=\pm 1]=\frac{1}{2}[/math]. Then [math]\sum_{n\geq 1}a_n\xi_n[/math] converges a.s. if and only if [math]\sum_{n\geq 1}a_n^2 \lt \infty[/math]. Indeed, if the series converges a.s., then [math]\vert a_n\xi_n\vert=\vert a_n\vert\xrightarrow{n\to\infty}0[/math], so there exists a [math]C \gt 0[/math] such that for all [math]n\geq 1[/math], [math]\vert a_n\vert \leq C[/math], and the previous theorem applies. Now for a r.v. [math]X[/math] recall that we write [math]\Phi_X(t)=\E\left[e^{itX}\right][/math]. We also know
[[math]]
e^{itx}=\sum_{n\geq 0}\frac{i^nt^nx^n}{n!},\qquad e^{itX}=\sum_{n\geq 0}\frac{i^nt^nX^n}{n!}.
[[/math]]
Moreover, define [math]R_n(x)=e^{ix}-\sum_{k=0}^n\frac{i^kx^k}{k!}[/math]. Then we get
[[math]]
\vert R_n(x)\vert\leq \min \left(2\frac{\vert x\vert^n}{n!},\frac{\vert x\vert^{n+1}}{(n+1)!}\right).
[[/math]]
Indeed, [math]\vert R_0(x)\vert=\vert e^{ix}-1\vert=\left\vert\int_0^x ie^{iy}dy\right\vert\leq \min(2,\vert x\vert)[/math] and [math]\vert R_n(x)\vert=\left\vert \int_0^x iR_{n-1}(y)dy\right\vert[/math], so the claim follows by a simple induction on [math]n[/math]. If [math]X[/math] is such that [math]\E[X]=0[/math] and [math]\E[X^2]=\sigma^2 \lt \infty[/math], then writing [math]e^{itX}-\left( 1+itX-\frac{t^2X^2}{2}\right)=R_2(tX)[/math] we get
[[math]]
\E\left[e^{itX}\right]=1-\frac{\sigma^2t^2}{2}+\E[R_2(tX)]
[[/math]]
and [math]\vert\E[R_2(tX)]\vert\leq t^2\E[X^2\land \vert t\vert\vert X\vert^3][/math]. With dominated convergence it follows that [math]\Phi_X(t)=1-\frac{t^2\sigma^2}{2}+o(t^2)[/math] as [math]t\to 0[/math].
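For the simplest case [math]\xi=\pm 1[/math] the characteristic function is [math]\Phi_\xi(t)=\cos t[/math], and the second-order expansion can be checked directly (a sketch using only the standard library; the choice of test points is mine):

```python
import math

# P[xi = +-1] = 1/2 gives Phi(t) = E[exp(it*xi)] = cos(t), with sigma^2 = 1.
def phi(t):
    return math.cos(t)

# Phi(t) = 1 - t^2 sigma^2 / 2 + o(t^2): here the error is at most t^4/24,
# by the alternating series bound for cos.
for t in (0.1, 0.01, 0.001):
    err = abs(phi(t) - (1 - t**2 / 2))
    assert err <= t**4 / 24 + 1e-15
```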
Let [math](\Omega,\F,\p)[/math] be a probability space. Let [math](X_n)_{n\geq 0}[/math] be a sequence of independent r.v.'s, uniformly bounded by some constant [math]k \gt 0[/math]. If [math]\sum_{n\geq 0}X_n[/math] converges a.s., then [math]\sum_{n\geq 0}\E[X_n][/math] and [math]\sum_{n\geq 0}Var(X_n)[/math] both converge.
Show Proof
If [math]Z[/math] is a r.v. such that [math]\vert Z\vert \lt k[/math], [math]\E[Z]=0[/math] and [math]\sigma^2=Var(Z) \lt \infty[/math], then for [math]\vert t\vert\leq \frac{1}{k}[/math] we get
[[math]]
\vert \Phi_Z(t)\vert\leq 1-\frac{t^2\sigma^2}{2}+\frac{\vert t\vert^3k\E[Z^2]}{6}\leq 1-\frac{t^2\sigma^2}{2}+\frac{t^2\sigma^2}{6}=1-\frac{t^2\sigma^2}{3}\leq \exp\left(-\frac{t^2\sigma^2}{3}\right).
[[/math]]
Let
[math]Z_n=X_n-\E[X_n][/math]. Then
[math]\vert \Phi_{Z_n}(t)\vert=\vert \Phi_{X_n}(t)\vert[/math] and
[math]\vert Z_n\vert\leq 2k[/math]. If [math]\sum_{n\geq 0}Var(X_n)=\infty[/math], then for [math]0 \lt \vert t\vert\leq \frac{1}{2k}[/math] we get
[[math]]
\prod_{n\geq0}\vert\Phi_{X_n}(t)\vert=\prod_{n\geq 0}\vert\Phi_{Z_n}(t)\vert\leq \exp\left(-\frac{1}{3}t^2\sum_{n\geq 0}Var(X_n)\right)=0.
[[/math]]
This is a contradiction: [math]\Phi_{\sum_{k=0}^n X_k}(t)\xrightarrow{n\to\infty}\Phi(t)[/math], where [math]\Phi[/math] is the characteristic function of the a.s. limit, which is continuous with [math]\Phi(0)=1[/math], so [math]\vert\Phi(t)\vert \gt 0[/math] for [math]t[/math] close to [math]0[/math]. Hence [math]\sum_{n\geq 0}Var(X_n)=\sum_{n\geq 0}Var(Z_n) \lt \infty[/math]. Since [math]\E[Z_n]=0[/math] and [math]\sum_{n\geq 0}Var(Z_n) \lt \infty[/math], the series [math]\sum_{n\geq 0}Z_n[/math] converges a.s. But [math]\E[X_n]=X_n-Z_n[/math], and since [math]\sum_{n\geq 0}X_n[/math] and [math]\sum_{n\geq 0}Z_n[/math] both converge a.s., it follows that [math]\sum_{n\geq 0}\E[X_n][/math] converges.
■
Let [math](\Omega,\F,\p)[/math] be a probability space. Let [math](X_n)_{n\geq 0}[/math] be a sequence of independent r.v.'s. Then [math]\sum_{n\geq 0}X_n[/math] converges a.s. if and only if for some [math]k \gt 0[/math] (then for every [math]k \gt 0[/math]) the following properties hold.
- [math]\sum_{n\geq 0}\p[\vert X_n\vert \gt k] \lt \infty[/math]
- [math]\sum_{n\geq 0}\E\left[X_n^{(k)}\right][/math] converges, where [math]X_n^{(k)}=X_n\one_{\{\vert X_n\vert\leq k\}}[/math]
- [math]\sum_{n\geq 0}Var\left(X_n^{(k)}\right) \lt \infty[/math]
Show Proof
Suppose that for some [math]k \gt 0[/math], [math](i)[/math], [math](ii)[/math] and [math](iii)[/math] hold. Then
[[math]]
\sum_{n\geq 0}\p\left[X_n\not=X_n^{(k)}\right]=\sum_{n\geq 0}\p[\vert X_n\vert \gt k] \lt \infty.
[[/math]]
It follows from the Borel-Cantelli lemma that [math]\p\left[X_n=X_n^{(k)}\ \text{for all but finitely many $n$}\right]=1[/math]. Hence we only need to show that [math]\sum_{n\geq 0}X_n^{(k)}[/math] converges a.s. Because of [math](ii)[/math] it is enough to show that [math]\sum_{n\geq 0}Y_n^{(k)}[/math] converges a.s., where [math]Y_n^{(k)}=X_n^{(k)}-\E\left[X_n^{(k)}\right][/math]; this follows from [math](iii)[/math]. Conversely, assume that
[math]\sum_{n\geq 0}X_n[/math] converges a.s. and that
[math]k\in(0,\infty)[/math]. Since [math]X_n\xrightarrow{n\to\infty}0[/math] a.s., we have [math]\vert X_n\vert \gt k[/math] for only finitely many [math]n[/math], a.s. Therefore, the second Borel-Cantelli lemma (using independence) implies
[math](i)[/math]. Since
[math]X_n=X_n^{(k)}[/math] for all but finitely many
[math]n[/math],
[math]\sum_{n\geq 0}X_n^{(k)}[/math] converges and it follows from lemma 14.8. that
[math](ii)[/math] and
[math](iii)[/math] have to hold.
■
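To see how the three series are applied (the comparison pair is my choice, and the check assumes NumPy): for [math]X_n=\xi_n/n[/math] all three conditions hold with [math]k=1[/math], while for [math]X_n=\xi_n/\sqrt{n}[/math] condition [math](iii)[/math] fails, so the first series converges a.s. and the second does not.

```python
import numpy as np

# Truncation level k = 1: |xi_n / n| <= 1 and |xi_n / sqrt(n)| <= 1, so in
# both cases X_n^{(1)} = X_n, series (i) vanishes and the terms of (ii) have
# mean zero; everything hinges on the variance series (iii).
n = np.arange(1, 100_001)
var_conv = np.sum(1.0 / n**2)   # X_n = xi_n / n: bounded by pi^2/6
var_div = np.sum(1.0 / n)       # X_n = xi_n / sqrt(n): grows like log N
print(var_conv, var_div)
```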
Suppose [math](b_n)_{n\geq 1}[/math] is a sequence of strictly positive real numbers with [math]b_n\uparrow\infty[/math] as [math]n\to\infty[/math]. Let [math](v_n)_{n\geq 1}[/math] be a sequence of real numbers such that [math]v_n\xrightarrow{n\to\infty}v[/math]. Then
[[math]]
\frac{1}{b_n}\sum_{k=1}^n(b_k-b_{k-1})v_k\xrightarrow{n\to\infty}v,\qquad\text{where } b_0:=0.
[[/math]]
Show Proof
Note that
[[math]]
\begin{multline*}
\left\vert \frac{1}{b_n}\sum_{k=1}^n(b_k-b_{k-1})v_k-v\right\vert=\left\vert \frac{1}{b_n}\sum_{k=1}^n(b_k-b_{k-1})(v_k-v)\right\vert\leq \frac{1}{b_n}\sum_{k=1}^N(b_k-b_{k-1})\vert v_k-v\vert\\+\frac{1}{b_n}\sum_{k=N+1}^n(b_k-b_{k-1})\vert v_k-v\vert.
\end{multline*}
[[/math]]
Given [math]\varepsilon \gt 0[/math], choose [math]N[/math] such that [math]\vert v_k-v\vert \lt \varepsilon[/math] for all [math]k \gt N[/math]. The second sum is then at most [math]\varepsilon\frac{b_n-b_N}{b_n}\leq\varepsilon[/math], while for fixed [math]N[/math] the first sum tends to [math]0[/math] as [math]n\to\infty[/math] since [math]b_n\uparrow\infty[/math].
■
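A deterministic sanity check of the lemma (the example values are mine): with [math]b_k=k[/math] and [math]v_k=1+\frac{1}{k}\to 1[/math], the weighted average equals [math]1+\frac{H_n}{n}\to 1[/math], where [math]H_n[/math] is the harmonic number.

```python
# b_k = k (so b_k - b_{k-1} = 1, b_0 = 0) and v_k = 1 + 1/k -> v = 1:
# the weighted average is 1 + H_n / n with H_n ~ log(n), hence -> 1.
n = 100_000
avg = sum(1 + 1 / k for k in range(1, n + 1)) / n
print(avg)
```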
Let [math](b_n)_{n\geq 1}[/math] be a sequence of real numbers, strictly positive with [math]b_n\uparrow \infty[/math] as [math]n\to\infty[/math]. Let [math](x_n)_{n\geq 1}[/math] be a sequence of real numbers. Then if [math]\sum_{n\geq 1}\frac{x_n}{b_n}[/math] converges, we get that
[[math]]
\frac{x_1+\dotsm +x_n}{b_n}\xrightarrow{n\to\infty}0.
[[/math]]
Show Proof
Let [math]v_n=\sum_{k=1}^n\frac{x_k}{b_k}[/math], [math]v_0=0[/math] and [math]v=\lim_{n\to\infty}v_n[/math]. Then [math]v_n-v_{n-1}=\frac{x_n}{b_n}[/math]. Moreover, we note that
[[math]]
\sum_{k=1}^nx_k=\sum_{k=1}^nb_k(v_k-v_{k-1})=b_nv_n-\sum_{k=1}^n(b_k-b_{k-1})v_{k-1},
[[/math]]
which, by the previous lemma applied to the sequence [math](v_{k-1})_{k\geq 1}[/math], implies that
[[math]]
\frac{x_1+\dotsm+x_n}{b_n}=v_n-\frac{1}{b_n}\sum_{k=1}^n(b_k-b_{k-1})v_{k-1}\xrightarrow{n\to\infty}v-v=0.
[[/math]]
■
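Kronecker's lemma can likewise be checked on a concrete series (the example is mine): with [math]x_k=(-1)^{k+1}[/math] and [math]b_k=k[/math], the series [math]\sum x_k/b_k[/math] is the alternating harmonic series, converging to [math]\log 2[/math], and the averaged partial sums vanish.

```python
import math

# x_k = (-1)^{k+1}, b_k = k: sum x_k / b_k converges to log 2,
# so Kronecker's lemma gives (x_1 + ... + x_n) / n -> 0.
n = 100_000
v_n = sum((-1) ** (k + 1) / k for k in range(1, n + 1))     # -> log 2
cesaro = sum((-1) ** (k + 1) for k in range(1, n + 1)) / n  # -> 0
print(abs(v_n - math.log(2)), cesaro)
```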
Let [math](\Omega,\F,\p)[/math] be a probability space. Let [math](w_n)_{n\geq 1}[/math] be a sequence of independent r.v.'s such that [math]\E[w_n]=0[/math] for all [math]n\geq 1[/math] and [math]\sum_{n\geq 1}\frac{Var(w_n)}{n^2} \lt \infty[/math]. Then
[[math]]
\frac{1}{n}\sum_{k=1}^n w_k\xrightarrow{n\to\infty \atop a.s.}0.
[[/math]]
Show Proof
From Kronecker's lemma it is enough to prove that [math]\sum_{k\geq 1}\frac{w_k}{k}[/math] converges a.s.[c], and this follows from the variance criterion above, since [math]\E\left[\frac{w_k}{k}\right]=0[/math] and [math]\sum_{k\geq 1}Var\left(\frac{w_k}{k}\right)=\sum_{k\geq 1}\frac{Var(w_k)}{k^2} \lt \infty[/math].
■
Let [math](\Omega,\F,\p)[/math] be a probability space. Let [math](X_n)_{n\geq 1}[/math] be independent and non-negative r.v.'s such that [math]\E[X_n]=1[/math] for all [math]n\geq 1[/math]. Define [math]M_0=1[/math] and for [math]n\in\N[/math], let
[[math]]
M_n=\prod_{j=1}^nX_j.
[[/math]]
Then [math](M_n)_{n\geq 0}[/math] is a non-negative martingale, so that [math]M_\infty:=\lim_{n\to\infty}M_n[/math] exists a.s. The following are then equivalent.
- [math]\E[M_\infty]=1[/math].
- [math]M_n\xrightarrow{n\to\infty\atop L^1}M_\infty[/math].
- [math](M_n)_{n\geq 1}[/math] is u.i.
- [math]\prod_{n}a_n \gt 0[/math], where [math]0 \lt a_n=\E[X_n^{1/2}]\leq 1[/math].
- [math]\sum_{n}(1-a_n) \lt \infty[/math].
Moreover, if one (then every one) of the above statements fails, then
[[math]]
\p[M_\infty=0]=1.
[[/math]]
A martingale central limit theorem
Let [math](\Omega,\F,(\F_n)_{n\geq 0},\p)[/math] be a filtered probability space. Let [math](X_n)_{n\geq 1}[/math] be a sequence of real-valued r.v.'s, adapted to the filtration, such that for all [math]n\geq 1[/math]
- [math]\E[X_n\mid \F_{n-1}]=0[/math].
- [math]\E[X_n^2\mid \F_{n-1}]=1[/math].
- [math]\E[\vert X_n\vert^3\mid \F_{n-1}]\leq K \lt \infty[/math].
Let [math]S_n=\sum_{j=1}^nX_j[/math]. Then
[[math]]
\frac{S_n}{\sqrt{n}}\xrightarrow{n\to\infty\atop law}\mathcal{N}(0,1).
[[/math]]
Show Proof
Define [math]\Phi_{n,j}(u)=\E\left[e^{iu\frac{X_j}{\sqrt{n}}}\mid \F_{j-1}\right][/math]. A Taylor expansion yields
[[math]]
\exp\left(iu\frac{X_j}{\sqrt{n}}\right)=1+iu\frac{X_j}{\sqrt{n}}-\frac{u^2}{2n}X_j^2-\frac{iu^3}{6n^{3/2}}\bar X_j^3,
[[/math]]
where [math]\bar X_j[/math] denotes a random quantity with [math]\vert \bar X_j\vert\leq \vert X_j\vert[/math], coming from the Lagrange remainder. Therefore we get
[[math]]
\Phi_{n,j}(u)=1+iu\frac{1}{\sqrt{n}}\E[X_j\mid \F_{j-1}]-\frac{u^2}{2n}\E[X_j^2\mid \F_{j-1}]-\frac{iu^3}{6n^{3/2}}\E[\bar X_j^3\mid \F_{j-1}]
[[/math]]
and thus
[[math]]
\Phi_{n,j}(u)-1+\frac{u^2}{2n}=-\frac{iu^3}{6n^{3/2}}\E[\bar X_j^3\mid \F_{j-1}].
[[/math]]
Hence we get
[[math]]
\E\left[e^{iu\frac{S_p}{\sqrt{n}}}\right]=\E\left[e^{iu\frac{S_{p-1}}{\sqrt{n}}}e^{iu\frac{X_p}{\sqrt{n}}}\right]=\E\left[e^{iu\frac{S_{p-1}}{\sqrt{n}}}\E\left[e^{iu\frac{X_p}{\sqrt{n}}}\mid \F_{p-1}\right]\right]=\E\left[e^{iu\frac{S_{p-1}}{\sqrt{n}}}\Phi_{n,p}(u)\right].
[[/math]]
Consequently, we get
[[math]]
\E\left[e^{iu\frac{S_p}{\sqrt{n}}}\right]=\E\left[e^{iu\frac{S_{p-1}}{\sqrt{n}}}\left(1-\frac{u^2}{2n}-\frac{iu^3}{6n^{3/2}}\E[\bar X_p^3\mid \F_{p-1}]\right)\right].
[[/math]]
Thus we get that
[[math]]
\E\left[e^{iu\frac{S_p}{\sqrt{n}}}-\left(1-\frac{u^2}{2n}\right)e^{iu\frac{S_{p-1}}{\sqrt{n}}}\right]=-\E\left[e^{iu\frac{S_{p-1}}{\sqrt{n}}}\frac{iu^3}{6n^{3/2}}\E[\bar X_p^3\mid \F_{p-1}]\right],
[[/math]]
which implies that
[[math]]
\left\vert \E\left[e^{iu \frac{S_p}{\sqrt{n}}}-\left(1-\frac{u^2}{2n}\right)e^{iu\frac{S_{p-1}}{\sqrt{n}}}\right]\right\vert\leq \frac{K\vert u\vert ^3}{6n^{3/2}}.\qquad(\star)
[[/math]]
Let us fix [math]u\in\R[/math]. For [math]n[/math] large enough, we have [math]0\leq 1-\frac{u^2}{2n}\leq 1[/math]. Multiplying both sides of
[math](\star)[/math] by
[math]\left(1-\frac{u^2}{2n}\right)^{n-p}[/math], we get
[[math]]
\left\vert \left(1-\frac{u^2}{2n}\right)^{n-p}\E\left[e^{iu\frac{S_p}{\sqrt{n}}}\right]-\left(1-\frac{u^2}{2n}\right)^{n-p+1}\E\left[e^{iu\frac{S_{p-1}}{\sqrt{n}}}\right]\right\vert\leq \frac{K\vert u\vert^3}{6n^{3/2}}.\qquad(\star\star)
[[/math]]
By taking
[math]K[/math] sufficiently large, we can assume that
[math](\star\star)[/math] holds for all
[math]n[/math]. Now we note that
[[math]]
\E\left[e^{iu\frac{S_n}{\sqrt{n}}}\right]-\left(1-\frac{u^2}{2n}\right)^n=\sum_{p=1}^n\left\{\left(1-\frac{u^2}{2n}\right)^{n-p}\E\left[e^{iu\frac{S_p}{\sqrt{n}}}\right]-\left(1-\frac{u^2}{2n}\right)^{n-p+1}\E\left[e^{iu\frac{S_{p-1}}{\sqrt{n}}}\right]\right\}.
[[/math]]
Therefore we get
[[math]]
\left\vert\E\left[e^{iu\frac{S_n}{\sqrt{n}}}\right]-\left(1-\frac{u^2}{2n}\right)^n\right\vert\leq n\frac{K\vert u\vert^3}{6n^{3/2}}=\frac{K\vert u\vert^3}{6\sqrt{n}},
[[/math]]
which implies that
[[math]]
\lim_{n\to\infty}\E\left[e^{iu\frac{S_n}{\sqrt{n}}}\right]=\lim_{n\to\infty}\left(1-\frac{u^2}{2n}\right)^n=e^{-\frac{u^2}{2}}.
[[/math]]
Since [math]e^{-\frac{u^2}{2}}[/math] is the characteristic function of [math]\mathcal{N}(0,1)[/math], Lévy's continuity theorem yields the claimed convergence in law.
■
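A simulation sketch of the theorem (not part of the notes; assumes NumPy, and the parameters are mine): [math]X_j=\pm 1[/math] satisfies the three hypotheses with [math]K=1[/math], so [math]S_n/\sqrt{n}[/math] should look approximately standard normal.

```python
import numpy as np

# X_j = +-1 gives E[X_j | F_{j-1}] = 0, E[X_j^2 | F_{j-1}] = 1 and
# E[|X_j|^3 | F_{j-1}] = 1 <= K, so S_n / sqrt(n) tends to N(0,1) in law.
rng = np.random.default_rng(3)
n, samples = 200, 20_000
S = rng.choice([-1.0, 1.0], size=(samples, n)).sum(axis=1)
Z = S / np.sqrt(n)

# empirical mean and standard deviation should be close to 0 and 1
print(Z.mean(), Z.std())
```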
General references
Moshayedi, Nima (2020). "Lectures on Probability Theory". arXiv:2010.16280 [math.PR].
Notes
- This follows from the last part of theorem 14.1.
- Note that [math]\E\left[\sum_{k\geq 1}\sigma_k^2\one_{\{ T=\infty\}}\right]=\sum_{k\geq 1}\sigma_k^2\p[T=\infty][/math]
- From Kronecker's lemma it is enough to prove that [math]\sum_{k\geq 1}\frac{w_k}{k}[/math] converges a.s.