<div class="d-none"><math>
\newcommand{\R}{\mathbb{R}}
\newcommand{\A}{\mathcal{A}}
\newcommand{\B}{\mathcal{B}}
\newcommand{\N}{\mathbb{N}}
\newcommand{\C}{\mathbb{C}}
\newcommand{\Rbar}{\overline{\mathbb{R}}}
\newcommand{\Bbar}{\overline{\mathcal{B}}}
\newcommand{\Q}{\mathbb{Q}}
\newcommand{\E}{\mathbb{E}}
\newcommand{\p}{\mathbb{P}}
\newcommand{\one}{\mathds{1}}
\newcommand{\0}{\mathcal{O}}
\newcommand{\mat}{\textnormal{Mat}}
\newcommand{\sign}{\textnormal{sign}}
\newcommand{\CP}{\mathcal{P}}
\newcommand{\CT}{\mathcal{T}}
\newcommand{\CY}{\mathcal{Y}}
\newcommand{\F}{\mathcal{F}}
\newcommand{\mathds}{\mathbb}</math></div>
Let <math>(\Omega,\F,(\F_n)_{n\geq0},\p)</math> be a filtered probability space. Let <math>X=(X_n)_{n\geq 0}</math> be a stochastic process such that <math>X_n</math> is <math>\F_n</math>-measurable for all <math>n\geq 0</math>. We denote


<math display="block">
X_n^*:=\sup_{j\leq  n}\vert X_j\vert.
</math>
Note that <math>(X_n^*)_{n\geq 0}</math> is increasing and <math>\F_n</math>-adapted. Therefore if <math>X_n\in L^1(\Omega,\F,(\F_n)_{n\geq 0},\p)</math> for all <math>n\geq 0</math>, then <math>(X_n^*)_{n\geq 0}</math> is a submartingale.
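Indeed, <math>X_n^*\leq \sum_{j=0}^n\vert X_j\vert</math> gives integrability, and since <math>X_{n+1}^*\geq X_n^*</math> with <math>X_n^*</math> being <math>\F_n</math>-measurable, we get
<math display="block">
\E[X_{n+1}^*\mid \F_n]\geq \E[X_n^*\mid \F_n]=X_n^*,
</math>
which is the submartingale property.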
===Maximal inequality and Doob's inequality===
Recall Markov's inequality in terms of <math>(X_n^*)_{n\geq0}</math>, which for <math>\alpha > 0</math> is given by
<math display="block">
\p[X_n^*\geq \alpha]\leq  \frac{\E[X_n^*]}{\alpha},
</math>
with the obvious bound
<math display="block">
\E[X_n^*]\leq  \sum_{j=0}^n\E[\vert X_j\vert].
</math>
We shall see for instance that when <math>(X_n)_{n\geq 0}</math> is a martingale, one can replace <math>\E[X_n^*]</math> by <math>\E[\vert X_n\vert]</math>.
{{proofcard|Proposition|prop1|Let <math>(\Omega,\F,(\F_n)_{n\geq0},\p)</math> be a filtered probability space. Let <math>(X_n)_{n\geq 0}</math> be a submartingale and let <math>\lambda > 0,k\in\N</math>. Define
<math display="block">
\begin{align*}
A&:=\left\{\max_{0\leq  n\leq  k}X_n\geq \lambda\right\}\\
B&:=\left\{\min_{0\leq  n\leq  k}X_n\leq  -\lambda\right\}.
\end{align*}
</math>
Then the following hold.
<ul style{{=}}"list-style-type:lower-roman"><li>
<math display="block">
\lambda\p[A]\leq  \E[X_k\one_A],
</math>
</li>
<li>
<math display="block">
\lambda\p[B]\leq  \E[X_k\one_{B^C}]-\E[X_0].
</math>
</li>
</ul>
{{alert-info |
If <math>(X_n)_{n\geq 0}</math> is a martingale, then <math>(\vert X_n\vert)_{n\geq 0}</math> is a submartingale. Applying <math>(i)</math> to <math>(\vert X_n\vert)_{n\geq 0}</math>, with <math>A=\{X_k^*\geq \lambda\}</math>, we get
<math display="block">
\lambda\p[X^*_k\geq \lambda]\leq  \E[\vert X_k\vert\one_A]\leq  \E[\vert X_k\vert]
</math>
and hence
<math display="block">
\p[X_k^*\geq \lambda]\leq  \frac{\E[\vert X_k\vert]}{\lambda}.
</math>
}}
|We need to show both points.
<ul style{{=}}"list-style-type:lower-roman"><li>Let us introduce
<math display="block">
T=\inf\{n\in\N\mid X_n\geq \lambda\}\land k.
</math>
Then <math>T</math> is a stopping time bounded by <math>k</math>, so the optional stopping theorem for submartingales gives
<math display="block">
\E[X_T]\leq  \E[X_k].
</math>
We note that <math>X_T=X_k</math> on <math>A^C</math>, since there the level <math>\lambda</math> is never reached before time <math>k</math> and thus <math>T=k</math>. Hence we get
<math display="block">
\E[X_T]=\E[X_T\one_A]+\E[X_k\one_{A^C}]\leq  \E[X_k]=\E[X_k\one_A]+\E[X_k\one_{A^C}],
</math>
and subtracting <math>\E[X_k\one_{A^C}]</math> from both sides gives <math>\E[X_T\one_A]\leq \E[X_k\one_A]</math>. Moreover, on <math>A</math> we have <math>X_T\geq \lambda</math> by definition of <math>T</math>, so
<math display="block">
\E[X_T\one_A]\geq \lambda \E[\one_A]=\lambda \p[A].
</math>
Therefore we get
<math display="block">
\lambda\p[A]\leq  \E[X_k\one_A].
</math>
</li>
<li>Let us define
<math display="block">
S=\inf\{n\in\N\mid X_n\leq  -\lambda\}\land k.
</math>
Again <math>S</math> is a stopping time bounded by <math>k</math>, so optional stopping gives
<math display="block">
\E[X_S]\geq \E[X_0].
</math>
Thus, since <math>X_S\leq -\lambda</math> on <math>B</math> and <math>S=k</math> on <math>B^C</math>,
<math display="block">
\E[X_0]\leq  \E[X_S\one_B]+\E[X_S\one_{B^C}]\leq  -\lambda \p[B]+\E[X_k\one_{B^C}].
</math>
Therefore we get
<math display="block">
\lambda\p[B]\leq  \E[X_k\one_{B^C}]-\E[X_0].
</math>
</li>
</ul>}}
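As a quick numerical illustration of <math>(i)</math>, here is a minimal Monte Carlo sketch, assuming numpy; the simple symmetric random walk (a martingale, hence a submartingale) and all parameters are illustrative choices, not part of the text.
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
n_paths, k, lam = 100_000, 100, 10.0

# Simple symmetric random walk started at 0: a martingale, hence a submartingale.
steps = rng.choice([-1.0, 1.0], size=(n_paths, k))
S = np.concatenate([np.zeros((n_paths, 1)), np.cumsum(steps, axis=1)], axis=1)

A = S.max(axis=1) >= lam          # event {max_{0<=n<=k} X_n >= lam}
lhs = lam * A.mean()              # lambda * P[A]
rhs = (S[:, -1] * A).mean()       # E[X_k 1_A]
print(f"{lhs:.4f} <= {rhs:.4f}")  # (i) should hold up to Monte Carlo error
</syntaxhighlight>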
{{proofcard|Proposition (Kolmogorov's inequality)|prop-1|Let <math>(\Omega,\F,(\F_n)_{n\geq0},\p)</math> be a filtered probability space. Let <math>(X_n)_{n\geq 0}</math> be a martingale such that <math>\E[X_n^2] < \infty</math> for all <math>n\geq 0</math>. Then for all <math>\lambda > 0</math> and <math>n\in\N</math>
<math display="block">
\p\left[\max_{0\leq  k\leq  n}\vert X_k\vert\geq \lambda\right]\leq  \frac{\E[X_n^2]}{\lambda^2}.
</math>
|We use the fact that <math>(X_n^2)_{n\geq 0}</math> is a positive submartingale. Since <math>\left\{\max_{0\leq  k\leq  n}\vert X_k\vert\geq \lambda\right\}=\left\{\max_{0\leq  k\leq  n}X_k^2\geq \lambda^2\right\}</math>, part <math>(i)</math> of the proposition above applied to <math>(X_n^2)_{n\geq 0}</math> gives
<math display="block">
\lambda^2\p\left[\max_{0\leq  k\leq  n}\vert X_k\vert\geq \lambda\right]\leq  \E\left[X_n^2\one_{\left\{\max_{0\leq  k\leq  n}X_k^2\geq \lambda^2\right\}}\right]\leq  \E[X_n^2].
</math>}}
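In the same spirit, a minimal simulation sketch of Kolmogorov's inequality for the simple random walk, where <math>\E[X_n^2]=n</math>; again the setup and parameters are only illustrative assumptions.
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)
n_paths, n, lam = 100_000, 200, 25.0

steps = rng.choice([-1.0, 1.0], size=(n_paths, n))
S = np.cumsum(steps, axis=1)                 # X_k for k = 1..n; X_0 = 0

lhs = (np.abs(S).max(axis=1) >= lam).mean()  # P[max_{0<=k<=n} |X_k| >= lam]
rhs = (S[:, -1] ** 2).mean() / lam**2        # E[X_n^2] / lam^2 (exactly n/lam^2 here)
print(f"{lhs:.4f} <= {rhs:.4f}")
</syntaxhighlight>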
{{proofcard|Theorem (Maximal inequality)|thm-1|Let <math>(\Omega,\F,(\F_n)_{n\geq0},\p)</math> be a filtered probability space. Let <math>(X_n)_{n\geq 0}</math> be a submartingale. Then for all <math>\lambda\geq 0</math> and <math>n\in\N</math>, we get
<math display="block">
\lambda\p\left[\max_{0\leq  k\leq  n}\vert X_k\vert\geq \lambda\right]\leq  \E[\vert X_0\vert]+2\E[\vert X_n\vert].
</math>
|Let <math>A</math> and <math>B</math> be defined as in the first proposition above, with <math>k=n</math>. Since <math>\left\{\max_{0\leq  k\leq  n}\vert X_k\vert\geq \lambda\right\}=A\cup B</math>, we get
<math display="block">
\begin{align*}
\lambda \p\left[\max_{0\leq  k\leq  n}\vert X_k\vert \geq \lambda\right]&\leq \lambda\p[A]+\lambda\p[B]\leq \E[X_n\one_A]+\E[X_n\one_{B^C}]-\E[X_0]\\
&\leq \E[\vert X_n\vert]+\E[\vert X_n\vert]+\E[\vert X_0\vert]=\E[\vert X_0\vert]+2\E[\vert X_n\vert].
\end{align*}
</math>}}
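The same illustrative random-walk setup can be used to sanity-check the maximal inequality; note that <math>\E[\vert X_0\vert]=0</math> for a walk started at <math>0</math>.
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(2)
n_paths, n, lam = 100_000, 200, 25.0

steps = rng.choice([-1.0, 1.0], size=(n_paths, n))
S = np.cumsum(steps, axis=1)

lhs = lam * (np.abs(S).max(axis=1) >= lam).mean()  # lambda * P[max |X_k| >= lam]
rhs = 2.0 * np.abs(S[:, -1]).mean()                # E[|X_0|] + 2 E[|X_n|], with E[|X_0|] = 0
print(f"{lhs:.3f} <= {rhs:.3f}")
</syntaxhighlight>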
{{proofcard|Theorem (Doob's inequality)|thm2|Let <math>(\Omega,\F,(\F_n)_{n\geq0},\p)</math> be a filtered probability space. Let <math>p > 1</math> and <math>q > 1</math> be such that <math>\frac{1}{p}+\frac{1}{q}=1</math>.
<ul style{{=}}"list-style-type:lower-roman"><li>If <math>(X_n)_{n\geq 0}</math> is a submartingale, then for all <math>n\geq 0</math> we have
<math display="block">
\left\|\max_{0\leq  k\leq  n}X_k^+\right\|_p\leq  q\left\|X_n^+\right\|_p.
</math>
</li>
<li>If <math>(X_n)_{n\geq 0}</math> is a martingale, then for all <math>n\geq 0</math> we have
<math display="block">
\left\|\max_{0\leq  k\leq  n}\vert X_k\vert\right\|_p\leq  q\| X_n\|_p.
</math>
</li>
</ul>
{{alert-info |
Recall that if <math>X\in L^p(\Omega,\F,(\F_n)_{n\geq 0},\p)</math>, then
<math display="block">
\|X\|_p=\E[\vert X\vert^p]^{1/p}.
</math>
Moreover, if <math>p=q=2</math> and <math>(X_n)_{n\geq 0}</math> is a martingale, then for all <math>n\geq 0</math> we have
<math display="block">
\E\left[\max_{0\leq  k\leq  n} X_k^2\right]\leq  4\E[X_n^2].
</math>
In general, we have
<math display="block">
\vert X_n\vert^p\leq  \max_{0\leq  k\leq  n}\vert X_k\vert^p.
</math>
Therefore we get
<math display="block">
\E[\vert X_n\vert^p]\leq  \E\left[\max_{0\leq  k\leq  n}\vert X_k\vert^p\right]\stackrel{\text{Doob}}{\leq} q^p\E[\vert X_n\vert^p].
</math>
We shall also recall that for <math>X\in L^p(\Omega,\F,(\F_n)_{n\geq 0},\p)</math>, we can write
<math display="block">
\E[\vert X\vert^p]=\E\left[\int_0^{\vert X\vert}p\lambda^{p-1}d\lambda\right]=\E\left[\int_0^\infty \one_{\{\vert X\vert\geq \lambda\}}p\lambda^{p-1}d\lambda\right]=\int_0^\infty p\lambda^{p-1}\p[\vert X\vert \geq \lambda]d\lambda
</math>
by using Fubini's theorem.
}}
|It is enough to prove <math>(i)</math>: if <math>(X_n)_{n\geq 0}</math> is a martingale, then <math>(\vert X_n\vert)_{n\geq 0}</math> is a submartingale with <math>\vert X_n\vert^+=\vert X_n\vert</math>, so <math>(ii)</math> follows from <math>(i)</math>. Thus let <math>(X_n)_{n\geq 0}</math> be a submartingale. Then <math>(X_n^+)_{n\geq 0}</math> is a submartingale as well, and part <math>(i)</math> of the first proposition gives
<math display="block">
\lambda\p\left[\max_{0\leq  k\leq  n}X_k^+\geq \lambda\right]\leq  \E\left[X_n^+\one_{\{\max_{0\leq  k\leq  n}X_k^+\geq \lambda\}}\right].
</math>
Now let <math>Y_n:=\max_{0\leq  k\leq  n}X_k^+</math>. Then for any truncation level <math>m > 0</math>, we have
<math display="block">
\begin{align*}
\E[(Y_n\land m)^p]&=\int_0^\infty p\lambda^{p-1}\p[Y_n\land m\geq \lambda]d\lambda=\int_0^m p\lambda^{p-1}\p[Y_n\geq\lambda]d\lambda\\
&\leq  \int_0^m p\lambda^{p-1}\left(\frac{1}{\lambda}\E[X_n^+\one_{\{ Y_n\geq \lambda\}}]\right)d\lambda\\
&=\E\left[\int_0^m p\lambda^{p-2}X_n^+\one_{\{Y_n\geq\lambda\}}d\lambda\right]=\E\left[\int_0^{Y_n\land m}p\lambda^{p-2}X_n^+d\lambda\right]\\
&=\E\left[\frac{p}{p-1}(Y_n\land m)^{p-1}X_n^+\right]\\
&\leq  q\E[(X_n^+)^p]^{1/p}\E[(Y_n\land m)^p]^{1/q},
\end{align*}
</math>
where we have used Fubini's theorem, part <math>(i)</math> of the first proposition applied to the submartingale <math>(X_n^+)_{n\geq 0}</math>, the identity <math>q=\frac{p}{p-1}</math>, and Hölder's inequality. Therefore we obtain
<math display="block">
\E[(Y_n\land m)^p]\leq  q\E[(X_n^+)^p]^{1/p}\E[(Y_n\land m)^p]^{1/q}.
</math>
Since <math>\E[(Y_n\land m)^p]\leq m^p < \infty</math>, we can divide by <math>\E[(Y_n\land m)^p]^{1/q}</math> (the inequality being trivial if this quantity vanishes) and use <math>1-\frac{1}{q}=\frac{1}{p}</math> to get
<math display="block">
\E[(Y_n\land m)^p]^{1/p}\leq  q\E[(X_n^+)^p]^{1/p},
</math>
that is,
<math display="block">
\| Y_n\land m\|_p\leq  q\| X_n^+\|_p.
</math>
Now for <math>m\to\infty</math>, monotone convergence implies that
<math display="block">
\|Y_n\|_p\leq  q\|X_n^+\|_p.
</math>}}
{{proofcard|Corollary|cor-1|Let <math>(\Omega,\F,(\F_n)_{n\geq0},\p)</math> be a filtered probability space. Let <math>(X_n)_{n\geq 0}</math> be a martingale and <math>p > 1</math>, <math>q > 1</math> such that <math>\frac{1}{p}+\frac{1}{q}=1</math>. Then
<math display="block">
\left\| \sup_{n\geq 0}\vert X_n\vert\right\|_p\leq  q\sup_{n\geq 0}\| X_n\|_p.
</math>
|Exercise{{efn|Use Doob's inequality.}}}}
==General references==
{{cite arXiv|last=Moshayedi|first=Nima|year=2020|title=Lectures on Probability Theory|eprint=2010.16280|class=math.PR}}
==Notes==
{{notelist}}
