<div class="d-none"><math>
\newcommand{\R}{\mathbb{R}}
\newcommand{\A}{\mathcal{A}}
\newcommand{\B}{\mathcal{B}}
\newcommand{\N}{\mathbb{N}}
\newcommand{\C}{\mathbb{C}}
\newcommand{\Rbar}{\overline{\mathbb{R}}}
\newcommand{\Bbar}{\overline{\mathcal{B}}}
\newcommand{\Q}{\mathbb{Q}}
\newcommand{\E}{\mathbb{E}}
\newcommand{\p}{\mathbb{P}}
\newcommand{\one}{\mathds{1}}
\newcommand{\0}{\mathcal{O}}
\newcommand{\mat}{\textnormal{Mat}}
\newcommand{\sign}{\textnormal{sign}}
\newcommand{\CP}{\mathcal{P}}
\newcommand{\CT}{\mathcal{T}}
\newcommand{\CY}{\mathcal{Y}}
\newcommand{\F}{\mathcal{F}}
\newcommand{\mathds}{\mathbb}</math></div>
If <math>S\leq  T</math> are two bounded stopping times and <math>(X_n)_{n\geq 0}</math> a martingale, then


<math display="block">
\E[X_T\mid \F_S]=X_S\quad\text{a.s.}
</math>
If <math>(X_n)_{n\geq 0}</math> is an adapted process, which converges a.s. to <math>X_\infty</math>, we can define <math>X_T</math> for all stopping times (finite or not) by
<math display="block">
X_T=\sum_{n=0}^\infty X_n\one_{\{T=n\}}+X_\infty\one_{\{ T=\infty\}}.
</math>
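In words, <math>X_T(\omega)</math> simply evaluates the path at the (possibly infinite) random time <math>T(\omega)</math>, falling back to the a.s. limit <math>X_\infty(\omega)</math> on <math>\{T=\infty\}</math>. The following minimal sketch (not part of the lecture notes; the function name and the sample path are purely illustrative) spells this out for a single sample path:
<syntaxhighlight lang="python">
import math

def stopped_value(path, T, x_infinity):
    """Evaluate X_T = sum_n X_n 1{T=n} + X_infinity 1{T=infinity}
    for one sample path, where path[n] = X_n(omega)."""
    if math.isinf(T):
        return x_infinity      # on {T = infinity}: use the a.s. limit X_infinity
    return path[int(T)]        # on {T = n}: use X_n

# Illustrative path converging to 1, evaluated at T = 2 and at T = infinity.
path = [0.0, 0.5, 0.75, 0.875]
print(stopped_value(path, 2, 1.0))          # 0.75
print(stopped_value(path, math.inf, 1.0))   # 1.0
</syntaxhighlight>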
{{proofcard|Theorem|thm-1|Let <math>(\Omega,\F,(\F_n)_{n\geq0},\p)</math> be a filtered probability space. Let <math>(X_n)_{n\geq 0}</math> be a u.i. martingale. Then for any stopping time <math>T</math>, we have that
<math display="block">
\E[X_\infty\mid \F_T]=X_T\quad\text{a.s.}
</math>
In particular
<math display="block">
\E[X_T]=\E[X_\infty]=\E[X_n]
</math>
for all <math>n\geq 0</math>. If <math>S</math> and <math>T</math> are two stopping times, such that <math>S\leq  T</math>, then
<math display="block">
\E[X_T\mid \F_S]=X_S\quad\text{a.s.}
</math>
|We first check that <math>X_T</math> is in <math>L^1(\Omega,\F,(\F_n)_{n\geq 0},\p)</math>. Since <math>X_n=\E[X_\infty\mid \F_n]</math> and hence <math>\vert X_n\vert\leq \E[\vert X_\infty\vert\mid \F_n]</math>, we have
<math display="block">
\begin{align*}
\E[\vert X_T\vert]&=\sum_{n=0}^\infty \E[\vert X_n\vert\one_{\{ T=n\}}]+\E[\vert X_\infty\vert\one_{\{ T=\infty\}}]\leq  \sum_{n=0}^\infty \E[\E[\vert X_\infty\vert\mid \F_n]\one_{\{T=n\}}]+\E[\vert X_\infty\vert \one_{\{ T=\infty\}}]\\
&=\sum_{n=0}^\infty\E[\E[\vert X_\infty\vert \one_{\{ T=n\}}\mid \F_n]]+\E[\vert X_\infty\vert\one_{\{ T=\infty\}}]=\sum_{n=0}^\infty \E[\vert X_\infty\vert \one_{\{ T=n\}}]+\E[\vert X_\infty\vert \one_{\{T=\infty\}}]\\
&=\E[\vert X_\infty\vert] < \infty
\end{align*}
</math>
Now let <math>A\in\F_T</math>. Then
<math display="block">
\begin{align*}
\E[X_T\one_A]&=\sum_{n\in\N\cup\{\infty\}}\E[X_T\one_{A\cap\{T=n\}}]=\sum_{n\in\N\cup\{\infty\}}\E[X_n\one_{A\cap\{T=n\}}]=\sum_{n\in\N\cup\{\infty\}}\E[\E[X_\infty\mid \F_n]\one_{A\cap\{ T=n\}}]\\
&=\sum_{n\in\N\cup\{\infty\}}\E[X_\infty\one_{A\cap\{ T=n\}}]=\E[X_\infty\one_A]
\end{align*}
</math>
where we have used that <math>X_\infty\in L^1(\Omega,\F_\infty,(\F_n)_{n\geq 0},\p)</math> and Fubini (to interchange sum and expectation) for the first and last equality. Now, since <math>X_T</math> is <math>\F_T</math>-measurable, we get that <math>\E[X_\infty\mid \F_T]=X_T</math> a.s. Finally, for <math>S\leq  T</math>, we have <math>\F_S\subset\F_T</math> and thus
<math display="block">
X_S=\E[X_\infty\mid \F_S]=\E[\E[X_\infty\mid \F_T]\mid \F_S]=\E[X_T\mid \F_S].
</math>}}
{{alert-info |
If <math>(X_n)_{n\geq 0}</math> is a u.i. martingale, then the family
<math display="block">
\{X_T\mid \text{$T$ a stopping time}\}
</math>
is u.i. Indeed, we note that
<math display="block">
\{X_T\mid \text{$T$ a stopping time}\}=\{\E[X_\infty\mid \F_T]\mid \text{$T$ a stopping time}\}\subset \{\E[X_\infty\mid \mathcal{G}]\mid \text{$\mathcal{G}$ a $\sigma$-algebra, $\mathcal{G}\subset\F$}\},
</math>
where the superset is u.i. If <math>N\in \N</math>, then <math>(X_{n\land N})_{n\geq 0}</math> is u.i. Indeed, if <math>Y_n=X_{n\land N}</math>, then <math>(Y_n)_{n\geq 0}</math> is a martingale and
<math display="block">
Y_n=\E[X_N\mid \F_n]=\E[Y_\infty\mid \F_n].
</math>
}}
'''Example''' (Another random walk). Consider a simple random walk with <math>X_0=k\geq 0</math>. Let <math>m\geq 1</math>, <math>0\leq  k\leq  m</math> and
<math display="block">
T=\inf\{n\geq 1\mid \text{$X_n=0$ or $X_n=m$}\}
</math>
with <math>X_n=k+Y_1+\dotsm +Y_n</math>, where <math>(Y_n)_{n\geq 1}</math> are iid and <math>\p[Y_n=\pm 1]=\frac{1}{2}</math>. We have already seen that <math>T < \infty</math> a.s. Now let <math>Z_n=X_{n\land T}</math>. Then <math>(Z_n)_{n\geq 0}</math> is a martingale and <math>0\leq  Z_n\leq  m</math>. Therefore, <math>(Z_n)_{n\geq 0}</math> is u.i. and hence <math>Z_n</math> converges a.s. and in <math>L^1</math> to <math>Z_\infty</math> with
<math display="block">
\E[Z_\infty]=\E[Z_0]=k.
</math>
Moreover,
<math display="block">
\E[Z_\infty]=\E[X_T]=\E[m\one_{\{ X_T=m\}}]+\E[0\one_{\{X_T=0\}}]=m\p[X_T=m],
</math>
which implies that <math>\p[X_T=m]=\frac{k}{m}</math> and thus <math>\p[X_T=0]=1-\p[X_T=m]=\frac{m-k}{m}</math>.
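The identity <math>\p[X_T=m]=\frac{k}{m}</math> is easy to check by simulation. The following Monte Carlo sketch is only an illustration and is not part of the lecture notes; the function name and the chosen parameters are hypothetical:
<syntaxhighlight lang="python">
import random

def hit_prob_symmetric(k, m, trials=100_000, seed=0):
    """Estimate P[X_T = m] for the symmetric simple random walk started at k,
    where T = inf{n >= 1 : X_n = 0 or X_n = m}."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        x = k
        while 0 < x < m:
            x += 1 if rng.random() < 0.5 else -1
        hits += (x == m)
    return hits / trials

# The optional stopping argument predicts k/m, e.g. 3/10 = 0.3 here.
print(hit_prob_symmetric(3, 10))   # approximately 0.30
</syntaxhighlight>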
Now let us assume that <math>\p[Y_n=1]=p</math> and <math>\p[Y_n=-1]=1-p=q</math> for <math>p\in (0,1)</math> and <math>p\not=\frac{1}{2}</math>. Let us consider
<math display="block">
Z_n=\left(\frac{q}{p}\right)^{X_n}.
</math>
Then <math>(Z_n)_{n\geq 0}</math> is a martingale. Indeed, by definition <math>Z_n</math> is adapted and <math>Z_n\in L^1(\Omega,\F,(\F_n)_{n\geq 0},\p)</math> because
<math display="block">
k-n\leq  X_n\leq  k+n
</math>
and thus <math>Z_n</math> is bounded. Moreover,
<math display="block">
\E[Z_{n+1}\mid \F_n]=\E\left[\left(\frac{q}{p}\right)^{X_n}\left(\frac{q}{p}\right)^{Y_{n+1}}\mid \F_n\right]=Z_n\E\left[\left(\frac{q}{p}\right)^{Y_{n+1}}\mid \F_n\right]=Z_n\E\left[\left(\frac{q}{p}\right)^{Y_{n+1}}\right]=Z_n\left(\frac{q}{p}\,p+\frac{p}{q}\,q\right)=Z_n(q+p)=Z_n,
</math>
and therefore <math>(Z_n)_{n\geq 0}</math> is a martingale. Now <math>(Z_{n\land T})_{n\geq 0}</math> is bounded and hence u.i., which implies that <math>(Z_{n\land T})_{n\geq 0}</math> converges a.s. and in <math>L^1</math>. We also have that
<math display="block">
\E[Z_T]=\E[Z_0]=\left(\frac{q}{p}\right)^k.
</math>
On the other hand we have
<math display="block">
\E[Z_T]=\left(\frac{q}{p}\right)^m\p[X_T=m]+(1-\p[X_T=m]).
</math>
Hence we get
<math display="block">
\p[X_T=m]=\frac{\left(\frac{q}{p}\right)^k-1}{\left(\frac{q}{p}\right)^m-1}.
</math>
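This formula can likewise be compared against a simulation of the biased walk. The sketch below is illustrative only (the names and parameters are chosen here, not taken from the notes) and prints the Monte Carlo estimate next to the exact expression <math>\frac{(q/p)^k-1}{(q/p)^m-1}</math>:
<syntaxhighlight lang="python">
import random

def hit_prob_biased(k, m, p, trials=100_000, seed=0):
    """Estimate P[X_T = m] for the biased walk with P[Y = +1] = p, started at k."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        x = k
        while 0 < x < m:
            x += 1 if rng.random() < p else -1
        hits += (x == m)
    return hits / trials

def hit_prob_formula(k, m, p):
    """Exact value ((q/p)^k - 1) / ((q/p)^m - 1) with q = 1 - p and p != 1/2."""
    r = (1 - p) / p
    return (r ** k - 1) / (r ** m - 1)

# Example: p = 0.6, k = 3, m = 10; both numbers should be close to 0.716.
print(hit_prob_biased(3, 10, 0.6), hit_prob_formula(3, 10, 0.6))
</syntaxhighlight>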
{{proofcard|Theorem|thm-2|Let <math>(\Omega,\F,(\F_n)_{n\geq0},\p)</math> be a filtered probability space. Let <math>(X_n)_{n\geq 0}</math> be a supermartingale. Assume that one of the following two conditions is satisfied
<ul style{{=}}"list-style-type:lower-roman"><li><math>X_n\geq 0</math> for all <math>n\geq 0</math>.
</li>
<li><math>(X_n)_{n\geq 0}</math> is u.i.
</li>
</ul>
Then for every stopping time <math>T</math> (finite or not) we get that <math>X_T\in L^1(\Omega,\F,(\F_n)_{n\geq 0},\p)</math>. Moreover, if <math>S</math> and <math>T</math> are two stopping times such that <math>S\leq  T</math>, then, in case <math>(i)</math> and <math>(ii)</math> respectively,
<ul style{{=}}"list-style-type:lower-roman"><li><math>\one_{\{S < \infty\}}X_S\geq \E[X_T\one_{\{ T < \infty\}}\mid \F_S]</math> a.s.
</li>
<li><math>X_S\geq \E[X_T\mid \F_S]</math> a.s.
</li>
</ul>
|We first deal with the case <math>(i)</math>. We have already seen that if <math>T</math> is a bounded stopping time, we have
<math display="block">
\E[X_T]\leq  \E[X_0].
</math>
With Fatou we get
<math display="block">
\E\left[\liminf_{n\to\infty}X_{n\land T}\right]\leq \liminf_{n\to\infty}\E[X_{n\land T}]\leq  \E[X_0],
</math>
which implies that <math>X_T\in L^1(\Omega,\F,(\F_n)_{n\geq 0},\p)</math>, since <math>\liminf_{n\to\infty}X_{n\land T}=X_T</math> a.s. (on <math>\{T=\infty\}</math> this uses the a.s. convergence of the nonnegative supermartingale <math>(X_n)_{n\geq 0}</math> to <math>X_\infty</math>). Now let <math>S\leq  T</math> be two stopping times. First assume that <math>S\leq  T\leq  N\in\N</math>. Then we know that
<math display="block">
\E[X_S]\geq \E[X_T].
</math>
Now let <math>A\in\F_S</math> and consider
<math display="block">
\begin{align*}
S^A(\omega)&=\begin{cases}S(\omega)&\omega\in A\\ N&\omega\not\in A\end{cases}\\
T^A(\omega)&=\begin{cases}T(\omega)&\omega\in A\\ N&\omega\not\in A\end{cases}
\end{align*}
</math>
These are stopping times: for <math>n < N</math> we have <math>\{S^A\leq  n\}=A\cap\{S\leq  n\}\in\F_n</math> since <math>A\in\F_S</math>, and <math>\{S^A\leq  n\}=\Omega\in\F_n</math> for <math>n\geq N</math>; the same argument works for <math>T^A</math>, using <math>A\in\F_S\subset\F_T</math>. Moreover,
<math display="block">
S^A\leq  T^A\leq  N,
</math>
thus <math>\E[X_{S^A}]\geq \E[X_{T^A}]</math>, i.e. <math>\E[X_S\one_A]+\E[X_N\one_{A^c}]\geq \E[X_T\one_A]+\E[X_N\one_{A^c}]</math>, and therefore <math>\E[X_S\one_A]\geq \E[X_T\one_A]</math> for all <math>A\in \F_S</math>.
Let us now go back to the general case <math>S\leq  T</math> and let <math>B\in \F_S</math>. We apply the above to the bounded stopping times <math>S\land k</math> and <math>T\land k</math> and to <math>A=B\cap\{S\leq  k\}\in\F_{S\land k}</math>. Then we get
<math display="block">
\E[X_{S\land k}\one_{B\cap \{ S\leq  k\}}]\geq \E[X_{T\land k}\one_{B\cap\{S\leq  k\}}]\geq \E[X_{T\land k}\one_{B\cap\{T\leq  k\}}].
</math>
where the second inequality uses <math>X_{T\land k}\geq 0</math> and <math>\{T\leq  k\}\subset\{S\leq  k\}</math>. Since <math>X_{S\land k}=X_S</math> on <math>\{S\leq  k\}</math> and <math>X_{T\land k}=X_T</math> on <math>\{T\leq  k\}</math>, this gives
<math display="block">
\E[X_S\one_B\one_{\{S\leq  k\}}]\geq \E[X_T\one_B\one_{\{ T\leq  k\}}].
</math>
Letting <math>k\to\infty</math>, dominated convergence (with dominating functions <math>X_S</math> and <math>X_T</math>, which we have shown to be in <math>L^1</math>) yields
<math display="block">
\E[X_S\one_B\one_{\{ S < \infty\}}]\geq \E[X_T\one_B\one_{\{T < \infty\}}].
</math>
Now let <math>\tilde X_S=\one_{\{S < \infty\}}X_S</math> and <math>\tilde X_T=\one_{\{ T < \infty\}}X_T</math>. Then for any <math>B\in \F_S</math>, we get
<math display="block">
\E[\tilde X_S\one_B]\geq \E[\tilde X_T\one_B]=\E[\one_B\E[\tilde X_T\mid \F_S]].
</math>
Since this inequality holds for all <math>B\in \F_S</math> and both <math>\tilde X_S</math> and <math>\E[\tilde X_T\mid \F_S]</math> are <math>\F_S</math>-measurable, we can conclude that
<math display="block">
\tilde X_S\geq \E[\tilde X_T\mid \F_S].
</math>
This is precisely the claim in case <math>(i)</math>. Now let us prove <math>(ii)</math>. We know from previous results that in this case <math>X_n</math> converges a.s. and in <math>L^1</math> to <math>X_\infty</math>. The supermartingale property gives <math>X_n\geq\E[X_m\mid \F_n]</math> for all <math>m\geq n</math>, and the <math>L^1</math>-convergence as <math>m\to\infty</math> gives
<math display="block">
X_n\geq \E[X_\infty\mid \F_n].
</math>
Moreover, the u.i. martingale <math>Z_n:=\E[X_\infty\mid \F_n]</math> converges a.s. to <math>X_\infty</math>. Set <math>Y_n:=X_n-Z_n</math>. Then <math>(Y_n)_{n\geq 0}</math> is a nonnegative supermartingale and hence it converges a.s. to <math>X_\infty-Z_\infty=0</math>. We now apply case <math>(i)</math> to <math>(Y_n)_{n\geq 0}</math> to deduce that <math>X_T=Y_T+Z_T\in L^1(\Omega,\F,(\F_n)_{n\geq 0},\p)</math> and <math>Y_S\geq \E[Y_T\mid \F_S]</math>{{efn|We use the fact that <math>Y_S\one_{\{S=\infty\}}=0</math> and <math>Y_T\one_{\{T=\infty\}}=0</math>, since <math>Y_\infty=0</math>.}}. The first theorem, applied to the u.i. martingale <math>(Z_n)_{n\geq 0}</math>, gives
<math display="block">
Z_S=\E[Z_T\mid \F_S].
</math>
This implies that
<math display="block">
Y_S+Z_S\geq \E[Z_T+Y_T\mid \F_S]
</math>
and thus
<math display="block">
X_S\geq \E[X_T\mid \F_S].
</math>}}
==General references==
{{cite arXiv|last=Moshayedi|first=Nima|year=2020|title=Lectures on Probability Theory|eprint=2010.16280|class=math.PR}}
==Notes==
{{notelist}}
