<div class="d-none"><math>
\newcommand{\R}{\mathbb{R}}
\newcommand{\A}{\mathcal{A}}
\newcommand{\B}{\mathcal{B}}
\newcommand{\N}{\mathbb{N}}
\newcommand{\C}{\mathbb{C}}
\newcommand{\Rbar}{\overline{\mathbb{R}}}
\newcommand{\Bbar}{\overline{\mathcal{B}}}
\newcommand{\Q}{\mathbb{Q}}
\newcommand{\E}{\mathbb{E}}
\newcommand{\p}{\mathbb{P}}
\newcommand{\one}{\mathds{1}}
\newcommand{\0}{\mathcal{O}}
\newcommand{\mat}{\textnormal{Mat}}
\newcommand{\sign}{\textnormal{sign}}
\newcommand{\CP}{\mathcal{P}}
\newcommand{\CT}{\mathcal{T}}
\newcommand{\CY}{\mathcal{Y}}
\newcommand{\F}{\mathcal{F}}
\newcommand{\mathds}{\mathbb}</math></div>
{{definitioncard|Invariant measure|
Let <math>\mu</math> be a positive measure on <math>E</math> such that <math>\mu(x) < \infty</math> for all <math>x\in E</math> and <math>\mu\not\equiv 0</math>. We say that <math>\mu</math> is invariant for the transition matrix <math>Q</math> if for all <math>y\in E</math> we get


<math display="block">
\mu(y)=\sum_{x\in E}\mu(x)Q(x,y).
</math>
With matrix notation, this means
<math display="block">
\mu Q=\mu.
</math>
Since <math>Q_n=Q^n</math> for all <math>n\in\N</math>, it follows that <math>\mu Q_n=\mu</math> for all <math>n\in\N</math>.
}}
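As a numerical sanity check of this definition, one can compute an invariant measure as a left eigenvector of <math>Q</math> for the eigenvalue <math>1</math> and verify that <math>\mu Q=\mu</math> and <math>\mu Q_n=\mu</math>. The three-state chain below is an arbitrary illustrative choice, not one from the text.

```python
import numpy as np

# An arbitrary transition matrix Q on E = {0, 1, 2} (rows sum to 1).
Q = np.array([
    [0.5,  0.5,  0.0],
    [0.25, 0.5,  0.25],
    [0.0,  0.5,  0.5],
])

# mu Q = mu means mu is a left eigenvector of Q with eigenvalue 1,
# i.e. a (right) eigenvector of Q transposed.
eigvals, eigvecs = np.linalg.eig(Q.T)
idx = np.argmin(np.abs(eigvals - 1.0))
mu = np.real(eigvecs[:, idx])
mu = mu / mu.sum()          # normalize to a probability measure

assert np.allclose(mu @ Q, mu)                              # mu Q = mu
assert np.allclose(mu @ np.linalg.matrix_power(Q, 5), mu)   # mu Q_n = mu
```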
===Interpretation===
Assume that <math>\mu(E) < \infty</math>, which is always the case when <math>E</math> is finite. Normalizing, we may assume that <math>\mu(E)=1</math>, i.e. that <math>\mu</math> is an invariant probability measure. Then for all measurable maps <math>f:E\to\R_+</math>, we get that
<math display="block">
\E_\mu[f(X_1)]=\sum_{x\in E}\mu(x)\sum_{y\in E}Q(x,y)f(y)=\sum_{y\in E}f(y)\sum_{x\in E}\mu(x)Q(x,y)=\sum_{y\in E}\mu(y)f(y).
</math>
Hence under <math>\p_\mu</math>, <math>X_1</math> has the same law as <math>X_0\sim \mu</math>. Using the fact that <math>\mu Q_n=\mu</math>, one shows in the same way that under <math>\p_\mu</math> we have <math>X_n\sim \mu</math> for all <math>n\geq 0</math>. For all measurable maps <math>F:\Omega\to\R_+</math> we have
<math display="block">
\E_\mu[F\circ \Theta_1]=\E_\mu[\E_{X_1}[F]]=\sum_{x\in E}\mu(x)\E_x[F]=\E_\mu[F],
</math>
which implies that under <math>\p_\mu</math>, we get that <math>(X_{1+n})_{n\geq 0}</math> has the same law{{efn|Same holds for <math>(X_{k+n})_{n\geq 0}</math> for all <math>k\geq 0</math>}} as <math>(X_n)_{n\geq 0}</math>.
'''Example'''
For the random walk on <math>\mathbb{Z}^d</math> with step distribution <math>\mathcal{G}</math>, we get <math>Q(x,y)=\mathcal{G}(y-x)</math>. One then immediately checks that the counting measure on <math>\mathbb{Z}^d</math> is invariant.
{{definitioncard|Reversible measure|
Let <math>\mu</math> be a positive measure on <math>E</math> such that <math>\mu(x) < \infty</math> for all <math>x\in E</math>. <math>\mu</math> is said to be reversible for <math>Q</math> if for all <math>x,y\in E</math> we get that
<math display="block">
\mu(x)Q(x,y)=\mu(y)Q(y,x).
</math>
}}
{{proofcard|Proposition|prop-1|A reversible measure is invariant.
|If <math>\mu</math> is reversible, we get that
<math display="block">
\sum_{x\in E}\mu(x)Q(x,y)=\sum_{x\in E}\mu(y)Q(y,x)=\mu(y).
</math>}}
{{alert-info |
There exist invariant measures which are not reversible. For example, the counting measure on <math>\mathbb{Z}^d</math> is invariant but not reversible for the random walk with step distribution <math>\mathcal{G}</math> whenever <math>\mathcal{G}</math> is not symmetric, i.e. whenever <math>\mathcal{G}(x)\not=\mathcal{G}(-x)</math> for some <math>x</math>.
}}
'''Example'''
[Random walk on a graph]
Consider a graph with edge set <math>A</math>, write <math>A_x</math> for the set of neighbors of <math>x</math>, and let <math>Q(x,y)=\frac{1}{\vert A_x\vert}</math> whenever <math>\{x,y\}\in A</math>. The measure <math>\mu(x)=\vert A_x\vert</math> (the degree of <math>x</math>) is reversible. Indeed, if <math>\{x,y\}\in A</math>, we get
<math display="block">
\mu(x)Q(x,y)=\vert A_x\vert\frac{1}{\vert A_x\vert}=1=\mu(y)Q(y,x).
</math>
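This computation is easy to check numerically. The small graph below is an arbitrary choice; the check confirms both detailed balance <math>\mu(x)Q(x,y)=\mu(y)Q(y,x)</math> and the resulting invariance <math>\mu Q=\mu</math>.

```python
import numpy as np

# An arbitrary small undirected graph given by its edge set A;
# A_x denotes the set of neighbours of x.
edges = [(0, 1), (1, 2), (2, 0), (2, 3)]
n = 4
neighbours = {x: set() for x in range(n)}
for x, y in edges:
    neighbours[x].add(y)
    neighbours[y].add(x)

# Random walk on the graph: Q(x, y) = 1/|A_x| whenever {x, y} is an edge.
Q = np.zeros((n, n))
for x in range(n):
    for y in neighbours[x]:
        Q[x, y] = 1.0 / len(neighbours[x])

mu = np.array([len(neighbours[x]) for x in range(n)], dtype=float)  # mu(x) = |A_x|

# Detailed balance: mu(x) Q(x, y) = mu(y) Q(y, x) (both equal 1 on edges).
assert np.allclose(mu[:, None] * Q, (mu[:, None] * Q).T)
# Reversible implies invariant: mu Q = mu.
assert np.allclose(mu @ Q, mu)
```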
'''Example'''
[Ehrenfest's model]
This is the Markov chain on <math>\{0,1,...,k\}</math> with transition matrix
<math display="block">
\begin{cases}Q(j,j+1)=\frac{k-j}{k},& 0\leq  j\leq  k-1\\ Q(j,j-1)=\frac{j}{k},&1\leq  j\leq  k\end{cases}
</math>
A measure <math>\mu</math> is reversible if and only if for <math>0\leq  j\leq  k-1</math> we have
<math display="block">
\mu(j)\frac{k-j}{k}=\mu(j+1)\frac{j+1}{k}.
</math>
One can check that <math>\mu(j)=\binom{k}{j}</math> is a solution.
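One can also verify this solution numerically for a small number of balls (the value <math>k=6</math> below is an arbitrary choice): the binomial weights satisfy detailed balance, and hence invariance.

```python
import math
import numpy as np

k = 6                       # number of balls; illustrative value
n = k + 1                   # the chain lives on {0, 1, ..., k}

# Ehrenfest transition matrix.
Q = np.zeros((n, n))
for j in range(n):
    if j <= k - 1:
        Q[j, j + 1] = (k - j) / k
    if j >= 1:
        Q[j, j - 1] = j / k

mu = np.array([math.comb(k, j) for j in range(n)], dtype=float)

# Detailed balance mu(j) Q(j, j+1) = mu(j+1) Q(j+1, j) for 0 <= j <= k-1 ...
for j in range(k):
    assert math.isclose(mu[j] * Q[j, j + 1], mu[j + 1] * Q[j + 1, j])
# ... hence mu is invariant: mu Q = mu.
assert np.allclose(mu @ Q, mu)
```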
{{proofcard|Theorem|thm-1|Let <math>x\in E</math> be recurrent. The formula
<math display="block">
\mu(y)=\E_x\left[\sum_{k=0}^{H_x-1}\one_{\{ X_k=y\}}\right]
</math>
defines an invariant measure. Moreover, <math>\mu(y) > 0</math> if and only if <math>y</math> is in the same recurrence class as <math>x</math>.
|Let us first note that if <math>y</math> is not in the same recurrence class as <math>x</math>, then
<math display="block">
\E_x[N_y]=U(x,y)=0,
</math>
which implies that <math>\mu(y)=0</math>. For <math>y\in E</math>, using that <math>X_0=X_{H_x}=x</math> holds <math>\p_x</math>-a.s. for the first equality and the Markov property for the fourth, we get that
<math display="block">
\begin{multline*}
\mu(y)=\E_x\left[\sum_{k=1}^{H_x}\one_{\{ X_k=y\}}\right]=\sum_{z\in E}\E_x\left[\sum_{k=1}^{H_x}\one_{\{ X_{k-1}=z,X_k=y\}}\right]=\sum_{z\in E}\sum_{k\geq 1}\E_x\left[\one_{\{k\leq  H_x,X_{k-1}=z\}}\one_{\{ X_k=y\}}\right]\\=\sum_{z\in E}\sum_{k\geq 1}\E_x\left[\one_{\{k\leq  H_x,X_{k-1}=z\}}\right]Q(z,y)=\sum_{z\in E}\E_x\left[\sum_{k=1}^{H_x}\one_{\{X_{k-1}=z\}}\right]Q(z,y)=\sum_{z\in E}\mu(z)Q(z,y).
\end{multline*}
</math>
We showed that <math>\mu Q=\mu</math>. Thus, it follows that <math>\mu Q_n=\mu</math> for all <math>n\geq 0</math>. In particular, we get
<math display="block">
\mu(x)=1=\sum_{z\in E}\mu(z)Q_n(z,x).
</math>
Let <math>y</math> be in the same recurrence class as <math>x</math>. Then there exists some <math>n\geq 0</math> such that <math>Q_n(y,x) > 0</math>, and since <math>1=\mu(x)\geq \mu(y)Q_n(y,x)</math>, this implies that
<math display="block">
\mu(y)\leq \frac{1}{Q_n(y,x)} < \infty.
</math>
We can also find <math>m\geq 0</math> such that <math>Q_m(x,y) > 0</math> and
<math display="block">
\mu(y)=\sum_{z\in E}\mu(z)Q_m(z,y)\geq \mu(x)Q_m(x,y)=Q_m(x,y) > 0.
</math>}}
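The excursion formula of the theorem lends itself to a Monte Carlo check: averaging occupation counts over excursions from <math>x</math> back to <math>x</math> should produce an approximately invariant measure with <math>\mu(x)=1</math>. A minimal sketch, using an arbitrary three-state chain (not one from the text):

```python
import numpy as np

rng = np.random.default_rng(0)

# An arbitrary irreducible (hence recurrent) three-state chain.
Q = np.array([
    [0.5,  0.5,  0.0],
    [0.25, 0.5,  0.25],
    [0.0,  0.5,  0.5],
])
x = 0  # reference recurrent state

# Monte Carlo estimate of mu(y) = E_x[ sum_{k=0}^{H_x - 1} 1{X_k = y} ]:
# average the occupation counts over excursions from x back to x.
n_excursions = 20_000
counts = np.zeros(3)
for _ in range(n_excursions):
    counts[x] += 1                        # k = 0 term: X_0 = x
    state = rng.choice(3, p=Q[x])
    while state != x:                     # run until the return time H_x
        counts[state] += 1
        state = rng.choice(3, p=Q[state])
mu_hat = counts / n_excursions

# mu_hat should be approximately invariant, with mu_hat(x) = 1 exactly.
assert np.isclose(mu_hat[x], 1.0)
assert np.allclose(mu_hat @ Q, mu_hat, atol=0.1)
```

For this chain the exact excursion measure is <math>(1,2,1)</math> up to simulation noise, in line with the uniqueness statement of the next theorem.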
{{alert-info |
If there exist several recurrence classes <math>R_i</math>, <math>i\in I</math>, for some index set <math>I</math>, and if we pick <math>x_i\in R_i</math> for each <math>i</math> and set
<math display="block">
\mu_i(y)=\E_{x_i}\left[\sum_{k=0}^{H_{x_i}-1}\one_{\{ X_k=y\}}\right],
</math>
we obtain invariant measures <math>\mu_i</math> with pairwise disjoint supports.
}}
{{proofcard|Theorem|thm-2|Let us assume that the Markov chain is irreducible and recurrent. Then the invariant measure is unique up to a multiplicative constant.
|Let <math>\mu</math> be an invariant measure. We can show by induction that for <math>p\geq 0</math> and for all <math>x,y\in E</math>, we get
<math display="block">
\mu(y)\geq \mu(x)\E_x\left[\sum_{k=0}^{p\land (H_x-1)}\one_{\{ X_k=y\}}\right].\qquad(\star)
</math>
First, if <math>x=y</math>, this is obvious. Let us thus suppose that <math>x\not=y</math>. If <math>p=0</math>, the inequality is immediate. Let us assume <math>(\star)</math> holds for <math>p</math>. Then
<math display="block">
\begin{align*}
\mu(y)&=\sum_{z\in E}\mu(z)Q(z,y)\geq \mu(x)\sum_{z\in E}\E_x\left[\sum_{k=0}^{p\land (H_x-1)}\one_{\{ X_k=z\}}\right]Q(z,y)\\
&=\mu(x)\sum_{z\in E}\sum_{k=0}^p\E_x[\one_{\{ X_k=z,k\leq  H_x-1\}}]Q(z,y)\\
&=\mu(x)\sum_{z\in E}\sum_{k=0}^p
\E_x[\one_{\{ X_k=z,k\leq  H_x-1\}}\one_{\{ X_{k+1}=y\}}]\\
&=\mu(x)\E_x\left[\sum_{k=0}^{p\land (H_x-1)}\one_{\{ X_{k+1}=y\}}\right]=\mu(x)\E_x\left[\sum_{k=1}^{(p+1)\land H_x}\one_{\{ X_k=y\}}\right].
\end{align*}
</math>
Since <math>X_0=x\not=y</math> and <math>X_{H_x}=x\not=y</math> under <math>\p_x</math>, the last expectation equals <math>\E_x\left[\sum_{k=0}^{(p+1)\land (H_x-1)}\one_{\{ X_k=y\}}\right]</math>, which establishes <math>(\star)</math> for <math>p+1</math>. Now, letting <math>p\to\infty</math> in <math>(\star)</math>, monotone convergence gives
<math display="block">
\mu(y)\geq \mu(x)\E_x\left[\sum_{k=0}^{H_x-1}\one_{\{X_k=y\}}\right].
</math>
Let us fix <math>x\in E</math>. The measure <math>\nu_x(y)=\E_x\left[\sum_{k=0}^{H_x-1}\one_{\{ X_k=y\}}\right]</math> is invariant and we have <math>\mu(y)\geq \mu(x)\nu_x(y)</math>. Hence for all <math>n\geq 1</math> we get
<math display="block">
\mu(x)=\sum_{z\in E}\mu(z)Q_n(z,x)\geq \sum_{z\in E}\mu(x)\nu_x(z)Q_n(z,x)=\mu(x)\nu_x(x)=\mu(x).
</math>
Hence the inequality above is an equality, and therefore <math>\mu(z)=\mu(x)\nu_x(z)</math> for all <math>z</math> such that <math>Q_n(z,x) > 0</math>. Since the chain is irreducible, for every <math>z\in E</math> there exists some <math>n\geq 0</math> such that <math>Q_n(z,x) > 0</math>. This finally implies that for all <math>z\in E</math>
<math display="block">
\mu(z)=\mu(x)\nu_x(z).
</math>}}
{{proofcard|Corollary|cor2|Let us assume the chain is irreducible and recurrent. Then exactly one of the following holds.
<ul style{{=}}"list-style-type:lower-roman"><li>There exists an invariant probability measure <math>\mu</math> and for <math>x\in E</math> we have
<math display="block">
\E_x[H_x]=\frac{1}{\mu(x)}.
</math>
</li>
<li>
All invariant measures have infinite total mass and for <math>x\in E</math> we get
<math display="block">
\E_x[H_x]=\infty.
</math>
</li>
</ul>
In the first case, the chain is said to be positive recurrent and in the second case it is said to be null recurrent.
{{alert-info |
If <math>E</math> is finite, then only the first case can occur.
}}
|We know that in this situation all invariant measures are proportional. Hence either they all have finite total mass or they all have infinite total mass. For case <math>(i)</math>, let <math>\mu</math> be the invariant probability measure and let <math>x\in E</math>. Moreover, let
<math display="block">
\nu_x(y)=\E_x\left[\sum_{k=0}^{H_x-1}\one_{\{ X_k=y\}}\right].
</math>
Then for some <math>C > 0</math> we get <math>\mu=C\nu_x</math>. We can determine <math>C</math> by
<math display="block">
1=\mu(E)=C\nu_x(E),
</math>
which implies that <math>C=\frac{1}{\nu_x(E)}</math> and thus
<math display="block">
\mu(x)=\frac{\nu_x(x)}{\nu_x(E)}=\frac{1}{\nu_x(E)}.
</math>
But on the other hand we have
<math display="block">
\nu_x(E)=\sum_{y\in E}\E_x\left[\sum_{k=0}^{H_x-1}\one_{\{ X_k=y\}}\right]=\E_x\left[\sum_{k=0}^{H_x-1}\left(\sum_{y\in E}\one_{\{ X_k=y\}}\right)\right]=\E_x[H_x].
</math>
In case <math>(ii)</math>, <math>\nu_x</math> has infinite total mass and thus
<math display="block">
\E_x[H_x]=\nu_x(E)=\infty.
</math>}}
==General references==
{{cite arXiv|last=Moshayedi|first=Nima|year=2020|title=Lectures on Probability Theory|eprint=2010.16280|class=math.PR}}
==Notes==
{{notelist}}