By Bot
Jun 09'24

Calculate the reverse transition matrix for the Land of Oz example (Example). Is this chain reversible?
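
One way to check your answer numerically: the sketch below builds the reverse matrix entrywise from [math]p^*_{ij} = w_j p_{ji}/w_i[/math], using the Land of Oz transition matrix as given in the text.

```python
import numpy as np

# Land of Oz transition matrix, states ordered (Rain, Nice, Snow),
# as given in the text.
P = np.array([[1/2, 1/4, 1/4],
              [1/2, 0,   1/2],
              [1/4, 1/4, 1/2]])

# Fixed probability vector w (solves wP = w): left eigenvector for
# eigenvalue 1, normalized to sum to 1.
vals, vecs = np.linalg.eig(P.T)
w = np.real(vecs[:, np.argmax(np.real(vals))])
w = w / w.sum()

# Reverse transition matrix: p*_ij = w_j * p_ji / w_i,
# i.e. P* = D^{-1} P^T D with D = diag(w).
P_star = np.diag(1 / w) @ P.T @ np.diag(w)
print(P_star)   # compare with P to decide reversibility
```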

By Bot
Jun 09'24

Give an example of a three-state ergodic Markov chain that is not reversible.
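
One candidate (a hypothetical choice, not necessarily the intended answer) is a chain with a built-in rotational bias; the sketch below checks that it is ergodic and that detailed balance [math]w_i p_{ij} = w_j p_{ji}[/math] fails.

```python
import numpy as np

# A 3-state chain that prefers to cycle 0 -> 1 -> 2 -> 0.
# It is doubly stochastic, so its fixed vector is uniform.
P = np.array([[0,   2/3, 1/3],
              [1/3, 0,   2/3],
              [2/3, 1/3, 0  ]])
w = np.array([1/3, 1/3, 1/3])

print(np.allclose(w @ P, w))   # w is the fixed vector
# Detailed balance w_i p_ij = w_j p_ji fails, e.g. for i = 0, j = 1:
print(w[0] * P[0, 1], w[1] * P[1, 0])   # 2/9 versus 1/9
```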

By Bot
Jun 09'24

Let [math]\mat{P}[/math] be the transition matrix of an ergodic Markov chain and [math]\mat{P}^*[/math] the reverse transition matrix. Show that they have the same fixed probability vector [math]\mat{w}[/math].
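
The claim is easy to observe numerically before proving it; the sketch below uses a randomly generated positive transition matrix (an arbitrary assumption; any ergodic chain would do).

```python
import numpy as np

rng = np.random.default_rng(0)

# A random 4-state chain with strictly positive entries (hence ergodic).
P = rng.random((4, 4)) + 0.1
P = P / P.sum(axis=1, keepdims=True)

# Fixed vector of P.
vals, vecs = np.linalg.eig(P.T)
w = np.real(vecs[:, np.argmax(np.real(vals))])
w = w / w.sum()

# Reverse transition matrix and its fixed vector.
P_star = np.diag(1 / w) @ P.T @ np.diag(w)
print(np.allclose(w @ P_star, w))   # True: w is fixed for P* as well
```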

By Bot
Jun 09'24
[math] \newcommand{\NA}{{\rm NA}} \newcommand{\mat}[1]{{\bf#1}} \newcommand{\exref}[1]{\ref{##1}} \newcommand{\secstoprocess}{\all} \newcommand{\mathds}{\mathbb}[/math]

If [math]\mat{P}[/math] is a reversible Markov chain, is it necessarily true that the mean time to go from state [math]i[/math] to state [math]j[/math] is equal to the mean time to go from state [math]j[/math] to state [math]i[/math]? Hint: Try the Land of Oz example (Example).
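
For the hint, mean first passage times can be computed from the fundamental matrix [math]\mat{Z} = (\mat{I} - \mat{P} + \mat{W})^{-1}[/math] via [math]m_{ij} = (z_{jj} - z_{ij})/w_j[/math] (the formula used elsewhere in this chapter); a sketch for the Land of Oz chain:

```python
import numpy as np

P = np.array([[1/2, 1/4, 1/4],
              [1/2, 0,   1/2],
              [1/4, 1/4, 1/2]])
w = np.array([2/5, 1/5, 2/5])          # fixed vector of the Oz chain
W = np.outer(np.ones(3), w)            # matrix with every row equal to w

Z = np.linalg.inv(np.eye(3) - P + W)   # fundamental matrix

# Mean first passage times: m_ij = (z_jj - z_ij) / w_j  (m_ii = 0).
M = (np.diag(Z)[None, :] - Z) / w[None, :]
print(M)   # note M[0, 1] = 4 while M[1, 0] = 8/3
```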

By Bot
Jun 09'24

Show that any ergodic Markov chain with a symmetric transition matrix (i.e., [math]p_{ij} = p_{ji}[/math]) is reversible.
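
A numerical illustration with one symmetric stochastic matrix (chosen arbitrarily): symmetry makes the matrix doubly stochastic, so the fixed vector is uniform and detailed balance is immediate.

```python
import numpy as np

# A symmetric transition matrix; rows sum to 1, and by symmetry the
# columns do too, so the fixed vector is uniform.
P = np.array([[0.5, 0.3, 0.2],
              [0.3, 0.4, 0.3],
              [0.2, 0.3, 0.5]])
w = np.array([1/3, 1/3, 1/3])

print(np.allclose(w @ P, w))   # uniform w is fixed
# With w uniform and p_ij = p_ji, detailed balance w_i p_ij = w_j p_ji
# holds, so the reverse chain equals the original:
P_star = np.diag(1 / w) @ P.T @ np.diag(w)
print(np.allclose(P_star, P))
```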

By Bot
Jun 09'24

(Crowell[Notes 1]) Let [math]\mat{P}[/math] be the transition matrix of an ergodic Markov chain. Show that

[[math]] (\mat {I} + \mat {P} +\cdots+ \mat {P}^{n - 1})(\mat {I} - \mat {P} + \mat {W}) = \mat {I} - \mat {P}^n + n\mat {W}\ , [[/math]]

and from this show that

[[math]] \frac{\mat {I} + \mat {P} +\cdots+ \mat {P}^{n - 1}}{n} \to \mat {W}\ , [[/math]]

as [math]n \rightarrow \infty[/math].

Notes

  1. Private communication.
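
Both displays can be observed numerically; the sketch below uses the Land of Oz chain (any ergodic chain would do) and accumulates the powers of [math]\mat{P}[/math] directly.

```python
import numpy as np

P = np.array([[1/2, 1/4, 1/4],
              [1/2, 0,   1/2],
              [1/4, 1/4, 1/2]])
w = np.array([2/5, 1/5, 2/5])
W = np.outer(np.ones(3), w)

# Cesaro average (I + P + ... + P^{n-1}) / n for a large n.
n = 5000
S = np.eye(3)        # running sum I + P + ... + P^{k}
Pk = np.eye(3)       # current power P^k
for _ in range(n - 1):
    Pk = Pk @ P
    S = S + Pk
A = S / n
print(np.max(np.abs(A - W)))   # small: the average approaches W
```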
By Bot
Jun 09'24

An ergodic Markov chain is started in equilibrium (i.e., with initial probability vector [math]\mat{w}[/math]). The mean time until the next occurrence of state [math]s_i[/math] is [math]\bar{m}_i = \sum_k w_k m_{ki} + w_i r_i[/math]. Show that [math]\bar{m}_i = z_{ii}/w_i[/math], by using the facts that [math]\mat {w}\mat {Z} = \mat {w}[/math] and [math]m_{ki} = (z_{ii} - z_{ki})/w_i[/math].
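
The two expressions can be compared numerically before proving the identity; a sketch for the Land of Oz chain, with [math]r_i = 1/w_i[/math] the mean recurrence time:

```python
import numpy as np

P = np.array([[1/2, 1/4, 1/4],
              [1/2, 0,   1/2],
              [1/4, 1/4, 1/2]])
w = np.array([2/5, 1/5, 2/5])
W = np.outer(np.ones(3), w)
Z = np.linalg.inv(np.eye(3) - P + W)

# Mean first passage times m_ki = (z_ii - z_ki)/w_i (zero on the
# diagonal), and mean recurrence times r_i = 1/w_i.
M = (np.diag(Z)[None, :] - Z) / w[None, :]
r = 1 / w

# Mean time to the next occurrence of s_i, starting in equilibrium:
# sum_k w_k m_ki + w_i r_i.
m_bar = w @ M + w * r
print(np.allclose(m_bar, np.diag(Z) / w))   # True
```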

By Bot
Jun 09'24

A perpetual craps game goes on at Charley's. Jones comes into Charley's on an evening when there have already been 100 plays. He plans to play until the next time that snake eyes (a pair of ones) are rolled. Jones wonders how many times he will play. On the one hand, he realizes that the average time between snake eyes is 36, so he should play about 18 times, since he is equally likely to have come in on either side of the halfway point between occurrences of snake eyes. On the other hand, the dice have no memory, so it would seem that he would have to play 36 more times no matter what the previous outcomes have been. Which, if either, of Jones's arguments do you believe? Using the result of Exercise, calculate the expected time to reach snake eyes, in equilibrium, and see if this resolves the apparent paradox. If you are still in doubt, simulate the experiment to decide which argument is correct. Can you give an intuitive argument which explains this result?
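
One possible simulation (a sketch, with arbitrary sample sizes): roll a long sequence of plays, drop Jones in at many random arrival points, and average his waiting time until the next snake eyes.

```python
import numpy as np

rng = np.random.default_rng(42)

N = 500_000                       # total plays at Charley's
d1 = rng.integers(1, 7, size=N)   # first die
d2 = rng.integers(1, 7, size=N)   # second die
snake = np.flatnonzero((d1 == 1) & (d2 == 1))   # plays with snake eyes

# Jones arrives just before many random plays; his wait is the number
# of plays up to and including the next snake eyes.
arrivals = rng.integers(0, N - 10_000, size=50_000)
pos = np.searchsorted(snake, arrivals)   # next snake eyes at or after arrival
waits = snake[pos] - arrivals + 1
print(waits.mean())   # near 36, not 18
```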

By Bot
Jun 09'24

Show that, for an ergodic Markov chain (see Theorem),

[[math]] \sum_j m_{ij} w_j = \sum_j z_{jj} - 1 = K\ . [[/math]]

The second expression above shows that the number [math]K[/math] is independent of [math]i[/math]. The number [math]K[/math] is called Kemeny's constant. A prize was offered to the first person to give an intuitively plausible reason for the above sum to be independent of [math]i[/math]. (See also Exercise \ref{exer 11.5.27}.)
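
The constancy is easy to observe numerically; the sketch below checks both expressions for the Land of Oz chain, using the passage-time formula [math]m_{ij} = (z_{jj} - z_{ij})/w_j[/math] from this chapter.

```python
import numpy as np

P = np.array([[1/2, 1/4, 1/4],
              [1/2, 0,   1/2],
              [1/4, 1/4, 1/2]])
w = np.array([2/5, 1/5, 2/5])
W = np.outer(np.ones(3), w)
Z = np.linalg.inv(np.eye(3) - P + W)

M = (np.diag(Z)[None, :] - Z) / w[None, :]   # mean first passage times

K_by_row = M @ w        # sum_j m_ij w_j, one value per starting state i
print(K_by_row)         # identical entries: Kemeny's constant
print(np.trace(Z) - 1)  # the same number, sum_j z_jj - 1
```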

By Bot
Jun 09'24

Consider a game played as follows: You are given a regular Markov chain with transition matrix [math]\mat P[/math], fixed probability vector [math]\mat{w}[/math], and a payoff function [math]\mat f[/math] that assigns to each state [math]s_i[/math] an amount [math]f_i[/math], which may be positive or negative. Assume that [math]\mat {w}\mat {f} = 0[/math]. You watch this Markov chain as it evolves, and every time you are in state [math]s_i[/math] you receive an amount [math]f_i[/math]. Show that your expected winnings after [math]n[/math] steps can be represented by a column vector [math]\mat{g}^{(n)}[/math], with

[[math]] \mat{g}^{(n)} = (\mat {I} + \mat {P} + \mat {P}^2 +\cdots+ \mat {P}^n) \mat {f}. [[/math]]

Show that as [math]n \to \infty[/math], [math]\mat {g}^{(n)} \to \mat {g}[/math] with [math]\mat {g} = \mat {Z} \mat {f}[/math].
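
A numerical sketch of the last claim, using the Land of Oz chain and a hypothetical payoff [math]f = (1, 0, -1)[/math], which satisfies [math]\mat{w}\mat{f} = 0[/math]:

```python
import numpy as np

P = np.array([[1/2, 1/4, 1/4],
              [1/2, 0,   1/2],
              [1/4, 1/4, 1/2]])
w = np.array([2/5, 1/5, 2/5])
f = np.array([1.0, 0.0, -1.0])   # payoff vector with w.f = 0
W = np.outer(np.ones(3), w)
Z = np.linalg.inv(np.eye(3) - P + W)

# Partial sums g^(n) = (I + P + ... + P^n) f, built term by term.
g_n = f.copy()
Pk_f = f.copy()
for _ in range(50):
    Pk_f = P @ Pk_f
    g_n = g_n + Pk_f

print(g_n)        # converges to Zf
print(Z @ f)
```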