Give an example of a three-state ergodic Markov chain that is not reversible.
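One way to hunt for such an example is numerically: pick a chain whose probability flows "clockwise" through the states and check detailed balance. The matrix below is an illustrative choice (not from the text); the self-loops make the chain regular, hence ergodic.

```python
import numpy as np

# Candidate 3-state chain: probability flows 0 -> 1 -> 2 -> 0,
# with self-loops so every sufficiently high power is positive.
P = np.array([[0.5, 0.5, 0.0],
              [0.0, 0.5, 0.5],
              [0.5, 0.0, 0.5]])

# Fixed probability vector w: the left eigenvector of P for eigenvalue 1.
eigvals, eigvecs = np.linalg.eig(P.T)
w = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
w = w / w.sum()

# Reversibility would require w_i p_ij = w_j p_ji for all i, j.
F = w[:, None] * P
detailed_balance = np.allclose(F, F.T)
print(w)                  # approximately [1/3, 1/3, 1/3]
print(detailed_balance)   # False: w_0 p_01 = 1/6 but w_1 p_10 = 0
```

Since the chain is doubly stochastic, `w` is uniform, and the one-way flow from state 0 to state 1 violates detailed balance.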
Let [math]\mat{P}[/math] be the transition matrix of an ergodic Markov chain and [math]\mat{P}^*[/math] the reverse transition matrix. Show that they have the same fixed probability vector [math]\mat{w}[/math].
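A numerical check of this claim, using an arbitrary illustrative ergodic chain (the matrix is not from the text), with the reverse chain defined by [math]p^*_{ij} = w_j p_{ji}/w_i[/math]:

```python
import numpy as np

# Illustrative ergodic chain (all entries positive, so regular).
P = np.array([[0.2, 0.5, 0.3],
              [0.4, 0.1, 0.5],
              [0.3, 0.3, 0.4]])

# Fixed vector: solve wP = w together with sum(w) = 1.
A = np.vstack([P.T - np.eye(3), np.ones(3)])
w = np.linalg.lstsq(A, np.array([0.0, 0.0, 0.0, 1.0]), rcond=None)[0]

# Reverse transition matrix: p*_{ij} = w_j p_{ji} / w_i.
P_star = (w[None, :] * P.T) / w[:, None]

print(np.allclose(P_star.sum(axis=1), 1))  # rows sum to 1, since wP = w
print(np.allclose(w @ P_star, w))          # True: same fixed vector
```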
If [math]\mat{P}[/math] is a reversible Markov chain, is it necessarily true that the mean time to go from state [math]i[/math] to state [math]j[/math] is equal to the mean time to go from state [math]j[/math] to state [math]i[/math]? Hint: Try the Land of Oz example (Example).
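Following the hint, one can compute the mean first passage matrix for the Land of Oz chain (states R, N, S) from the fundamental matrix via [math]m_{ij} = (z_{jj} - z_{ij})/w_j[/math]; the chain turns out to satisfy detailed balance, yet the mean passage times are not symmetric:

```python
import numpy as np

# Land of Oz transition matrix, states (R, N, S).
P = np.array([[0.50, 0.25, 0.25],
              [0.50, 0.00, 0.50],
              [0.25, 0.25, 0.50]])

n = 3
A = np.vstack([P.T - np.eye(n), np.ones(n)])
w = np.linalg.lstsq(A, np.array([0.0, 0.0, 0.0, 1.0]), rcond=None)[0]
W = np.tile(w, (n, 1))                    # every row equals w
Z = np.linalg.inv(np.eye(n) - P + W)      # fundamental matrix

# Mean first passage times: m_ij = (z_jj - z_ij) / w_j.
M = (np.diag(Z)[None, :] - Z) / w[None, :]

F = w[:, None] * P
print(np.allclose(F, F.T))   # True: the Oz chain is reversible
print(M[0, 1], M[1, 0])      # 4.0 vs 8/3: mean times R->N and N->R differ
```

So reversibility does not force [math]m_{ij} = m_{ji}[/math].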
Show that any ergodic Markov chain with a symmetric transition matrix (i.e., [math]p_{ij} = p_{ji}[/math]) is reversible.
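The key observation is that a symmetric stochastic matrix is doubly stochastic, so the fixed vector is uniform and detailed balance is automatic. A quick check on an illustrative symmetric matrix (not from the text):

```python
import numpy as np

# Symmetric stochastic matrix: rows and columns both sum to 1.
P = np.array([[0.6, 0.3, 0.1],
              [0.3, 0.4, 0.3],
              [0.1, 0.3, 0.6]])

w = np.full(3, 1/3)                 # uniform fixed vector
F = w[:, None] * P                  # F_ij = w_i p_ij

print(np.allclose(w @ P, w))        # True: w is fixed
print(np.allclose(F, F.T))          # True: detailed balance holds
```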
(Crowell) Let [math]\mat{P}[/math]
be the transition matrix of an ergodic Markov chain. Show that

[math](\mat{I} + \mat{P} + \cdots + \mat{P}^{n - 1})(\mat{I} - \mat{P} + \mat{W}) = \mat{I} - \mat{P}^n + n\mat{W},[/math]

and from this show that

[math]\frac{\mat{I} + \mat{P} + \cdots + \mat{P}^{n - 1}}{n} \to \mat{W},[/math]

as [math]n \rightarrow \infty[/math].
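The identity and the limit can both be verified numerically. The sketch below uses an arbitrary illustrative ergodic chain; it relies on the facts that the partial sum telescopes against [math]\mat{I} - \mat{P}[/math] and that [math]\mat{P}^k\mat{W} = \mat{W}[/math] for every [math]k[/math]:

```python
import numpy as np

P = np.array([[0.2, 0.5, 0.3],
              [0.4, 0.1, 0.5],
              [0.3, 0.3, 0.4]])
ns = 3
A = np.vstack([P.T - np.eye(ns), np.ones(ns)])
w = np.linalg.lstsq(A, np.array([0.0, 0.0, 0.0, 1.0]), rcond=None)[0]
W = np.tile(w, (ns, 1))
I = np.eye(ns)

# Check the identity for several n.
for n in (1, 5, 50):
    S = sum(np.linalg.matrix_power(P, k) for k in range(n))  # I + P + ... + P^(n-1)
    assert np.allclose(S @ (I - P + W),
                       I - np.linalg.matrix_power(P, n) + n * W)

# Dividing by n, the averaged powers converge to W.
n = 2000
S = sum(np.linalg.matrix_power(P, k) for k in range(n)) / n
print(np.allclose(S, W, atol=1e-3))   # True
```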
An ergodic Markov chain is started in equilibrium (i.e., with initial probability vector [math]\mat{w}[/math]). The mean time until the next occurrence of state [math]s_i[/math] is [math]\bar{m}_i = \sum_k w_k m_{ki} + w_i r_i[/math]. Show that [math]\bar{m}_i = z_{ii}/w_i[/math], by using the facts that [math]\mat{w}\mat{Z} = \mat{w}[/math] and [math]m_{ki} = (z_{ii} - z_{ki})/w_i[/math].
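A numerical check of this identity on an illustrative ergodic chain (the matrix is an arbitrary choice, not from the text), using [math]r_i = 1/w_i[/math] for the mean recurrence time:

```python
import numpy as np

P = np.array([[0.2, 0.5, 0.3],
              [0.4, 0.1, 0.5],
              [0.3, 0.3, 0.4]])
n = 3
A = np.vstack([P.T - np.eye(n), np.ones(n)])
w = np.linalg.lstsq(A, np.array([0.0, 0.0, 0.0, 1.0]), rcond=None)[0]
W = np.tile(w, (n, 1))
Z = np.linalg.inv(np.eye(n) - P + W)

M = (np.diag(Z)[None, :] - Z) / w[None, :]   # m_ij, with m_ii = 0
r = 1 / w                                     # mean recurrence times

m_bar = w @ M + w * r          # sum_k w_k m_ki + w_i r_i, for each i
print(np.allclose(m_bar, np.diag(Z) / w))    # True: m_bar_i = z_ii / w_i
```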
A perpetual craps game goes on at Charley's. Jones comes into Charley's on an evening when there have already been 100 plays. He plans to play until the next time that snake eyes (a pair of ones) are rolled. Jones wonders how many times he will play. On the one hand, he realizes that the average time between snake eyes is 36, so he should play about 18 times, as he is equally likely to have come in on either side of the halfway point between occurrences of snake eyes. On the other hand, the dice have no memory, and so it would seem that he would have to play 36 more times no matter what the previous outcomes have been. Which, if either, of Jones's arguments do you believe? Using the result of Exercise, calculate the expected time to reach snake eyes, in equilibrium, and see if this resolves the apparent paradox. If you are still in doubt, simulate the experiment to decide which argument is correct. Can you give an intuitive argument which explains this result?
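A direct simulation of Jones's evening (the trial count and seed are arbitrary choices): each trial counts the plays until the next snake eyes after he arrives mid-game.

```python
import random

random.seed(1)

def plays_until_snake_eyes():
    """Roll two dice repeatedly; return the number of rolls until double ones."""
    n = 0
    while True:
        n += 1
        if random.randint(1, 6) == 1 and random.randint(1, 6) == 1:
            return n

trials = 200_000
avg = sum(plays_until_snake_eyes() for _ in range(trials)) / trials
print(avg)   # close to 36: the dice are memoryless, so the second argument wins
```

The sample mean comes out near 36, not 18, supporting the memorylessness argument.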
Show that, for an ergodic Markov chain (see Theorem),

[math]\sum_j m_{ij} w_j = \sum_j z_{jj} - 1 = K.[/math]
The second expression above shows that the number [math]K[/math] is independent of [math]i[/math]. The number [math]K[/math] is called Kemeny's constant. A prize was offered to the first person to give an intuitively plausible reason for the above sum to be independent of [math]i[/math]. (See also Exercise \ref{exer 11.5.27}.)
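A numerical illustration of Kemeny's constant on an arbitrary ergodic chain (the matrix is an illustrative choice, not from the text): the sum [math]\sum_j m_{ij} w_j[/math] comes out the same for every starting state [math]i[/math], and equals [math]\sum_j z_{jj} - 1[/math].

```python
import numpy as np

P = np.array([[0.2, 0.5, 0.3],
              [0.4, 0.1, 0.5],
              [0.3, 0.3, 0.4]])
n = 3
A = np.vstack([P.T - np.eye(n), np.ones(n)])
w = np.linalg.lstsq(A, np.array([0.0, 0.0, 0.0, 1.0]), rcond=None)[0]
W = np.tile(w, (n, 1))
Z = np.linalg.inv(np.eye(n) - P + W)
M = (np.diag(Z)[None, :] - Z) / w[None, :]   # mean first passage times

K_per_state = M @ w            # sum_j m_ij w_j, one value per starting state i
print(K_per_state)             # all entries equal: Kemeny's constant
print(np.trace(Z) - 1)         # the same number, sum_j z_jj - 1
```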
Consider a game played as follows: You are given a regular Markov chain with transition matrix [math]\mat{P}[/math], fixed probability vector [math]\mat{w}[/math], and a payoff function [math]\mat{f}[/math] which assigns to each state [math]s_i[/math] an amount [math]f_i[/math] which may be positive or negative. Assume that [math]\mat{w}\mat{f} = 0[/math]. You watch this Markov chain as it evolves, and every time you are in state [math]s_i[/math] you receive an amount [math]f_i[/math]. Show that your expected winning after [math]n[/math] steps can be represented by a column vector [math]\mat{g}^{(n)}[/math], with

[math]\mat{g}^{(n)} = (\mat{I} + \mat{P} + \mat{P}^2 + \cdots + \mat{P}^n)\mat{f}.[/math]
Show that as [math]n \to \infty[/math], [math]\mat {g}^{(n)} \to \mat {g}[/math] with [math]\mat {g} = \mat {Z} \mat {f}[/math].
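The convergence can be checked numerically. The sketch below uses an arbitrary illustrative chain and payoff (neither is from the text); the payoff is projected so that [math]\mat{w}\mat{f} = 0[/math], which makes the partial sums converge because [math]\mat{P}^k\mat{f} \to \mat{W}\mat{f} = \mat{0}[/math]:

```python
import numpy as np

P = np.array([[0.2, 0.5, 0.3],
              [0.4, 0.1, 0.5],
              [0.3, 0.3, 0.4]])
ns = 3
A = np.vstack([P.T - np.eye(ns), np.ones(ns)])
w = np.linalg.lstsq(A, np.array([0.0, 0.0, 0.0, 1.0]), rcond=None)[0]
W = np.tile(w, (ns, 1))
Z = np.linalg.inv(np.eye(ns) - P + W)   # fundamental matrix

# Illustrative payoff, shifted so that w f = 0.
f = np.array([1.0, -2.0, 0.5])
f = f - (w @ f)

# g^(n) = (I + P + ... + P^n) f for a large n.
g_n = sum(np.linalg.matrix_power(P, k) for k in range(0, 501)) @ f

print(np.allclose(g_n, Z @ f, atol=1e-6))   # True: g^(n) -> Z f
```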