exercise:6fff21e1d0

[math] \newcommand{\NA}{{\rm NA}} \newcommand{\mat}[1]{{\bf#1}} \newcommand{\exref}[1]{\ref{##1}} \newcommand{\secstoprocess}{\all} \newcommand{\mathds}{\mathbb}[/math]

In his book, Wahrscheinlichkeitsrechnung und Statistik,[Notes 1] A. Engle proposes an algorithm for finding the fixed vector for an ergodic Markov chain when the transition probabilities are rational numbers. Here is his algorithm: For each state [math]i[/math], let [math]a_i[/math] be the least common multiple of the denominators of the non-zero entries in the [math]i[/math]th row. Engle describes his algorithm in terms of moving chips around on the states; indeed, for small examples, he recommends implementing the algorithm this way. Start by putting [math]a_i[/math] chips on state [math]i[/math] for all [math]i[/math]. Then, at each state, redistribute the [math]a_i[/math] chips, sending [math]a_i p_{ij}[/math] chips to state [math]j[/math]. The number of chips at state [math]i[/math] after this redistribution need not be a multiple of [math]a_i[/math]. For each state [math]i[/math], add just enough chips to bring the number of chips at state [math]i[/math] up to a multiple of [math]a_i[/math]. Then redistribute the chips in the same manner. This process will eventually reach a point where the number of chips at each state, after the redistribution, is the same as before redistribution. At this point, we have found a fixed vector. Here is an example:

[math]
\mat{P} \;=\;
\begin{array}{c|ccc}
    & 1   & 2   & 3   \\
\hline
  1 & 1/2 & 1/4 & 1/4 \\
  2 & 1/2 & 0   & 1/2 \\
  3 & 1/2 & 1/4 & 1/4
\end{array}\ .
[/math]


We start with [math]\mat{a} = (4,2,4)[/math]. The chips after successive redistributions are shown in the table below.
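For instance, the first redistribution of [math]\mat{a} = (4,\,2,\,4)[/math] gives

[math]
\begin{array}{ll}
\mbox{state } 1: & 4\cdot\frac{1}{2} + 2\cdot\frac{1}{2} + 4\cdot\frac{1}{2} = 5, \\
\mbox{state } 2: & 4\cdot\frac{1}{4} + 2\cdot 0 + 4\cdot\frac{1}{4} = 2, \\
\mbox{state } 3: & 4\cdot\frac{1}{4} + 2\cdot\frac{1}{2} + 4\cdot\frac{1}{4} = 3,
\end{array}
[/math]

so the chip counts become [math](5,2,3)[/math]; rounding each count up to a multiple of the corresponding [math]a_i[/math] then gives [math](8,2,4)[/math]. These are the second and third rows of the table.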

[math]
\begin{array}{lll}
(4 & 2 & 4)\\
(5 & 2 & 3)\\
(8 & 2 & 4)\\
(7 & 3 & 4)\\
(8 & 4 & 4)\\
(8 & 3 & 5)\\
(8 & 4 & 8)\\
(10 & 4 & 6)\\
(12 & 4 & 8)\\
(12 & 5 & 7)\\
(12 & 6 & 8)\\
(13 & 5 & 8)\\
(16 & 6 & 8)\\
(15 & 6 & 9)\\
(16 & 6 & 12)\\
(17 & 7 & 10)\\
(20 & 8 & 12)\\
(20 & 8 & 12)\ .
\end{array}
[/math]

Distribution of chips.

We find that [math]\mat {a} = (20,8,12)[/math] is a fixed vector.
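Indeed, one can check directly that [math](20,\,8,\,12)[/math] is fixed under [math]\mat{P}[/math]:

[math]
(20,\;8,\;12)
\begin{pmatrix}
1/2 & 1/4 & 1/4 \\
1/2 & 0 & 1/2 \\
1/2 & 1/4 & 1/4
\end{pmatrix}
= (10+4+6,\;\; 5+0+3,\;\; 5+4+3) = (20,\;8,\;12)\ .
[/math]

Normalizing by the total of 40 chips gives the fixed probability vector [math](1/2,\,1/5,\,3/10)[/math].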

  (a) Write a computer program to implement this algorithm. (One possible sketch in Python is given after this list.)
  (b) Prove that the algorithm will stop. Hint: Let [math]\mat{b}[/math] be a vector with integer components that is a fixed vector for [math]\mat{P}[/math] and such that each coordinate of the starting vector [math]\mat{a}[/math] is less than or equal to the corresponding component of [math]\mat{b}[/math]. Show that, in the iteration, the components of the vectors never decrease and always remain less than or equal to the corresponding components of [math]\mat{b}[/math].
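The following is a minimal Python sketch of the chip procedure described above, offered as one possible starting point for part (a); the function name engle_fixed_vector and the use of Python's fractions module are illustrative choices, not part of the exercise. Applied to the example matrix, it reproduces the fixed vector [math](20, 8, 12)[/math].

from fractions import Fraction
from math import lcm  # Python 3.9+

def engle_fixed_vector(P):
    """Chip-redistribution sketch: return an integer fixed vector of P.

    P is a list of rows whose entries are Fractions and whose rows sum to 1.
    """
    n = len(P)
    # a_i = least common multiple of the denominators of the non-zero entries in row i.
    a = [lcm(*[p.denominator for p in row if p != 0]) for row in P]
    chips = a[:]  # start with a_i chips on state i
    while True:
        # Redistribute: state i sends chips[i] * p_ij chips to state j, so state j
        # ends up with sum_i chips[i] * p_ij chips (an exact integer, since chips[i]
        # is a multiple of a_i).
        new = [int(sum(chips[i] * P[i][j] for i in range(n))) for j in range(n)]
        if new == chips:
            return new  # counts unchanged by redistribution: a fixed vector
        # Add just enough chips to bring each count up to a multiple of a_i, then repeat.
        chips = [((new[i] + a[i] - 1) // a[i]) * a[i] for i in range(n)]

# The example chain from the exercise.
P = [[Fraction(1, 2), Fraction(1, 4), Fraction(1, 4)],
     [Fraction(1, 2), Fraction(0), Fraction(1, 2)],
     [Fraction(1, 2), Fraction(1, 4), Fraction(1, 4)]]

print(engle_fixed_vector(P))  # prints [20, 8, 12]

Exact rational arithmetic is used so that every transfer [math]a_i p_{ij}[/math] is a whole number of chips, mirroring the description of the algorithm.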

Notes

  1. A. Engle, Wahrscheinlichkeitsrechnung und Statistik, vol. 2 (Stuttgart: Klett Verlag, 1976).