<div class="d-none"><math>
\newcommand{\NA}{{\rm NA}}
\newcommand{\mat}[1]{{\bf#1}}
\newcommand{\exref}[1]{\ref{##1}}
\newcommand{\secstoprocess}{\all}
\newcommand{\mathds}{\mathbb}</math></div>
<span id{{=}}"sec 12.2"/>
In the last section, the simplest kind of symmetric random walk in <math>{\mathbf R}^1</math> was studied.
In this section, we remove the assumption that the random walk is symmetric. Instead, we assume
that <math>p</math> and <math>q</math> are non-negative real numbers with <math>p+q = 1</math>, and that the common distribution function
of the jumps of the random walk is
<math display="block">
f_X(x) = \left \{ \begin{array}{ll}
p, & \mbox{if $x = 1$},\\
q, & \mbox{if $x = -1$}.
\end{array}
\right.
</math>
One can imagine the random walk as representing a sequence of tosses of a weighted coin, with a
head appearing with probability <math>p</math> and a tail appearing with probability <math>q</math>. An alternative
formulation of this situation is that of a gambler playing a sequence of games against an
adversary (sometimes thought of as another person, sometimes called “the house”) where, in
each game, the gambler has probability <math>p</math> of winning.
===The Gambler's Ruin Problem===
The above formulation of this type of random walk leads to a problem known as the Gambler's Ruin
problem. This problem was introduced in [[guide:87cf36f969#exer 11.2.22 |Exercise]], but
we will give the description of the problem again. A gambler starts with a “stake” of size <math>s</math>.
She plays until her capital reaches the value <math>M</math> or the value 0. In the language of Markov
chains, these two values correspond to absorbing states. We
are interested in studying the probability of occurrence of each of these two outcomes.

One can also assume that the gambler is playing against an “infinitely rich” adversary. In this
case, we would say that there is only one absorbing state, namely when the gambler's stake is 0.
Under this assumption, one can ask for the probability that the gambler is eventually ruined.

We begin by defining <math>q_k</math> to be the probability that the gambler's stake reaches 0, i.e.,
she is ruined, before it reaches <math>M</math>, given that the initial stake is <math>k</math>. We note that <math>q_0 =
1</math> and <math>q_M = 0</math>. The fundamental relationship among the <math>q_k</math>'s is the following:
<math display="block">
q_k = pq_{k+1} + qq_{k-1}\ ,
</math>
where <math>1 \le k \le M-1</math>. This holds because if her stake equals <math>k</math>, and she plays one game,
then her stake becomes <math>k+1</math> with probability <math>p</math> and <math>k-1</math> with probability <math>q</math>. In the first
case, the probability of eventual ruin is <math>q_{k+1}</math> and in the second case, it is <math>q_{k-1}</math>.
We note that since <math>p + q = 1</math>, we can write the above equation as
<math display="block">
p(q_{k+1} - q_k) = q(q_k - q_{k-1})\ ,
</math>
or
<math display="block">
q_{k+1} - q_k = {q\over p}(q_k - q_{k-1})\ .
</math>
From this equation, it is easy to see that
<span id{{=}}"eq 12.2.2"/>
<math display="block">
\begin{equation}
q_{k+1} - q_k = \biggl({q\over p}\biggr)^k(q_1 - q_0)\ .
\label{eq 12.2.2}
\end{equation}
</math>
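For instance, iterating the previous relation (a step the text leaves to the reader) gives
<math display="block">
q_2 - q_1 = {q\over p}(q_1 - q_0)\ , \qquad
q_3 - q_2 = {q\over p}(q_2 - q_1) = \biggl({q\over p}\biggr)^2(q_1 - q_0)\ ,
</math>
and so on by induction.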
We now use telescoping sums to obtain an equation in which the only unknown is <math>q_1</math>:
<math display="block">
\begin{eqnarray*}
-1 &=& q_M - q_0 \\
&=& \sum_{k = 0}^{M-1} (q_{k+1} - q_k)\ ,
\end{eqnarray*}
</math>
so
<math display="block">
\begin{eqnarray*}
-1 &=& \sum_{k = 0}^{M-1} \biggl({q\over p}\biggr)^k(q_1 - q_0)\\
&=& (q_1 - q_0) \sum_{k = 0}^{M-1} \biggl({q\over p}\biggr)^k\ .
\end{eqnarray*}
</math>
If <math>p \ne q</math>, then the above expression equals
<math display="block">
(q_1 - q_0) {{(q/p)^M - 1}\over{(q/p) - 1}}\ ,
</math>
while if <math>p = q = 1/2</math>, then we obtain the equation
<math display="block">
-1 = (q_1 - q_0) M\ .
</math>
For the moment we shall assume that <math>p \ne q</math>. Then we have
<math display="block">
q_1 - q_0 = -{{(q/p) - 1}\over{(q/p)^M - 1}}\ .
</math>
Now, for any <math>z</math> with <math>1 \le z \le M</math>, we have
<math display="block">
\begin{eqnarray*}
q_z - q_0 &=& \sum_{k = 0}^{z-1} (q_{k+1} - q_k)\\
&=& (q_1 - q_0)\sum_{k = 0}^{z-1} \biggl({q\over p}\biggr)^k\\
&=& (q_1 - q_0){{(q/p)^z - 1}\over{(q/p) - 1}}\\
&=& -{{(q/p)^z - 1}\over{(q/p)^M - 1}}\ .
\end{eqnarray*}
</math>
Therefore,
<span id{{=}}"eq 12.2.3"/>
<math display="block">
\begin{eqnarray*}
q_z &=& 1 - {{(q/p)^z - 1}\over{(q/p)^M - 1}}\\
&=& {{(q/p)^M - (q/p)^z}\over{(q/p)^M - 1}}\ .
\label{eq 12.2.3}
\end{eqnarray*}
</math>
Finally, if <math>p = q = 1/2</math>, it is easy to show that (see [[exercise:870fedb4e2 |Exercise]])
<math display="block">
q_z = {{M - z}\over M}\ .
</math>
We note that both of these formulas hold if <math>z = 0</math>.
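As a quick sanity check (not part of the original text), the following sketch evaluates the closed-form expressions above and compares them with a direct Monte Carlo simulation of the walk; the function names and the parameter values <math>z = 10</math>, <math>M = 20</math>, <math>p = 0.45</math> are illustrative choices only.
<syntaxhighlight lang="python">
import random

def ruin_probability(z, M, p):
    """Closed-form probability q_z of ruin, starting from stake z with target M."""
    q = 1 - p
    if abs(p - q) < 1e-12:              # fair game: q_z = (M - z)/M
        return (M - z) / M
    r = q / p
    return (r**M - r**z) / (r**M - 1)   # q_z = ((q/p)^M - (q/p)^z) / ((q/p)^M - 1)

def simulated_ruin(z, M, p, trials=100_000):
    """Monte Carlo estimate of the same probability."""
    ruined = 0
    for _ in range(trials):
        stake = z
        while 0 < stake < M:            # play until absorption at 0 or M
            stake += 1 if random.random() < p else -1
        ruined += (stake == 0)
    return ruined / trials

print(ruin_probability(10, 20, 0.45))   # exact value from the formula
print(simulated_ruin(10, 20, 0.45))     # should agree to roughly two decimals
</syntaxhighlight>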
We define, for <math>0 \le z \le M</math>, the quantity <math>p_z</math> to be the probability that the gambler's
stake reaches <math>M</math> without ever having reached 0. Since the game might continue indefinitely, it
is not obvious that <math>p_z + q_z = 1</math> for all <math>z</math>. However, one can use the same method as above
to show that if <math>p \ne q</math>, then
<math display="block">
p_z = {{(q/p)^z - 1}\over{(q/p)^M - 1}}\ ,
</math>
and if <math>p = q = 1/2</math>, then
<math display="block">
p_z = {z\over M}\ .
</math>
Thus, for all <math>z</math>, it is the case that <math>p_z + q_z = 1</math>, so the game ends with probability 1.
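Indeed, adding the two expressions (a check not spelled out in the text) gives
<math display="block">
p_z + q_z = {{(q/p)^z - 1}\over{(q/p)^M - 1}} + {{(q/p)^M - (q/p)^z}\over{(q/p)^M - 1}} = {{(q/p)^M - 1}\over{(q/p)^M - 1}} = 1
</math>
when <math>p \ne q</math>, and <math>p_z + q_z = z/M + (M - z)/M = 1</math> when <math>p = q = 1/2</math>.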
===Infinitely Rich Adversaries===
We now turn to the problem of finding the probability of eventual ruin if the gambler is playing
against an infinitely rich adversary. This probability can be obtained by letting <math>M</math> go to
<math>\infty</math> in the expression for <math>q_z</math> calculated above. If <math>q < p</math>, then the expression approaches
<math>(q/p)^z</math>, and if <math>q > p</math>, the expression approaches 1. In the case <math>p = q = 1/2</math>, we recall
that <math>q_z = 1 - z/M</math>. Thus, if <math>M \rightarrow \infty</math>, we see that the probability of eventual
ruin tends to 1.
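For example (an illustrative calculation, not from the text), if <math>p = 0.6</math> and <math>q = 0.4</math>, a gambler who starts with <math>z = 5</math> units and plays forever against an infinitely rich adversary is eventually ruined with probability
<math display="block">
\Bigl({q\over p}\Bigr)^z = \Bigl({2\over 3}\Bigr)^5 = {32\over 243} \approx 0.13\ ,
</math>
while any <math>p \le 1/2</math> makes eventual ruin certain.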
===Historical Remarks===
In 1711, De Moivre, in his book ''De Mensura Sortis'', gave an ingenious
derivation of the probability of ruin. The following description of his argument is taken from
David.<ref group="Notes" >F. N. David,
''Games, Gods and Gambling'' (London: Griffin, 1962).</ref> The notation used
is as follows: We imagine that there are two players, A and B, and the probabilities that they
win a game are <math>p</math> and <math>q</math>, respectively. The players start with <math>a</math> and <math>b</math> counters,
respectively.
<blockquote>
Imagine that each player starts with his counters before him in a pile, and that nominal values
are assigned to the counters in the following manner. A's bottom counter is given the nominal
value <math>q/p</math>; the next is given the nominal value <math>(q/p)^2</math>, and so on until his top counter
which has the nominal value <math>(q/p)^a</math>. B's top counter is valued <math>(q/p)^{a+1}</math>, and so on
downwards until his bottom counter which is valued <math>(q/p)^{a+b}</math>. After each game the loser's
top counter is transferred to the top of the winner's pile, and it is always the top counter
which is staked for the next game. Then ''in terms of the nominal values'' B's stake is
always <math>q/p</math> times A's, so that at every game each player's nominal expectation is nil. This
remains true throughout the play; therefore A's chance of winning all B's counters, multiplied
by his nominal gain if he does so, must equal B's chance multiplied by B's nominal gain. Thus,
<span id{{=}}"eq 12.2.1"/>
<math display="block">
\begin{equation}
P_a\biggl(\Bigl({q\over p}\Bigr)^{a+1} + \cdots +
\Bigl({q\over p}\Bigr)^{a+b}\biggr) =
P_b\biggl(\Bigl({q\over p}\Bigr) + \cdots +
\Bigl({q\over p}\Bigr)^a\biggr)\ .
\label{eq 12.2.1}
\end{equation}
</math>
</blockquote>
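The claim that each player's nominal expectation is nil can be checked directly (the check is ours, not David's): if A's staked counter has nominal value <math>v</math>, then B's has nominal value <math>(q/p)v</math>, so A's expected nominal gain from one game is
<math display="block">
p\cdot{q\over p}\,v - q\cdot v = 0\ .
</math>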
Using this equation, together with the fact that
<math display="block">
P_a + P_b = 1\ ,
</math>
it can easily be shown that
<math display="block">
P_a = {{(q/p)^a - 1}\over{(q/p)^{a+b} - 1}}\ ,
</math>
if <math>p \ne q</math>, and
<math display="block">
P_a = {a\over{a+b}}\ ,
</math>
if <math>p = q = 1/2</math>.
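One way to carry out the computation that the text leaves to the reader: assuming <math>p \ne q</math>, sum the two geometric series in [[#eq 12.2.1 |Equation]] and write <math>P_b = 1 - P_a</math>, so that
<math display="block">
P_a\Bigl({q\over p}\Bigr)^{a+1}{{(q/p)^b - 1}\over{(q/p) - 1}} = (1 - P_a)\,{q\over p}\,{{(q/p)^a - 1}\over{(q/p) - 1}}\ .
</math>
Cancelling the common factor <math>(q/p)/\bigl((q/p) - 1\bigr)</math> yields
<math display="block">
P_a\Bigl({q\over p}\Bigr)^{a}\biggl(\Bigl({q\over p}\Bigr)^{b} - 1\biggr) = (1 - P_a)\biggl(\Bigl({q\over p}\Bigr)^{a} - 1\biggr)\ ,
</math>
and collecting the terms in <math>P_a</math> gives <math>P_a\bigl((q/p)^{a+b} - 1\bigr) = (q/p)^a - 1</math>, which is the first formula above. If <math>p = q = 1/2</math>, the two sums reduce to <math>b</math> and <math>a</math>, so <math>P_a b = (1 - P_a)a</math> and <math>P_a = a/(a+b)</math>.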
In terms of modern probability theory, de Moivre is changing the values of the counters to make
an unfair game into a fair game, which is called a martingale. With the new values, the
expected fortune of player A (that is, the sum of the nominal values of his counters) after each
play equals his fortune before the play (and similarly for player B). (For a simpler martingale
argument, see [[exercise:D30a9d2d58 |Exercise]].) De Moivre then uses the fact that when the game ends,
it is still fair, so [[#eq 12.2.1 |Equation]] must be true. This fact requires proof, and is
one of the central theorems in the area of martingale theory.
==General references==
{{cite web |url=https://math.dartmouth.edu/~prob/prob/prob.pdf |title=Grinstead and Snell’s Introduction to Probability |last=Doyle |first=Peter G. |date=2006 |access-date=June 6, 2024}}

==Notes==
{{Reflist|group=Notes}}