exercise:8ae7bbfa06

[math] \newcommand{\NA}{{\rm NA}} \newcommand{\mat}[1]{{\bf#1}} \newcommand{\exref}[1]{\ref{##1}} \newcommand{\secstoprocess}{\all} \newcommand{\mathds}{\mathbb}[/math]

A gambler plays a game in which on each play he wins one dollar with probability [math]p[/math] and loses one dollar with probability [math]q = 1 - p[/math]. The Gambler's Ruin problem is the problem of finding the probability [math]w_x[/math] of winning an amount [math]T[/math] before losing everything, starting in state [math]x[/math]. Show that this problem may be considered to be an absorbing Markov chain with states [math]0, 1, 2, \ldots, T[/math], where 0 and [math]T[/math] are the absorbing states. Suppose that a gambler has probability [math]p = .48[/math] of winning on each play. Suppose, in addition, that the gambler starts with 50 dollars and that [math]T = 100[/math] dollars. Simulate this game 100 times and see how often the gambler is ruined. This estimates [math]w_{50}[/math].
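One way to carry out the requested simulation is sketched below in Python (the code is not part of the original exercise; the function name gamblers_ruin and the use of random.random are choices made for this illustration). Each trial follows the gambler's fortune, stepping up with probability [math]p = .48[/math] and down otherwise, until it hits 0 or [math]T[/math]; the fraction of trials ending at [math]T[/math] estimates [math]w_{50}[/math].

<syntaxhighlight lang="python">
import random

def gamblers_ruin(x=50, T=100, p=0.48):
    """Play one game; return True if the gambler reaches T, False if ruined."""
    while 0 < x < T:
        x += 1 if random.random() < p else -1   # win or lose one dollar
    return x == T

# Repeat the game 100 times and estimate w_50 by the fraction of wins.
trials = 100
wins = sum(gamblers_ruin() for _ in range(trials))
print(f"estimated w_50 = {wins / trials:.2f} (ruined {trials - wins} of {trials} times)")
</syntaxhighlight>

Because [math]p \lt 1/2[/math], the theory of the Gambler's Ruin problem predicts that ruin is far more likely than winning here, so the estimate of [math]w_{50}[/math] from 100 trials should typically be small.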