By Bot
Jun 09'24
[math] \newcommand{\NA}{{\rm NA}} \newcommand{\mat}[1]{{\bf#1}} \newcommand{\exref}[1]{\ref{##1}} \newcommand{\secstoprocess}{\all} \newcommand{\mathds}{\mathbb}[/math]

Let [math]b_k[/math] denote the expected number of steps to reach [math]n[/math] from [math]n-k[/math], in the process described in Exercise.

  • Define [math]b_0 = 0[/math]. Show that for [math]k \gt 0[/math], we have
    [[math]] b_k = 1 + \frac 1k \bigl(b_{k-1} + b_{k-2} + \cdots + b_0\bigr)\ . [[/math]]
  • Let
    [[math]] f(x) = b_0 + b_1 x + b_2 x^2 + \cdots\ . [[/math]]
    Using the recursion in part (a), show that [math]f(x)[/math] satisfies the differential equation
    [[math]] (1-x)^2 y' - (1-x) y - 1 = 0\ . [[/math]]
  • Show that the general solution of the differential equation in part (b) is
    [[math]] y = \frac{-\log(1-x)}{1-x} + \frac c{1-x}\ , [[/math]]
    where [math]c[/math] is a constant.
  • Use part (c) to show that
    [[math]] b_k = 1 + \frac 12 + \frac 13 + \cdots + \frac 1k\ . [[/math]]
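
As a numerical check on part (d) — not part of the exercise itself — the recursion of part (a) can be iterated with exact rational arithmetic and compared against the harmonic sum; a minimal Python sketch (function names are mine):

```python
from fractions import Fraction

def b_values(n):
    """Iterate b_k = 1 + (b_0 + ... + b_{k-1}) / k exactly, with b_0 = 0."""
    vals = [Fraction(0)]
    for k in range(1, n + 1):
        vals.append(1 + sum(vals) / k)
    return vals

def harmonic(k):
    """H_k = 1 + 1/2 + ... + 1/k."""
    return sum(Fraction(1, j) for j in range(1, k + 1))
```

For every [math]k[/math] tested the two agree exactly, e.g. [math]b_{10} = H_{10} = 7381/2520[/math].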

Three tanks fight a three-way duel. Tank A has probability 1/2 of destroying the tank at which it fires, tank B has probability 1/3, and tank C has probability 1/6. The tanks fire simultaneously, and each tank fires at the strongest opponent not yet destroyed. Form a Markov chain by taking as states the subsets of the set of tanks. Find [math]\mat{N}[/math], [math]\mat{N}\mat{c}[/math], and [math]\mat{N}\mat{R}[/math], and interpret your results. Hint: Take as states ABC, AC, BC, A, B, C, and none, indicating the tanks that could survive starting in state ABC. You can omit AB because this state cannot be reached from ABC.
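
A computational sketch (not the official solution): at ABC, tank A fires at B while B and C both fire at A, since each tank targets its strongest surviving opponent; shots are assumed independent and simultaneous. The state ordering and variable names below are mine:

```python
import numpy as np

hit = {'A': 1/2, 'B': 1/3, 'C': 1/6}        # hit probabilities

# Transient states ABC, AC, BC; absorbing states A, B, C, none.
sA = (1 - hit['B']) * (1 - hit['C'])        # P(A survives ABC volley) = 5/9
sB = 1 - hit['A']                           # P(B survives ABC volley) = 1/2
sA2, sC2 = 1 - hit['C'], 1 - hit['A']       # AC: A and C fire at each other
sB3, sC3 = 1 - hit['C'], 1 - hit['B']       # BC: B and C fire at each other

Q = np.array([[sA*sB, sA*(1-sB), (1-sA)*sB],
              [0.0,   sA2*sC2,   0.0      ],
              [0.0,   0.0,       sB3*sC3  ]])
R = np.array([[0.0,          0.0,          (1-sA)*(1-sB), 0.0            ],
              [sA2*(1-sC2),  0.0,          (1-sA2)*sC2,   (1-sA2)*(1-sC2)],
              [0.0,          sB3*(1-sC3),  (1-sB3)*sC3,   (1-sB3)*(1-sC3)]])

N = np.linalg.inv(np.eye(3) - Q)   # fundamental matrix
t = N @ np.ones(3)                 # Nc: expected rounds until the duel ends
B = N @ R                          # NR: absorption probabilities
```

Under these assumptions, from ABC the survival probabilities are about .27 (A), .19 (B), .44 (C), with probability about .09 of mutual destruction, and the duel lasts about 2.74 rounds on average: the weakest tank, at which nobody fires first, is the most likely survivor.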


Smith is in jail and has 3 dollars; he can get out on bail if he has 8 dollars. A guard agrees to make a series of bets with him. If Smith bets [math]A[/math] dollars, he wins [math]A[/math] dollars with probability .4 and loses [math]A[/math] dollars with probability .6. Find the probability that he wins 8 dollars before losing all of his money if

  • he bets 1 dollar each time (timid strategy).
  • he bets, each time, as much as possible but not more than necessary to bring his fortune up to 8 dollars (bold strategy).
  • Which strategy gives Smith the better chance of getting out of jail?
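
Both probabilities can be computed exactly; a sketch in Python (the function names are mine, and the timid case uses the standard gambler's-ruin formula):

```python
from fractions import Fraction

p, q, goal = Fraction(2, 5), Fraction(3, 5), 8   # win .4, lose .6, target 8

def timid(i):
    """Unit bets: gambler's-ruin formula ((q/p)^i - 1) / ((q/p)^goal - 1)."""
    r = q / p
    return (r**i - 1) / (r**goal - 1)

def bold(i):
    """Bet as much as possible, but no more than needed to reach the goal."""
    if i <= 0:
        return Fraction(0)
    if i >= goal:
        return Fraction(1)
    bet = min(i, goal - i)
    return p * bold(i + bet) + q * bold(i - bet)
```

Starting from 3 dollars, the timid strategy gives 608/6305 ≈ .096, while the bold strategy gives 32/125 = .256, so the bold strategy gives Smith a much better chance of getting out of jail.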

With the situation in Exercise, consider the strategy in which, when his current fortune is [math]i[/math], Smith bets [math]\min(i,4 - i)[/math] if [math]i \lt 4[/math] and bets according to the bold strategy if [math]i \geq 4[/math]. Find the probability that he gets out of jail using this strategy. How does this probability compare with that obtained for the bold strategy?
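
The same kind of exact recursion works for this mixed strategy; a sketch (the bet rule below encodes the strategy described above, and the function name is mine):

```python
from fractions import Fraction

p, q, goal = Fraction(2, 5), Fraction(3, 5), 8   # win .4, lose .6, target 8

def win_prob(i):
    """P(reach 8 before 0): bet min(i, 4-i) for i < 4, boldly for i >= 4."""
    if i <= 0:
        return Fraction(0)
    if i >= goal:
        return Fraction(1)
    bet = min(i, 4 - i) if i < 4 else min(i, goal - i)
    return p * win_prob(i + bet) + q * win_prob(i - bet)
```

From 3 dollars this gives 32/125 = .256 — exactly the same probability as the bold strategy.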


Consider the game of tennis when deuce is reached. If a player wins the next point, he has advantage. On the following point, he either wins the game or the game returns to deuce. Assume that for any point, player A has probability .6 of winning the point and player B has probability .4 of winning the point.

  • Set this up as a Markov chain with state 1: A wins; 2: B wins; 3: advantage A; 4: deuce; 5: advantage B.
  • Find the absorption probabilities.
  • At deuce, find the expected duration of the game and the probability that B will win.
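
A numeric sketch of parts (b) and (c), with the transient states ordered deuce, advantage A, advantage B (this ordering is for the code only):

```python
import numpy as np

p = 0.6                            # P(A wins any given point)
Q = np.array([[0.0, p,   1-p],     # deuce -> adv A / adv B
              [1-p, 0.0, 0.0],     # adv A -> deuce (else A wins)
              [p,   0.0, 0.0]])    # adv B -> deuce (else B wins)
R = np.array([[0.0, 0.0],
              [p,   0.0],          # adv A -> A wins
              [0.0, 1-p]])         # adv B -> B wins
N = np.linalg.inv(np.eye(3) - Q)   # fundamental matrix
t = N @ np.ones(3)                 # expected points until the game ends
B = N @ R                          # absorption probabilities [A wins, B wins]
```

From deuce the game lasts 50/13 ≈ 3.85 points on average; A wins with probability 9/13 ≈ .69 and B with probability 4/13 ≈ .31.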

Exercise and Exercise concern the inheritance of color-blindness, which is a sex-linked characteristic. There is a pair of genes, g and G, of which the former tends to produce color-blindness and the latter normal vision. The G gene is dominant. But a man has only one gene, and if this is g, he is color-blind. A man inherits one of his mother's two genes, while a woman inherits one gene from each parent. Thus a man may be of type G or g, while a woman may be of type GG, Gg, or gg. We will study a process of inbreeding similar to that of Example by constructing a Markov chain.


List the states of the chain. Hint: There are six. Compute the transition probabilities. Find the fundamental matrix [math]\mat{N}[/math], [math]\mat{N}\mat{c}[/math], and [math]\mat{N}\mat{R}[/math].
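
One way to generate the transition matrix mechanically; a sketch, in which the state encoding `(man's gene, woman's genotype)` and the assumption that the son's and daughter's maternal genes are drawn independently are mine:

```python
import numpy as np
from itertools import product

states = ['G,GG', 'G,Gg', 'G,gg', 'g,GG', 'g,Gg', 'g,gg']
idx = {s: i for i, s in enumerate(states)}

P = np.zeros((6, 6))
for s in states:
    man, woman = s.split(',')
    # Son inherits one maternal gene; daughter inherits the father's gene
    # plus one maternal gene (two independent draws, probability 1/2 each).
    for son, dau_gene in product(woman, woman):
        daughter = ''.join(sorted(man + dau_gene))
        P[idx[s], idx[son + ',' + daughter]] += 0.25

transient = [1, 2, 3, 4]             # G,Gg  G,gg  g,GG  g,Gg
absorbing = [0, 5]                   # G,GG  g,gg
Q = P[np.ix_(transient, transient)]
R = P[np.ix_(transient, absorbing)]
N = np.linalg.inv(np.eye(4) - Q)     # fundamental matrix
t = N @ np.ones(4)                   # Nc
B = N @ R                            # NR
```

Consistent with the next exercise, the probability of absorption into the all-G state equals the proportion of G genes in the starting state: 2/3 from (G,Gg) or (g,GG), and 1/3 from (G,gg) or (g,Gg).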


Show that in both Example and the example just given, the probability of absorption in a state having genes of a particular type is equal to the proportion of genes of that type in the starting state. Show that this can be explained by the fact that a game in which your fortune is the number of genes of a particular type in the state of the Markov chain is a fair game.[Notes 1]

Notes

  1. H. Gonshor, “An Application of Random Walk to a Problem in Population Genetics,” American Mathematical Monthly, vol. 94 (1987), pp. 668--671.

Assume that a student going to a certain four-year medical school in northern New England has, each year, a probability [math]q[/math] of flunking out, a probability [math]r[/math] of having to repeat the year, and a probability [math]p[/math] of moving on to the next year (in the fourth year, moving on means graduating).

  • Form a transition matrix for this process taking as states F, 1, 2, 3, 4, and G where F stands for flunking out and G for graduating, and the other states represent the year of study.
  • For the case [math]q = .1[/math], [math]r = .2[/math], and [math]p = .7[/math], find the expected time a beginning student spends in the second year. How long should this student expect to be in medical school?
  • Find the probability that this beginning student will graduate.
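
A numeric sketch of parts (b) and (c), with transient states 1–4 and absorbing states F, G:

```python
import numpy as np

q, r, p = 0.1, 0.2, 0.7                    # flunk, repeat, advance
Q = np.diag([r] * 4) + np.diag([p] * 3, k=1)
R = np.zeros((4, 2))
R[:, 0] = q                                # any year -> F (flunk out)
R[3, 1] = p                                # year 4 -> G (graduate)
N = np.linalg.inv(np.eye(4) - Q)           # N[0, j]: expected time in year j+1
t = N @ np.ones(4)                         # expected total time in school
B = N @ R                                  # absorption probabilities [F, G]
```

A beginning student expects about 1.09 years in the second year and about 4.14 years in school overall, and graduates with probability [math](.7/.8)^4 \approx .586[/math].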

(E. Brown[Notes 1]) Mary and John are playing the following game: They have a three-card deck marked with the numbers 1, 2, and 3 and a spinner with the numbers 1, 2, and 3 on it. The game begins by dealing the cards out so that the dealer gets one card and the other person gets two. A move in the game consists of a spin of the spinner. The person having the card with the number that comes up on the spinner hands that card to the other person. The game ends when someone has all the cards.

  • Set up the transition matrix for this absorbing Markov chain, where the states correspond to the number of cards that Mary has.
  • Find the fundamental matrix.
  • On the average, how many moves will the game last?
  • If Mary deals, what is the probability that John will win the game?
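
A sketch of parts (b)–(d): with states counting Mary's cards, 0 and 3 are absorbing, and one spin moves the chain as in the comments below.

```python
import numpy as np

# From 1 card: Mary's card comes up w.p. 1/3 (-> 0), John's w.p. 2/3 (-> 2).
# From 2 cards: one of Mary's comes up w.p. 2/3 (-> 1), John's w.p. 1/3 (-> 3).
Q = np.array([[0.0, 2/3],
              [2/3, 0.0]])          # transient states: 1 and 2 cards
R = np.array([[1/3, 0.0],
              [0.0, 1/3]])          # absorbing states: 0 and 3 cards
N = np.linalg.inv(np.eye(2) - Q)    # fundamental matrix
t = N @ np.ones(2)                  # expected number of spins
B = N @ R                           # absorption probabilities [Mary 0, Mary 3]
```

The game lasts 3 spins on average from either transient state. If Mary deals she starts with one card, and John wins (Mary reaches 0 cards) with probability 3/5.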

Notes

  1. Private communication.

Assume that an experiment has [math]m[/math] equally probable outcomes. Show that the expected number of independent trials before the first occurrence of [math]k[/math] consecutive occurrences of one of these outcomes is [math](m^k - 1)/(m - 1)[/math].

Hint: Form an absorbing Markov chain with states 1, 2, ..., [math]k[/math], with state [math]i[/math] representing the length of the current run. The expected time until a run of [math]k[/math] is 1 more than the expected time until absorption for the chain started in state 1.

It has been found that, in the decimal expansion of pi, starting with the 24,658,601st digit, there is a run of nine 7's. What would your result say about the expected number of digits necessary to find such a run if the digits are produced randomly?
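
Following the hint, the chain can be built and checked against the closed form; a sketch (the function name is mine, and [math]k \geq 2[/math] is assumed):

```python
import numpy as np

def expected_trials(m, k):
    """Expected trials before the first run of k equal outcomes (m outcomes)."""
    n = k - 1                        # transient states: run lengths 1..k-1
    Q = np.zeros((n, n))
    for i in range(n):
        Q[i, 0] += (m - 1) / m       # a different outcome restarts the run
        if i + 1 < n:
            Q[i, i + 1] = 1 / m      # the same outcome extends the run
    N = np.linalg.inv(np.eye(n) - Q)
    return 1 + (N @ np.ones(n))[0]   # +1 for the trial that starts the run
```

This matches [math](m^k - 1)/(m - 1)[/math] in every case tested; for [math]m = 10[/math], [math]k = 9[/math] the expected number of digits is [math](10^9 - 1)/9 = 111{,}111{,}111[/math].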