By Bot
Jun 09'24
[math] \newcommand{\NA}{{\rm NA}} \newcommand{\mat}[1]{{\bf#1}} \newcommand{\exref}[1]{\ref{##1}} \newcommand{\secstoprocess}{\all} \newcommand{\mathds}{\mathbb}[/math]

(Roberts[Notes 1]) A city is divided into 3 areas 1, 2, and 3. It is estimated that amounts [math]u_1[/math], [math]u_2[/math], and [math]u_3[/math] of pollution are emitted each day from these three areas. A fraction [math]q_{ij}[/math] of the pollution from area [math]i[/math] ends up the next day in area [math]j[/math]. A fraction [math]q_i = 1 - \sum_j q_{ij} \gt 0[/math] goes into the atmosphere and escapes. Let [math]w_i^{(n)}[/math] be the amount of pollution in area [math]i[/math] after [math]n[/math] days.

  • Show that [math]\mat{w}^{(n)} = \mat{u} + \mat{u} \mat{Q} +\cdots + \mat{u}\mat{Q}^{n - 1}[/math].
  • Show that [math]\mat{w}^{(n)} \to \mat{w}[/math], and show how to compute [math]\mat{w}[/math] from [math]\mat{u}[/math].
  • The government wants to limit pollution to a prescribed limiting value [math]\mat{w}[/math]. Show how to determine the emission levels [math]\mat{u}[/math] which would result in this prescribed limiting value [math]\mat{w}[/math].
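Parts (b) and (c) can be checked numerically. The following sketch uses made-up emission amounts and a made-up matrix [math]\mat{Q}[/math] (any [math]\mat{Q}[/math] whose row sums are less than 1 will do); the point is that [math]\mat{w} = \mat{u}(\mat{I} - \mat{Q})^{-1}[/math] and, conversely, [math]\mat{u} = \mat{w}(\mat{I} - \mat{Q})[/math].

```python
import numpy as np

# Hypothetical daily emissions and day-to-day transfer fractions.
u = np.array([50.0, 25.0, 25.0])
Q = np.array([[0.4, 0.1, 0.1],
              [0.1, 0.3, 0.2],
              [0.2, 0.2, 0.3]])   # each row sums to less than 1, so q_i > 0

I = np.eye(3)

# Limiting levels: w = u (I - Q)^{-1}, since w^(n) = u(I + Q + ... + Q^(n-1)).
w = u @ np.linalg.inv(I - Q)

# Part (c): the emissions producing a prescribed limit w are u = w (I - Q).
u_recovered = w @ (I - Q)
print(w, u_recovered)
```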

Notes

  1. F. Roberts, Discrete Mathematical Models (Englewood Cliffs, NJ: Prentice Hall, 1976).

In the Leontief economic model,[Notes 1] there are [math]n[/math] industries 1, 2, ..., [math]n[/math]. The [math]i[/math]th industry requires an amount [math]0 \leq q_{ij} \leq 1[/math] of goods (in dollar value) from industry [math]j[/math] to produce 1 dollar's worth of goods. The outside demand on the industries, in dollar value, is given by the vector [math]\mat{d} = (d_1,d_2,\ldots,d_n)[/math]. Let [math]\mat{Q}[/math] be the matrix with entries [math]q_{ij}[/math].

  • Show that if the industries produce total amounts given by the vector [math]\mat{x} = (x_1,x_2,\ldots,x_n)[/math] then the amounts of goods of each type that the industries will need just to meet their internal demands is given by the vector [math]\mat{x} \mat{Q}[/math].
  • Show that in order to meet the outside demand [math]\mat{d}[/math] and the internal demands the industries must produce total amounts given by a vector [math]\mat{x} = (x_1,x_2,\ldots,x_n)[/math] which satisfies the equation [math]\mat{x} = \mat{x} \mat{Q} + \mat{d}[/math].
  • Show that if [math]\mat{Q}[/math] is the [math]\mat{Q}[/math]-matrix for an absorbing Markov chain, then it is possible to meet any outside demand [math]\mat{d}[/math].
  • Assume that the row sums of [math]\mat{Q}[/math] are less than or equal to 1. Give an economic interpretation of this condition. Form a Markov chain by taking the states to be the industries and the transition probabilities to be the [math]q_{ij}[/math]. Add one absorbing state 0. Define
    [[math]] q_{i0} = 1 - \sum_j q_{ij}\ . [[/math]]
    Show that this chain will be absorbing if every company is either making a profit or ultimately depends upon a profit-making company.
  • Define [math]\mat{x} \mat{c}[/math] to be the gross national product, where [math]\mat{c}[/math] is the column vector of all 1's. Find an expression for the gross national product in terms of the demand vector [math]\mat{d}[/math] and the vector [math]\mat{t}[/math] giving the expected time to absorption.
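A small numerical sketch of parts (b) and (e), with a made-up two-industry matrix [math]\mat{Q}[/math] and demand vector (illustrative values, not data from the exercise):

```python
import numpy as np

# Hypothetical internal-demand matrix; row sums are less than 1.
Q = np.array([[0.2, 0.3],
              [0.1, 0.4]])
d = np.array([10.0, 20.0])          # outside demand, in dollars

N = np.linalg.inv(np.eye(2) - Q)    # fundamental matrix of the absorbing chain
x = d @ N                           # production solving x = xQ + d

# Gross national product x·c (c the all-ones column) equals d·t,
# where t = N·c is the vector of expected times to absorption.
c = np.ones(2)
t = N @ c
print(x, x @ c, d @ t)
```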

Notes

  1. W. W. Leontief, Input-Output Economics (Oxford: Oxford University Press, 1966).

A gambler plays a game in which on each play he wins one dollar with probability [math]p[/math] and loses one dollar with probability [math]q = 1 - p[/math]. The Gambler's Ruin problem is the problem of finding the probability [math]w_x[/math] of winning an amount [math]T[/math] before losing everything, starting in state [math]x[/math]. Show that this problem may be considered to be an absorbing Markov chain with states 0, 1, 2, ..., [math]T[/math], with 0 and [math]T[/math] absorbing states. Suppose that a gambler has probability [math]p = .48[/math] of winning on each play. Suppose, in addition, that the gambler starts with 50 dollars and that [math]T = 100[/math] dollars. Simulate this game 100 times and see how often the gambler is ruined. This estimates [math]w_{50}[/math].
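A minimal simulation along these lines (the seed is arbitrary; since [math]w_{50}[/math] turns out to be roughly .02 for [math]p = .48[/math], ruin occurs in almost every game):

```python
import random

def reaches_T(x, T, p, rng):
    """Play one-dollar rounds until the fortune hits 0 or T; True if T is reached."""
    while 0 < x < T:
        x += 1 if rng.random() < p else -1
    return x == T

rng = random.Random(0)
wins = sum(reaches_T(50, 100, 0.48, rng) for _ in range(100))
print(wins / 100)   # estimate of w_50
```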


Show that [math]w_x[/math] of Exercise satisfies the following conditions:

  • [math]w_x = pw_{x + 1} + qw_{x - 1}[/math] for [math]x = 1[/math], 2, ..., [math]T - 1[/math].
  • [math]w_0 = 0[/math].
  • [math]w_T = 1[/math].

Show that these conditions determine [math]w_x[/math]. Show that, if [math]p = q =1/2[/math], then

[[math]] w_x = \frac xT [[/math]]

satisfies (a), (b), and (c) and hence is the solution. If [math]p \ne q[/math], show that

[[math]] w_x = \frac{(q/p)^x - 1}{(q/p)^T - 1} [[/math]]

satisfies these conditions and hence gives the probability of the gambler winning.
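The unfair-case formula can be checked numerically; here is a quick sketch with the illustrative values [math]p = .48[/math], [math]T = 100[/math]:

```python
# Verify that w_x = ((q/p)^x - 1)/((q/p)^T - 1) satisfies (a), (b), and (c).
p, T = 0.48, 100
q = 1 - p
r = q / p

def w(x):
    return (r**x - 1) / (r**T - 1)

# Largest violation of the recurrence w_x = p*w_{x+1} + q*w_{x-1}.
err = max(abs(w(x) - (p * w(x + 1) + q * w(x - 1))) for x in range(1, T))
print(w(0), w(T), err)   # boundary values 0 and 1; err is round-off size
```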


Write a program to compute the probability [math]w_x[/math] of Exercise for given values of [math]x[/math], [math]p[/math], and [math]T[/math]. Study the probability that the gambler will ruin the bank in a game that is only slightly unfavorable, say [math]p = .49[/math], if the bank has significantly more money than the gambler.
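One possible sketch of such a program, using the closed form from the previous exercise; the numbers here (a stake of 100 dollars, a few bank sizes) are illustrative only:

```python
def win_probability(x, p, T):
    """Probability w_x of reaching T before 0, starting with x dollars."""
    q = 1 - p
    if p == q:
        return x / T
    r = q / p
    return (r**x - 1) / (r**T - 1)

# A gambler with 100 dollars tries to break banks of increasing size at p = .49.
for bank in (100, 500, 1000):
    print(bank, win_probability(100, 0.49, 100 + bank))
```

Even at the nearly fair value [math]p = .49[/math], the probability of ruining a much richer bank is tiny.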


We considered the two examples of the Drunkard's Walk corresponding to the cases [math]n = 4[/math] and [math]n =5[/math] blocks (see Example). Verify that in these two examples the expected time to absorption, starting at [math]x[/math], is equal to [math]x(n - x)[/math]. See if you can prove that this is true in general. Hint:

Show that if [math]f(x)[/math] is the expected time to absorption then [math]f(0) = f(n) = 0[/math] and

[[math]] f(x) = (1/2)f(x - 1) + (1/2)f(x + 1) + 1 [[/math]]

for [math]0 \lt x \lt n[/math]. Show that if [math]f_1(x)[/math] and [math]f_2(x)[/math] are two solutions, then their difference [math]g(x)[/math] is a solution of the equation

[[math]] g(x) = (1/2)g(x - 1) + (1/2)g(x + 1)\ . [[/math]]

Also, [math]g(0) = g(n) = 0[/math]. Show that it is not possible for [math]g(x)[/math] to have a strict maximum or a strict minimum at the point [math]i[/math], where [math]1 \le i \le n-1[/math]. Use this to show that [math]g(i) = 0[/math] for all [math]i[/math]. This shows that there is at most one solution. Then verify that the function [math]f(x) = x(n-x)[/math] is a solution.
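The verification for [math]n = 4[/math] and [math]n = 5[/math] can be done by computing the expected absorption times [math]\mat{t} = \mat{N}\mat{c}[/math] from the fundamental matrix; a sketch:

```python
import numpy as np

def absorption_times(n):
    """Expected steps to absorption for the Drunkard's Walk on 0, 1, ..., n."""
    m = n - 1                       # number of transient states 1, ..., n-1
    Q = np.zeros((m, m))
    for i in range(m):
        if i > 0:
            Q[i, i - 1] = 0.5       # step left with probability 1/2
        if i < m - 1:
            Q[i, i + 1] = 0.5       # step right with probability 1/2
    N = np.linalg.inv(np.eye(m) - Q)    # fundamental matrix
    return N @ np.ones(m)               # t = Nc

for n in (4, 5):
    print(n, absorption_times(n), [x * (n - x) for x in range(1, n)])
```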


Consider an absorbing Markov chain with state space [math]S[/math]. Let [math]f[/math] be a function defined on [math]S[/math] with the property that

[[math]] f(i) = \sum_{j \in S} p_{ij} f(j)\ , [[/math]]

or in vector form

[[math]] \mat{f} = \mat{Pf}\ . [[/math]]

Then [math]f[/math] is called a harmonic function for [math]\mat{P}[/math]. If you imagine a game in which your fortune is [math]f(i)[/math] when you are in state [math]i[/math], then the harmonic condition means that the game is fair in the sense that your expected fortune after one step is the same as it was before the step.

  • Show that for [math]f[/math] harmonic
    [[math]] \mat{f} = \mat{P}^n{\mat{f}} [[/math]]
    for all [math]n[/math].
  • Show, using (a), that for [math]f[/math] harmonic
    [[math]] \mat{f} = \mat{P}^\infty \mat{f}\ , [[/math]]
    where
    [[math]] \mat{P}^\infty = \lim_{n \to \infty} \mat{P}^n = \pmatrix{ \begin{array}{l|r} \mat{0} & \mat{B} \\ \hline \mat{0} & \mat{I} \\ \end{array} \cr}\ . [[/math]]
  • Using (b), prove that when you start in a transient state [math]i[/math] your expected final fortune
    [[math]] \sum_k b_{ik} f(k) [[/math]]
    is equal to your starting fortune [math]f(i)[/math]. In other words, a fair game on a finite state space remains fair to the end. (Fair games in general are called martingales. Fair games on infinite state spaces need not remain fair with an unlimited number of plays allowed. For example, consider the game of Heads or Tails (see Example). Let Peter start with 1 penny and play until he has 2. Then Peter will be sure to end up 1 penny ahead.)
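Parts (a)-(c) can be illustrated on the symmetric Gambler's Ruin chain with [math]T = 4[/math], where [math]f(i) = i[/math] (the fortune itself) is harmonic; this is a sketch for that one example only:

```python
import numpy as np

# States ordered transient (1, 2, 3) then absorbing (0, 4); p = q = 1/2.
Q = np.array([[0, .5, 0],
              [.5, 0, .5],
              [0, .5, 0]])
R = np.array([[.5, 0],
              [0, 0],
              [0, .5]])             # transient -> absorbing states 0 and 4
P = np.block([[Q, R], [np.zeros((2, 3)), np.eye(2)]])

f = np.array([1., 2., 3., 0., 4.])  # f(i) = i is harmonic: f = Pf

# B = NR gives the absorption probabilities; the expected final fortune
# (B applied to f on the absorbing states) equals the starting fortune.
B = np.linalg.inv(np.eye(3) - Q) @ R
print(P @ f, B @ f[3:])
```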

A coin is tossed repeatedly. We are interested in finding the expected number of tosses until a particular pattern, say B = HTH, occurs for the first time. If, for example, the outcomes of the tosses are HHTTHTH, we say that the pattern B has occurred for the first time after 7 tosses. Let [math]T^B[/math] be the time to obtain pattern B for the first time. Li[Notes 1] gives the following method for determining [math]E(T^B)[/math]. We are in a casino and, before each toss of the coin, a gambler enters, pays 1 dollar to play, and bets that the pattern B = HTH will occur on the next three tosses. If H occurs, he wins 2 dollars and bets this amount that the next outcome will be T. If he wins, he wins 4 dollars and bets this amount that H will come up next time. If he wins, he wins 8 dollars and the pattern has occurred. If at any time he loses, he leaves with no winnings. Let A and B be two patterns. Let AB be the total amount held by the gamblers who arrive while the pattern A occurs and bet that the pattern B will occur. For example, if A = HT and B = HTH then AB = 4, since the first gambler bet on H, won 2 dollars, bet this amount on T, and won 4 dollars. The second gambler bet on H and lost. If A = HH and B = HTH, then AB = 2 since the first gambler bet on H and won but then bet on T and lost, and the second gambler bet on H and won. If A = B = HTH then AB = BB = 8 + 2 = 10.

Now for each gambler coming in, the casino takes in 1 dollar. Thus the casino takes in [math]T^B[/math] dollars. How much does it pay out? The only gamblers who go off with any money are those who arrive during the time the pattern B occurs and they win the amount BB. But since all the bets made are perfectly fair bets, it seems quite intuitive that the expected amount the casino takes in should equal the expected amount that it pays out. That is, [math]E(T^B)[/math] = BB.

Since we have seen that for B = HTH, BB = 10, the expected time to reach the pattern HTH for the first time is 10. If we had been trying to get the pattern B = HHH, then BB [math]= 8 + 4 + 2 = 14[/math] since all the last three gamblers are paid off in this case. Thus the expected time to get the pattern HHH is 14. To justify this argument, Li used a theorem from the theory of martingales (fair games).

We can obtain these expectations by considering a Markov chain whose states are the possible initial segments of the sequence HTH; these states are HTH, HT, H, and [math]\emptyset[/math], where [math]\emptyset[/math] is the empty set. Then, for this example, the transition matrix is

[[math]] \bordermatrix{ &\mbox{HTH} & \mbox{HT} & \mbox{H} & \emptyset \cr \mbox{HTH} & 1 & 0 & 0 & 0 \cr \mbox{HT} & .5 & 0 & 0 & .5 \cr \mbox{H} & 0 & .5 & .5 & 0 \cr \emptyset & 0 & 0 & .5 & .5 }\ , [[/math]]

and if B = HTH, [math]E(T^B)[/math] is the expected time to absorption for this chain started in state [math]\emptyset[/math].


Show, using the associated Markov chain, that the values [math]E(T^B)[/math] = 10 and [math]E(T^B)[/math] = 14 are correct for the expected time to reach the patterns HTH and HHH, respectively.
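A sketch of this check, computing [math]\mat{t} = \mat{N}\mat{c}[/math] for the two chains (the HHH chain is built in the same way from the initial segments HHH, HH, H, and [math]\emptyset[/math]):

```python
import numpy as np

def time_to_absorption(Q, start):
    """Expected absorption time from `start`, given the transient part Q."""
    N = np.linalg.inv(np.eye(len(Q)) - Q)
    return (N @ np.ones(len(Q)))[start]

# Transient states ordered HT, H, emptyset (from the matrix above).
Q_hth = np.array([[0, 0, .5],
                  [.5, .5, 0],
                  [0, .5, .5]])
# Transient states ordered HH, H, emptyset for the pattern HHH.
Q_hhh = np.array([[0, 0, .5],
                  [.5, 0, .5],
                  [0, .5, .5]])

print(time_to_absorption(Q_hth, start=2),   # 10, starting from emptyset
      time_to_absorption(Q_hhh, start=2))   # 14, starting from emptyset
```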

Notes

  1. S-Y. R. Li, “A Martingale Approach to the Study of Occurrence of Sequence Patterns in Repeated Experiments,” Annals of Probability, vol. 8 (1980), pp. 1171--1176.

We can use the gambling interpretation given in Exercise to find the expected number of tosses required to reach pattern B when we start with pattern A. For the problem to be meaningful, we assume that pattern A does not have pattern B as a subpattern. Let [math]E_A(T^B)[/math] be the expected time to reach pattern B starting with pattern A. We use our gambling scheme and assume that the first [math]k[/math] coin tosses produced the pattern A. During this time, the gamblers made an amount AB.

The total amount the gamblers will have made when the pattern B occurs is BB. Thus, the amount that the gamblers made after the pattern A has occurred is BB - AB. Again by the fair game argument, [math]E_A(T^B)[/math] = BB - AB.

For example, suppose that we start with pattern A = HT and are trying to get the pattern B = HTH. Then we saw in Exercise that AB = 4 and BB = 10, so [math]E_A(T^B)[/math] = BB - AB = 6.

Verify that this gambling interpretation leads to the correct answer for all starting states in the examples that you worked in Exercise.
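The gambling answer BB - AB can be compared with the chain computation for every starting state at once; a sketch using the HTH chain of the previous exercise:

```python
import numpy as np

# Transient states of the HTH chain, ordered HT, H, emptyset.
Q = np.array([[0, 0, .5],
              [.5, .5, 0],
              [0, .5, .5]])
t = np.linalg.inv(np.eye(3) - Q) @ np.ones(3)

# Starting from A = HT: BB - AB = 10 - 4 = 6, which should match t[0].
print(t)   # expected times starting from HT, H, emptyset
```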


Here is an elegant method due to Guibas and Odlyzko[Notes 1] to obtain the expected time to reach a pattern, say HTH, for the first time. Let [math]f(n)[/math] be the number of sequences of length [math]n[/math] which do not contain the pattern HTH. Let [math]f_p(n)[/math] be the number of sequences in which the pattern occurs for the first time after [math]n[/math] tosses. To each sequence counted by [math]f(n)[/math], append the pattern HTH. Then divide the resulting sequences into three subsets: the set where HTH occurs for the first time at time [math]n + 1[/math] (for this, the original sequence must have ended with HT); the set where HTH occurs for the first time at time [math]n + 2[/math] (this cannot happen for this pattern); and the set where HTH occurs for the first time at time [math]n + 3[/math] (the original sequence ended with anything except HT). Doing this, we have

[[math]] f(n) = f_p(n + 1) + f_p(n + 3)\ . [[/math]]

Thus,

[[math]] \frac{f(n)}{2^n} = \frac{2f_p(n + 1)}{2^{n + 1}} + \frac{2^3f_p(n + 3)}{2^{n + 3}}\ . [[/math]]

If [math]T[/math] is the time that the pattern occurs for the first time, this equality states that

[[math]] P(T \gt n) = 2P(T = n + 1) + 8P(T = n + 3)\ . [[/math]]

Show that if you sum this equality over all [math]n[/math] you obtain

[[math]] \sum_{n = 0}^\infty P(T \gt n) = 2 + 8 = 10\ . [[/math]]

Show that for any nonnegative integer-valued random variable [math]T[/math]

[[math]] E(T) = \sum_{n = 0}^\infty P(T \gt n)\ , [[/math]]

and conclude that [math]E(T) = 10[/math]. Note that this method of proof makes very clear that [math]E(T)[/math] is, in general, equal to the expected amount the casino pays out and avoids the martingale system theorem used by Li.
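The counting identity and the value [math]E(T) = 10[/math] can be checked by brute-force enumeration over short sequences; a sketch (the cutoff [math]N[/math] is arbitrary, and the partial sum approaches 10 only slowly):

```python
from itertools import product

def first_time(seq, pat=("H", "T", "H")):
    """Toss on which pat first occurs in seq, or None if it never occurs."""
    for i in range(len(seq) - len(pat) + 1):
        if seq[i:i + len(pat)] == pat:
            return i + len(pat)
    return None

N = 12
f = [0] * (N + 4)     # f[n]: length-n sequences with no occurrence of HTH
fp = [0] * (N + 4)    # fp[n]: pattern occurs for the first time on toss n
for n in range(N + 4):
    for seq in product("HT", repeat=n):
        t = first_time(seq)
        if t is None:
            f[n] += 1
        elif t == n:
            fp[n] += 1

# The identity f(n) = f_p(n + 1) + f_p(n + 3) for the pattern HTH:
assert all(f[n] == fp[n + 1] + fp[n + 3] for n in range(N))

# Partial sum of P(T > n) = f(n)/2^n; the full sum is E(T) = 10.
print(sum(f[n] / 2**n for n in range(N + 4)))
```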

Notes

  1. L. J. Guibas and A. M. Odlyzko, “String Overlaps, Pattern Matching, and Non-transitive Games,” Journal of Combinatorial Theory, Series A, vol. 30 (1981), pp. 183--208.