
A frequency distribution is a discrete distribution that is supported on [math]\mathbb{N} = \{0,1,2,\ldots\}[/math]. Frequency distributions are useful in actuarial science since they can be used to model the number of insurance claims generated in a fixed time period.

(a, b, 0) Class

The distribution of a discrete random variable [math]N[/math] whose values are nonnegative integers is said to be a member of the (a, b, 0) class of distributions if its probability mass function obeys

[[math]]\frac{p_k}{p_{k-1}} = a + \frac{b}{k}, \qquad k = 1, 2, 3, \dots[[/math]]

where [math]p_k = \operatorname{P}(N = k)[/math] (provided [math]a[/math] and [math]b[/math] exist and are real). There are only three discrete distributions that satisfy the full form of this relationship: the Poisson, binomial and negative binomial distributions.
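
The recursion can be applied directly to tabulate probabilities: starting from [math]p_0[/math], each successive value is obtained by multiplying by [math]a + b/k[/math]. Below is a minimal sketch (Python; the helper name and the parameter values are illustrative, not part of the text) that does exactly this and, as a check, recovers the Poisson probabilities treated later on this page.

<syntaxhighlight lang="python">
import math

def abz_pmf(a, b, p0, kmax):
    """Tabulate p_0, ..., p_kmax using the (a, b, 0) recursion p_k = (a + b/k) p_{k-1}."""
    p = [p0]
    for k in range(1, kmax + 1):
        p.append((a + b / k) * p[-1])
    return p

# With a = 0, b = 2 and p_0 = e^{-2}, the recursion reproduces the Poisson(2) pmf.
p = abz_pmf(0.0, 2.0, math.exp(-2.0), 100)
print(p[:4])      # p_0, p_1, p_2, p_3
print(sum(p))     # ~1.0, since the truncated tail is negligible
</syntaxhighlight>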


Mean and Variance for (a, b, 0) distributions

If a distribution belongs to the [math](a,b,0)[/math] class, then its mean equals [math](a + b)/(1 - a)[/math] and its variance equals [math](a+b)/(1-a)^2[/math].

Proof

Set [math]\mu = \operatorname{E}[N][/math] and [math]\mu_{(2)} = \operatorname{E}[N^2][/math]. We have

[[math]] \begin{align*} \mu = \sum_{k=1}^{\infty} kp_k = \sum_{k=1}^{\infty} k (a + b/k)p_{k-1} &= a\sum_{k=1}^{\infty}kp_{k-1} + b\sum_{k=1}^{\infty} p_{k-1} \\ &= a\sum_{k=1}^{\infty} (k-1)p_{k-1} + (a + b) \sum_{k=1}^{\infty} p_{k-1} \\ &= a\mu + a + b. \end{align*} [[/math]]
Hence [math] \mu = (a + b)/(1 - a) [/math]. Similarly, we have

[[math]] \begin{align*} \mu_{(2)} = \sum_{k=1}^{\infty}k^2p_k &= a\sum_{k=1}^{\infty}k^2p_{k-1} + b\sum_{k=1}^{\infty}kp_{k-1} \\ &= a \sum_{k=1}^{\infty}[(k-1)^2 + 2k -1]p_{k-1} + b(\mu + 1) \\ &= a(\mu_{(2)} + 2\mu + 1) + b(\mu + 1). \end{align*} [[/math]]

Hence [math] \mu_{(2)} = [a + b + \mu(2a + b)]/(1-a)[/math]. Using [math]\operatorname{Var}[N] = \mu_{(2)} - \mu^2[/math]:

[[math]] \begin{align*} \operatorname{Var}[N] &= \frac{[a + b + \mu(2a + b)](1-a) - (a + b)^2}{(1-a)^2}\\ &= \frac{a+b}{(1-a)^2}. \end{align*} [[/math]]
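
As a quick numerical sanity check of the theorem (a sketch only; the Poisson example and its parameter are illustrative, with [math]a = 0[/math] and [math]b = \lambda[/math] as derived in the Poisson section below):

<syntaxhighlight lang="python">
import math

# Poisson(lam) is the (a, b, 0) member with a = 0 and b = lam,
# so the theorem predicts mean = lam and variance = lam.
lam = 3.0
a, b = 0.0, lam
p = [math.exp(-lam) * lam**k / math.factorial(k) for k in range(200)]

mean = sum(k * pk for k, pk in enumerate(p))
var = sum(k**2 * pk for k, pk in enumerate(p)) - mean**2

print(mean, (a + b) / (1 - a))       # both ~3.0
print(var,  (a + b) / (1 - a)**2)    # both ~3.0
</syntaxhighlight>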

(a, b, 1) Class

The (a,b,1) class of distributions is a family of discrete distributions related to the (a,b,0) class in the sense that the probability mass function of any distribution belonging to the (a,b,1) class satisfies the recurrence relation

[[math]]\frac{p_k}{p_{k-1}} = a + \frac{b}{k}, \qquad k = 2, 3, \dots[[/math]]

Notice that the (a,b,1) class is not a subclass of (a,b,0) since we don't require that the recurrence relation hold for [math]k = 1[/math]. The (a,b,1) class is made up of two subclasses: the zero-truncated subclass and the zero-modified subclass.

The zero-truncated subclass

This class contains discrete distributions belonging to (a,b,1) with the additional property that no probability mass is attributed to the value 0. The probability mass function of any distribution belonging to this class will be denoted by [math]p_{k}^T[/math]; consequently, a random variable [math]N[/math] has a distribution contained in the zero-truncated subclass if and only if the following holds:

[[math]] p_0^T = 0, \qquad \frac{p_k^T}{p_{k-1}^T} = a + \frac{b}{k}, \qquad k = 2, 3, \dots [[/math]]

It is clear that every distribution in the zero-truncated subclass can be obtained by conditioning a distribution from the (a,b,0) class on being non-zero.
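
Concretely, if [math]p_k[/math] is the probability mass function of an (a,b,0) distribution with [math]p_0 \lt 1[/math], conditioning on [math]N \gt 0[/math] gives

[[math]] p_0^T = 0, \qquad p_k^T = \frac{p_k}{1 - p_0}, \qquad k = 1, 2, 3, \dots [[/math]]

and the ratios [math]p_k^T/p_{k-1}^T = p_k/p_{k-1}[/math] are unchanged for [math]k \geq 2[/math], so the recurrence relation above is satisfied.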

The zero-modified subclass

This class contains discrete distributions belonging to (a,b,1) with the additional property that a positive mass is attributed to the value 0, i.e., the zero-modified subclass is the complement of the zero-truncated subclass in (a,b,1). The probability mass function of any distribution belonging to this class will be denoted by [math]p_{k}^M[/math]; consequently, a random variable [math]N[/math] has a distribution contained in the zero-modified subclass if and only if the following holds:

[[math]] p_0^M \gt 0, \qquad \frac{p_k^M}{p_{k-1}^M} = a + \frac{b}{k}, \qquad k = 2, 3, \dots [[/math]]

Notice that a distribution belonging to the zero-modified subclass doesn't necessarily belong to the (a,b,0) class since [math]p_0^M[/math] can be set independently of [math]p_1^M[/math]. The zero-modified subclass can be constructed from the zero-truncated subclass by the following procedure:

Constructing a zero-modified distribution
  1. Let [math]p_k^T[/math] be the probability mass function for a zero-truncated distribution.
  2. Set [math]p_k^M = p_k^T \cdot (1-p_0^M)[/math] for [math]k \geq 1[/math], where [math]0 \lt p_0^M \leq 1[/math]; then [math]p_k^M[/math] is the probability mass function of a zero-modified distribution (see the sketch below).
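
A minimal sketch of this construction (Python; the zero-truncated Poisson input and the parameter values are illustrative choices, not part of the procedure):

<syntaxhighlight lang="python">
import math

def zero_truncated_poisson(lam, kmax):
    """p_k^T for a zero-truncated Poisson(lam): p_k / (1 - p_0) for k >= 1, and p_0^T = 0."""
    p0 = math.exp(-lam)
    pmf = [0.0]
    for k in range(1, kmax + 1):
        pk = math.exp(-lam) * lam**k / math.factorial(k)
        pmf.append(pk / (1.0 - p0))
    return pmf

def zero_modify(pmf_T, p0M):
    """Step 2: p_0^M = p0M and p_k^M = (1 - p0M) * p_k^T for k >= 1."""
    return [p0M] + [(1.0 - p0M) * pk for pk in pmf_T[1:]]

pT = zero_truncated_poisson(lam=2.0, kmax=100)
pM = zero_modify(pT, p0M=0.3)            # place 30% of the mass at zero
print(sum(pM))                           # ~1.0
print(pM[2] / pM[1], pT[2] / pT[1])      # equal ratios: the k >= 2 recursion is preserved
</syntaxhighlight>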

Binomial

The binomial distribution with parameters [math]m[/math] and [math]q[/math] is the number of successes in a sequence of [math]m[/math] independent yes/no experiments, each of which yields success with probability [math]q[/math]. A success/failure experiment is also called a Bernoulli experiment or Bernoulli trial; when [math]m = 1[/math], the binomial distribution is a Bernoulli distribution.

Probability Mass Function

In general, if the random variable [math]N[/math] follows the binomial distribution with parameters [math]m \in \mathbb{N}[/math] and [math]q \in [0,1][/math], we write [math]N \sim \operatorname{B}(m,q)[/math]. The probability of getting exactly [math]k[/math] successes in [math]m[/math] trials is given by the probability mass function:

[[math]] f(k;m,q) = \operatorname{P}(N = k) = \binom m k q^k(1-q)^{m-k}[[/math]]

for [math]k=0,1,2,...,m[/math], where

[[math]]\binom m k =\frac{m!}{k!(m-k)!}[[/math]]

is the binomial coefficient, hence the name of the distribution. The formula can be understood as follows: we want exactly [math]k[/math] successes ([math]q^k[/math]) and [math]m-k[/math] failures ([math](1 - q)^{m-k}[/math]). However, the [math]k[/math] successes can occur anywhere among the [math]m[/math] trials, and there are [math]{m\choose k}[/math] different ways of distributing [math]k[/math] successes in a sequence of [math]m[/math] trials.

In creating reference tables for binomial distribution probabilities, the table is usually filled in only up to [math]m/2[/math] values. This is because for [math]k \gt m/2[/math], the probability can be calculated by its complement as

[[math]]f(k,m,q)=f(m-k,m,1-q). [[/math]]

The probability mass function satisfies the following recurrence relation, for every [math]m,q[/math]:

[[math]]\left\{\begin{array}{l} q (m-k) f(k,m,q) = (k+1) (1-q) f(k+1,m,q), \\ f(0,m,q)=(1-q)^m \end{array}\right\}[[/math]]

i.e., a binomial distribution belongs to the [math](a,b,0)[/math] class with [math] a = -q/(1-q) [/math] and [math] b = (m + 1)q/(1-q)[/math].
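
As a short illustration (a sketch with arbitrarily chosen [math]m[/math] and [math]q[/math]), one can check numerically that the binomial probability mass function satisfies [math]p_k/p_{k-1} = a + b/k[/math] with these values of [math]a[/math] and [math]b[/math], as well as the complement identity given above:

<syntaxhighlight lang="python">
from math import comb

def binom_pmf(k, m, q):
    return comb(m, k) * q**k * (1 - q)**(m - k)

m, q = 10, 0.3
a = -q / (1 - q)
b = (m + 1) * q / (1 - q)

for k in range(1, m + 1):
    ratio = binom_pmf(k, m, q) / binom_pmf(k - 1, m, q)
    assert abs(ratio - (a + b / k)) < 1e-12     # the (a, b, 0) recursion holds

# Complement identity used when tabulating only up to m/2:
k = 8
assert abs(binom_pmf(k, m, q) - binom_pmf(m - k, m, 1 - q)) < 1e-15
</syntaxhighlight>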

Poisson

In probability theory and statistics, the Poisson distribution, named after French mathematician Siméon Denis Poisson, is a discrete probability distribution that expresses the probability of a given number of events occurring in a fixed interval of time and/or space if these events occur with a known average rate and independently of the time since the last event.[1] The Poisson distribution can also be used for the number of events in other specified intervals such as distance, area or volume. Within the context of insurance, the Poisson distribution can be used to model the number (frequency) of claims during a given time period.

Probability Mass Function

A discrete random variable [math]N[/math] is said to have a Poisson distribution with parameter [math]\lambda \gt 0[/math], if, for [math]k=0,1,2,...[/math] the probability mass function of [math]N[/math] is given by:

[[math]]p_k= \operatorname{P}(N = k)= \frac{\lambda^k e^{-\lambda}}{k!}.[[/math]]

The probability mass function satisfies the following recurrence relation:

[[math]]\left\{\begin{array}{l} (k+1) f(k+1)-\lambda f(k)=0, \\ f(0)=e^{-\lambda} \end{array}\right\} [[/math]]

i.e., a Poisson distribution belongs to the [math](a,b,0)[/math] class with [math] a = 0 [/math] and [math] b = \lambda[/math].
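
For instance (a minimal sketch, with an arbitrary choice of [math]\lambda[/math]), starting the recursion at [math]f(0) = e^{-\lambda}[/math] reproduces the closed-form Poisson probabilities:

<syntaxhighlight lang="python">
import math

lam = 4.0
f = [math.exp(-lam)]                 # f(0) = e^{-lambda}
for k in range(1, 15):
    f.append(lam / k * f[-1])        # p_k / p_{k-1} = a + b/k with a = 0, b = lambda

for k, fk in enumerate(f):
    assert abs(fk - math.exp(-lam) * lam**k / math.factorial(k)) < 1e-12
</syntaxhighlight>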

Negative Binomial

The negative binomial distribution is a discrete probability distribution of the number of failures in a sequence of independent and identically distributed Bernoulli trials before a specified number of successes (denoted [math]r[/math]) occurs. More precisely, suppose there is a sequence of independent Bernoulli trials: each trial has two potential outcomes, called “success” and “failure”. In each trial the probability of failure is [math]q[/math] and of success is [math]1 - q[/math]. We observe this sequence until a predefined number [math]r[/math] of successes has occurred.

Probability Mass Function

The probability mass function of the negative binomial distribution is

[[math]] f(k; r, q) \equiv \operatorname{P}(N = k) = \binom{k+r-1}{k} q^k(1-q)^r \quad\text{for }k = 0, 1, 2, \dotsc [[/math]]

The binomial coefficient can be written in the following manner, explaining the name “negative binomial”:

[[math]] \begin{align*} \frac{(k+r-1)\dotsm(r)}{k!} &= (-1)^k \frac{(-r)(-r-1)(-r-2)\dotsm(-r-k+1)}{k!} \\ &= (-1)^k\binom{-r}{k}. \end{align*} [[/math]]

To understand the above definition of the probability mass function, note that the probability for every specific sequence of [math]k[/math] failures and [math]r[/math] successes is [math](1-q)^rq^k[/math], because the outcomes of the [math]k+r[/math] trials are supposed to happen independently. Since the [math]r[/math]th success comes last, it remains to choose the [math]k[/math] trials with failures out of the remaining [math]k + r - 1[/math] trials. The above binomial coefficient gives precisely the number of all these sequences of length [math]k + r - 1[/math].

Extension to real-valued r

It is possible to extend the definition of the negative binomial distribution to the case of a positive real parameter [math]r[/math]. Although it is impossible to visualize a non-integer number of “successes”, we can still formally define the distribution through its probability mass function.

To be consistent with the parametrizations found in [2], we consider the alternative parametrization defined implicitly by [math]q = \beta/(1+\beta)[/math].

As before, we say that [math]N[/math] has a negative binomial (or Pólya) distribution if it has a probability mass function:

[[math]] f(k; r, \beta) \equiv \operatorname{P}(N = k) = \binom{k+r-1}{k} \frac{\beta^k}{(1 + \beta)^{r + k}} \quad\text{for }k = 0, 1, 2, \dotsc [[/math]]

Here [math]r[/math] is a real, positive number. The binomial coefficient is then defined by the multiplicative formula and can also be rewritten using the gamma function:

[[math]] \binom{k+r-1}{k} = \frac{(k+r-1)(k+r-2)\dotsm(r)}{k!} = \frac{\Gamma(k+r)}{k!\,\Gamma(r)}. [[/math]]

To show that the probability mass function sums to one, we have, by the binomial series,

[[math]] (1 + \beta)^{r} = (1 - q)^{-r} =\sum_{k=0}^\infty(-1)^k\binom{-r}{k}q^k = (1 + \beta)^r \,\sum_{k=0}^\infty \operatorname{P}(N = k), [[/math]]

and hence [math]\sum_{k=0}^\infty \operatorname{P}(N = k) = 1[/math].
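
The normalization can also be checked numerically for non-integer [math]r[/math]. The sketch below (Python; the parameter values are illustrative) evaluates the coefficient [math]\Gamma(k+r)/(k!\,\Gamma(r))[/math] through the log-gamma function for numerical stability and sums the truncated probability mass function:

<syntaxhighlight lang="python">
import math

def negbin_pmf(k, r, beta):
    """P(N = k) for real r > 0, with the coefficient Gamma(k+r) / (k! Gamma(r))."""
    log_coeff = math.lgamma(k + r) - math.lgamma(k + 1) - math.lgamma(r)
    return math.exp(log_coeff + k * math.log(beta) - (r + k) * math.log(1 + beta))

r, beta = 2.5, 1.7                   # non-integer r is allowed
total = sum(negbin_pmf(k, r, beta) for k in range(500))
print(total)                         # ~1.0: the truncated pmf sums to one
</syntaxhighlight>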

Finally, the following recurrence relation holds:

[[math]]\begin{array}{l} (k+1) \operatorname{P} (k+1)- q \operatorname{P} (k) (k+r)=0, \\ \operatorname{P} (0) = (1-q)^r. \end{array} [[/math]]

In particular, the negative binomial distribution belongs to the [math](a,b,0)[/math] class with [math] a = \beta/(1 + \beta)[/math] and [math] b = \beta(r-1)/(1 + \beta)[/math].
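
These values of [math]a[/math] and [math]b[/math] can likewise be verified by computing successive ratios of the probability mass function (a sketch reusing the gamma-function form above, with illustrative parameters):

<syntaxhighlight lang="python">
import math

def negbin_pmf(k, r, beta):
    # Gamma-function form of the coefficient, valid for real r > 0
    log_coeff = math.lgamma(k + r) - math.lgamma(k + 1) - math.lgamma(r)
    return math.exp(log_coeff + k * math.log(beta) - (r + k) * math.log(1 + beta))

r, beta = 3.2, 0.8
a = beta / (1 + beta)
b = beta * (r - 1) / (1 + beta)

for k in range(1, 20):
    ratio = negbin_pmf(k, r, beta) / negbin_pmf(k - 1, r, beta)
    assert abs(ratio - (a + b / k)) < 1e-12    # matches the (a, b, 0) recursion
</syntaxhighlight>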

Geometric

The geometric distribution is the negative binomial distribution with [math]r = 1[/math] and can be interpreted (in the actuarial context) as the number of failures before the first success.

Summary

The following table summarizes key information regarding the distributions covered on this page:

{| class="wikitable"
! Distribution !! [math] \operatorname{P}[N=k] [/math] !! [math] a [/math] !! [math] b [/math] !! [math] P(z) [/math] !! [math] \operatorname{E}[N] [/math] !! [math] \operatorname{Var}(N) [/math]
|-
| Binomial || [math] \binom{m}{k} q^k (1-q)^{m-k} [/math] || [math] \frac{-q}{1-q} [/math] || [math] \frac{q(m+1)}{1-q} [/math] || [math] [qz+(1-q)]^{m} [/math] || [math] mq [/math] || [math] mq(1-q) [/math]
|-
| Poisson || [math] e^{-\lambda}\frac{\lambda^k}{k!} [/math] || [math] 0 [/math] || [math] \lambda [/math] || [math] e^{\lambda(z-1)} [/math] || [math] \lambda [/math] || [math] \lambda [/math]
|-
| Negative Binomial || [math] (-1)^k \binom{-r}{k}\frac{\beta^k}{(1 + \beta)^{r + k}} [/math] || [math] \frac{\beta}{1 + \beta} [/math] || [math] \frac{\beta(r-1)}{1 + \beta} [/math] || [math] [1-\beta(z-1)]^{-r} [/math] || [math] r\beta [/math] || [math] r\beta (1 + \beta) [/math]
|}

Notes

  1. Frank A. Haight (1967). Handbook of the Poisson Distribution. New York: John Wiley & Sons.
  2. https://www.soa.org/globalassets/assets/Files/Edu/2019/2019-02-exam-stam-tables.pdf
