Order Statistics

The [math]k[/math]th order statistic of a statistical sample is equal to its [math]k[/math]th-smallest value.[1] Together with rank statistics, order statistics are among the most fundamental tools in non-parametric statistics and inference.

Important special cases of the order statistics are the minimum and maximum value of a sample, and (with some qualifications discussed below) the sample median and other sample quantiles.

When using probability theory to analyze order statistics of random samples from a continuous distribution, the cumulative distribution function is used to reduce the analysis to the case of order statistics of the uniform distribution.

Notation and examples

For example, suppose that four numbers are observed or recorded, resulting in a sample of size 4. If the sample values are

6  9  3  8

they will usually be denoted

[[math]] x_1=6,\ \ x_2=9,\ \ x_3=3,\ \ x_4=8,\, [[/math]]

where the subscript [math]i[/math] in [math]x_i[/math] indicates simply the order in which the observations were recorded and is usually assumed not to be significant. A case when the order is significant is when the observations are part of a time series.

The order statistics would be denoted

[[math]]x_{(1)}=3,\ \ x_{(2)}=6,\ \ x_{(3)}=8,\ \ x_{(4)}=9,\,[[/math]]

where the subscript [math]i[/math] enclosed in parentheses indicates the [math]i[/math]th order statistic of the sample.
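In software, the order statistics of an observed sample are obtained simply by sorting. The following short Python sketch (the code and variable names are illustrative, not standard notation) reproduces the example above.

<syntaxhighlight lang="python">
# Order statistics of the observed sample: sort the recorded values.
sample = [6, 9, 3, 8]          # x_1, x_2, x_3, x_4, in the order recorded
order_stats = sorted(sample)   # x_(1), x_(2), x_(3), x_(4)
print(order_stats)             # [3, 6, 8, 9]
</syntaxhighlight>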

The first order statistic (or smallest order statistic) is always the minimum of the sample, that is,

[[math]]X_{(1)}=\min\{\,X_1,\ldots,X_n\,\}[[/math]]

where, following a common convention, we use upper-case letters to refer to random variables, and lower-case letters (as above) to refer to their actual observed values.

Similarly, for a sample of size [math]n[/math], the [math]n[/math]th order statistic (or largest order statistic) is the maximum, that is,

[[math]]X_{(n)}=\max\{\,X_1,\ldots,X_n\,\}.[[/math]]

The sample range is the difference between the maximum and minimum. It is clearly a function of the order statistics:

[[math]]{\rm Range}\{\,X_1,\ldots,X_n\,\} = X_{(n)}-X_{(1)}.[[/math]]

Another statistic that is important in exploratory data analysis and simply related to the order statistics is the sample interquartile range.

The sample median may or may not be an order statistic, since there is a single middle value only when the number [math]n[/math] of observations is odd. More precisely, if [math]n=2m+1[/math] for some integer [math]m[/math], then the sample median is [math]X_{(m+1)}[/math] and so is an order statistic. On the other hand, when [math]n[/math] is even, [math]n = 2m[/math] and there are two middle values, [math]X_{(m)}[/math] and [math]X_{(m+1)}[/math]; the sample median is then some function of the two (usually the average) and hence not an order statistic. Similar remarks apply to all sample quantiles.
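The quantities discussed above can all be read off directly from the sorted sample. The Python sketch below (the sample values and the quartile convention are purely illustrative choices) computes the minimum, maximum, range, median and interquartile range; note that sample quartiles, and hence the interquartile range, depend on the interpolation convention used.

<syntaxhighlight lang="python">
import statistics

# Illustrative sample of odd size n = 7, so the median is an order statistic.
sample = [6, 9, 3, 8, 5, 12, 7]
x = sorted(sample)                       # x_(1) <= ... <= x_(n)
n = len(x)

minimum, maximum = x[0], x[-1]           # x_(1) and x_(n)
sample_range = maximum - minimum         # x_(n) - x_(1)

# Here n = 2m + 1 with m = 3, so the median is x_(m+1), i.e. x[3] (0-based).
median = statistics.median(x)
assert median == x[n // 2]

# Sample quartiles depend on the convention; 'inclusive' is one common choice.
q1, q2, q3 = statistics.quantiles(x, n=4, method="inclusive")
iqr = q3 - q1

print(minimum, maximum, sample_range, median, iqr)
</syntaxhighlight>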

Probabilistic analysis

Given any random variables [math]X_1,\ldots,X_n[/math], the order statistics [math]X_{(1)},\ldots, X_{(n)}[/math] are also random variables, defined by sorting the values (realizations) of [math]X_1,\ldots, X_n[/math] in increasing order.

When the random variables [math]X_1,\ldots,X_n[/math] form a sample, they are independent and identically distributed. This is the case treated below. In general, the random variables [math]X_1,\ldots, X_n[/math] can arise by sampling from more than one population. Then they are independent, but not necessarily identically distributed, and their joint probability distribution is given by the Bapat–Beg theorem.

From now on, we will assume that the random variables under consideration are continuous and, where convenient, we will also assume that they have a probability density function (that is, they are absolutely continuous). The peculiarities of the analysis of distributions assigning mass to points (in particular, discrete distributions) are discussed at the end.

Probability distributions of order statistics

In this section we show that the order statistics of the uniform distribution on the unit interval have marginal distributions belonging to the Beta distribution family. We also give a simple method to derive the joint distribution of any number of order statistics, and finally translate these results to arbitrary continuous distributions using the cdf.

We assume throughout this section that [math]X_{1}, \ldots, X_{n}[/math] is a random sample drawn from a continuous distribution with cdf [math]F_X[/math]. Denoting [math]U_i=F_X(X_i)[/math] we obtain the corresponding random sample [math]U_1,\ldots,U_n[/math] from the standard uniform distribution. Note that the order statistics also satisfy [math]U_{(i)}=F_X(X_{(i)})[/math].
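The relation [math]U_{(i)}=F_X(X_{(i)})[/math] holds because [math]F_X[/math] is non-decreasing, so applying it preserves the ordering of the sample. The Python sketch below (the exponential distribution and its rate are only an illustrative choice) demonstrates this probability integral transform numerically.

<syntaxhighlight lang="python">
import math
import random

# Probability integral transform: if X has continuous cdf F, then U = F(X) is
# standard uniform, and since F is non-decreasing, U_(i) = F(X_(i)).
random.seed(0)
rate = 2.0  # illustrative exponential rate, chosen only for the example

def F(t):
    """cdf of the Exp(rate) distribution."""
    return 1.0 - math.exp(-rate * t)

x = [F_inv_sample := random.expovariate(rate) for _ in range(5)]
u = [F(xi) for xi in x]

# Applying F to the sorted x-values gives the sorted u-values (up to ties).
print([F(xi) for xi in sorted(x)])
print(sorted(u))
</syntaxhighlight>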

Order statistics sampled from a uniform distribution

The probability of the order statistic [math]U_{(k)}[/math] falling in the interval [math][u,\ u+du][/math] is equal to[2]

[[math]]{n!\over (k-1)!(n-k)!}u^{k-1}(1-u)^{n-k}\,du+O(du^2),[[/math]]

that is, the [math]k[/math]th order statistic of the uniform distribution is a Beta random variable:[2][3]

[[math]]U_{(k)} \sim B(k,n+1-k).[[/math]]

Proposition (Order Statistic of a uniform distribution)

The [math]k[/math]th order statistic [math]U_{(k)}[/math] of the standard uniform distribution is a Beta random variable, [math]U_{(k)} \sim B(k,n+1-k)[/math].

Show Proof

For [math]U_{(k)}[/math] to be between u and u + du, it is necessary that exactly k − 1 elements of the sample are smaller than u, and that at least one is between u and u + du. The probability that more than one is in this latter interval is already [math]O(du^2)[/math], so we have to calculate the probability that exactly k − 1, 1 and n − k observations fall in the intervals [math](0,u)[/math], [math](u,u+du)[/math] and [math](u+du,1)[/math] respectively. This equals (refer to multinomial distribution for details)

[[math]]{n!\over (k-1)!(n-k)!}u^{k-1}\cdot du\cdot(1-u-du)^{n-k}[[/math]]

and the result follows.

The mean of this distribution is [math]k/(n+1)[/math].
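This Beta form is easy to check by simulation. The following Python sketch (the sample size, rank and number of replications are arbitrary choices of ours) compares the empirical mean of [math]U_{(k)}[/math] with the theoretical value [math]k/(n+1)[/math].

<syntaxhighlight lang="python">
import random

# Monte Carlo check that the mean of U_(k) is k/(n+1), as implied by
# U_(k) ~ B(k, n+1-k).
random.seed(1)
n, k, reps = 10, 3, 100_000

draws = [sorted(random.random() for _ in range(n))[k - 1] for _ in range(reps)]
empirical_mean = sum(draws) / reps

print(empirical_mean)   # approximately 0.2727
print(k / (n + 1))      # exactly 3/11 = 0.2727...
</syntaxhighlight>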

The distribution of the order statistics of an absolutely continuous distribution

If [math]F_X[/math] is absolutely continuous, it has a density such that [math]dF_X(x)=f_X(x)\,dx[/math], and we can use the substitutions [math]u = F_X(x)[/math] and [math]du = f_X(x)\,dx[/math] to derive the following probability density function for the [math]k[/math]th order statistic of a sample of size [math]n[/math] drawn from the distribution of [math]X[/math]:

[[math]]f_{X_{(k)}}(x) =\frac{n!}{(k-1)!(n-k)!}[F_X(x)]^{k-1}[1-F_X(x)]^{n-k} f_X(x).[[/math]]
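This density formula can be verified numerically. In the Python sketch below (the standard exponential distribution, the rank [math]k=2[/math] and the interval are illustrative choices of ours), the probability that [math]X_{(k)}[/math] falls in an interval, obtained by integrating the density above, is compared with a Monte Carlo estimate.

<syntaxhighlight lang="python">
import math
import random

n, k = 5, 2

def F(t):
    """cdf of the standard exponential distribution."""
    return 1.0 - math.exp(-t)

def f(t):
    """density of the standard exponential distribution."""
    return math.exp(-t)

def density_order_stat(t):
    """f_{X_(k)}(t) = n!/((k-1)!(n-k)!) F(t)^(k-1) (1-F(t))^(n-k) f(t)."""
    c = math.factorial(n) / (math.factorial(k - 1) * math.factorial(n - k))
    return c * F(t) ** (k - 1) * (1.0 - F(t)) ** (n - k) * f(t)

# Numerically integrate the density over [0.2, 0.6] (trapezoidal rule).
a, b, m = 0.2, 0.6, 1000
h = (b - a) / m
integral = 0.5 * h * (density_order_stat(a) + density_order_stat(b))
integral += h * sum(density_order_stat(a + i * h) for i in range(1, m))

# Monte Carlo estimate of P(0.2 <= X_(2) <= 0.6) for samples of size 5.
random.seed(2)
reps = 200_000
hits = sum(
    a <= sorted(random.expovariate(1.0) for _ in range(n))[k - 1] <= b
    for _ in range(reps)
)
print(integral, hits / reps)   # the two estimates should nearly agree
</syntaxhighlight>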

Dealing with discrete variables

Suppose [math]X_1,\ldots,X_n[/math] are i.i.d. random variables from a discrete distribution with cumulative distribution function [math]F(x)[/math] and probability mass function [math]f(x)[/math].

Proposition (Order Statistic of a discrete random variable)

The probability mass function for the [math]k[/math]th order statistic evaluated at [math]x[/math] equals

[[math]] \sum_{j=0}^{n-k}{n\choose j}\left[S(x)^jF(x)^{n-j}-(S(x)+f(x))^j(F(x)-f(x))^{n-j}\right] [[/math]]
with [math]S(x) = 1 - F(x) [/math].

Show Proof

To find the probability mass function of the [math]k^\text{th}[/math] order statistic, three values are first needed, namely

[[math]]p_1=P(X \lt x)=F(x)-f(x), \ p_2=P(X=x)=f(x),\text{ and }p_3=P(X \gt x)=1-F(x).[[/math]]

The cumulative distribution function of the [math]k^\text{th}[/math] order statistic can be computed by noting that

[[math]] \begin{align*} P(X_{(k)}\leq x)& =P(\text{there are at most }n-k\text{ observations greater than }x) ,\\ & =\sum_{j=0}^{n-k}{n\choose j}p_3^j(p_1+p_2)^{n-j} . \end{align*} [[/math]]

Similarly, [math]P(X_{(k)} \lt x)[/math] is given by

[[math]] \begin{align*} P(X_{(k)}\lt x)& =P(\text{there are at most }n-k\text{ observations greater than or equal to }x) ,\\ &=\sum_{j=0}^{n-k}{n\choose j}(p_2+p_3)^j(p_1)^{n-j} . \end{align*} [[/math]]

Note that the probability mass function of [math]X_{(k)}[/math] is just the difference of these values, that is to say

[[math]] \begin{align*} P(X_{(k)}=x)&=P(X_{(k)}\leq x)-P(X_{(k)}\lt x) ,\\ &=\sum_{j=0}^{n-k}{n\choose j}\left(p_3^j(p_1+p_2)^{n-j}-(p_2+p_3)^j(p_1)^{n-j}\right) ,\\ &=\sum_{j=0}^{n-k}{n\choose j}\left((1-F(x))^j(F(x))^{n-j}-(1-F(x)+f(x))^j(F(x)-f(x))^{n-j}\right). \end{align*} [[/math]]
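The discrete formula can likewise be checked on a small example. In the Python sketch below (a fair six-sided die with [math]n=3[/math] and [math]k=2[/math], i.e. the median of three dice, is our illustrative choice), the pmf given by the proposition is compared with an exhaustive enumeration of all equally likely outcomes.

<syntaxhighlight lang="python">
import math
from itertools import product

# Pmf of the k-th order statistic of n i.i.d. fair six-sided dice, computed
# from the formula above and checked by enumerating all outcomes.
n, k = 3, 2                         # k = 2: the median of three dice
support = range(1, 7)
f = {x: 1 / 6 for x in support}     # probability mass function
F = {x: x / 6 for x in support}     # cumulative distribution function
S = {x: 1 - F[x] for x in support}  # S(x) = 1 - F(x)

def pmf_order_stat(x):
    return sum(
        math.comb(n, j)
        * (S[x] ** j * F[x] ** (n - j)
           - (S[x] + f[x]) ** j * (F[x] - f[x]) ** (n - j))
        for j in range(n - k + 1)
    )

# Brute-force check over all 6**n equally likely outcomes.
brute = {x: 0.0 for x in support}
for outcome in product(support, repeat=n):
    brute[sorted(outcome)[k - 1]] += 1 / 6 ** n

for x in support:
    print(x, round(pmf_order_stat(x), 6), round(brute[x], 6))
</syntaxhighlight>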


Notes

  1. "Order Statistics" (2003). doi:10.1002/0471722162. 
  2. Gentle, James E. (2009), Computational Statistics, Springer, p. 63, ISBN 9780387981444.
  3. Jones, M. C. (2009), "Kumaraswamy's distribution: A beta-type distribution with some tractability advantages", Statistical Methodology, 6 (1): 70–81, doi:10.1016/j.stamet.2008.04.001, As is well known, the beta distribution is the distribution of the m’th order statistic from a random sample of size n from the uniform distribution (on (0,1)).

References

  • Wikipedia contributors. "Order statistic". Wikipedia. Retrieved 28 January 2022.

Further Reading

  • Serfling, R. J. (1980). Approximation Theorems of Mathematical Statistics. New York: Wiley. ISBN 0-471-02403-1.