From Stochiki
Latest revision as of 23:33, 5 April 2024

The [math]k[/math]th order statistic of a statistical sample is equal to its [math]k[/math]th-smallest value.[1] Together with rank statistics, order statistics are among the most fundamental tools in non-parametric statistics and inference.

Important special cases of the order statistics are the minimum and maximum value of a sample, and (with some qualifications discussed below) the sample median and other sample quantiles.

When using probability theory to analyze order statistics of random samples from a continuous distribution, the cumulative distribution function is used to reduce the analysis to the case of order statistics of the uniform distribution.

Notation and examples

For example, suppose that four numbers are observed or recorded, resulting in a sample of size 4. If the sample values are

6  9  3  8

they will usually be denoted

[[math]] x_1=6,\ \ x_2=9,\ \ x_3=3,\ \ x_4=8,\, [[/math]]

where the subscript [math]i[/math] in [math]x_i[/math] indicates simply the order in which the observations were recorded and is usually assumed not to be significant. A case when the order is significant is when the observations are part of a time series.

The order statistics would be denoted

[[math]]x_{(1)}=3,\ \ x_{(2)}=6,\ \ x_{(3)}=8,\ \ x_{(4)}=9,\,[[/math]]

where the subscript [math]i[/math] enclosed in parentheses indicates the [math]i[/math]th order statistic of the sample.
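
The notation above amounts to sorting: the [math]k[/math]th order statistic is simply the [math]k[/math]th-smallest element of the sample. A minimal Python sketch (the function name is illustrative, not standard):

```python
# Order statistics are the sorted sample values; the 1-based index k
# selects the k-th smallest observation.
def order_statistic(sample, k):
    """Return x_(k), the k-th smallest value of the sample (1-based k)."""
    if not 1 <= k <= len(sample):
        raise ValueError("k must be between 1 and the sample size")
    return sorted(sample)[k - 1]

sample = [6, 9, 3, 8]
print([order_statistic(sample, k) for k in range(1, 5)])  # [3, 6, 8, 9]
```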

The first order statistic (or smallest order statistic) is always the minimum of the sample, that is,

[[math]]X_{(1)}=\min\{\,X_1,\ldots,X_n\,\}[[/math]]

where, following a common convention, we use upper-case letters to refer to random variables, and lower-case letters (as above) to refer to their actual observed values.

Similarly, for a sample of size [math]n[/math], the [math]n[/math]th order statistic (or largest order statistic) is the maximum, that is,

[[math]]X_{(n)}=\max\{\,X_1,\ldots,X_n\,\}.[[/math]]

The sample range is the difference between the maximum and minimum. It is clearly a function of the order statistics:

[[math]]{\rm Range}\{\,X_1,\ldots,X_n\,\} = X_{(n)}-X_{(1)}.[[/math]]

Another important statistic in exploratory data analysis that is simply related to the order statistics is the sample interquartile range.
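
Both statistics can be computed directly from the sorted sample. A short sketch, using Python's `statistics.quantiles` for the quartiles (its default "exclusive" interpolation rule is one of several common conventions for sample quantiles):

```python
# The sample range and one convention of the interquartile range,
# both expressed through the order statistics.
import statistics

sample = [6, 9, 3, 8]
x = sorted(sample)                  # x[k-1] is the k-th order statistic
sample_range = x[-1] - x[0]         # X_(n) - X_(1)
q1, q2, q3 = statistics.quantiles(sample, n=4)
iqr = q3 - q1
print(sample_range, iqr)            # 6 5.0
```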

The sample median may or may not be an order statistic, since there is a single middle value only when the number [math]n[/math] of observations is odd. More precisely, if [math]n=2m+1[/math] for some integer [math]m[/math], then the sample median is [math]X_{(m+1)}[/math] and so is an order statistic. On the other hand, when [math]n[/math] is even, [math]n = 2m[/math] and there are two middle values, [math]X_{(m)}[/math] and [math]X_{(m+1)}[/math], and the sample median is some function of the two (usually the average) and hence not an order statistic. Similar remarks apply to all sample quantiles.
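
The odd/even distinction can be made concrete in a few lines of Python (using the common averaging convention for even [math]n[/math]):

```python
# For odd n = 2m+1 the median is the single order statistic X_(m+1);
# for even n = 2m it is, by the usual convention, the average of the
# two middle order statistics X_(m) and X_(m+1).
def sample_median(sample):
    x = sorted(sample)
    m, r = divmod(len(x), 2)
    if r == 1:                      # n = 2m + 1: an order statistic
        return x[m]                 # 0-based x[m] is X_(m+1)
    return (x[m - 1] + x[m]) / 2    # n = 2m: not an order statistic

print(sample_median([6, 9, 3, 8]))      # 7.0 (average of 6 and 8)
print(sample_median([6, 9, 3, 8, 1]))   # 6   (the 3rd order statistic)
```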

Probabilistic analysis

Given any random variables [math]X_1,\ldots,X_n[/math], the order statistics [math]X_{(1)},\ldots, X_{(n)}[/math] are also random variables, defined by sorting the values (realizations) of [math]X_1,\ldots, X_n [/math] in increasing order.

When the random variables [math]X_1,\ldots,X_n[/math] form a sample they are independent and identically distributed. This is the case treated below. In general, the random variables [math]X_1,\ldots, X_n[/math] can arise by sampling from more than one population. Then they are independent, but not necessarily identically distributed, and their joint probability distribution is given by the Bapat–Beg theorem.

From now on, we will assume that the random variables under consideration are continuous and, where convenient, we will also assume that they have a density (that is, they are absolutely continuous). The peculiarities of the analysis of distributions assigning mass to points (in particular, discrete distributions) are discussed at the end.

Probability distributions of order statistics

In this section we show that the order statistics of the uniform distribution on the unit interval have marginal distributions belonging to the Beta distribution family. We also give a simple method to derive the joint distribution of any number of order statistics, and finally translate these results to arbitrary continuous distributions using the cdf.

We assume throughout this section that [math]X_{1}, \ldots, X_{n}[/math] is a random sample drawn from a continuous distribution with cdf [math]F_X[/math]. Denoting [math]U_i=F_X(X_i)[/math] we obtain the corresponding random sample [math]U_1,\ldots,U_n[/math] from the standard uniform distribution. Note that the order statistics also satisfy [math]U_{(i)}=F_X(X_{(i)})[/math].
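
This probability integral transform can be checked numerically. A sketch under an assumed example distribution (the standard exponential, whose cdf has a simple closed form); because [math]F_X[/math] is non-decreasing, it preserves order, which is exactly the identity [math]U_{(i)}=F_X(X_{(i)})[/math]:

```python
# Applying the cdf F_X to each observation yields standard-uniform
# variables, and the transform preserves order.
import math
import random

random.seed(0)

def F(x):                 # cdf of the Exponential(1) distribution (example)
    return 1 - math.exp(-x)

xs = [random.expovariate(1.0) for _ in range(10_000)]
us = [F(x) for x in xs]

# order is preserved: the i-th smallest u comes from the i-th smallest x
assert sorted(us) == [F(x) for x in sorted(xs)]
# and the transformed sample behaves like a standard uniform sample
print(sum(us) / len(us))  # ≈ 0.5
```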

Order statistics sampled from a uniform distribution

The probability of the order statistic [math]U_{(k)}[/math] falling in the interval [math][u,\ u+du][/math] is equal to[2]

[[math]]{n!\over (k-1)!(n-k)!}u^{k-1}(1-u)^{n-k}\,du+O(du^2),[[/math]]

that is, the [math]k[/math]th order statistic of the uniform distribution is a Beta random variable.[2][3]

[[math]]U_{(k)} \sim B(k,n+1-k).[[/math]]

Proposition (Order Statistic of a uniform distribution)

The [math]k[/math]th order statistic of the standard uniform distribution is a Beta random variable: [math]U_{(k)} \sim B(k,n+1-k)[/math].

Proof

For [math]U_{(k)}[/math] to be between u and u + du, it is necessary that exactly k − 1 elements of the sample are smaller than u, and that at least one is between u and u + du. The probability that more than one is in this latter interval is already [math]O(du^2)[/math], so we have to calculate the probability that exactly k − 1, 1 and n − k observations fall in the intervals [math](0,u)[/math], [math](u,u+du)[/math] and [math](u+du,1)[/math] respectively. This equals (refer to multinomial distribution for details)

[[math]]{n!\over (k-1)!(n-k)!}u^{k-1}\cdot du\cdot(1-u-du)^{n-k}[[/math]]

and the result follows.

The mean of this distribution is [math]k/(n+1)[/math].
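
A quick Monte Carlo check of this mean (an illustration, not a proof): for [math]n=5[/math], the [math]k[/math]th uniform order statistic should average close to [math]k/6[/math].

```python
# Estimate E[U_(k)] for n = 5 by simulation and compare with k/(n+1).
import random

random.seed(1)
n, trials = 5, 20_000
sums = [0.0] * n
for _ in range(trials):
    u = sorted(random.random() for _ in range(n))
    for k in range(n):
        sums[k] += u[k]

means = [s / trials for s in sums]
print([round(m, 3) for m in means])  # close to [0.167, 0.333, 0.5, 0.667, 0.833]
```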

The distribution of the order statistics of an absolutely continuous distribution

If [math]F_X[/math] is absolutely continuous, it has a density such that [math]dF_X(x)=f_X(x)\,dx[/math], and we can use the substitutions [math]u = F_X(x)[/math] and [math]du = f_X(x)\,dx[/math] to derive the following probability density function for the [math]k[/math]th order statistic of a sample of size [math]n[/math] drawn from the distribution of [math]X[/math]:

[[math]]f_{X_{(k)}}(x) =\frac{n!}{(k-1)!(n-k)!}[F_X(x)]^{k-1}[1-F_X(x)]^{n-k} f_X(x).[[/math]]
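
The formula is easy to instantiate. A sketch for the minimum ([math]k=1[/math]) of [math]n[/math] standard exponentials, chosen because the result collapses to a known closed form: with [math]F(x)=1-e^{-x}[/math] and [math]f(x)=e^{-x}[/math], the density becomes [math]n e^{-nx}[/math], i.e. the minimum is Exponential([math]n[/math]).

```python
# Evaluate the order-statistic density formula and check it against the
# closed form n*exp(-n*x) for the minimum of n standard exponentials.
import math

def order_stat_pdf(x, k, n, F, f):
    c = math.comb(n, k) * k          # n! / ((k-1)! (n-k)!)
    return c * F(x)**(k - 1) * (1 - F(x))**(n - k) * f(x)

F = lambda x: 1 - math.exp(-x)       # cdf of Exponential(1)
f = lambda x: math.exp(-x)           # its density

n = 4
for x in (0.1, 0.5, 2.0):
    assert math.isclose(order_stat_pdf(x, 1, n, F, f), n * math.exp(-n * x))
print("minimum of", n, "standard exponentials has density n*exp(-n*x)")
```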

Dealing with discrete variables

Suppose [math]X_1,\ldots,X_n[/math] are i.i.d. random variables from a discrete distribution with cumulative distribution function [math]F(x)[/math] and probability mass function [math]f(x)[/math].

Proposition (Order Statistic of a discrete random variable)

The probability mass function for the [math]k[/math]th order statistic evaluated at [math]x[/math] equals

[[math]] \sum_{j=0}^{n-k}{n\choose j}\left[S(x)^jF(x)^{n-j}-(S(x)+f(x))^j(F(x)-f(x))^{n-j}\right] [[/math]]
with [math]S(x) = 1 - F(x) [/math].

Proof

To find the probabilities of the [math]k^\text{th}[/math] order statistic, three values are first needed, namely

[[math]]p_1=P(X \lt x)=F(x)-f(x), \ p_2=P(X=x)=f(x),\text{ and }p_3=P(X \gt x)=1-F(x).[[/math]]

The cumulative distribution function of the [math]k^\text{th}[/math] order statistic can be computed by noting that

[[math]] \begin{align*} P(X_{(k)}\leq x)& =P(\text{there are at most }n-k\text{ observations greater than }x) ,\\ & =\sum_{j=0}^{n-k}{n\choose j}p_3^j(p_1+p_2)^{n-j} . \end{align*} [[/math]]

Similarly, [math]P(X_{(k)} \lt x)[/math] is given by

[[math]] \begin{align*} P(X_{(k)}\lt x)& =P(\text{there are at most }n-k\text{ observations greater than or equal to }x) ,\\ &=\sum_{j=0}^{n-k}{n\choose j}(p_2+p_3)^j(p_1)^{n-j} . \end{align*} [[/math]]

Note that the probability mass function of [math]X_{(k)}[/math] is just the difference of these values, that is to say

[[math]] \begin{align*} P(X_{(k)}=x)&=P(X_{(k)}\leq x)-P(X_{(k)}\lt x) ,\\ &=\sum_{j=0}^{n-k}{n\choose j}\left(p_3^j(p_1+p_2)^{n-j}-(p_2+p_3)^j(p_1)^{n-j}\right) ,\\ &=\sum_{j=0}^{n-k}{n\choose j}\left((1-F(x))^j(F(x))^{n-j}-(1-F(x)+f(x))^j(F(x)-f(x))^{n-j}\right). \end{align*} [[/math]]
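
The pmf formula can be verified directly on a small example. A sketch for the maximum ([math]k=n[/math]) of three fair-die rolls, checked against brute-force enumeration of all [math]6^3[/math] equally likely outcomes:

```python
# Evaluate the discrete order-statistic pmf and compare with enumeration.
import math
from itertools import product

def order_stat_pmf(x, k, n, F, f):
    S = lambda t: 1 - F(t)           # survival function S(x) = 1 - F(x)
    return sum(
        math.comb(n, j) * (S(x)**j * F(x)**(n - j)
                           - (S(x) + f(x))**j * (F(x) - f(x))**(n - j))
        for j in range(n - k + 1)
    )

f = lambda x: 1 / 6                  # pmf of one fair die
F = lambda x: x / 6                  # cdf at the support points 1..6

n, k = 3, 3                          # distribution of the maximum of 3 rolls
pmf = [order_stat_pmf(x, k, n, F, f) for x in range(1, 7)]

# brute force over all 6^3 outcomes
brute = [sum(max(r) == x for r in product(range(1, 7), repeat=3)) / 216
         for x in range(1, 7)]
for a, b in zip(pmf, brute):
    assert math.isclose(a, b)
print([round(p, 4) for p in pmf])
```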


Notes

  1. "Order Statistics" (2003). doi:10.1002/0471722162. 
  2. 2.0 2.1 Gentle, James E. (2009), Computational Statistics, Springer, p. 63, ISBN 9780387981444.
  3. Jones, M. C. (2009), "Kumaraswamy's distribution: A beta-type distribution with some tractability advantages", Statistical Methodology, 6 (1): 70–81, doi:10.1016/j.stamet.2008.04.001, As is well known, the beta distribution is the distribution of the m’th order statistic from a random sample of size n from the uniform distribution (on (0,1)).

References

  • Wikipedia contributors. "Order statistic". Wikipedia. Retrieved 28 January 2022.

Further Reading

  • Serfling, R. J. (1980). Approximation Theorems of Mathematical Statistics. New York: Wiley. ISBN 0-471-02403-1.