The '''Kolmogorov–Smirnov test''' ('''K–S test''' or '''KS test''') is a nonparametric test of the equality of continuous, one-dimensional probability distributions that can be used to compare a sample with a reference probability distribution (one-sample K–S test), or to compare two samples (two-sample K–S test). The Kolmogorov–Smirnov statistic quantifies a distance between the empirical distribution function of the sample and the cumulative distribution function of the reference distribution, or between the empirical distribution functions of two samples. The [[wikipedia:null distribution|null distribution]] of this statistic is calculated under the [[wikipedia:null hypothesis|null hypothesis]] that the sample is drawn from the reference distribution (in the one-sample case) or that the samples are drawn from the same distribution (in the two-sample case). In each case, the distributions considered under the null hypothesis are continuous distributions but are otherwise unrestricted.
The two-sample K–S test is one of the most useful and general nonparametric methods for comparing two samples, as it is sensitive to differences in both location and shape of the empirical cumulative distribution functions of the two samples.
==Kolmogorov–Smirnov statistic==
The [[wikipedia:empirical distribution function|empirical distribution function]] <math>F_n</math> for <math>n</math> [[wikipedia:Independent and identically distributed random variables|iid]] observations <math>X_i</math> is defined as: <math display="block">F_n(x)={1 \over n}\sum_{i=1}^n I_{[-\infty,x]}(X_i)</math>
where <math>I_{[-\infty,x]}(X_i)</math> is the [[wikipedia:indicator function|indicator function]], equal to 1 if <math>X_i \le x</math> and equal to 0 otherwise.
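To make the definition concrete, here is a minimal Python sketch (an illustration added to this article, not part of the cited material; the helper name <code>ecdf</code> is an arbitrary choice) that evaluates <math>F_n</math> at a point as the average of the indicator values:
<syntaxhighlight lang="python">
import numpy as np

def ecdf(sample, x):
    """Empirical distribution function F_n(x): the fraction of the n
    observations X_i with X_i <= x, i.e. the mean of the indicator
    I_{[-inf, x]}(X_i) over the sample."""
    return np.mean(np.asarray(sample) <= x)

data = [0.2, 0.5, 0.1, 0.9, 0.4]
print(ecdf(data, 0.45))  # 3 of the 5 observations are <= 0.45, so 0.6
</syntaxhighlight>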
The '''Kolmogorov–Smirnov statistic''' for a given cumulative distribution function <math>F(x)</math> is
<math display="block">D_n= \sup_x |F_n(x)-F(x)|</math>
where <math>\sup_x</math> is the [[wikipedia:supremum|supremum]] of the set of distances. By the [[wikipedia:Glivenko–Cantelli theorem|Glivenko–Cantelli theorem]], if the sample comes from distribution <math>F(x)</math>, then <math>D_n</math> converges to 0 [[wikipedia:almost surely|almost surely]] in the limit as <math>n</math> goes to infinity. Kolmogorov strengthened this result by effectively providing the rate of this convergence (see below). In practice, the statistic requires a relatively large number of data points to properly reject the null hypothesis.
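Because <math>F_n</math> is a step function and <math>F</math> is continuous, the supremum is attained at (or just before) one of the order statistics, so <math>D_n</math> can be computed exactly from the sorted sample. The sketch below, which assumes NumPy and SciPy are available, implements this and uses <code>scipy.stats.kstest</code> only as a cross-check:
<syntaxhighlight lang="python">
import numpy as np
from scipy import stats

def ks_statistic(sample, cdf):
    """One-sample K-S statistic D_n = sup_x |F_n(x) - F(x)| for a
    continuous reference cdf, evaluated at the jumps of F_n."""
    x = np.sort(np.asarray(sample))
    n = len(x)
    f = cdf(x)
    d_plus = np.max(np.arange(1, n + 1) / n - f)   # F_n right at each jump
    d_minus = np.max(f - np.arange(0, n) / n)      # F_n just before each jump
    return max(d_plus, d_minus)

rng = np.random.default_rng(0)
sample = rng.normal(size=100)
print(ks_statistic(sample, stats.norm.cdf))            # manual computation
print(stats.kstest(sample, stats.norm.cdf).statistic)  # library cross-check
</syntaxhighlight>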
==Kolmogorov distribution==
The '''Kolmogorov distribution''' is the distribution of the random variable:
<math display="block">K=\sup_{t\in[0,1]}|B(t)|</math>
where <math>B(t)</math> is the [[wikipedia:Brownian bridge|Brownian bridge]]. The cumulative distribution function of <math>K</math> is given by<ref>{{Cite journal |author=Marsaglia G, Tsang WW, Wang J |year=2003 |title=Evaluating Kolmogorov’s Distribution |journal=Journal of Statistical Software |volume=8 |issue=18 |pages=1–4 |url=http://www.jstatsoft.org/v08/i18/paper}}</ref>:
<math display="block">P(K\leq x)=1-2\sum_{k=1}^\infty (-1)^{k-1} e^{-2k^2 x^2}=\frac{\sqrt{2\pi}}{x}\sum_{k=1}^\infty e^{-(2k-1)^2\pi^2/(8x^2)}.</math>
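The first series alternates and converges quickly, so truncating it gives a practical way to evaluate <math>P(K\leq x)</math>. A minimal numerical sketch, assuming SciPy is available (its <code>scipy.special.kolmogorov</code> computes the complementary probability <math>P(K>x)</math>):
<syntaxhighlight lang="python">
import numpy as np
from scipy.special import kolmogorov  # survival function P(K > x)

def kolmogorov_cdf(x, terms=100):
    """P(K <= x) from the truncated alternating series."""
    k = np.arange(1, terms + 1)
    return 1 - 2 * np.sum((-1.0) ** (k - 1) * np.exp(-2 * k**2 * x**2))

for x in (0.5, 1.0, 1.36):
    print(x, kolmogorov_cdf(x), 1 - kolmogorov(x))  # the two should agree
# At x = 1.36 the CDF is about 0.95, consistent with K_{0.05} in the table below.
</syntaxhighlight>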
Under the null hypothesis that the sample comes from the hypothesized distribution <math>F(x)</math>,
<math display="block">\sqrt{n}D_n\xrightarrow{D}\sup_t |B(F(t))|</math> where <math>B(t)</math> is the [[wikipedia:Brownian bridge|Brownian bridge]] and the convergence is in distribution. If <math>F</math> is continuous, then under the null hypothesis <math>\sqrt{n}D_n</math> converges to the Kolmogorov distribution, which does not depend on <math>F</math>.
The ''goodness-of-fit'' test or the Kolmogorov–Smirnov test is constructed by using the critical values of the Kolmogorov distribution. The null hypothesis is rejected at level <math>\alpha</math> if <math>\sqrt{n}D_{n} > K_{\alpha}</math>, where <math>K_{\alpha}</math> is found from <math>P(K\leq K_{\alpha})=1-\alpha</math>. The value of <math>K_{\alpha}</math> is given in the table below for each level of <math>\alpha</math><ref name='TableTwoSample'>[https://www.webdepot.umontreal.ca/Usagers/angers/MonDepotPublic/STT3500H10/Critical_KS.pdf Table of critical values for the two-sample test]</ref>:
<table class='table'>
<tr><td><math>\alpha</math></td> <td>0.10</td> <td>0.05</td> <td>0.025</td> <td>0.01</td> <td>0.005</td> <td>0.001</td></tr>
<tr><td><math>K_{\alpha}</math></td> <td>1.22</td> <td>1.36</td> <td>1.48</td> <td>1.63</td> <td>1.73</td> <td>1.95</td></tr>
</table>
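As a sketch of the test in action (assuming SciPy; the simulated data and the level <math>\alpha=0.05</math> are illustrative choices): draw a sample, compute <math>\sqrt{n}D_n</math>, and compare it with <math>K_{0.05}=1.36</math> from the table.
<syntaxhighlight lang="python">
import numpy as np
from scipy import stats

K_ALPHA = 1.36  # K_{0.05} from the table above
rng = np.random.default_rng(1)

n = 1000
good = rng.normal(size=n)            # truly standard normal
bad = rng.normal(scale=1.5, size=n)  # wrong scale for the hypothesized F

for name, sample in (("good", good), ("bad", bad)):
    d_n = stats.kstest(sample, stats.norm.cdf).statistic
    # typically False for "good" (rejected only ~5% of the time), True for "bad"
    print(name, np.sqrt(n) * d_n > K_ALPHA)
</syntaxhighlight>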
==Two-sample Kolmogorov–Smirnov test==
The Kolmogorov–Smirnov test may also be used to test whether two underlying one-dimensional probability distributions differ. In this case, the Kolmogorov–Smirnov statistic is
<math display="block">D_{n,n'}=\sup_x |F_{1,n}(x)-F_{2,n'}(x)|,</math>
where <math>F_{1,n}</math> and <math>F_{2,n'}</math> are the [[wikipedia:empirical distribution function|empirical distribution function]]s of the first and the second sample respectively, and <math>\sup</math> is the [[wikipedia:Infimum and supremum|supremum]].
The null hypothesis is rejected at level <math>\alpha</math> if
<math display="block">D_{n,n'} > K_{\alpha}\sqrt{\frac{n + n'}{n n'}}.</math>
Note that the two-sample test checks whether the two data samples come from the same distribution. It does not specify what that common distribution is (e.g., whether it is normal or not).
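A minimal sketch of the two-sample procedure, assuming SciPy (whose <code>scipy.stats.ks_2samp</code> computes <math>D_{n,n'}</math>); the sample sizes and the 0.5 location shift are illustrative:
<syntaxhighlight lang="python">
import numpy as np
from scipy import stats

K_ALPHA = 1.36  # K_{0.05}, as in the one-sample table
rng = np.random.default_rng(2)

x = rng.normal(size=300)           # first sample, n = 300
y = rng.normal(loc=0.5, size=200)  # second sample, n' = 200, shifted

d = stats.ks_2samp(x, y).statistic  # D_{n,n'}
threshold = K_ALPHA * np.sqrt((len(x) + len(y)) / (len(x) * len(y)))
print(d, threshold, d > threshold)  # the 0.5 shift is typically detected
</syntaxhighlight>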
==Setting confidence limits for the shape of a distribution function==
While the Kolmogorov–Smirnov test is usually used to test whether a given <math>F(x)</math> is the underlying probability distribution of <math>F_n(x)</math>, the procedure may be inverted to give confidence limits on <math>F(x)</math> itself. If one chooses a critical value of the test statistic, <math>D_{\alpha}</math>, such that <math>P(D_n > D_{\alpha}) = \alpha</math>, then a band of width <math>\pm D_{\alpha}</math> around <math>F_n(x)</math> will entirely contain <math>F(x)</math> with probability <math>1-\alpha</math>.
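A sketch of this inversion, using the asymptotic approximation <math>D_{\alpha}\approx K_{\alpha}/\sqrt{n}</math> from the Kolmogorov distribution (an approximation assumed here, reasonable for large <math>n</math>):
<syntaxhighlight lang="python">
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n = 200
sample = np.sort(rng.normal(size=n))

d_alpha = 1.36 / np.sqrt(n)  # asymptotic D_{0.05} via K_{0.05} = 1.36

# Step band of half-width d_alpha around F_n, clipped to [0, 1]
f_n = np.arange(1, n + 1) / n
lower = np.clip(f_n - d_alpha, 0.0, 1.0)
upper = np.clip(f_n + d_alpha, 0.0, 1.0)

# The true CDF lies entirely inside the band iff D_n <= d_alpha,
# which happens with probability ~0.95:
d_n = stats.kstest(sample, stats.norm.cdf).statistic
print(d_n <= d_alpha)
</syntaxhighlight>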
==References==
<references/>
==Wikipedia References==
*{{cite web |url = https://en.wikipedia.org/w/index.php?title=Kolmogorov%E2%80%93Smirnov_test&oldid=890607772 | title = Kolmogorov–Smirnov test | author = Wikipedia contributors | website= Wikipedia |publisher= Wikipedia |access-date = 31 May 2019 }}