A '''likelihood ratio test''' is a statistical test used to compare the [[wikipedia:goodness of fit|goodness of fit]] of two models, one of which (the ''[[wikipedia:null hypothesis|null]] model'') is a special case of the other (the ''[[wikipedia:alternative hypothesis|alternative]] model''). The test is based on the likelihood ratio, which expresses how many times more likely the data are under one model than under the other. This likelihood ratio, or equivalently its logarithm, can then be used to compute a [[wikipedia:p-value|''p''-value]], or compared to a critical value to decide whether to reject the null model in favor of the alternative model. When the logarithm of the likelihood ratio is used, the statistic is known as a '''log-likelihood ratio statistic''', and the probability distribution of this test statistic, assuming that the null model is true, can be approximated using [[#Wilks’ theorem|Wilks’ theorem]].
==Hypothesis Testing==
We assume the usual setup: <math>X_1,\ldots,X_n</math> is a random sample drawn from an unknown distribution with probability density belonging to a certain (parametrized) family <math>\{f(x \mid \theta) : \theta \in \Theta \}</math>, and we define the likelihood function
<math display="block">
\mathcal{L}(\theta\,;X_1,\ldots,X_n) = f(X_1,\ldots,X_n\mid\theta) = \prod_{i=1}^n f(X_i\mid\theta).
</math>
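For instance, if the sample were modeled as exponential with rate <math>\theta > 0</math>, so that <math>f(x \mid \theta) = \theta e^{-\theta x}</math> for <math>x \ge 0</math>, the likelihood function would be
<math display="block">
\mathcal{L}(\theta\,;X_1,\ldots,X_n) = \prod_{i=1}^n \theta e^{-\theta X_i} = \theta^n e^{-\theta \sum_{i=1}^n X_i}.
</math>
(This particular family is only an illustration; the development below applies to any parametrized family.)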
=== Composite Hypothesis ===
A null hypothesis is often stated by saying the parameter <math>\theta</math> is in a specified subset <math>\Theta_0</math> of the parameter space <math>\Theta</math>:
<math display="block">
H_0: \theta \in \Theta_0, \qquad H_1: \theta \in \Theta_0^c.
</math>
The [[wikipedia:likelihood function|likelihood function]] is <math>\mathcal{L}(\theta \mid x) = f(x \mid \theta)</math>, with <math>f(x \mid \theta)</math> the probability density function or probability mass function; it is a function of the parameter <math>\theta</math> with <math>x</math> held fixed at the value that was actually observed, ''i.e.'', the data. The '''likelihood ratio test statistic''' is
<math display="block">\Lambda = \frac{\sup\{\,\mathcal{L}(\theta\mid X_1,\ldots,X_n): \theta\in\Theta_0\,\}}{\sup\{\,\mathcal{L}(\theta\mid X_1,\ldots,X_n): \theta\in\Theta\,\}}.</math>
Here, <math>\sup</math> denotes the [[wikipedia:supremum|supremum]].
A '''likelihood ratio test''' is any test with critical region (or rejection region) of the form <math>\{x \mid \Lambda \le c\}</math>, where <math>c</math> is any number satisfying <math>0\le c\le 1</math>.
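As a minimal sketch of the computation (an illustrative example, not part of the original exposition), suppose <math>X_1,\ldots,X_n \sim N(\theta, 1)</math> with <math>\Theta = \mathbb{R}</math> and <math>\Theta_0 = \{0\}</math>. The supremum in the numerator is attained at <math>\theta = 0</math> and the supremum in the denominator at the maximum likelihood estimate <math>\hat\theta = \bar{X}</math>:
<syntaxhighlight lang="python">
import numpy as np
from scipy.stats import norm

# Hypothetical setup: X_1, ..., X_n ~ N(theta, 1); H0: theta = 0 vs H1: theta unrestricted.
rng = np.random.default_rng(0)
x = rng.normal(loc=0.0, scale=1.0, size=50)   # simulated data (drawn under the null here)

# Numerator: likelihood maximized over Theta_0 = {0}, i.e. evaluated at theta = 0.
loglik_null = norm.logpdf(x, loc=0.0, scale=1.0).sum()

# Denominator: likelihood maximized over all of Theta; the MLE is the sample mean.
loglik_alt = norm.logpdf(x, loc=x.mean(), scale=1.0).sum()

# Likelihood ratio test statistic; always lies between 0 and 1.
Lambda = np.exp(loglik_null - loglik_alt)
print(Lambda)
</syntaxhighlight>
The test rejects <math>H_0</math> when <math>\Lambda</math> falls below a cutoff <math>c</math> chosen to give the desired Type I error probability.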
====Interpretation====
The '''likelihood ratio test''' rejects the null hypothesis if the value of this statistic is too small. How small is too small depends on the significance level of the test, ''i.e.'', on what probability of [[wikipedia:Type I error|Type I error]] is considered tolerable ("Type I" errors consist of the rejection of a null hypothesis that is true).
The numerator corresponds to the maximum likelihood of the observed outcome under the [[wikipedia:null hypothesis|null hypothesis]]. The denominator corresponds to the maximum likelihood of the observed outcome with parameters varying over the whole parameter space. The numerator can never exceed the denominator, so the likelihood ratio lies between 0 and 1. Low values of the likelihood ratio mean that the observed result was much less likely to occur under the null hypothesis than under the alternative. High values of the statistic mean that the observed outcome was nearly as likely to occur under the null hypothesis as under the alternative, so the null hypothesis cannot be rejected.
==== <span id="Wilks’ theorem"></span>Distribution: Wilks’ theorem====
If the distribution of the likelihood ratio corresponding to a particular null and alternative hypothesis can be explicitly determined, then it can be used directly to form decision regions (to accept or reject the null hypothesis). In most cases, however, the exact distribution of the likelihood ratio corresponding to specific hypotheses is very difficult to determine. A convenient result by [[wikipedia:Samuel S. Wilks|Samuel S. Wilks]] says that as the sample size <math>n</math> approaches infinity, the test statistic <math>-2 \log(\Lambda)</math> for a nested model will be asymptotically [[wikipedia:chi-squared distribution|<math>\chi^2</math>-distributed]] with [[wikipedia:degrees of freedom (statistics)|degrees of freedom]] equal to the difference in dimensionality of <math>\Theta</math> and <math>\Theta_0</math>. This means that for a great variety of hypotheses, a practitioner can compute the likelihood ratio <math>\Lambda</math> for the data and compare <math>-2\log(\Lambda)</math> to the <math>\chi^2</math> value corresponding to a desired [[wikipedia:statistical significance|statistical significance]] as an approximate statistical test.
Wilks’ theorem assumes that the true but unknown values of the estimated parameters lie in the interior of the parameter space. This assumption is commonly violated in, for example, [[wikipedia:Random effects model|random]] or [[wikipedia:Mixed model|mixed effects models]] when one of the variance components is negligible relative to the others. In such cases, where one variance component is essentially zero relative to the others or the models are not properly nested, Pinheiro and Bates showed that the true distribution of this likelihood ratio chi-square statistic can be substantially different from the naive <math>\chi^2</math>, often dramatically so.<ref>Pinheiro, J. C. and Bates, D. M. (2000). ''Mixed-Effects Models in S and S-PLUS''. Springer.</ref> The naive assumptions can give [[wikipedia:p-value|significance probabilities (''p''-values)]] that are far too large on average in some cases and far too small in others.
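Continuing the hypothetical normal-mean sketch above (again an illustration, with all names assumed): there <math>\Theta</math> has dimension 1 and <math>\Theta_0</math> dimension 0, so Wilks’ theorem suggests comparing <math>-2\log(\Lambda)</math> to a <math>\chi^2</math> distribution with one degree of freedom:
<syntaxhighlight lang="python">
import numpy as np
from scipy.stats import norm, chi2

# Same hypothetical setup as above: X_i ~ N(theta, 1), H0: theta = 0.
rng = np.random.default_rng(0)
x = rng.normal(loc=0.0, scale=1.0, size=50)

loglik_null = norm.logpdf(x, loc=0.0, scale=1.0).sum()
loglik_alt = norm.logpdf(x, loc=x.mean(), scale=1.0).sum()

# Wilks' theorem: -2 log(Lambda) is approximately chi-squared with
# Dim(Theta) - Dim(Theta_0) = 1 - 0 = 1 degree of freedom under H0.
D = -2.0 * (loglik_null - loglik_alt)
p_value = chi2.sf(D, df=1)   # P(chi-squared with 1 df >= D)
print(D, p_value)
</syntaxhighlight>
In this particular example the approximation is in fact exact, since <math>-2\log(\Lambda) = n\bar{X}^2</math> is exactly <math>\chi^2_1</math>-distributed under the null.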
=== Use ===
Each of the two competing models, the null model and the alternative model, is separately fitted to the data and its log-likelihood recorded. The test statistic (often denoted by <math>D</math>) is minus twice the logarithm of the likelihood ratio, or equivalently twice the difference in the maximized log-likelihoods:
<math display="block">
\begin{align}
D & = -2\log \left( \Lambda \right ) \\
  & = 2\sup_{\theta \in \Theta} \sum_{i=1}^{n} \log\left(\mathcal{L}(\theta\,;X_i)\right) - 2 \sup_{\theta \in \Theta_0} \sum_{i=1}^{n} \log \left(\mathcal{L}(\theta\,;X_i)\right)
\end{align}
</math>
The model with more parameters (here the ''alternative'') will always fit at least as well. Whether it fits significantly better, and should therefore be preferred, is determined by deriving the probability or [[wikipedia:p-value|''p''-value]] of the difference <math>D</math>. Where the null hypothesis represents a special case of the alternative hypothesis, the probability distribution of the test statistic is approximately a [[wikipedia:chi-squared distribution|chi-squared distribution]] with [[wikipedia:degrees of freedom (statistics)|degrees of freedom]] equal to
<math display="block"> \mathrm{Dim}(\Theta) - \mathrm{Dim}(\Theta_0), </math>
where <math>\mathrm{Dim}(\cdot)</math> denotes the number of dimensions.{{sfn|Huelsenbeck|Crandall|1997}}
{{alert-warning|The likelihood-ratio test requires [[wikipedia:Statistical model#Nested_models|nested models]] – models in which the more complex one can be transformed into the simpler model by imposing a set of constraints on the parameters.}}
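To illustrate this workflow end to end (a hedged sketch with an arbitrarily chosen pair of nested models, not taken from the text above): the exponential distribution is a gamma distribution with its shape parameter fixed at 1, so an exponential null model is nested inside a gamma alternative. Fitting both by maximum likelihood and referring <math>D</math> to a <math>\chi^2</math> distribution with one degree of freedom (one extra free parameter) gives the approximate test:
<syntaxhighlight lang="python">
import numpy as np
from scipy import stats

# Hypothetical data; in practice x would be the observed sample.
rng = np.random.default_rng(1)
x = rng.gamma(2.0, scale=1.5, size=200)

# Null model: exponential (gamma with shape fixed at 1); the MLE of the scale is the sample mean.
loglik_null = stats.expon.logpdf(x, scale=x.mean()).sum()

# Alternative model: gamma with free shape and scale (location fixed at 0).
shape_hat, loc_hat, scale_hat = stats.gamma.fit(x, floc=0)
loglik_alt = stats.gamma.logpdf(x, shape_hat, loc=loc_hat, scale=scale_hat).sum()

# D = -2 log(Lambda); the alternative model has one extra parameter, so df = 1.
D = 2.0 * (loglik_alt - loglik_null)
p_value = stats.chi2.sf(D, df=1)
print(D, p_value)
</syntaxhighlight>
A small ''p''-value indicates that the extra shape parameter improves the fit by more than chance alone would explain, so the exponential null model would be rejected.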
==References==
<references/>
==Wikipedia References==
*{{cite web |url = https://en.wikipedia.org/w/index.php?title=Likelihood_function&oldid=898271841 | title= Likelihood function | author = Wikipedia contributors | website= Wikipedia | publisher= Wikipedia | access-date = 31 May 2019 }}
*{{cite web |url = https://en.wikipedia.org/w/index.php?title=Likelihood-ratio_test&oldid=898484754 | title= Likelihood-ratio test | author = Wikipedia contributors | website= Wikipedia | publisher= Wikipedia | access-date = 31 May 2019 }}