In general, for a fixed set of data and underlying statistical model, the method of maximum likelihood selects the set of values of the model parameters that maximizes the likelihood function. Intuitively, this maximizes the "agreement" of the selected model with the observed data, and for discrete random variables it indeed maximizes the probability of the observed data under the resulting distribution.
Principles
Suppose there is a sample [math]X_1,\ldots,X_n[/math] of [math]n[/math] independent and identically distributed observations, coming from a distribution with an unknown probability density function [math] f_0 [/math]. It is however surmised that the function [math]f_0[/math] belongs to a certain family of distributions [math] \{f(\cdot | \theta) : \theta \in \Theta \} [/math] (where [math]\theta[/math] is a vector of parameters for this family), so that [math] f_0 = f(\cdot | \theta_0) [/math]. The value [math]\theta_0[/math] is unknown and is referred to as the true value of the parameter vector. It is desirable to find an estimator [math]\hat\theta[/math] which would be as close to the true value [math]\theta_0[/math] as possible. Either the observed variables [math]X_i[/math] or the parameter [math]\theta[/math], or both, can be vectors.
To use the method of maximum likelihood, one first specifies the joint density function for all observations. For an independent and identically distributed sample, this joint density function is
- [[math]] f(x_1,x_2,\ldots,x_n \mid \theta) = f(x_1\mid\theta)\, f(x_2\mid\theta) \cdots f(x_n\mid\theta). [[/math]]
Now we look at this function from a different perspective by considering the observed values [math]X_1,\ldots,X_n [/math] to be fixed "parameters" of this function, whereas [math]\theta[/math] will be the function's variable and allowed to vary freely; this function will be called the likelihood:
- [[math]] L_n(\theta) = L_n(\theta\,; x_1,\ldots,x_n) = \prod_{i=1}^n f(x_i\mid\theta). [[/math]]
In practice it is often more convenient to work with the logarithm of the likelihood function, called the log-likelihood:
- [[math]] \ell_n(\theta) = \ln L_n(\theta) = \sum_{i=1}^n \ln f(x_i\mid\theta), [[/math]]
or the average log-likelihood:
- [[math]] \hat\ell_n(\theta) = \frac{1}{n} \ln L_n(\theta) = \frac{1}{n}\sum_{i=1}^n \ln f(x_i\mid\theta). [[/math]]
The method of maximum likelihood estimates [math]\theta_0[/math], the true parameter of the distribution from which the sample is drawn, by finding a value of [math]\theta[/math] that maximizes [math]L_n(\theta)[/math]. This method of estimation defines a maximum-likelihood estimator (MLE) of [math]\theta_0[/math]
- [[math]] \hat\theta_n \in \underset{\theta\in\Theta}{\operatorname{arg\,max}}\ L_n(\theta), [[/math]]
if a maximum exists. An MLE estimate is the same regardless of whether we maximize the likelihood or the log-likelihood function, since log is a monotonically increasing function.
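As a concrete illustration of a closed-form maximizer (the exponential model here is an assumed example, not one fixed by the discussion above), suppose the observations are i.i.d. exponential with rate [math]\lambda \gt 0[/math], so that [math]f(x\mid\lambda) = \lambda e^{-\lambda x}[/math] for [math]x \geq 0[/math]. The log-likelihood is
- [[math]] \ell_n(\lambda) = n\ln\lambda - \lambda\sum_{i=1}^n x_i, \qquad \frac{\partial}{\partial\lambda}\ell_n(\lambda) = \frac{n}{\lambda} - \sum_{i=1}^n x_i, [[/math]]
and setting the derivative to zero gives [math]\hat\lambda = n / \sum_{i=1}^n x_i[/math], the reciprocal of the sample mean; this stationary point is a maximum because the second derivative [math]-n/\lambda^2[/math] is negative everywhere.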
For many models, a maximum likelihood estimator can be found as an explicit function of the observed data. For many other models, however, no closed-form solution to the maximization problem is known or available, and an MLE has to be found numerically. For some problems, there may be multiple estimates that maximize the likelihood. For other problems, no maximum likelihood estimate exists (meaning that the log-likelihood function increases without attaining the supremum value).
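As a sketch of the numerical route, the following Python snippet (assuming NumPy and SciPy are available; the gamma model, the simulated data, and the starting point are illustrative choices rather than anything prescribed above) finds an MLE by minimizing the negative log-likelihood with a general-purpose optimizer:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import gamma

# Simulated data from an assumed gamma model; true shape = 2, true scale = 3.
rng = np.random.default_rng(0)
x = rng.gamma(shape=2.0, scale=3.0, size=500)

def neg_log_likelihood(params, data):
    """Negative log-likelihood of the gamma model (optimizers minimize, so we negate)."""
    shape, scale = params
    if shape <= 0 or scale <= 0:  # keep the search inside the parameter space
        return np.inf
    return -np.sum(gamma.logpdf(data, a=shape, scale=scale))

# Numerical MLE: minimize the negative log-likelihood from a rough starting point.
result = minimize(neg_log_likelihood, x0=np.array([1.0, 1.0]), args=(x,), method="Nelder-Mead")
print("MLE (shape, scale):", result.x)
```

The printed estimates should land close to the values used to simulate the data, with the discrepancy shrinking as the sample size grows.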
In the exposition above, it is assumed that the data are independent and identically distributed. The method can, however, be applied in a broader setting, as long as it is possible to write the joint density function [math]f(x_1,\ldots,x_n | \theta)[/math] and its parameter [math]\theta[/math] has a finite dimension which does not depend on the sample size [math]n[/math].
Properties
Maximum-likelihood estimators have no optimum properties for finite samples, in the sense that (when evaluated on finite samples) other estimators may have greater concentration around the true parameter-value.[1] However, like other estimation methods, maximum-likelihood estimation possesses a number of attractive limiting properties:
- Consistency: the sequence of MLEs converges in probability to the value being estimated.
- Asymptotic normality: as the sample size increases, the distribution of the MLE tends to the Gaussian distribution with mean [math]\theta_0[/math] and covariance matrix equal to the inverse of the Fisher information matrix.
- Efficiency, i.e., it achieves the Cramér–Rao lower bound when the sample size tends to infinity. This means that no consistent estimator has lower asymptotic mean squared error than the MLE (or other estimators attaining this bound).
- Second-order efficiency after correction for bias.
Under certain conditions, the maximum likelihood estimator is consistent. Consistency means that, given a sufficiently large number of observations [math]n[/math], it is possible to find the value of [math]\theta_0[/math] with arbitrary precision. In mathematical terms this means that as [math]n[/math] goes to infinity the estimator [math]\hat\theta[/math] converges in probability to its true value:
- [[math]] \hat\theta_n\ \xrightarrow{\ p\ }\ \theta_0. [[/math]]
Under slightly stronger conditions, the estimator converges almost surely (or strongly) to [math]\theta_0[/math]:
- [[math]] \hat\theta_n\ \xrightarrow{\ \mathrm{a.s.}\ }\ \theta_0. [[/math]]
Asymptotic normality
In a wide range of situations, maximum likelihood parameter estimates exhibit asymptotic normality: that is, they are equal to the true parameters plus a random error that is approximately normal (given sufficient data), and the error's variance decays as [math]1/n[/math]:
- [[math]] \hat\theta_n \approx N\!\left(\theta_0,\ \tfrac{1}{n}\, I(\theta_0)^{-1}\right), [[/math]]
where [math]N[/math] denotes the Normal distribution and [math]I(\theta_0)[/math] is the Fisher information matrix.
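This approximation is easy to check by simulation. The following is a minimal sketch in Python (assuming NumPy is available; the Bernoulli model, the seed, and the sample sizes are illustrative choices, not anything prescribed by the text above). For a Bernoulli sample the MLE of the success probability is the sample mean, and the Fisher information of a single observation is [math]1/(p_0(1-p_0))[/math] (derived in the Fisher information example further below), so the approximation predicts a standard deviation of [math]\sqrt{p_0(1-p_0)/n}[/math] for the estimator:

```python
import numpy as np

# Repeated Bernoulli samples; for this model the MLE of p is the sample mean.
rng = np.random.default_rng(1)
p0, n, reps = 0.3, 200, 5000
samples = rng.binomial(1, p0, size=(reps, n))
p_hat = samples.mean(axis=1)

# Normal approximation: Var(p_hat) ~= I(p0)^{-1} / n with I(p0) = 1 / (p0 * (1 - p0)).
predicted_sd = np.sqrt(p0 * (1 - p0) / n)
print("empirical sd of the MLE :", p_hat.std(ddof=1))
print("sd predicted by theory  :", predicted_sd)
```

The two printed numbers should agree closely, and the agreement improves as [math]n[/math] increases.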
Fisher Information
The Fisher information is a way of measuring the amount of information that an observable random variable [math]X[/math] carries about an unknown parameter [math]\theta[/math] of a distribution that models [math]X[/math]. Formally, it is the variance of the score, or the expected value of the observed information. The Fisher-information matrix is used to calculate the covariance matrices associated with maximum-likelihood estimates.
Definition
The Fisher information is a way of measuring the amount of information that an observable random variable [math]X[/math] carries about an unknown parameter [math]\theta[/math] upon which the probability of [math]X[/math] depends. The probability function for [math]X[/math], which is also the likelihood function for [math]\theta[/math], is a function [math]f(X;\theta)[/math]; it is the probability mass (or probability density) of the random variable [math]X[/math] conditional on the value of [math]\theta[/math]. The partial derivative with respect to [math]\theta[/math] of the natural logarithm of the likelihood function is called the score.
Under certain regularity conditions,[2] the first moment of the score is 0.
To see this, note that
- [[math]] \begin{align*} \operatorname{E} \left[\left. \frac{\partial}{\partial\theta} \log f(X;\theta)\right|\theta \right] &= \operatorname{E} \left[\left. \frac{\frac{\partial}{\partial\theta} f(X;\theta)}{f(X; \theta)}\right|\theta \right] \\ &= \int \frac{\frac{\partial}{\partial\theta} f(x;\theta)}{f(x; \theta)} f(x;\theta)\; \mathrm{d}x \\ &= \int \frac{\partial}{\partial\theta} f(x;\theta)\; \mathrm{d}x \\ &= \frac{\partial}{\partial\theta} \int f(x; \theta)\; \mathrm{d}x \\ &= \frac{\partial}{\partial\theta} \; 1 \\ &= 0. \end{align*} [[/math]]
The second moment is called the Fisher information:
- [[math]] \mathcal{I}(\theta) = \operatorname{E} \left[\left. \left(\frac{\partial}{\partial\theta} \log f(X;\theta)\right)^2 \right|\theta \right] = \int \left(\frac{\partial}{\partial\theta} \log f(x;\theta)\right)^2 f(x;\theta)\; \mathrm{d}x, [[/math]]
where, for any given value of [math]\theta[/math], the expression E[...|[math]\theta[/math]] denotes the conditional expectation over values for [math]X[/math] with respect to the probability function [math]f(x;\theta)[/math] given [math]\theta[/math]. Note that [math]0 \leq \mathcal{I}(\theta) \lt \infty[/math]. A random variable carrying high Fisher information implies that the absolute value of the score is often high. The Fisher information is not a function of a particular observation, as the random variable [math]X[/math] has been averaged out. Since the expectation of the score is zero, the Fisher information is also the variance of the score.
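As a worked example (the Bernoulli model here is assumed purely for illustration), let [math]X[/math] take the value 1 with probability [math]\theta[/math] and 0 with probability [math]1-\theta[/math], so that [math]f(x;\theta) = \theta^x (1-\theta)^{1-x}[/math]. The score is
- [[math]] \frac{\partial}{\partial\theta}\log f(X;\theta) = \frac{X}{\theta} - \frac{1-X}{1-\theta} = \frac{X-\theta}{\theta(1-\theta)}, [[/math]]
and since [math]\operatorname{E}[X\mid\theta] = \theta[/math] and [math]\operatorname{Var}(X\mid\theta) = \theta(1-\theta)[/math], the Fisher information is
- [[math]] \mathcal{I}(\theta) = \operatorname{E}\left[\left.\left(\frac{X-\theta}{\theta(1-\theta)}\right)^2\right|\theta\right] = \frac{\theta(1-\theta)}{\theta^2(1-\theta)^2} = \frac{1}{\theta(1-\theta)}. [[/math]]
A single observation is thus most informative about [math]\theta[/math] when [math]\theta[/math] is close to 0 or 1, and least informative when [math]\theta = 1/2[/math].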
If [math]\log f(x;\theta)[/math] is twice differentiable with respect to [math]\theta[/math], then, under certain regularity conditions, the Fisher information may also be written as[3]
- [[math]] \mathcal{I}(\theta) = -\operatorname{E}\left[\left.\frac{\partial^2}{\partial\theta^2}\log f(X;\theta)\right|\theta\right]. [[/math]]
This identity follows because
- [[math]] \frac{\partial^2}{\partial\theta^2} \log f(X;\theta) = \frac{\frac{\partial^2}{\partial\theta^2} f(X;\theta)}{f(X; \theta)} \;-\; \left( \frac{\frac{\partial}{\partial\theta} f(X;\theta)}{f(X; \theta)} \right)^2 = \frac{\frac{\partial^2}{\partial\theta^2} f(X;\theta)}{f(X; \theta)} \;-\; \left( \frac{\partial}{\partial\theta} \log f(X;\theta)\right)^2 [[/math]]
and, by the same argument as for the score,
- [[math]] \operatorname{E} \left[\left. \frac{\frac{\partial^2}{\partial\theta^2} f(X;\theta)}{f(X; \theta)}\right|\theta \right] = \int \frac{\partial^2}{\partial\theta^2} f(x;\theta)\; \mathrm{d}x = \frac{\partial^2}{\partial\theta^2} \int f(x; \theta)\; \mathrm{d}x = \frac{\partial^2}{\partial\theta^2} \; 1 = 0. [[/math]]
Thus, the Fisher information is the negative of the expectation of the second derivative with respect to [math]\theta[/math] of the natural logarithm of [math]f[/math]. Information may be seen to be a measure of the "curvature" of the support curve near the maximum likelihood estimate of [math]\theta[/math]. A "blunt" support curve (one with a shallow maximum) would have a low negative expected second derivative, and thus low information; while a sharp one would have a high negative expected second derivative and thus high information.
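To illustrate the second-derivative form (again with an assumed example model), let [math]X \sim N(\mu,\sigma^2)[/math] with [math]\sigma^2[/math] known, so that [math]\log f(X;\mu) = -\tfrac{1}{2}\log(2\pi\sigma^2) - \frac{(X-\mu)^2}{2\sigma^2}[/math]. Then
- [[math]] \frac{\partial^2}{\partial\mu^2}\log f(X;\mu) = -\frac{1}{\sigma^2}, \qquad \mathcal{I}(\mu) = -\operatorname{E}\left[\left.\frac{\partial^2}{\partial\mu^2}\log f(X;\mu)\right|\mu\right] = \frac{1}{\sigma^2}, [[/math]]
so a smaller noise variance corresponds to a more sharply curved log-likelihood and hence more information about [math]\mu[/math].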
Information is additive, in that the information yielded by two independent experiments is the sum of the information from each experiment separately:
- [[math]] \mathcal{I}_{X,Y}(\theta) = \mathcal{I}_X(\theta) + \mathcal{I}_Y(\theta). [[/math]]
This result follows from the elementary fact that if random variables are independent, the variance of their sum is the sum of their variances. In particular, the information in a random sample of size [math]n[/math] is [math]n[/math] times that in a sample of size 1, when observations are independent and identically distributed.
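Continuing the illustrative Bernoulli example, the information carried by [math]n[/math] independent observations about [math]\theta[/math] is
- [[math]] \mathcal{I}_n(\theta) = n\,\mathcal{I}(\theta) = \frac{n}{\theta(1-\theta)}, [[/math]]
whose inverse, [math]\theta(1-\theta)/n[/math], is exactly the variance of the sample mean [math]\bar X[/math] (the MLE of [math]\theta[/math]) and matches the asymptotic covariance [math]\tfrac{1}{n} I(\theta_0)^{-1}[/math] quoted for the MLE in the asymptotic normality section above.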
Reparametrization
The Fisher information depends on the parametrization of the problem. If [math]\theta[/math] and [math]\eta[/math] are two scalar parametrizations of an estimation problem, and [math]\theta[/math] is a continuously differentiable function of [math]\eta[/math], then
- [[math]] {\mathcal I}_\eta(\eta) = {\mathcal I}_\theta(\theta(\eta)) \left(\frac{\mathrm{d}\theta}{\mathrm{d}\eta}\right)^2, [[/math]]
where [math]{\mathcal I}_\eta[/math] and [math]{\mathcal I}_\theta[/math] are the Fisher information measures of [math]\eta[/math] and [math]\theta[/math], respectively.[4]
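As an example of this rule (still using the illustrative Bernoulli model), reparametrize the success probability [math]\theta[/math] by the log-odds [math]\eta = \log\frac{\theta}{1-\theta}[/math], so that [math]\theta = e^\eta/(1+e^\eta)[/math] and [math]\frac{\mathrm{d}\theta}{\mathrm{d}\eta} = \theta(1-\theta)[/math]. Then
- [[math]] {\mathcal I}_\eta(\eta) = {\mathcal I}_\theta(\theta)\left(\frac{\mathrm{d}\theta}{\mathrm{d}\eta}\right)^2 = \frac{1}{\theta(1-\theta)}\,\bigl(\theta(1-\theta)\bigr)^2 = \theta(1-\theta), [[/math]]
so the same experiment carries an amount of Fisher information that depends on which parametrization is used.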
References
- Pfanzagl (1994, p. 206)
- Suba Rao. "Lectures on statistical inference" (PDF). http://www.stat.tamu.edu/~suhasini/teaching613/inference.pdf
- Lehmann & Casella, eq. (2.5.16).
- Lehmann & Casella, eq. (2.5.11).
Wikipedia References
- Wikipedia contributors. "Maximum likelihood estimation". Wikipedia. Retrieved 31 July 2021.