The '''delta method''' is a result that gives the approximate probability distribution of a function of an [[wikipedia:Asymptotic distribution|asymptotically normal]] statistical estimator, given the limiting variance of that estimator.
==Method==
While the delta method generalizes easily to a multivariate setting, careful motivation of the technique is more easily demonstrated in univariate terms. Roughly, if there is a sequence of random variables <math>X_n</math> satisfying
<math display="block">{\sqrt{n}[X_n-\theta]\,\xrightarrow{D}\,\mathcal{N}(0,\sigma^2)},</math>
where <math>\theta</math> and <math>\sigma^2</math> are finite-valued constants and <math>\xrightarrow{D}</math> denotes [[wikipedia:convergence in distribution|convergence in distribution]], then
<math display="block">
{\sqrt{n}[g(X_n)-g(\theta)]\,\xrightarrow{D}\,\mathcal{N}(0,\sigma^2[g'(\theta)]^2)}
</math>
for any function <math>g</math> such that <math>g'(\theta)</math> exists and is non-zero.
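As a quick numerical sanity check of the univariate statement (a sketch, not part of the original article; the sample sizes and the choice of Exponential(1) data with <math>g = \log</math> are illustrative assumptions), one can compare the Monte Carlo variance of <math>\sqrt{n}[g(X_n)-g(\theta)]</math> against the delta-method prediction <math>\sigma^2[g'(\theta)]^2</math>:

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 2_000, 5_000            # illustrative sizes, chosen arbitrarily
theta, sigma2 = 1.0, 1.0          # Exponential(1): mean 1, variance 1
# g(x) = log(x), so g'(theta) = 1/theta = 1

# X_n is the sample mean; the CLT gives sqrt(n)(X_n - theta) -> N(0, sigma2)
Xn = rng.exponential(theta, size=(reps, n)).mean(axis=1)

# Delta method predicts Var[sqrt(n)(g(X_n) - g(theta))] -> sigma2 * g'(theta)**2 = 1
scaled = np.sqrt(n) * (np.log(Xn) - np.log(theta))
print(scaled.var())               # close to 1
```

The empirical variance should land near the predicted value of 1, up to Monte Carlo error.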
The method extends to the multivariate case. By definition, a [[wikipedia:consistency (statistics)|consistent]] estimator <math>B</math> converges in probability to its true value <math>\beta</math>, and often a [[wikipedia:central limit theorem|central limit theorem]] can be applied to obtain [[wikipedia:Estimator#Asymptotic normality|asymptotic normality]]:<math display="block">\sqrt{n} (B-\beta )\,\xrightarrow{D}\,\mathcal{N}(0, \Sigma ),</math>
where ''n'' is the number of observations and <math>\Sigma</math> is a covariance matrix. The ''multivariate delta method'' yields the following asymptotic property of a function <math>h</math> of the estimator <math>B</math> under the assumption that the gradient <math>\nabla h</math> is non-zero:
<math display="block">\sqrt{n}(h(B)-h(\beta))\,\xrightarrow{D}\,\mathcal{N}(0, \nabla h(\beta)^T \cdot \Sigma \cdot \nabla h(\beta)).</math>
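To make the multivariate statement concrete, here is a minimal sketch (with made-up means and covariance, not taken from the article) for the ratio of two sample means, <math>h(b) = b_1/b_2</math>, comparing the delta-method variance <math>\nabla h(\beta)^T \Sigma \nabla h(\beta)</math> against simulation:

```python
import numpy as np

rng = np.random.default_rng(1)
n, reps = 1_000, 4_000                      # illustrative sizes
beta = np.array([2.0, 4.0])                 # hypothetical true means of (X, Y)
Sigma = np.array([[1.0, 0.3],
                  [0.3, 2.0]])              # hypothetical per-observation covariance

def h(b):
    """Ratio of the two components, h(b) = b1 / b2."""
    return b[..., 0] / b[..., 1]

# Gradient of h at beta: (1/beta2, -beta1/beta2^2)
grad = np.array([1 / beta[1], -beta[0] / beta[1] ** 2])
asym_var = grad @ Sigma @ grad              # delta-method variance: nabla h^T Sigma nabla h

# Monte Carlo: B is the vector of sample means over n observations
samples = rng.multivariate_normal(beta, Sigma, size=(reps, n))
B = samples.mean(axis=1)                    # shape (reps, 2)
mc_var = (np.sqrt(n) * (h(B) - h(beta))).var()
print(asym_var, mc_var)                     # the two should be close
```

With these numbers the predicted asymptotic variance is 0.075, and the simulated variance should agree up to Monte Carlo error.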
<div class = "text-right">
<proofs page = "guide_proofs:6b004aad05" section = "proof" label = "The Delta Method" />
</div>
==References==
*{{cite web |url = https://en.wikipedia.org/w/index.php?title=Delta_method&oldid=885377245 |title = Delta method |author = Wikipedia contributors |website = Wikipedia |publisher = Wikipedia |access-date = 30 May 2019}}
Revision as of 22:51, 22 August 2022