In statistics, econometrics and signal processing, an autoregressive (AR) model is a representation of a type of random process; as such, it is used to describe certain time-varying processes in nature, economics, etc. The autoregressive model specifies that the output variable depends linearly on its own previous values and on a stochastic term (an imperfectly predictable term); thus the model is in the form of a stochastic difference equation (or recurrence relation, which should not be confused with a differential equation). Together with the moving-average (MA) model, it is a special case and key component of the more general autoregressive–moving-average (ARMA) and autoregressive integrated moving average (ARIMA) models of time series, which have a more complicated stochastic structure; it is also a special case of the vector autoregressive model (VAR), which consists of a system of more than one interlocking stochastic difference equation in more than one evolving random variable.

Contrary to the moving-average (MA) model, the autoregressive model is not always stationary, as it may contain a unit root.

==Definition==
The notation AR(<math>p</math>) indicates an autoregressive model of order <math>p</math>. The AR(<math>p</math>) model is defined as


<math display = "block"> X_t = c + \sum_{i=1}^p \varphi_i X_{t-i}+ \varepsilon_t \,</math>
<math display = "block"> Y_t = c + \sum_{i=1}^p \beta_i Y_{t-i}+ \varepsilon_t \,</math>


where <math>\beta_1, \ldots, \beta_p</math> are the ''parameters'' of the model, <math>c</math> is a constant, and <math>\varepsilon_t</math> is white noise. This can be equivalently written using the backshift operator <math>B</math> as


<math display = "block"> X_t = c + \sum_{i=1}^p \varphi_i B^i X_t + \varepsilon_t </math>
<math display = "block"> Y_t = c + \sum_{i=1}^p \beta_i B^i Y_t + \varepsilon_t </math>


so that, moving the summation term to the left side and using polynomial notation, we have


<math display = "block">\phi [B]X_t= c + \varepsilon_t \, .</math>
<math display = "block">\phi [B]Y_t= c + \varepsilon_t \, .</math>


Some parameter constraints are necessary for the model to remain [[guide:C4cbbce9b2#Weak or wide-sense stationarity|wide-sense stationary]].  For example, processes in the AR(1) model with <math>|\beta_1 | \geq 1</math> are not stationary. More generally, for an AR(<math>p</math>) model to be wide-sense stationary, the roots of the polynomial <math display = "block">\Phi(z):= 1 - \sum_{i=1}^p \beta_i z^{i}</math> must lie outside the unit circle, i.e., each (complex) root <math>z_i</math> must satisfy <math>|z_i|>1</math> (see pages 89, 92<ref>{{cite book |last1=Shumway |first1=Robert |last2=Stoffer |first2=David |title=Time series analysis and its applications : with R examples |date=2010 |publisher=Springer |isbn=9781441921253 |edition=3rd}}</ref>).
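
The stationarity condition can be checked numerically. The following is a minimal sketch (assuming NumPy is available; the coefficient values are purely illustrative): it builds the polynomial <math>\Phi(z)= 1 - \sum_{i=1}^p \beta_i z^{i}</math> and verifies that every root lies outside the unit circle.

<syntaxhighlight lang="python">
import numpy as np

def is_wide_sense_stationary(beta):
    """Check the AR(p) condition: all roots of
    Phi(z) = 1 - beta_1 z - ... - beta_p z^p lie outside the unit circle."""
    # np.roots expects coefficients ordered from the highest power down to the constant:
    # -beta_p z^p - ... - beta_1 z + 1
    coeffs = np.concatenate((-np.asarray(beta, dtype=float)[::-1], [1.0]))
    roots = np.roots(coeffs)
    return bool(np.all(np.abs(roots) > 1.0))

# Illustrative values (not taken from the text):
print(is_wide_sense_stationary([0.5]))        # AR(1) with |beta_1| < 1  -> True
print(is_wide_sense_stationary([1.2]))        # AR(1) with |beta_1| >= 1 -> False
print(is_wide_sense_stationary([0.5, -0.3]))  # AR(2)                    -> True
</syntaxhighlight>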


==Intertemporal effect of shocks==


In an AR process, a one-time shock affects values of the evolving variable infinitely far into the future. For example, consider the AR(1) model <math> Y_t = c + \beta_1 Y_{t-1} + \varepsilon_t</math>. A non-zero value for <math>\varepsilon_t</math> at, say, time <math>t=1</math> affects <math>Y_1</math> by the amount <math>\varepsilon_1</math>. Then by the AR equation for <math>Y_2</math> in terms of <math>Y_1</math>, this affects <math>Y_2</math> by the amount <math>\beta_1 \varepsilon_1</math>. Then by the AR equation for <math>Y_3</math> in terms of <math>Y_2</math>, this affects <math>Y_3</math> by the amount <math>\beta_1^2 \varepsilon_1</math>. Continuing this process shows that the effect of <math>\varepsilon_1</math> never ends, although if the process is [[guide:C4cbbce9b2|stationary]] then the effect diminishes toward zero in the limit.
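
The geometric decay of a shock's effect can be made concrete with a short numerical sketch (illustrative values, assuming NumPy): two AR(1) paths are generated with identical inputs except for a single extra shock at <math>t=1</math>, and their difference reproduces <math>\beta_1^{t-1}\varepsilon_1</math>.

<syntaxhighlight lang="python">
import numpy as np

beta1, c = 0.8, 0.0            # illustrative AR(1) coefficient and constant
T = 10
quiet = np.zeros(T)            # baseline input: no shocks at all
shocked = quiet.copy()
shocked[1] = 1.0               # one-time shock epsilon_1 = 1 at t = 1

def ar1_path(noise, beta1, c, y0=0.0):
    y = np.empty(len(noise))
    y[0] = y0
    for t in range(1, len(noise)):
        y[t] = c + beta1 * y[t - 1] + noise[t]
    return y

diff = ar1_path(shocked, beta1, c) - ar1_path(quiet, beta1, c)
# The difference equals beta1**(t-1) for t >= 1: the effect never becomes exactly
# zero, but it decays geometrically because |beta1| < 1.
for t in range(1, T):
    print(t, diff[t], beta1 ** (t - 1))
</syntaxhighlight>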


Because each shock affects <math>Y</math> values infinitely far into the future from when they occur, any given value <math>Y_t</math> is affected by shocks occurring infinitely far into the past. This can also be seen by rewriting the autoregression


<math display = "block">\phi (B)X_t=  \varepsilon_t \,</math>
<math display = "block">\phi (B)Y_t=  \varepsilon_t \,</math>


(where the constant term has been suppressed by assuming that the variable has been measured as deviations from its mean) as


<math display = "block">X_t= \frac{1}{\phi (B)}\varepsilon_t \, .</math>
<math display = "block">Y_t= \frac{1}{\phi (B)}\varepsilon_t \, .</math>


When the [[wikipedia:polynomial long division|polynomial division]] on the right side is carried out, the polynomial in the backshift operator applied to <math>\varepsilon_t</math> has an infinite order—that is, an infinite number of lagged values of <math>\varepsilon_t</math> appear on the right side of the equation.


==Graphs of AR(<math>p</math>) processes==
The simplest AR process is AR(0), which has no dependence between the terms.  Only the error/innovation/noise term contributes to the output of the process, so in the figure, AR(0) corresponds to white noise.


For an AR(1) process with a positive <math>\beta</math>, only the previous term in the process and the noise term contribute to the output.  If <math>\beta</math> is close to 0, then the process still looks like white noise, but as <math>\beta</math> approaches 1, the output gets a larger contribution from the previous term relative to the noise. This results in a "smoothing" or integration of the output.


For an AR(2) process, the previous two terms and the noise term contribute to the output. If both <math>\beta_1</math> and <math>\beta_2</math> are positive, the output will resemble a low pass filter, with the high frequency part of the noise decreased. If <math>\beta_1</math> is positive while <math>\beta_2</math> is negative, then the process favors changes in sign between terms of the process.  The output oscillates. This can be likened to edge detection or detection of change in direction.
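
These qualitative behaviors can be reproduced with a short simulation. The sketch below (arbitrary illustrative coefficients, assuming NumPy) feeds the same noise through an AR(1) with <math>\beta</math> close to 1 and an AR(2) with <math>\beta_1>0</math>, <math>\beta_2<0</math>, and compares low-lag sample autocorrelations.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
T = 5_000
eps = rng.normal(size=T)

def simulate_ar(betas, eps):
    """Simulate Y_t = sum_i betas[i-1] * Y_{t-i} + eps_t with zero starting values."""
    p = len(betas)
    y = np.zeros(len(eps))
    for t in range(len(eps)):
        for i in range(1, p + 1):
            if t - i >= 0:
                y[t] += betas[i - 1] * y[t - i]
        y[t] += eps[t]
    return y

def sample_acf(x, lag):
    return np.corrcoef(x[:-lag], x[lag:])[0, 1]

series = {
    "AR(0) white noise": eps,
    "AR(1), beta = 0.9": simulate_ar([0.9], eps),               # smoothed, slowly varying
    "AR(2), beta = (0.5, -0.7)": simulate_ar([0.5, -0.7], eps), # oscillatory
}
for name, y in series.items():
    # Near-zero autocorrelation for the noise, strongly positive values for the
    # smoothed AR(1), and a sign change between lags for the oscillating AR(2).
    print(name, round(sample_acf(y, 1), 2), round(sample_acf(y, 2), 2))
</syntaxhighlight>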


==Example: An AR(1) process==
An AR(1) process is given by:


<math display = "block">X_t = c + \varphi X_{t-1}+\varepsilon_t\,</math>
<math display = "block">Y_t = c + \beta Y_{t-1}+\varepsilon_t\,</math>


where <math>\varepsilon_t</math> is a white noise process with zero mean and constant variance <math>\sigma_\varepsilon^2</math>.
(Note: The subscript on <math>\beta_1</math> has been dropped.) The process is [[guide:C4cbbce9b2#Weak or wide-sense stationarity|wide-sense stationary]] if <math>|\beta|<1</math> since it is obtained as the output of a stable filter whose input is white noise. (If <math>\beta=1</math> then the variance of <math>Y_t</math> depends on <math>t</math>, so that the variance of the series diverges to infinity as <math>t</math> goes to infinity; the process is therefore not wide-sense stationary.) Assuming <math>|\beta|<1</math>, the mean <math>\operatorname{E} (Y_t)</math> is identical for all values of <math>t</math> by the very definition of wide-sense stationarity. If the mean is denoted by <math>\mu</math>, it follows from


<math display = "block">\operatorname{E} (X_t)=\operatorname{E} (c)+\varphi\operatorname{E} (X_{t-1})+\operatorname{E}(\varepsilon_t),
<math display = "block">\operatorname{E} (Y_t)=\operatorname{E} (c)+\beta\operatorname{E} (Y_{t-1})+\operatorname{E}(\varepsilon_t),
</math>
</math>
that
<math display = "block"> \mu=c+\varphi\mu+0,</math>
<math display = "block"> \mu=c+\beta\mu+0,</math>


and hence


<math display = "block">\mu=\frac{c}{1-\varphi}.</math>
<math display = "block">\mu=\frac{c}{1-\beta}.</math>


In particular, if <math>c = 0</math>, then the mean is 0.


The variance is


<math display = "block">\textrm{var}(X_t)=\operatorname{E}(X_t^2)-\mu^2=\frac{\sigma_\varepsilon^2}{1-\varphi^2},</math>
<math display = "block">\textrm{var}(Y_t)=\operatorname{E}(Y_t^2)-\mu^2=\frac{\sigma_\varepsilon^2}{1-\beta^2},</math>
where <math>\sigma_\varepsilon</math> is the standard deviation of <math>\varepsilon_t</math>. This can be shown by noting that
<math display = "block">\textrm{var}(X_t) = \varphi^2\textrm{var}(X_{t-1}) + \sigma_\varepsilon^2,</math>
<math display = "block">\textrm{var}(Y_t) = \beta^2\textrm{var}(Y_{t-1}) + \sigma_\varepsilon^2,</math>
and then by noticing that the quantity above is a stable fixed point of this relation.


The autocovariance is given by
 
<math display = "block">B_n=\operatorname{E}(X_{t+n}X_t)-\mu^2=\frac{\sigma_\varepsilon^2}{1-\varphi^2}\,\,\varphi^{|n|}.</math>
 
It can be seen that the autocovariance function decays with a decay time (also called [[wikipedia:time constant|time constant]]) of <math>\tau=-1/\ln(\varphi)</math> [to see this, write <math>B_n=K\varphi^{|n|}</math> where <math>K</math> is independent of <math>n</math>.  Then note that <math>\varphi^{|n|}=e^{|n|\ln\varphi}</math> and match this to the exponential decay law <math>e^{-n/\tau}</math>].
 
The [[wikipedia:spectral density|spectral density]] function is the [[wikipedia:Fourier transform|Fourier transform]] of the autocovariance function. In discrete terms this will be the discrete-time Fourier transform:
 
<math display = "block">\Phi(\omega)=
\frac{1}{\sqrt{2\pi}}\,\sum_{n=-\infty}^\infty B_n e^{-i\omega n}
=\frac{1}{\sqrt{2\pi}}\,\left(\frac{\sigma_\varepsilon^2}{1+\varphi^2-2\varphi\cos(\omega)}\right).
</math>
 
This expression is periodic due to the discrete nature of the <math>X_j</math>, which is manifested as the cosine term in the denominator.  If we assume that the sampling time (<math>\Delta t=1</math>) is much smaller than the decay time (<math>\tau</math>), then we can use a continuum approximation to <math>B_n</math>:


<math display = "block">B(t)\approx \frac{\sigma_\varepsilon^2}{1-\varphi^2}\,\,\varphi^{|t|}</math>
<math display = "block">B_n=\operatorname{E}(Y_{t+n}Y_t)-\mu^2=\frac{\sigma_\varepsilon^2}{1-\beta^2}\,\,\beta^{|n|}.</math>


It can be seen that the autocovariance function decays with a decay time of <math>\tau=-1/\ln(\beta)</math> [to see this, write <math>B_n=K\beta^{|n|}</math> where <math>K</math> is independent of <math>n</math>. Then note that <math>\beta^{|n|}=e^{|n|\ln\beta}</math> and match this to the exponential decay law <math>e^{-n/\tau}</math>].

An alternative expression for <math>Y_t</math> can be derived by first substituting <math>c+\beta Y_{t-2}+\varepsilon_{t-1}</math> for <math>Y_{t-1}</math> in the defining equation. Continuing this process <math>N</math> times yields

<math display = "block">Y_t=c\sum_{k=0}^{N-1}\beta^k+\beta^NY_{t-N}+\sum_{k=0}^{N-1}\beta^k\varepsilon_{t-k}.</math>

For <math>N</math> approaching infinity, <math>\beta^N</math> will approach zero and

<math display = "block">Y_t=\frac{c}{1-\beta}+\sum_{k=0}^\infty\beta^k\varepsilon_{t-k}.</math>

It is seen that <math>Y_t</math> is white noise convolved with the <math>\beta^k</math> kernel plus the constant mean. If the white noise <math>\varepsilon_t</math> is a [[wikipedia:Gaussian process|Gaussian process]] then <math>Y_t</math> is also a Gaussian process. In other cases, the [[wikipedia:central limit theorem|central limit theorem]] indicates that <math>Y_t</math> will be approximately normally distributed when <math>\beta</math> is close to one.

For <math>c = \varepsilon_t = 0</math>, the process <math>Y_t = \beta Y_{t-1}</math> will be a geometric progression (''exponential'' growth or decay). In this case, the solution can be found analytically: <math>Y_t = a \beta^t</math>, where <math>a</math> is an unknown constant (initial condition).
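
The closed-form mean, variance and autocovariance derived above can be checked against a long simulated path. This sketch (illustrative parameter values, assuming NumPy) compares sample moments with <math>c/(1-\beta)</math>, <math>\sigma_\varepsilon^2/(1-\beta^2)</math> and <math>B_1</math>.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)
c, beta, sigma = 2.0, 0.7, 1.0        # illustrative AR(1) parameters
T = 200_000

y = np.empty(T)
y[0] = c / (1 - beta)                 # start the path at the stationary mean
eps = rng.normal(scale=sigma, size=T)
for t in range(1, T):
    y[t] = c + beta * y[t - 1] + eps[t]

print("mean      :", y.mean(), "   theory:", c / (1 - beta))
print("variance  :", y.var(), "   theory:", sigma**2 / (1 - beta**2))
print("autocov B1:", np.cov(y[:-1], y[1:])[0, 1],
      "   theory:", beta * sigma**2 / (1 - beta**2))
</syntaxhighlight>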


==Calculation of the AR parameters==


There are many ways to estimate the coefficients, such as the [[guide:F7d7868547#Ordinary Least Squares (OLS)|ordinary least squares]] procedure or [[wikipedia:Method of moments (statistics)|method of moments]] (through Yule–Walker equations).


The AR(<math>p</math>) model is given by the equation


<math display = "block"> X_t = \sum_{i=1}^p \varphi_i X_{t-i}+ \varepsilon_t.\,</math>
<math display = "block"> Y_t = \sum_{i=1}^p \beta_i Y_{t-i}+ \varepsilon_t.\,</math>


It is based on parameters <math>\beta_i</math> where <math>i = 1, \ldots,p</math>. There is a direct correspondence between these parameters and the covariance function of the process, and this correspondence can be inverted to determine the parameters from the autocorrelation function (which is itself obtained from the covariances). This is done using the Yule–Walker equations.


===Yule–Walker equations===


The Yule–Walker equations, named for Udny Yule and Gilbert Walker,<ref>Yule, G. Udny (1927) [http://visualiseur.bnf.fr/Visualiseur?Destination=Gallica&O=NUMM-56031 "On a Method of Investigating Periodicities in Disturbed Series, with Special Reference to Wolfer's Sunspot Numbers"], ''[[Philosophical Transactions of the Royal Society|Philosophical Transactions of the Royal Society]] of London'', Ser. A, Vol. 226, 267–298.</ref><ref>Walker, Gilbert  (1931) [http://visualiseur.bnf.fr/Visualiseur?Destination=Gallica&O=NUMM-56224  "On Periodicity in Series of Related Terms"], ''[[Proceedings of the Royal Society|Proceedings of the Royal Society]] of London'', Ser. A, Vol. 131,  518–532.</ref> are the following set of equations.<ref>{{cite book |last=Theodoridis |first=Sergios |title=Machine Learning: A Bayesian and Optimization Perspective |publisher=Academic Press, 2015 |chapter=Chapter 1. Probability and Stochastic Processes |pages=9–51 |isbn=978-0-12-801522-3 |date=2015-04-10 }}</ref>


<math display = "block">\gamma_m = \sum_{k=1}^p \varphi_k \gamma_{m-k} + \sigma_\varepsilon^2\delta_{m,0},</math>
<math display = "block">\gamma_m = \sum_{k=1}^p \beta_k \gamma_{m-k} + \sigma_\varepsilon^2\delta_{m,0},</math>


where <math>m=0,\ldots,p</math>, yielding <math>p+1</math> equations. Here <math>\gamma_m</math> is the autocovariance function of <math>Y_t</math>, <math>\sigma_\varepsilon</math> is the standard deviation of the input noise process, and <math>\delta_{m,0}</math> is the [[wikipedia:Kronecker delta function|Kronecker delta function]].


Because the last part of an individual equation is non-zero only if <math>m=0</math>, the set of equations can be solved by representing the equations for <math>m>0</math> in matrix form, thus getting the equation


<math display = "block">\begin{bmatrix}
<math display = "block">\begin{bmatrix}
Line 156: Line 125:


\begin{bmatrix}
\begin{bmatrix}
\varphi_{1} \\
\beta_{1} \\
\varphi_{2} \\
\beta_{2} \\
\varphi_{3} \\
\beta_{3} \\
  \vdots \\
  \vdots \\
\varphi_{p} \\
\beta_{p} \\
\end{bmatrix}
\end{bmatrix}


</math>
</math>


which can be solved for all <math>\{\beta_m; m=1,2, \dots ,p\}.</math> The remaining equation for <math>m = 0</math> is


<math display = "block">\gamma_0 = \sum_{k=1}^p \varphi_k \gamma_{-k} + \sigma_\varepsilon^2 ,</math>
<math display = "block">\gamma_0 = \sum_{k=1}^p \beta_k \gamma_{-k} + \sigma_\varepsilon^2 ,</math>


which, once  <math>\{\beta_m ; m=1,2, \dots ,p \}</math> are known, can be solved for <math>\sigma_\varepsilon^2 .</math>


An alternative formulation is in terms of the [[guide:C4cbbce9b2|autocorrelation function]]. The AR parameters are determined by the first <math>p+1</math> elements <math>\rho(\tau)</math> of the autocorrelation function. The full autocorrelation function can then be derived by recursively calculating
<ref name=Storch>{{Cite book
| publisher = Cambridge Univ Pr
| title = Statistical analysis in climate research
| year = 2001
}}{{Page needed|date=March 2011}}</ref> <math display = "block">\rho(\tau) = \sum_{k=1}^p \beta_k \rho(k-\tau).</math>
Examples for some low-order AR(<math>p</math>) processes:


{| class="table table-bordered"
{| class="table table-bordered"
|-
|-
|  <math>p=1</math> || <math>\gamma_1 = \varphi_1 \gamma_0</math>
|  <math>p=1</math> || <math>\gamma_1 = \beta_1 \gamma_0</math>
|-
|-
|  <math>p=2</math> || The Yule–Walker equations for an AR(2) process are <math display = "block">\begin{align*}\gamma_1 &= \varphi_1 \gamma_0 + \varphi_2 \gamma_{-1} \\ \gamma_2 &= \varphi_1 \gamma_1 + \varphi_2 \gamma_0\end{align*}</math>  Remember that <math>\gamma_{-k} = \gamma_k</math>. Using the first equation yields <math>\rho_1 = \gamma_1 / \gamma_0 = \frac{\varphi_1}{1-\varphi_2}</math>. Using the recursion formula yields <math>\rho_2 = \gamma_2 / \gamma_0 = \frac{\varphi_1^2 - \varphi_2^2 + \varphi_2}{1 - \varphi_2}</math>
|  <math>p=2</math> || The Yule–Walker equations for an AR(2) process are <math display = "block">\begin{align*}\gamma_1 &= \beta_1 \gamma_0 + \beta_2 \gamma_{-1} \\ \gamma_2 &= \beta_1 \gamma_1 + \beta_2 \gamma_0\end{align*}</math>  Remember that <math>\gamma_{-k} = \gamma_k</math>. Using the first equation yields <math>\rho_1 = \gamma_1 / \gamma_0 = \frac{\beta_1}{1-\beta_2}</math>. Using the recursion formula yields <math>\rho_2 = \gamma_2 / \gamma_0 = \frac{\beta_1^2 - \beta_2^2 + \beta_2}{1 - \beta_2}</math>
|}
|}
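
As a sketch of how the Yule–Walker equations are used in practice (assuming NumPy; the data-generating coefficients below are arbitrary), one can estimate sample autocovariances <math>\hat\gamma_m</math> from a series, solve the <math>p \times p</math> linear system for <math>\hat\beta_1,\ldots,\hat\beta_p</math>, and recover <math>\hat\sigma_\varepsilon^2</math> from the <math>m = 0</math> equation.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(2)
true_beta = np.array([0.6, -0.3])            # illustrative AR(2) coefficients
T, p = 100_000, 2

# Simulate the AR(2) process (zero mean, unit-variance noise).
y = np.zeros(T)
eps = rng.normal(size=T)
for t in range(p, T):
    y[t] = true_beta[0] * y[t - 1] + true_beta[1] * y[t - 2] + eps[t]

def autocov(x, m):
    """Biased sample autocovariance at lag m (divides by len(x))."""
    x = x - x.mean()
    return (x[m:] * x[:len(x) - m]).sum() / len(x)

gamma = np.array([autocov(y, m) for m in range(p + 1)])

# Yule-Walker system for m = 1..p:  gamma_m = sum_k beta_k gamma_{m-k},
# using the symmetry gamma_{-k} = gamma_k.
G = np.array([[gamma[abs(m - k)] for k in range(1, p + 1)] for m in range(1, p + 1)])
beta_hat = np.linalg.solve(G, gamma[1:])
sigma2_hat = gamma[0] - beta_hat @ gamma[1:]   # from the m = 0 equation

print("beta_hat  :", beta_hat)      # should be close to [0.6, -0.3]
print("sigma2_hat:", sigma2_hat)    # should be close to 1.0
</syntaxhighlight>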


The Yule–Walker equations provide several routes to estimating the parameters of an AR(<math>p</math>) model, by replacing the theoretical covariances with estimated values. Some of these variants can be described as follows:


*Estimation of autocovariances or autocorrelations. Here each of these terms is estimated separately, using conventional estimates. There are different ways of doing this and the choice between these affects the properties of the estimation scheme. For example, negative estimates of the variance can be produced by some choices.
*Formulation as a [[guide:F7d7868547|least squares regression]] problem in which an ordinary least squares prediction problem is constructed, basing prediction of values of <math>Y_t</math> on the <math>p</math> previous values of the same series. This can be thought of as a forward-prediction scheme (see the sketch below). The [[wikipedia:normal equations|normal equations]] for this problem can be seen to correspond to an approximation of the matrix form of the Yule–Walker equations in which each appearance of an autocovariance of the same lag is replaced by a slightly different estimate.
*Formulation as an extended form of an ordinary least squares prediction problem. Here two sets of prediction equations are combined into a single estimation scheme and a single set of normal equations. One set is the set of forward-prediction equations and the other is a corresponding set of backward prediction equations, relating to the backward representation of the AR model:
<math display = "block"> X_t = c + \sum_{i=1}^p \varphi_i X_{t-i}+ \varepsilon^*_t \,.</math> Here predicted values of ''X''<sub>''t''</sub> would be based on the <math>p</math> future values of the same series. This way of estimating the AR parameters is due to Burg,<ref name=Burg/> and is called the Burg method:<ref name=Brockwell/> Burg and later authors called these particular estimates "maximum entropy estimates",<ref name=Burg1/> but the reasoning behind this applies to the use of any set of estimated AR parameters. Compared to the estimation scheme using only the forward prediction equations, different estimates of the autocovariances are produced, and the estimates have different stability properties. Burg estimates are particularly associated with [[wikipedia:maximum entropy spectral estimation|maximum entropy spectral estimation]].<ref name=Bos/>
<math display = "block"> Y_t = c + \sum_{i=1}^p \beta_i Y_{t-i}+ \varepsilon^*_t \,.</math> Here predicted values of <math>Y</math><sub>''t''</sub> would be based on the <math>p</math> future values of the same series. This way of estimating the AR parameters is due to Burg,<ref name=Burg/> and is called the Burg method:<ref name=Brockwell/>


Other possible approaches to estimation include [[wikipedia:maximum likelihood estimation|maximum likelihood estimation]]. Two distinct variants of maximum likelihood are available: in one (broadly equivalent to the forward prediction least squares scheme) the likelihood function considered is that corresponding to the conditional distribution of later values in the series given the initial <math>p</math> values in the series; in the second, the likelihood function considered is that corresponding to the unconditional joint distribution of all the values in the observed series. Substantial differences in the results of these approaches can occur if the observed series is short, or if the process is close to non-stationarity.
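
As an illustration of the forward-prediction least squares formulation listed above, the sketch below (arbitrary simulated data, assuming NumPy) estimates the constant and the AR coefficients by regressing <math>Y_t</math> on its <math>p</math> lagged values.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(3)
c_true, beta_true = 0.5, np.array([0.6, -0.3])    # illustrative AR(2) parameters
T, p = 50_000, 2

y = np.zeros(T)
for t in range(p, T):
    y[t] = c_true + beta_true @ y[t - p:t][::-1] + rng.normal()

# Forward-prediction OLS: regress Y_t on (1, Y_{t-1}, ..., Y_{t-p}).
target = y[p:]
X = np.column_stack([np.ones(T - p)] + [y[p - i:T - i] for i in range(1, p + 1)])
coef, *_ = np.linalg.lstsq(X, target, rcond=None)

print("intercept:", coef[0])      # close to 0.5
print("beta     :", coef[1:])     # close to [0.6, -0.3]
</syntaxhighlight>
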
==Forecasting==
Once the AR model parameters have been estimated using observations <math>\{Y_t\}_{t=1}^T</math>, the ''one-step-ahead'' forecast is computed as follows:
<math display = "block">
\hat{Y}_{T+1} = \hat{\beta}_0 + \sum_{i=1}^p  \hat{\beta}_i Y_{T+1-i},
</math>
where <math>\hat{\beta}_0</math> is the estimated constant and <math>\hat{\beta}_1,\ldots,\hat{\beta}_p</math> are the estimated autoregressive coefficients. When computing <math>h</math>-step-ahead forecasts (<math>h > 1</math>), previous forecasts are used as predictors. For instance, for an AR(2) process, the 3-step-ahead forecasts are
<math display = "block">
\begin{aligned}
\hat{Y}_{T+1} &=\hat{\beta}_0 + \hat{\beta}_1\hat{Y}_{T} + \hat{\beta}_2 \hat{Y}_{T-1} \\
\hat{Y}_{T+2} &=\hat{\beta}_0 + \hat{\beta}_1\hat{Y}_{T+1} + \hat{\beta}_2 \hat{Y}_{T} \\
\hat{Y}_{T+3} &= \hat{\beta}_0 + \hat{\beta}_1\hat{Y}_{T+2} + \hat{\beta}_2 \hat{Y}_{T+1}\\
\end{aligned}
</math>
where the unobserved terms <math>Y_{T+1}</math> and <math>Y_{T+2}</math> have been substituted with their predictions <math>\hat{Y}_{T+1}</math> and <math>\hat{Y}_{T+2}</math>. It can be shown that these substitutions increase the variance of forecast errors. Following the previous example, the forecast errors are
<math display = "block">
\begin{aligned}
Y_{T+1}-\hat{Y}_{T+1} &= \varepsilon_{T+1} \\
Y_{T+2}-\hat{Y}_{T+2} &= \varepsilon_{T+2} + \hat{\beta}_1(Y_{T+1}-\hat{Y}_{T+1}) = \varepsilon_{T+2} + \hat{\beta}_1  \varepsilon_{T+1} \\
Y_{T+3}-\hat{Y}_{T+3} &=\varepsilon_{T+3} + \hat{\beta}_1(Y_{T+2}-\hat{Y}_{T+2}) + \hat{\beta}_2(Y_{T+1}-\hat{Y}_{T+1})  \\
&= \varepsilon_{T+3} + \hat{\beta}_1 \varepsilon_{T+2}  + (\hat{\beta}^2_1 + \hat{\beta}_2) \varepsilon_{T+1}
\end{aligned}
</math>
As the <math>\varepsilon_t</math> are i.i.d. with mean zero and variance <math>\sigma^2_{\varepsilon}</math>, the means of the forecast errors are equal to zero and their variances equal
<math display = "block">
\begin{aligned}
\operatorname{Var}(Y_{T+1}-\hat{Y}_{T+1}) &=\sigma^2_{\varepsilon}, \\
\operatorname{Var}(Y_{T+2}-\hat{Y}_{T+2}) &=\sigma^2_{\varepsilon}(1 +  \hat{\beta}^2_1), \\
\operatorname{Var}(Y_{T+3}-\hat{Y}_{T+3}) &= \sigma^2_{\varepsilon}(1 +  \hat{\beta}^2_1 + (\hat{\beta}^2_1+\hat{\beta}_2)^2),\\
\end{aligned}
</math>
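
The recursion above translates directly into code. This sketch (assuming NumPy and treating the fitted coefficients as given, with illustrative values) produces <math>h</math>-step-ahead forecasts for an AR(2) by reusing earlier forecasts as predictors.

<syntaxhighlight lang="python">
import numpy as np

beta0_hat, beta_hat = 0.5, np.array([0.6, -0.3])   # assumed AR(2) coefficient estimates
y_obs = np.array([1.1, 0.4, 0.9, 1.3, 0.7])        # last observed values (illustrative)

def forecast(y_obs, beta0_hat, beta_hat, h):
    """h-step-ahead forecasts; unobserved future values are replaced by their forecasts."""
    p = len(beta_hat)
    history = list(y_obs[-p:])           # the p most recent observations
    preds = []
    for _ in range(h):
        lags = history[::-1][:p]         # [Y_T, Y_{T-1}, ..., Y_{T-p+1}]
        y_next = beta0_hat + beta_hat @ np.array(lags)
        preds.append(y_next)
        history.append(y_next)           # the forecast becomes a predictor next step
    return np.array(preds)

print(forecast(y_obs, beta0_hat, beta_hat, h=3))   # Y_hat_{T+1}, Y_hat_{T+2}, Y_hat_{T+3}
</syntaxhighlight>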


==Evaluating the quality of forecasts==
The predictive performance of the autoregressive model can be assessed as soon as estimation has been done if [[wikipedia:cross-validation (statistics)|cross-validation]] is used. In this approach, some of the initially available data is used for parameter estimation purposes, and some (from observations later in the data set) is held back for out-of-sample testing. Alternatively, after some time has passed since the parameter estimation was conducted, more data will have become available and predictive performance can be evaluated using the new data.


In either case, there are two aspects of predictive performance that can be evaluated: one-step-ahead and <math>n</math>-step-ahead performance. For one-step-ahead performance, the estimated parameters are used in the autoregressive equation along with observed values of <math>Y</math> for all periods prior to the one being predicted, and the output of the equation is the one-step-ahead forecast; this procedure is used to obtain forecasts for each of the out-of-sample observations. To evaluate the quality of <math>n</math>-step-ahead forecasts, the forecasting procedure in the previous section is employed to obtain the predictions.


Given a set of predicted values and a corresponding set of actual values for <math>Y</math> for various time periods, a common evaluation technique is to use the [[guide:E0f9e256bf|mean squared prediction error]].


The question of how to interpret the measured forecasting accuracy arises—for example, what is a "high" (bad) or a "low" (good) value for the mean squared prediction error? There are two possible points of comparison. First, the forecasting accuracy of an alternative model, estimated under different modeling assumptions or different estimation techniques, can be used for comparison purposes. Second, the out-of-sample accuracy measure can be compared to the same measure computed for the in-sample data points (that were used for parameter estimation) for which enough prior data values are available (that is, dropping the first <math>p</math> data points, for which <math>p</math> prior data points are not available). Since the model was estimated specifically to fit the in-sample points as well as possible, it will usually be the case that the out-of-sample predictive performance will be poorer than the in-sample predictive performance. But if the predictive quality deteriorates out-of-sample by "not very much" (which is not precisely definable), then the forecaster may be satisfied with the performance.
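
A minimal sketch of the holdout comparison described above (assuming NumPy; the AR order, parameter values and split point are arbitrary): fit an AR(1) by least squares on the first part of a simulated series, then compare in-sample and out-of-sample mean squared one-step-ahead prediction errors.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(4)
T = 2_000
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.3 + 0.7 * y[t - 1] + rng.normal()      # illustrative AR(1) data

split = 1_500                                       # arbitrary train/test split
X_train = np.column_stack([np.ones(split - 1), y[:split - 1]])
c_hat, b_hat = np.linalg.lstsq(X_train, y[1:split], rcond=None)[0]

def mspe(y, start, stop):
    """Mean squared one-step-ahead prediction error over y[start:stop]."""
    preds = c_hat + b_hat * y[start - 1:stop - 1]
    return np.mean((y[start:stop] - preds) ** 2)

print("in-sample MSPE    :", mspe(y, 1, split))
print("out-of-sample MSPE:", mspe(y, split, T))     # usually somewhat larger
</syntaxhighlight>
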
==Autoregressive conditional heteroskedasticity==


The '''autoregressive conditional heteroskedasticity''' ('''ARCH''') model is a statistical model for time series data that describes the variance of the current error term or innovation as a function of the actual sizes of the previous time periods' error terms;<ref>{{cite journal |jstor=1912773 |title=Autoregressive Conditional Heteroskedasticity with Estimates of the Variance of United Kingdom Inflation |author-link=Robert F. Engle |last=Engle |first=Robert F. |journal=[[Econometrica|Econometrica]] |volume=50 |issue=4 |year=1982 |pages=987–1007 |doi=10.2307/1912773 }}</ref> often the variance is related to the squares of the previous innovations. The ARCH model is appropriate when the error variance in a time series follows an autoregressive (AR) model.


ARCH models are commonly employed in modeling financial time series that exhibit time-varying volatility and volatility clustering, i.e. periods of swings interspersed with periods of relative calm.  


===Model specification===


To model a time series using an ARCH process, let <math> \varepsilon_t </math> denote the error terms (return residuals, with respect to a mean process), i.e. the series terms. These <math> \varepsilon_t </math> are split into a stochastic piece <math>z_t</math> and a time-dependent standard deviation <math>\sigma_t</math> characterizing the typical size of the terms so that


<math display = "block"> ~\epsilon_t=\sigma_t z_t ~</math>
<math display = "block"> ~\varepsilon_t=\sigma_t z_t ~</math>


The random variable <math>z_t</math> is a strong white noise process. The series <math> \sigma_t^2 </math> is modeled by


<math display = "block"> \sigma_t^2=\alpha_0+\alpha_1 \epsilon_{t-1}^2+\cdots+\alpha_q \epsilon_{t-q}^2 = \alpha_0 + \sum_{i=1}^q \alpha_{i} \epsilon_{t-i}^2 </math>,
<math display = "block"> \sigma_t^2=\alpha_0+\alpha_1 \varepsilon_{t-1}^2+\cdots+\alpha_q \varepsilon_{t-q}^2 = \alpha_0 + \sum_{i=1}^q \alpha_{i} \varepsilon_{t-i}^2 </math>,
where <math> ~\alpha_0>0~ </math> and <math> \alpha_i\ge 0,~i>0</math>.
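
A brief simulation sketch (assuming NumPy, with arbitrary parameter values satisfying the constraints above) illustrates the recursion <math>\sigma_t^2=\alpha_0+\alpha_1 \varepsilon_{t-1}^2</math> and the volatility clustering it produces: the series itself is (nearly) serially uncorrelated, but its squares are not.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(5)
alpha0, alpha1 = 0.2, 0.5          # illustrative ARCH(1) parameters
T = 20_000

eps = np.zeros(T)
sigma2 = np.zeros(T)
sigma2[0] = alpha0 / (1 - alpha1)               # start at the unconditional variance
for t in range(1, T):
    sigma2[t] = alpha0 + alpha1 * eps[t - 1] ** 2
    eps[t] = np.sqrt(sigma2[t]) * rng.normal()  # eps_t = sigma_t * z_t

def lag1_corr(x):
    return np.corrcoef(x[:-1], x[1:])[0, 1]

print("lag-1 corr of eps  :", round(lag1_corr(eps), 3))       # close to 0
print("lag-1 corr of eps^2:", round(lag1_corr(eps ** 2), 3))  # clearly positive
</syntaxhighlight>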


An ARCH(<math>q</math>) model can be estimated using [[guide:F7d7868547#Ordinary Least Squares (OLS)|ordinary least squares]]. A method for testing whether the residuals <math> \varepsilon_t </math> exhibit time-varying heteroskedasticity using the [[wikipedia:Lagrange multiplier test|Lagrange multiplier test]] was proposed by [[wikipedia:Robert F. Engle|Engle]] (1982). This procedure is as follows:


<proc label="Heteroskedasticity Test">
<proc label="Heteroskedasticity Test">
# Estimate the best fitting autoregressive model AR(<math>q</math>) <math display = "block"> y_t = a_0 + a_1 y_{t-1} + \cdots + a_q y_{t-q} + \varepsilon_t = a_0 + \sum_{i=1}^q a_i y_{t-i} + \varepsilon_t. </math>
# Obtain the squares of the error <math> \hat \varepsilon^2 </math> and regress them on a constant and <math>q</math> lagged values:
#: <math display = "block"> \hat \epsilon_t^2 = \hat \alpha_0 + \sum_{i=1}^{q} \hat \alpha_i \hat \epsilon_{t-i}^2</math> where <math>q</math> is the length of ARCH lags.
#: <math display = "block"> \hat \varepsilon_t^2 = \hat \alpha_0 + \sum_{i=1}^{q} \hat \alpha_i \hat \varepsilon_{t-i}^2</math> where <math>q</math> is the length of ARCH lags.
#The null hypothesis is that, in the absence of ARCH components, we have <math> \alpha_i = 0 </math> for all <math> i = 1, \cdots, q </math>. The alternative hypothesis is that, in the presence of ARCH components, at least one of the estimated <math> \alpha_i </math> coefficients must be significant. In a sample of <math>T</math> residuals under the null hypothesis of no ARCH errors, the test statistic <math>T'R^2</math> follows a <math> \chi^2 </math> distribution with <math>q</math> degrees of freedom, where <math> T' </math> is the number of equations in the model which fits the residuals vs the lags (i.e. <math> T'=T-q </math>). If <math>T'R^2</math> is greater than the Chi-square table value, we ''reject'' the null hypothesis and conclude there is an ARCH effect. If <math>T'R^2</math> is smaller than the Chi-square table value, we do not reject the null hypothesis.
</proc>
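
A hand-rolled sketch of this procedure (assuming NumPy and SciPy; the AR order, lag length and simulated data are illustrative, not a reference implementation): fit the AR model by least squares, regress the squared residuals on a constant and their own lags, and compare <math>T'R^2</math> with the <math>\chi^2_q</math> distribution.

<syntaxhighlight lang="python">
import numpy as np
from scipy.stats import chi2

def arch_lm_test(y, p=1, q=1):
    """Engle-style LM test: returns (T'R^2, p-value) for the null of no ARCH effects."""
    # Step 1: fit AR(p) by ordinary least squares and keep the residuals.
    X = np.column_stack([np.ones(len(y) - p)] + [y[p - i:len(y) - i] for i in range(1, p + 1)])
    coef = np.linalg.lstsq(X, y[p:], rcond=None)[0]
    resid = y[p:] - X @ coef

    # Step 2: regress squared residuals on a constant and q of their own lags.
    e2 = resid ** 2
    Xq = np.column_stack([np.ones(len(e2) - q)] + [e2[q - i:len(e2) - i] for i in range(1, q + 1)])
    target = e2[q:]
    gamma = np.linalg.lstsq(Xq, target, rcond=None)[0]
    fitted = Xq @ gamma
    r2 = 1.0 - np.sum((target - fitted) ** 2) / np.sum((target - target.mean()) ** 2)

    # Step 3: under the null of no ARCH, T'R^2 follows chi^2 with q degrees of freedom.
    stat = len(target) * r2
    return stat, chi2.sf(stat, df=q)

rng = np.random.default_rng(6)
y = rng.normal(size=2_000)           # homoskedastic noise: expect a large p-value
print(arch_lm_test(y, p=1, q=1))
</syntaxhighlight>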


==References==
{{Reflist | refs = <ref name=Burg>Burg, J. P. (1968). "A new analysis technique for time series data". In ''Modern Spectrum Analysis'' (Edited by D. G. Childers), NATO Advanced Study Institute of Signal Processing with emphasis on Underwater Acoustics. IEEE Press, New York.</ref><ref name=Brockwell>{{cite journal
  |first1=Peter J. 
  |last1=Brockwell 
  |archive-url=https://web.archive.org/web/20121021015413/http://www3.stat.sinica.edu.tw/statistica/oldpdf/A15n112.pdf 
  |archive-date=2012-10-21 
}}</ref>}}
==General References==
 
*{{cite arXiv |last1=Dama |first1=Fatoumata |last2=Sinoquet |first2=Christine  |date=2021 |title= Time Series Analysis and Modeling to Forecast: a Survey|eprint=2104.00164v2 |class=cs.LG}}


==Wikipedia References==
*{{cite web |url = https://en.wikipedia.org/w/index.php?title=Autoregressive_model&oldid=1100507226|title=  Autoregressive model | author = Wikipedia contributors |website= Wikipedia |publisher= Wikipedia |access-date = 17 August 2022 }}
*{{cite web |url = https://en.wikipedia.org/w/index.php?title=Autoregressive_conditional_heteroskedasticity&oldid=1057683014|title= Autoregressive conditional heteroskedasticity | author = Wikipedia contributors |website= Wikipedia |publisher= Wikipedia |access-date = 17 August 2022 }}

Latest revision as of 20:47, 16 April 2024

In statistics, econometrics and signal processing, an autoregressive (AR) model is a representation of a type of random process; as such, it is used to describe certain time-varying processes in nature, economics, etc. The autoregressive model specifies that the output variable depends linearly on its own previous values and on a stochastic term (an imperfectly predictable term); thus the model is in the form of a stochastic difference equation (or recurrence relation which should not be confused with differential equation). Together with the moving-average (MA) model, it is a special case and key component of the more general autoregressive–moving-average (ARMA) and autoregressive integrated moving average (ARIMA) models of time series, which have a more complicated stochastic structure; it is also a special case of the vector autoregressive model (VAR), which consists of a system of more than one interlocking stochastic difference equation in more than one evolving random variable.

Contrary to the moving-average (MA) model, the autoregressive model is not always stationary as it may contain a unit root.

Definition

The notation AR([math]p[/math]) indicates an autoregressive model of order [math]p[/math]. The AR([math]p[/math]) model is defined as

[[math]] Y_t = c + \sum_{i=1}^p \beta_i Y_{t-i}+ \varepsilon_t \,[[/math]]

where [math]\beta_1, \ldots, \beta_p[/math] are the parameters of the model, [math]c[/math] is a constant, and [math]\varepsilon_t[/math] is white noise. This can be equivalently written using the backshift operator [math]B[/math] as

[[math]] Y_t = c + \sum_{i=1}^p \beta_i B^i Y_t + \varepsilon_t [[/math]]

so that, moving the summation term to the left side and using polynomial notation, we have

[[math]]\phi [B]Y_t= c + \varepsilon_t \, .[[/math]]

Some parameter constraints are necessary for the model to remain wide-sense stationary. For example, processes in the AR(1) model with [math]|\beta_1 | \geq 1[/math] are not stationary. More generally, for an AR([math]p[/math]) model to be wide-sense stationary, the roots of the polynomial

[[math]]\Phi(z):=\textstyle 1 - \sum_{i=1}^p \beta_i z^{i}[[/math]]

must lie outside the unit circle, i.e., each (complex) root [math]z_i[/math] must satisfy [math]|z_i |\gt1[/math] (see pages 89,92 [1]).

Intertemporal effect of shocks

In an AR process, a one-time shock affects values of the evolving variable infinitely far into the future. For example, consider the AR(1) model [math] Y_t = c + \beta_1 Y_{t-1} + \varepsilon_t[/math]. A non-zero value for [math]\varepsilon_t[/math] at say time [math]t=1[/math] affects [math]Y_1[/math] by the amount [math]\varepsilon_1[/math]. Then by the AR equation for [math]Y_2[/math] in terms of [math]Y_1[/math], this affects [math]Y_2[/math] by the amount [math]\beta_1 \varepsilon_1[/math]. Then by the AR equation for [math]Y_3[/math] in terms of [math]Y_2[/math], this affects [math]Y_3[/math] by the amount [math]\beta_1^2 \varepsilon_1[/math]. Continuing this process shows that the effect of [math]\varepsilon_1[/math] never ends, although if the process is stationary then the effect diminishes toward zero in the limit.

Because each shock affects [math]Y[/math] values infinitely far into the future from when they occur, any given value [math]Y[/math]t is affected by shocks occurring infinitely far into the past. This can also be seen by rewriting the autoregression

[[math]]\phi (B)Y_t= \varepsilon_t \,[[/math]]

(where the constant term has been suppressed by assuming that the variable has been measured as deviations from its mean) as

[[math]]Y_t= \frac{1}{\phi (B)}\varepsilon_t \, .[[/math]]

When the polynomial division on the right side is carried out, the polynomial in the backshift operator applied to [math]\varepsilon_t[/math] has an infinite order—that is, an infinite number of lagged values of [math]\varepsilon_t[/math] appear on the right side of the equation.

Graphs of AR([math]p[/math]) processes

The simplest AR process is AR(0), which has no dependence between the terms. Only the error/innovation/noise term contributes to the output of the process, so in the figure, AR(0) corresponds to white noise.

For an AR(1) process with a positive [math]\beta[/math], only the previous term in the process and the noise term contribute to the output. If [math]\beta[/math] is close to 0, then the process still looks like white noise, but as [math]\beta[/math] approaches 1, the output gets a larger contribution from the previous term relative to the noise. This results in a "smoothing" or integration of the output.

For an AR(2) process, the previous two terms and the noise term contribute to the output. If both [math]\beta_1[/math] and [math]\beta_2[/math] are positive, the output will resemble a low pass filter, with the high frequency part of the noise decreased. If [math]\beta_1[/math] is positive while [math]\beta_2[/math] is negative, then the process favors changes in sign between terms of the process. The output oscillates. This can be likened to edge detection or detection of change in direction.

Example: An AR(1) process

An AR(1) process is given by:

[[math]]Y_t = c + \beta Y_{t-1}+\varepsilon_t\,[[/math]]

where [math]\varepsilon_t[/math] is a white noise process with zero mean and constant variance [math]\sigma_\varepsilon^2[/math]. (Note: the subscript on [math]\beta_1[/math] has been dropped.) The process is wide-sense stationary if [math]|\beta|\lt1[/math], since it is then obtained as the output of a stable filter whose input is white noise. (If [math]\beta=1[/math], the variance of [math]Y_t[/math] depends on the time [math]t[/math] and diverges to infinity as [math]t[/math] goes to infinity, so the series is not wide-sense stationary.) Assuming [math]|\beta|\lt1[/math], the mean [math]\operatorname{E} (Y_t)[/math] is identical for all values of [math]t[/math] by the very definition of wide-sense stationarity. If the mean is denoted by [math]\mu[/math], it follows from

[[math]]\operatorname{E} (Y_t)=\operatorname{E} (c)+\beta\operatorname{E} (Y_{t-1})+\operatorname{E}(\varepsilon_t), [[/math]]

that

[[math]] \mu=c+\beta\mu+0,[[/math]]

and hence

[[math]]\mu=\frac{c}{1-\beta}.[[/math]]

In particular, if [math]c = 0[/math], then the mean is 0.

The variance is

[[math]]\textrm{var}(Y_t)=\operatorname{E}(Y_t^2)-\mu^2=\frac{\sigma_\varepsilon^2}{1-\beta^2},[[/math]]

where [math]\sigma_\varepsilon[/math] is the standard deviation of [math]\varepsilon_t[/math]. This can be shown by noting that

[[math]]\textrm{var}(Y_t) = \beta^2\textrm{var}(Y_{t-1}) + \sigma_\varepsilon^2,[[/math]]

and then noting that [math]\sigma_\varepsilon^2/(1-\beta^2)[/math] is the unique stable fixed point of this recursion.

The autocovariance is given by

[[math]]B_n=\operatorname{E}(Y_{t+n}Y_t)-\mu^2=\frac{\sigma_\varepsilon^2}{1-\beta^2}\,\,\beta^{|n|}.[[/math]]

It can be seen that the autocovariance function decays with a decay time of [math]\tau=-1/\ln(\beta)[/math] [to see this, write [math]B_n=K\beta^{|n|}[/math] where [math]K[/math] is independent of [math]n[/math]. Then note that [math]\beta^{|n|}=e^{|n|\ln\beta}[/math] and match this to the exponential decay law [math]e^{-n/\tau}[/math]].

An alternative expression for [math]Y_t[/math] can be derived by first substituting [math]c+\beta Y_{t-2}+\varepsilon_{t-1}[/math] for [math]Y_{t-1}[/math] in the defining equation. Continuing this process [math]N[/math] times yields

[[math]]Y_t=c\sum_{k=0}^{N-1}\beta^k+\beta^NY_{t-N}+\sum_{k=0}^{N-1}\beta^k\varepsilon_{t-k}.[[/math]]

For [math]N[/math] approaching infinity, [math]\beta^N[/math] will approach zero and:

[[math]]Y_t=\frac{c}{1-\beta}+\sum_{k=0}^\infty\beta^k\varepsilon_{t-k}.[[/math]]

It is seen that [math]Y_t[/math] is white noise convolved with the [math]\beta^k[/math] kernel plus the constant mean. If the white noise [math]\varepsilon_t[/math] is a Gaussian process then [math]Y_t[/math] is also a Gaussian process. In other cases, the central limit theorem indicates that [math]Y_t[/math] will be approximately normally distributed when [math]\beta[/math] is close to one.
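
The closed-form mean, variance and autocovariance derived above can be checked against a long simulated path. The sketch below uses illustrative values [math]c = 1[/math], [math]\beta = 0.6[/math] and [math]\sigma_\varepsilon = 1[/math] (none taken from the text); the sample statistics should approach [math]c/(1-\beta)[/math], [math]\sigma_\varepsilon^2/(1-\beta^2)[/math] and [math]\frac{\sigma_\varepsilon^2}{1-\beta^2}\beta^{|n|}[/math] as the sample grows.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)
c, beta, sigma = 1.0, 0.6, 1.0      # illustrative parameter values
T = 100_000

y = np.zeros(T)
for t in range(1, T):
    y[t] = c + beta * y[t - 1] + rng.normal(0.0, sigma)

y = y[1000:]                        # drop burn-in so the path is near-stationary

mean_theory = c / (1 - beta)
var_theory = sigma**2 / (1 - beta**2)
print(y.mean(), mean_theory)        # sample vs theoretical mean
print(y.var(), var_theory)          # sample vs theoretical variance

n = 3                               # lag for the autocovariance check
cov_sample = np.mean((y[n:] - y.mean()) * (y[:-n] - y.mean()))
print(cov_sample, var_theory * beta**n)
</syntaxhighlight>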

For [math]c = \varepsilon_t = 0[/math], the process [math]Y_t = \beta Y_{t-1}[/math] is a geometric progression (exponential growth or decay). In this case, the solution can be found analytically: [math]Y_t = a \beta^t[/math], where [math]a[/math] is an unknown constant determined by the initial condition.

Calculation of the AR parameters

There are many ways to estimate the coefficients, such as the ordinary least squares procedure or method of moments (through Yule–Walker equations).

The AR([math]p[/math]) model is given by the equation

[[math]] Y_t = \sum_{i=1}^p \beta_i Y_{t-i}+ \varepsilon_t.\,[[/math]]

It is based on parameters [math]\beta_i[/math] where [math]i = 1, \ldots,p[/math]. There is a direct correspondence between these parameters and the covariance function of the process, and this correspondence can be inverted to determine the parameters from the autocorrelation function (which is itself obtained from the covariances). This is done using the Yule–Walker equations.

Yule–Walker equations

The Yule–Walker equations, named for Udny Yule and Gilbert Walker,[2][3] are the following set of equations.[4]

[[math]]\gamma_m = \sum_{k=1}^p \beta_k \gamma_{m-k} + \sigma_\varepsilon^2\delta_{m,0},[[/math]]

where [math]m=0,\ldots,p[/math], yielding [math]p+1[/math] equations. Here [math]\gamma_m[/math] is the autocovariance function of [math]Y_t[/math], [math]\sigma_\varepsilon[/math] is the standard deviation of the input noise process, and [math]\delta_{m,0}[/math] is the Kronecker delta function.

Because the last part of an individual equation is non-zero only if [math]m=0[/math], the set of equations can be solved by representing the equations for [math]m\gt0[/math] in matrix form, thus getting the equation

[[math]]\begin{bmatrix} \gamma_1 \\ \gamma_2 \\ \gamma_3 \\ \vdots \\ \gamma_p \\ \end{bmatrix} = \begin{bmatrix} \gamma_0 & \gamma_{-1} & \gamma_{-2} & \cdots \\ \gamma_1 & \gamma_0 & \gamma_{-1} & \cdots \\ \gamma_2 & \gamma_1 & \gamma_0 & \cdots \\ \vdots & \vdots & \vdots & \ddots \\ \gamma_{p-1} & \gamma_{p-2} & \gamma_{p-3} & \cdots \\ \end{bmatrix} \begin{bmatrix} \beta_{1} \\ \beta_{2} \\ \beta_{3} \\ \vdots \\ \beta_{p} \\ \end{bmatrix} [[/math]]

which can be solved for all [math]\{\beta_m; m=1,2, \dots ,p\}.[/math] The remaining equation for [math]m = 0[/math] is

[[math]]\gamma_0 = \sum_{k=1}^p \beta_k \gamma_{-k} + \sigma_\varepsilon^2 ,[[/math]]

which, once [math]\{\beta_m ; m=1,2, \dots ,p \}[/math] are known, can be solved for [math]\sigma_\varepsilon^2 .[/math]

An alternative formulation is in terms of the autocorrelation function. The AR parameters are determined by the first [math]p+1[/math] elements [math]\rho(\tau)[/math] of the autocorrelation function. The full autocorrelation function can then be derived by recursively calculating [5]

[[math]]\rho(\tau) = \sum_{k=1}^p \beta_k \rho(k-\tau).[[/math]]

Examples for some low-order AR([math]p[/math]) processes:

  • [math]p=1[/math]: [math]\gamma_1 = \beta_1 \gamma_0[/math].
  • [math]p=2[/math]: The Yule–Walker equations for an AR(2) process are
    [[math]]\begin{align*}\gamma_1 &= \beta_1 \gamma_0 + \beta_2 \gamma_{-1} \\ \gamma_2 &= \beta_1 \gamma_1 + \beta_2 \gamma_0\end{align*}[[/math]]
    Recall that [math]\gamma_{-k} = \gamma_k[/math]. The first equation yields [math]\rho_1 = \gamma_1 / \gamma_0 = \frac{\beta_1}{1-\beta_2}[/math], and the recursion formula then yields [math]\rho_2 = \gamma_2 / \gamma_0 = \frac{\beta_1^2 - \beta_2^2 + \beta_2}{1 - \beta_2}[/math].
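
Numerically, the equations for [math]m \gt 0[/math] in matrix form can be solved directly once the autocovariances are known or estimated. The following sketch (a minimal implementation, applied to an illustrative simulated AR(2) series) builds the symmetric Toeplitz matrix of sample autocovariances, solves for the coefficients, and recovers [math]\sigma_\varepsilon^2[/math] from the remaining [math]m = 0[/math] equation.

<syntaxhighlight lang="python">
import numpy as np

def yule_walker(y, p):
    """Solve the Yule-Walker equations with sample autocovariances for an AR(p) model."""
    y = np.asarray(y, dtype=float) - np.mean(y)
    n = len(y)
    # Sample autocovariances gamma_0, ..., gamma_p
    gamma = np.array([np.sum(y[k:] * y[:n - k]) / n for k in range(p + 1)])
    # Symmetric Toeplitz matrix with entries gamma_{|i-j|}
    R = np.array([[gamma[abs(i - j)] for j in range(p)] for i in range(p)])
    beta = np.linalg.solve(R, gamma[1:])          # equations for m = 1, ..., p
    sigma2 = gamma[0] - beta @ gamma[1:]          # remaining m = 0 equation
    return beta, sigma2

# Illustrative check on a simulated AR(2) series with beta = (0.6, -0.3).
rng = np.random.default_rng(2)
y = np.zeros(5000)
for t in range(2, len(y)):
    y[t] = 0.6 * y[t - 1] - 0.3 * y[t - 2] + rng.normal()
print(yule_walker(y, 2))
</syntaxhighlight>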

Estimation of AR parameters

The above equations (the Yule–Walker equations) provide several routes to estimating the parameters of an AR([math]p[/math]) model, by replacing the theoretical covariances with estimated values.[6] Some of these variants can be described as follows:

  • Estimation of autocovariances or autocorrelations. Here each of these terms is estimated separately, using conventional estimates. There are different ways of doing this and the choice between these affects the properties of the estimation scheme. For example, negative estimates of the variance can be produced by some choices.
  • Formulation as a least squares regression problem in which an ordinary least squares prediction problem is constructed, basing prediction of values of [math]Y_t[/math] on the [math]p[/math] previous values of the same series. This can be thought of as a forward-prediction scheme (a code sketch of this scheme is given below). The normal equations for this problem can be seen to correspond to an approximation of the matrix form of the Yule–Walker equations in which each appearance of an autocovariance of the same lag is replaced by a slightly different estimate.
  • Formulation as an extended form of ordinary least squares prediction problem. Here two sets of prediction equations are combined into a single estimation scheme and a single set of normal equations. One set is the set of forward-prediction equations and the other is a corresponding set of backward prediction equations, relating to the backward representation of the AR model:

[[math]] Y_t = c + \sum_{i=1}^p \beta_i Y_{t+i}+ \varepsilon^*_t \,.[[/math]]

Here predicted values of [math]Y_t[/math] would be based on the [math]p[/math] future values of the same series. This way of estimating the AR parameters is due to Burg,[7] and is called the Burg method.[8]

Other possible approaches to estimation include maximum likelihood estimation. Two distinct variants of maximum likelihood are available: in one (broadly equivalent to the forward prediction least squares scheme) the likelihood function considered is that corresponding to the conditional distribution of later values in the series given the initial [math]p[/math] values in the series; in the second, the likelihood function considered is that corresponding to the unconditional joint distribution of all the values in the observed series. Substantial differences in the results of these approaches can occur if the observed series is short, or if the process is close to non-stationarity.
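
As a concrete illustration of the forward-prediction least-squares route described above, the sketch below stacks lagged values of the series into a regression design matrix and solves an ordinary least-squares problem with an intercept. The simulated series and its parameters are illustrative, not taken from the text, and a statistics package would normally be used instead.

<syntaxhighlight lang="python">
import numpy as np

def fit_ar_ols(y, p):
    """Fit an AR(p) model by ordinary least squares (forward prediction with intercept)."""
    y = np.asarray(y, dtype=float)
    # Design matrix: each row holds [1, y_{t-1}, ..., y_{t-p}] for t = p, ..., T-1.
    X = np.column_stack([np.ones(len(y) - p)] +
                        [y[p - i:len(y) - i] for i in range(1, p + 1)])
    target = y[p:]
    coef, *_ = np.linalg.lstsq(X, target, rcond=None)
    return coef[0], coef[1:]           # intercept estimate and AR coefficients

# Illustrative use on a simulated AR(2) series.
rng = np.random.default_rng(3)
y = np.zeros(3000)
for t in range(2, len(y)):
    y[t] = 0.5 + 0.6 * y[t - 1] - 0.3 * y[t - 2] + rng.normal()
c_hat, beta_hat = fit_ar_ols(y, 2)
print(c_hat, beta_hat)
</syntaxhighlight>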

Forecasting

Once the AR model parameters have been estimated using observations [math]\{Y_t\}_{t=1}^T[/math], one-step-ahead forecasting is performed as follows:

[[math]] \hat{Y}_{T+1} = \hat{\beta}_0 + \sum_{i=1}^p \hat{\beta}_i Y_{T+1-i}, [[/math]]

where [math]\{\hat{\beta}_0,\ldots,\hat{\beta}_p\}[/math] are the estimated autoregressive coefficients ([math]\hat{\beta}_0[/math] estimating the constant [math]c[/math]). When computing [math]h[/math]-step-ahead forecasts ([math]h \gt 1[/math]), previous forecasts are used as predictors. For instance, in an AR(2) process, the 3-step-ahead forecast is computed as

[[math]] \begin{aligned} \hat{Y}_{T+1} &=\hat{\beta}_0 + \hat{\beta}_1\hat{Y}_{T} + \hat{\beta}_2 \hat{Y}_{T-1} \\ \hat{Y}_{T+2} &=\hat{\beta}_0 + \hat{\beta}_1\hat{Y}_{T+1} + \hat{\beta}_2 \hat{Y}_{T} \\ \hat{Y}_{T+3} &= \hat{\beta}_0 + \hat{\beta}_1\hat{Y}_{T+2} + \hat{\beta}_2 \hat{Y}_{T+1}\\ \end{aligned} [[/math]]

where the unobserved terms [math]Y_{T+1}[/math] and [math]Y_{T+2}[/math] have been substituted with their predictions [math]\hat{Y}_{T+1}[/math] and [math]\hat{Y}_{T+2}[/math]. It can be shown that these substitutions increase the variance of forecast errors. Following the previous example, the forecast errors are

[[math]] \begin{aligned} Y_{T+1}-\hat{Y}_{T+1} &= \varepsilon_{T+1} \\ Y_{T+2}-\hat{Y}_{T+2} &= \varepsilon_{T+2} + \hat{\beta}_1(Y_{T+1}-\hat{Y}_{T+1}) = \varepsilon_{T+2} + \hat{\beta}_1 \varepsilon_{T+1} \\ Y_{T+3}-\hat{Y}_{T+3} &=\varepsilon_{T+3} + \hat{\beta}_1(Y_{T+2}-\hat{Y}_{T+2}) + \hat{\beta}_2(Y_{T+1}-\hat{Y}_{T+1}) \\ &= \varepsilon_{T+3} + \hat{\beta}_1 \varepsilon_{T+2} + (\hat{\beta}^2_1 + \hat{\beta}_2) \varepsilon_{T+1} \end{aligned} [[/math]]

Since the [math]\varepsilon_t[/math] are i.i.d. with mean [math]0[/math] and variance [math]\sigma^2_{\varepsilon}[/math], the forecast errors have mean zero and their variances equal

[[math]] \begin{aligned} \operatorname{Var}(Y_{T+1}-\hat{Y}_{T+1}) &=\sigma^2_{\varepsilon}, \\ \operatorname{Var}(Y_{T+2}-\hat{Y}_{T+2}) &=\sigma^2_{\varepsilon}(1 + \hat{\beta}^2_1), \\ \operatorname{Var}(Y_{T+3}-\hat{Y}_{T+3}) &= \sigma^2_{\varepsilon}(1 + \hat{\beta}^2_1 + (\hat{\beta}^2_1+\hat{\beta}_2)^2),\\ \end{aligned} [[/math]]
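
The forecasting recursion is easy to mechanise: observed values are used where available and earlier forecasts are substituted for unobserved ones. A minimal sketch for an AR(2) with illustrative (hypothetical) estimated coefficients follows, together with the forecast-error variances from the formulas above.

<syntaxhighlight lang="python">
import numpy as np

def forecast_ar2(y, b0, b1, b2, h):
    """h-step-ahead forecasts for an AR(2), substituting earlier forecasts for unobserved values."""
    hist = list(y)                       # observed series Y_1, ..., Y_T
    preds = []
    for _ in range(h):
        y_hat = b0 + b1 * hist[-1] + b2 * hist[-2]
        preds.append(y_hat)
        hist.append(y_hat)               # later forecasts feed on earlier forecasts
    return preds

# Illustrative estimated coefficients and innovation variance (not from the text).
b0, b1, b2, sigma2 = 0.5, 0.6, -0.3, 1.0
y_obs = [1.2, 0.9, 1.4, 1.1]             # hypothetical observed values

print(forecast_ar2(y_obs, b0, b1, b2, h=3))

# Forecast-error variances from the formulas above:
print(sigma2,
      sigma2 * (1 + b1**2),
      sigma2 * (1 + b1**2 + (b1**2 + b2)**2))
</syntaxhighlight>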

Evaluating the quality of forecasts

The predictive performance of the autoregressive model can be assessed as soon as the parameters have been estimated if cross-validation is used. In this approach, some of the initially available data is used for parameter estimation and some (observations from later in the data set) is held back for out-of-sample testing. Alternatively, after some time has passed since the parameter estimation was conducted, more data will have become available and predictive performance can be evaluated using the new data.

In either case, there are two aspects of predictive performance that can be evaluated: one-step-ahead and [math]n[/math]-step-ahead performance. For one-step-ahead performance, the estimated parameters are used in the autoregressive equation along with observed values of [math]Y[/math] for all periods prior to the one being predicted, and the output of the equation is the one-step-ahead forecast; this procedure is used to obtain forecasts for each of the out-of-sample observations. To evaluate the quality of [math]n[/math]-step-ahead forecasts, the forecasting procedure in the previous section is employed to obtain the predictions.

Given a set of predicted values and a corresponding set of actual values for [math]Y[/math] for various time periods, a common evaluation technique is to use the mean squared prediction error.

The question of how to interpret the measured forecasting accuracy arises—for example, what is a "high" (bad) or a "low" (good) value for the mean squared prediction error? There are two possible points of comparison. First, the forecasting accuracy of an alternative model, estimated under different modeling assumptions or different estimation techniques, can be used for comparison purposes. Second, the out-of-sample accuracy measure can be compared to the same measure computed for the in-sample data points (that were used for parameter estimation) for which enough prior data values are available (that is, dropping the first [math]p[/math] data points, for which [math]p[/math] prior data points are not available). Since the model was estimated specifically to fit the in-sample points as well as possible, it will usually be the case that the out-of-sample predictive performance will be poorer than the in-sample predictive performance. But if the predictive quality deteriorates out-of-sample by "not very much" (which is not precisely definable), then the forecaster may be satisfied with the performance.
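
A minimal sketch of the one-step-ahead out-of-sample evaluation described here, under the assumption that the model is an AR(1) fitted by ordinary least squares: the series is split into an estimation window and a holdout window, each forecast on the holdout uses the observed previous value, and the in-sample and out-of-sample mean squared prediction errors are compared. All numbers are illustrative.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(4)
y = np.zeros(1000)
for t in range(1, len(y)):
    y[t] = 0.3 + 0.7 * y[t - 1] + rng.normal()   # illustrative AR(1) data

split = 800
train = y[:split]                                 # estimation window

# Fit an AR(1) with intercept by ordinary least squares on the estimation window.
X = np.column_stack([np.ones(split - 1), train[:-1]])
c_hat, b_hat = np.linalg.lstsq(X, train[1:], rcond=None)[0]

# One-step-ahead forecasts on the holdout always use the observed previous value.
preds = c_hat + b_hat * y[split - 1:-1]
actuals = y[split:]
mse_out = np.mean((actuals - preds) ** 2)

# In-sample mean squared prediction error for comparison (dropping the first point).
mse_in = np.mean((train[1:] - (c_hat + b_hat * train[:-1])) ** 2)
print(mse_in, mse_out)
</syntaxhighlight>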

Autoregressive conditional heteroskedasticity

The autoregressive conditional heteroskedasticity (ARCH) model is a statistical model for time series data that describes the variance of the current error term or innovation as a function of the actual sizes of the previous time periods' error terms;[9] often the variance is related to the squares of the previous innovations. The ARCH model is appropriate when the error variance in a time series follows an autoregressive (AR) model.

ARCH models are commonly employed in modeling financial time series that exhibit time-varying volatility and volatility clustering, i.e. periods of swings interspersed with periods of relative calm.

Model specification

To model a time series using an ARCH process, let [math]\varepsilon_t[/math] denote the error terms (return residuals with respect to a mean process), i.e. the series terms. These [math]\varepsilon_t[/math] are split into a stochastic piece [math]z_t[/math] and a time-dependent standard deviation [math]\sigma_t[/math] characterizing the typical size of the terms, so that

[[math]] \varepsilon_t=\sigma_t z_t \, .[[/math]]

The random variable [math]z_t[/math] is a strong white noise process. The series [math] \sigma_t^2 [/math] is modeled by

[[math]] \sigma_t^2=\alpha_0+\alpha_1 \varepsilon_{t-1}^2+\cdots+\alpha_q \varepsilon_{t-q}^2 = \alpha_0 + \sum_{i=1}^q \alpha_{i} \varepsilon_{t-i}^2 [[/math]]

where [math]\alpha_0\gt0[/math] and [math]\alpha_i\ge 0[/math] for [math]i\gt0[/math].
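
The variance recursion can be simulated directly. The sketch below generates an ARCH(1) series with illustrative parameters (not taken from the text); because the conditional variance at each step depends on the previous squared innovation, quiet and turbulent stretches alternate, which is the volatility clustering mentioned above.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(5)
T = 2000
alpha0, alpha1 = 0.2, 0.6         # illustrative ARCH(1) parameters (alpha0 > 0, alpha1 >= 0)

eps = np.zeros(T)
sigma2 = np.zeros(T)
sigma2[0] = alpha0 / (1 - alpha1) # start from the unconditional variance

for t in range(1, T):
    sigma2[t] = alpha0 + alpha1 * eps[t - 1] ** 2   # conditional variance recursion
    eps[t] = np.sqrt(sigma2[t]) * rng.normal()      # eps_t = sigma_t * z_t with z_t ~ N(0, 1)

# Quiet and turbulent stretches alternate: compare variances over two sub-windows.
print(eps[:1000].var(), eps[1000:].var())
</syntaxhighlight>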

An ARCH([math]q[/math]) model can be estimated using ordinary least squares. A method for testing whether the residuals [math]\varepsilon_t[/math] exhibit time-varying heteroskedasticity using the Lagrange multiplier test was proposed by Engle (1982). The procedure is as follows (a code sketch of the test is given after the steps):

Heteroskedasticity Test
  1. Estimate the best-fitting autoregressive model AR([math]q[/math]):
    [[math]] y_t = a_0 + a_1 y_{t-1} + \cdots + a_q y_{t-q} + \varepsilon_t = a_0 + \sum_{i=1}^q a_i y_{t-i} + \varepsilon_t. [[/math]]
  2. Obtain the squares of the errors [math]\hat\varepsilon_t^2[/math] and regress them on a constant and [math]q[/math] lagged values:
    [[math]] \hat \varepsilon_t^2 = \hat \alpha_0 + \sum_{i=1}^{q} \hat \alpha_i \hat \varepsilon_{t-i}^2[[/math]]
    where [math]q[/math] is the number of ARCH lags.
  3. The null hypothesis is that, in the absence of ARCH components, [math]\alpha_i = 0[/math] for all [math]i = 1, \cdots, q[/math]. The alternative hypothesis is that, in the presence of ARCH components, at least one of the estimated [math]\alpha_i[/math] coefficients is significantly different from zero. In a sample of [math]T[/math] residuals, under the null hypothesis of no ARCH errors the test statistic [math]T'R^2[/math] follows a [math]\chi^2[/math] distribution with [math]q[/math] degrees of freedom, where [math]T'[/math] is the number of equations in the model which fits the residuals against the lags (i.e. [math]T'=T-q[/math]). If [math]T'R^2[/math] is greater than the chi-square critical value, we reject the null hypothesis and conclude there is an ARCH effect; otherwise we do not reject the null hypothesis.
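
A sketch of this test procedure in code, under the simplifying assumptions that the mean equation is an AR(1) with intercept and that all regressions are done with plain NumPy least squares; the function name and lag choices are hypothetical, and in practice a statistics package implementing the ARCH-LM test would be used.

<syntaxhighlight lang="python">
import numpy as np

def arch_lm_test(y, q, mean_lags=1):
    """Sketch of Engle's LM test: regress squared residuals on their own q lags and form T'R^2."""
    y = np.asarray(y, dtype=float)

    # Step 1: fit the mean equation (here an AR(mean_lags) with intercept) by least squares.
    p = mean_lags
    X = np.column_stack([np.ones(len(y) - p)] +
                        [y[p - i:len(y) - i] for i in range(1, p + 1)])
    coef = np.linalg.lstsq(X, y[p:], rcond=None)[0]
    resid = y[p:] - X @ coef

    # Step 2: regress squared residuals on a constant and q of their own lags.
    e2 = resid ** 2
    Z = np.column_stack([np.ones(len(e2) - q)] +
                        [e2[q - i:len(e2) - i] for i in range(1, q + 1)])
    target = e2[q:]
    gamma = np.linalg.lstsq(Z, target, rcond=None)[0]
    fitted = Z @ gamma
    r2 = 1 - np.sum((target - fitted) ** 2) / np.sum((target - target.mean()) ** 2)

    # Step 3: the statistic T' * R^2 is compared with a chi-square(q) critical value.
    return len(target) * r2

# Illustrative use on white noise (no ARCH effect expected).
rng = np.random.default_rng(6)
stat = arch_lm_test(rng.normal(size=2000), q=4)
print(stat)     # compare with, e.g., the 5% chi-square(4) critical value of about 9.49
</syntaxhighlight>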

References

  1. Shumway, Robert; Stoffer, David (2010). Time series analysis and its applications : with R examples (3rd ed.). Springer. ISBN 9781441921253.
  2. Yule, G. Udny (1927). "On a Method of Investigating Periodicities in Disturbed Series, with Special Reference to Wolfer's Sunspot Numbers", Philosophical Transactions of the Royal Society of London, Ser. A, Vol. 226, 267–298.
  3. Walker, Gilbert (1931) "On Periodicity in Series of Related Terms", Proceedings of the Royal Society of London, Ser. A, Vol. 131, 518–532.
  4. Theodoridis, Sergios (2015-04-10). "Chapter 1. Probability and Stochastic Processes". Machine Learning: A Bayesian and Optimization Perspective. Academic Press, 2015. pp. 9–51. ISBN 978-0-12-801522-3.
  5. Von Storch, H.; Zwiers, F. W. (2001). Statistical analysis in climate research. Cambridge University Press. ISBN 0-521-01230-9.
  6. Eshel, Gidon. "The Yule Walker Equations for the AR Coefficients" (PDF). stat.wharton.upenn.edu.
  7. Burg, J. P. (1968). "A new analysis technique for time series data". In Modern Spectrum Analysis (Edited by D. G. Childers), NATO Advanced Study Institute of Signal Processing with emphasis on Underwater Acoustics. IEEE Press, New York.
  8. "Modified Burg Algorithms for Multivariate Subset Autoregression" (2005). Statistica Sinica 15: 197–213. 
  9. Engle, Robert F. (1982). "Autoregressive Conditional Heteroskedasticity with Estimates of the Variance of United Kingdom Inflation". Econometrica 50 (4): 987–1007. doi:10.2307/1912773. 

General References

  • Fatoumata, Dama; Sinoquet, Christine (2021). "Time Series Analysis and Modeling to Forecast: a Survey". arXiv:2104.00164v2 [cs.LG].

Wikipedia References