autocorrelation · PDF transcript · 2017-10-25
AUTOCORRELATION
Phung Thanh Binh
OUTLINE
▪ Time series Gauss-Markov conditions
▪ The nature of autocorrelation
▪ Causes of autocorrelation
▪ Consequences of autocorrelation
▪ Detecting autocorrelation
▪ Remedial measures

Time series Gauss-Markov conditions
1) Linear: Yt = β1 + β2X2t + β3X3t + ut
2) Zero conditional mean of error: E[ut|Xjk] = 0
3) No perfect collinearity
=> (1)-(3): OLS estimators are unbiased
4) Homoscedasticity: Var(ut|Xjk) = σ²
   if violated => ARCH family models: Lecture 16 (for T.S. data)
5) No serial correlation: Cov(ut,us|Xjk) = 0
   if violated => this lecture (Lecture 13)
=> (1)-(5): OLS estimators are BLUE
▪ Terminological issue:
▪ Common practice: the terms autocorrelation and
serial correlation are the same.
▪ Some authors prefer to distinguish the two terms as
follows:
▪ Autocorrelation: Lag correlation of a given series with
itself, lagged by a number of time units.
▪ Serial correlation: Lag correlation between two different
series.
The nature of autocorrelation
[Figure: residual patterns illustrating positive autocorrelation, negative autocorrelation, and no autocorrelation]
The meaning of ρ: the error term ut at time t is a linear
combination of the current and past disturbances.
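What an autocorrelated disturbance looks like can be sketched numerically. A minimal pure-Python illustration (the function names are my own, not from the lecture), assuming the AR(1) scheme ut = ρut-1 + vt with vt drawn as Gaussian white noise:

```python
import random

def simulate_ar1(rho, n, seed=0):
    """Generate n disturbances following u_t = rho*u_{t-1} + v_t,
    where v_t is Gaussian white noise (illustrative helper)."""
    rng = random.Random(seed)
    u, series = 0.0, []
    for _ in range(n):
        u = rho * u + rng.gauss(0.0, 1.0)
        series.append(u)
    return series

def lag1_corr(x):
    """Sample correlation between x_t and x_{t-1}."""
    a, b = x[1:], x[:-1]
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((p - ma) * (q - mb) for p, q in zip(a, b))
    va = sum((p - ma) ** 2 for p in a)
    vb = sum((q - mb) ** 2 for q in b)
    return cov / (va * vb) ** 0.5

errors = simulate_ar1(rho=0.8, n=2000)
print(round(lag1_corr(errors), 2))  # close to 0.8 for large n
```

With ρ = 0 the same estimate hovers near zero, which is what the "no autocorrelation" panel of the figure conveys.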
▪ The possible strong correlation between
observation i and observation j could be due to:
▪ Inertia
▪ Cobweb phenomenon
▪ Lags
▪ Nonstationarity
▪ Specification bias: Excluded variables case
▪ Specification bias: Incorrect functional form
▪ etc … [see Gujarati (2009). Basic Econometrics]
The causes of autocorrelation
▪ Pure autocorrelation: present even in a correctly specified model (e.g., inertia, cobweb phenomenon, lags, nonstationarity).
▪ Impure autocorrelation: induced by specification bias (excluded variables, incorrect functional form).
The consequences of autocorrelation
▪ The estimated coefficients are still unbiased.
▪ The variance of the OLS estimators is no longer the smallest (OLS is no longer BLUE).
▪ The standard errors of the estimated coefficients become large,
so the usual t and F tests are unreliable.
Example: Table6_1.dta (in Econometrics by Example)
Detecting autocorrelation
▪ Graphical method
▪ Plot the values of the residuals, et, chronologically
▪ If a discernible pattern exists, autocorrelation is likely a
problem.
▪ Durbin-Watson test: Durbin-Watson’s d statistic
▪ Durbin’s h statistic
▪ Breusch-Godfrey (BG) test
[Figure: time plot of residuals and standardized residuals over 60 periods]
. predict s1, resid
. gen s1_100=100*s1
. label var s1_100 "Residuals"
. predict s2, rstandard
. twoway (line s1_100 time) (line s2 time)
Durbin-Watson d Statistic
Assumptions are:
1. The regression model includes an intercept term.
2. The regressors are fixed in repeated sampling.
3. The error term follows the first-order autoregressive
(AR(1)) scheme: ut = ρut-1 + vt
4. The regressors do not include the lagged value(s) of
the dependent variable, Yt.
5. No missing observations.

d = Σ(et − et-1)² / Σet², for n and K - 1 d.f.

Decision rule:

  Positive          Zone of      No                Zone of      Negative
  autocorrelation   indecision   autocorrelation   indecision   autocorrelation
|_________________|____________|_________________|____________|_________________|
0              d-lower      d-upper      2     4-d-upper   4-d-lower            4

▪ d near 0 or 4: autocorrelation is clearly evident
▪ d in a zone of indecision: ambiguous; cannot rule out autocorrelation
▪ d between d-upper and 4-d-upper: autocorrelation is not evident
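The d statistic is easy to compute directly from the residuals. A minimal Python sketch (illustrative, not the lecture's Stata workflow); recall the rule of thumb d ≈ 2(1 − ρ̂), so d near 2 suggests no first-order autocorrelation:

```python
def durbin_watson(e):
    """Durbin-Watson d = sum_{t=2..n}(e_t - e_{t-1})^2 / sum_t e_t^2.
    d near 0: positive autocorrelation; near 4: negative; near 2: none."""
    num = sum((e[t] - e[t - 1]) ** 2 for t in range(1, len(e)))
    den = sum(v * v for v in e)
    return num / den

print(durbin_watson([1.0, -1.0] * 20))  # 3.9 (alternating residuals push d toward 4)
```

A constant (perfectly positively correlated) residual series drives d to 0, the other extreme of the decision bands above.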
Breusch-Godfrey (BG) test
▪ This test allows for:
(1) Lagged values of the dependent variable to be included as
regressors
(2) Higher-order autoregressive schemes, such as AR(2), AR(3),
etc.
(3) Moving average terms of the error term, such as ut-1, ut-2, etc.
▪ The error term in the main equation follows the AR(p)
autoregressive structure:
ut = ρ1ut-1 + ρ2ut-2 + ... + ρput-p + vt
▪ The null hypothesis of no serial correlation is:
H0: ρ1 = ρ2 = ... = ρp = 0
The BG test involves the following steps:
▪ Regress et, the residuals from our main regression, on the
regressors in the model and the p autoregressive terms given in
the equation on the previous slide, and obtain R2 from this
auxiliary regression.
▪ If the sample size is large, Breusch and Godfrey have shown that: (n − p)R² ~ χ²(p)
▪ That is, in large samples, (n − p) times R² follows the chi-square distribution with
p degrees of freedom.
▪ Rejection of the null hypothesis implies evidence of
autocorrelation.
▪ As an alternative, we can use the F value obtained from the
auxiliary regression.
▪ This F value has p and (n − k − p) degrees of freedom in the numerator and
denominator, respectively, where k represents the number of parameters in the
auxiliary regression (including the intercept term).
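The mechanics of the statistic for the simplest case, p = 1, can be sketched in a few lines of Python. This is a deliberately simplified illustration (the function name is my own): a full BG auxiliary regression also includes the original regressors, which this sketch omits.

```python
def bg_statistic_p1(e):
    """Simplified Breusch-Godfrey statistic for p = 1: regress the
    residual e_t on e_{t-1} (with intercept), take R^2 from that
    auxiliary regression, and return (n - p) * R^2.
    Compare with the chi-square(1) 5% critical value, 3.84.
    NOTE: the full test also puts the original regressors in the
    auxiliary regression; this sketch uses only the lagged residual."""
    y, x = e[1:], e[:-1]
    n = len(y)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    r2 = sxy * sxy / (sxx * syy)  # R^2 of the simple auxiliary regression
    return (len(e) - 1) * r2
```

A strongly alternating residual series gives R² near 1 and a large statistic, rejecting the null of no serial correlation.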
Remedial Measures
▪ First-Difference Transformation
▪ If autocorrelation is of AR(1) type, we have: ut = ρut-1 + vt
▪ Assume ρ = 1 and run the first-difference model (taking the first difference
of the dependent variable and all regressors)
▪ Generalized Transformation
▪ Estimate the value of ρ through a regression of the residuals on the lagged
residuals, and use that value to run the transformed regression
▪ Newey-West Method
▪ Generates HAC (heteroscedasticity- and autocorrelation-
consistent) standard errors
▪ Model Evaluation
First-Difference Method
▪ This outcome could be due to the wrong value of ρ (ρ
= 1) chosen for the transformation.
▪ Notes:
▪ There is no intercept in the first-difference model.
▪ If there is an intercept term, what does it stand for?
▪ Rule of thumb:
▪ Use the first-difference form whenever d < R² (Maddala).
▪ Use the first-difference form when ut is nonstationary (put
differently, when Yt and Xt are not cointegrated).
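Taking first differences is mechanical. A minimal Python sketch (an illustrative helper, not from the lecture); under ρ = 1 the transformed model ΔYt = β2ΔX2t + Δut has no intercept, since β1 cancels in the subtraction:

```python
def first_difference(series):
    """Delta(Y_t) = Y_t - Y_{t-1}; apply the same transformation to the
    dependent variable and every regressor, losing the first observation."""
    return [series[t] - series[t - 1] for t in range(1, len(series))]

print(first_difference([1, 3, 6, 10]))  # [2, 3, 4]
```

Note the transformed sample has n − 1 observations, one fewer than the original.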
Feasible Generalized Least Squares (FGLS)
▪ FGLS: Prais-Winsten Transformation
▪ FGLS: Cochrane-Orcutt Transformation (one of the iterative methods of estimating ρ)

How does the Cochrane-Orcutt procedure work?
▪ The Prais-Winsten procedure is similar, but it transforms the
first observation differently.
The Newey-West Method
▪ This method still uses OLS, but corrects the
standard errors for autocorrelation.
▪ This is an extension of White's heteroscedasticity-consistent standard errors.
▪ The corrected standard errors are known as HAC
(heteroscedasticity- and autocorrelation-consistent)
standard errors or simply Newey-West standard
errors.
▪ This is strictly valid in large samples.