2. A time series is said to be stationary if there is no systematic
change in the mean (no trend) over time, no systematic change in the
variance, and periodic (seasonal) variations have been removed.
3. Types of stationarity
Strict stationarity: a strictly stationary series satisfies
the mathematical definition of a stationary process.
For a strictly stationary series, the mean, variance and
covariance are not functions of time. The aim is to
convert a non-stationary series into a strictly
stationary one before making predictions.
Trend stationarity: a series that has no unit root but
exhibits a deterministic trend is referred to as trend
stationary. Once the trend is removed, the resulting
series is strictly stationary; such a series is
stationary around its trend.
Difference stationarity: a time series that can be made
strictly stationary by differencing is called difference
stationary. The ADF test is a test of difference
stationarity.
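As a hedged illustration of trend stationarity, the sketch below fits a linear trend by ordinary least squares and subtracts it, leaving a stationary remainder. The function names and the simulated series are our own, not from the notes:

```python
import random

def fit_linear_trend(y):
    """OLS fit of y on time t = 0, 1, ..., n-1. Returns (intercept, slope)."""
    n = len(y)
    t = list(range(n))
    t_mean = sum(t) / n
    y_mean = sum(y) / n
    slope = (sum((ti - t_mean) * (yi - y_mean) for ti, yi in zip(t, y))
             / sum((ti - t_mean) ** 2 for ti in t))
    return y_mean - slope * t_mean, slope

def detrend(y):
    """Subtract the fitted linear trend; the residual is the stationary part."""
    a, b = fit_linear_trend(y)
    return [yi - (a + b * ti) for ti, yi in enumerate(y)]

# Illustrative trend-stationary series: linear trend plus Gaussian noise.
rng = random.Random(0)
series = [5.0 + 0.5 * t + rng.gauss(0, 1) for t in range(200)]
residual = detrend(series)
```

Because the fit includes an intercept, the residuals sum to zero by construction, and their fluctuations no longer grow with time.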
5. Autocorrelation function – ACF (ρ_k) at lag k is defined as
ρ_k = γ_k / γ_0 = (covariance at lag k) / (variance)
It lies between −1 and +1.
If we plot ρ_k against k, the graph is known as the population correlogram.
Partial autocorrelation function – PACF: the correlation between Y_t and
Y_{t−k} after the effects of the intermediate lags have been removed.
In practice ρ_k is estimated by the sample ACF ρ̂_k = γ̂_k / γ̂_0, where
γ̂_k = Σ (Y_t − Ȳ)(Y_{t+k} − Ȳ) / n
γ̂_0 = Σ (Y_t − Ȳ)² / n
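The sample formulas above translate directly into code. A minimal pure-Python sketch (the function name `sample_acf` is ours, not from the notes):

```python
def sample_acf(y, max_lag):
    """Sample autocorrelation rho_k = gamma_k / gamma_0 for k = 0..max_lag,
    where gamma_k = sum((Y_t - Ybar)(Y_{t+k} - Ybar)) / n."""
    n = len(y)
    y_bar = sum(y) / n

    def gamma(k):
        return sum((y[t] - y_bar) * (y[t + k] - y_bar)
                   for t in range(n - k)) / n

    g0 = gamma(0)
    return [gamma(k) / g0 for k in range(max_lag + 1)]

# A short series repeating with period 4; its correlogram is positive at
# lag 4 and negative at lag 2, as the seasonal pattern suggests.
acf = sample_acf([2.0, 4.0, 6.0, 4.0, 2.0, 4.0, 6.0, 4.0], max_lag=4)
# acf[0] is 1 by construction, and every value lies in [-1, +1].
```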
6. Figure 1. Correlogram of white noise error term u (a stationary time series)
AC – autocorrelation; PAC – partial autocorrelation; Q-stat – Q statistic; Prob – probability
Figure 2. Correlogram of a random time series (a nonstationary time series)
7. Figure 3. Correlogram of US GDP, 1970-I to 1991-IV
The choice of lag length: a rule of thumb is to compute the ACF up
to one-quarter to one-third of the length of the time series.
Since we have 88 quarterly observations of US GDP, by this rule
lags of 22 to 29 quarters will do.
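The rule of thumb above is simple integer arithmetic; a small illustrative helper (the name `lag_range` is our own):

```python
def lag_range(n_obs):
    """Rule-of-thumb ACF lag length: between one-quarter and one-third
    of the number of observations in the series."""
    return n_obs // 4, n_obs // 3

# 88 quarterly observations of US GDP -> lags of 22 to 29 quarters.
lo, hi = lag_range(88)
```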
8. A unit root (also called a unit root process or a difference stationary process)
is a stochastic trend in a time series, sometimes called a "random walk with
drift". If a time series has a unit root, it shows a systematic pattern that is
unpredictable.
Tests to identify stationarity include:
The Dickey–Fuller Test, which is based on linear regression. Serial correlation can be an
issue, in which case the Augmented Dickey–Fuller (ADF) test is used instead. (The related
Dickey–Pantula test checks for the presence of more than one unit root.)
The Elliott–Rothenberg–Stock Test, which has two subtypes:
The P-test takes the error term’s serial correlation into account,
The DF-GLS test can be applied to detrended data without intercept.
The Schmidt–Phillips Test includes the coefficients of the deterministic variables in the null and
alternative hypotheses. Subtypes are the rho-test and the tau-test.
The Phillips–Perron (PP) Test is a modification of the Dickey Fuller test, and corrects for
autocorrelation and heteroscedasticity in the errors.
The Zivot-Andrews test allows a break at an unknown point in the intercept or linear trend.
9. 1) DICKEY–FULLER TEST 1 (NO CONSTANT AND NO TREND)
∆y_t = (ρ − 1) y_{t−1} + ν_t = γ y_{t−1} + ν_t
where γ = ρ − 1 and ∆y_t = y_t − y_{t−1}. Then the hypotheses can be written in terms of either ρ or γ:
H0: ρ = 1 ; H0: γ = 0
H1: ρ < 1 ; H1: γ < 0
The null hypothesis is that the series is nonstationary: if we do not reject the null, we conclude
that the process is nonstationary; if we reject the null hypothesis that γ = 0, then we conclude
that the series is stationary.
2) DICKEY–FULLER TEST 2 (WITH CONSTANT BUT NO TREND)
∆y_t = α + γ y_{t−1} + ν_t
3) DICKEY–FULLER TEST 3 (WITH CONSTANT AND WITH TREND)
∆y_t = α + γ y_{t−1} + λt + ν_t
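A minimal sketch of the first Dickey–Fuller regression (no constant, no trend): regress ∆y_t on y_{t−1} by OLS and look at the sign of γ̂. The helper name and simulated series are ours; a real test would compare the t-statistic of γ̂ against Dickey–Fuller critical values, not just inspect its sign:

```python
import random

def df_gamma(y):
    """OLS estimate of gamma in  Delta y_t = gamma * y_{t-1} + v_t
    (Dickey-Fuller case 1: no constant, no trend).
    gamma_hat = sum(y_{t-1} * Delta y_t) / sum(y_{t-1}^2)."""
    dy = [y[t] - y[t - 1] for t in range(1, len(y))]
    y_lag = y[:-1]
    return (sum(l * d for l, d in zip(y_lag, dy))
            / sum(l * l for l in y_lag))

# Stationary AR(1) with rho = 0.5, so the true gamma = rho - 1 = -0.5;
# the estimate should come out clearly negative.
rng = random.Random(1)
y = [0.0]
for _ in range(999):
    y.append(0.5 * y[-1] + rng.gauss(0, 1))
gamma_hat = df_gamma(y)
```

For a random walk (ρ = 1) the same regression would give γ̂ close to zero, which is why failing to reject γ = 0 points to nonstationarity.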
10. Critical Values for the Dickey–Fuller Test
Augmented Dickey–Fuller tests add lagged differences to absorb serial correlation:
∆y_t = α + γ y_{t−1} + Σ_{s=1}^{m} a_s ∆y_{t−s} + ν_t
11. TRANSFORMING NONSTATIONARY TIME SERIES
Differencing: subtracting the values of the observations from one
another in some prescribed time-dependent order
y_t' = y_t − y_{t−1}
Seasonal differencing: instead of calculating the difference between
consecutive values, we calculate the difference between an
observation and the previous observation from the same season
(n periods back)
y_t' = y_t − y_{t−n}
Transformations include the power transform, the log transform, etc.
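Both kinds of differencing are the same operation with a different lag; a minimal sketch (the function name is our own):

```python
def difference(y, lag=1):
    """y'_t = y_t - y_{t-lag}: first difference when lag=1, seasonal
    difference when lag equals the season length n. The first `lag`
    observations are lost."""
    return [y[t] - y[t - lag] for t in range(lag, len(y))]

# First differencing removes a linear trend:
trend = [2 * t + 1 for t in range(6)]        # 1, 3, 5, 7, 9, 11
flat = difference(trend)                     # constant differences of 2

# Seasonal differencing (period 4) removes a repeating quarterly pattern:
seasonal = [10, 20, 30, 40, 10, 20, 30, 40]
no_season = difference(seasonal, lag=4)      # all zeros
```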