Adaptive Quasi-Maximum Likelihood Estimation of GARCH Models with Student's t Likelihood1
Xiaorui Zhu2, Li Xie3
Abstract This paper proposes an adaptive quasi-maximum likelihood estimation for forecasting the volatility of financial data with the generalized autoregressive conditional heteroscedasticity (GARCH) model. When the distribution of the volatility data is unspecified or heavy-tailed, we work out a data-driven adaptive quasi-maximum likelihood estimation that uses the scale parameter η_f to identify the discrepancy between a wrongly specified innovation density and the true innovation density. Under only a few assumptions, this adaptive approach is consistent and asymptotically normal; moreover, it gains better efficiency when the innovation error is heavy-tailed. Finally, simulation studies and an application show its advantage.
Keywords quasi-likelihood, GARCH model, adaptive estimator, heavy-tailed error
JEL Classification: C13; C22
1 Introduction
With the development of derivatives, volatility has become a crucial variable not only in modeling financial data, but also in designing trading strategies and implementing risk management. Among the various models for analyzing volatility, the GARCH (generalized autoregressive conditional heteroscedasticity) model is a well-known and useful one. It was proposed by Bollerslev (1986) as follows:
\[
\begin{cases}
u_t = \sigma_{t|t-1}\,\varepsilon_t, \\[2pt]
\sigma^2_{t|t-1} = \omega + \sum_{i=1}^{p}\alpha_i u^2_{t-i} + \sum_{j=1}^{q}\beta_j \sigma^2_{t-j},
\end{cases}
\tag{1}
\]
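To make the data-generating process concrete, the following is a minimal Python sketch that simulates a GARCH(1,1) path of the form (1) with standardized Student's t innovations; the function name, the burn-in length and the parameter values (taken from the simulation design in Section 4) are illustrative choices rather than part of the model.

```python
import numpy as np

def simulate_garch11(n, omega, alpha, beta, nu, seed=None, burn=500):
    """Simulate n observations from a GARCH(1,1) with standardized t(nu) innovations (nu > 2)."""
    rng = np.random.default_rng(seed)
    eps = rng.standard_t(nu, size=n + burn) * np.sqrt((nu - 2.0) / nu)  # rescale to unit variance
    u = np.empty(n + burn)
    sigma2 = np.empty(n + burn)
    sigma2[0] = omega / (1.0 - alpha - beta)   # start from the unconditional variance
    u[0] = np.sqrt(sigma2[0]) * eps[0]
    for t in range(1, n + burn):
        sigma2[t] = omega + alpha * u[t - 1] ** 2 + beta * sigma2[t - 1]
        u[t] = np.sqrt(sigma2[t]) * eps[t]
    return u[burn:], sigma2[burn:]

# illustrative parameter values from the simulation design in Section 4
u, sigma2 = simulate_garch11(n=1000, omega=0.02, alpha=0.6, beta=0.3, nu=5, seed=0)
```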
Primarily, the estimation of ARCH/GARCH models is based on maximum likelihood estimation (MLE) when the innovation follows a conditional Gaussian distribution. However, if the distribution of the innovation ε_t is not normal, as is prevalent in much empirical data, quasi-maximum likelihood estimation is more suitable. Early on, a large literature discussed Gaussian quasi-maximum likelihood estimation when the innovation distribution is not normal. Weiss's (1986) [19] research showed that even when the data are non-normal, the Gaussian MLE is consistent and asymptotically normal provided the fourth moments are finite. After that, Bollerslev (1986) [1], Hsieh (1989) [12], and Nelson (1991) [14] addressed parameter estimation by generalized Gaussian quasi-maximum likelihood estimation (GQMLE) when the innovation distribution is not normal, and also derived the consistency and efficiency of this method. Bougerol and Picard (1992) [2] discussed the stationarity and ergodicity conditions of GARCH models.
1: We appreciate all the helpful suggestions from the editor and the reviewers, thoughtful comments from A.P. Gaorong Li and Yuyang Zhang, and financial support from the National Natural Science Foundation (Grant No. 11171011) and the National Social Science Foundation (Grant No. 13BGL007).
2: College of Applied Sciences, Beijing University of Technology, Pingleyuan 100, Chaoyang District, Beijing, 100124, China. E-mail: xiaorui.zhu@emails.bjut.edu.cn
3: College of Applied Sciences, Beijing University of Technology, Pingleyuan 100, Chaoyang District, Beijing, 100124, China. E-mail: xieli@bjut.edu.cn
To obtain an estimator when the innovation distribution is unknown, Elie and Jeantheau (1995) proposed a Gaussian quasi-maximum likelihood estimator that is consistent and asymptotically normal. There have also been important recent advances in Gaussian QMLE: Berkes, Horváth and Kokoszka (2003) [11] studied the structure of the GARCH(p,q) process and proved the consistency and asymptotic normality of the QMLE under mild conditions, and strong consistency and asymptotic normality of the QMLE were also proved by Francq and Zakoïan (2004) [9].
Other research aimed at improving the Gaussian QMLE includes the following. Engle and Gonzalez-Rivera (1991) [5] published a procedure that can improve the efficiency of the GQMLE. Drost and Klaassen (1997) [4] put forward adaptive estimation in the ARCH model. Sun and Stengos (2006) [18] proposed adaptive two-step semi-parametric procedures for symmetric and asymmetric errors separately. A self-weighted and local QMLE for ARMA-GARCH models was discussed by Ling (2007) [13].
With the development of quasi-maximum likelihood estimation, several non-Gaussian QMLEs were proposed to improve the estimator when the innovations are heavy-tailed or skewed. Xiu (2010) [21] discussed quasi-maximum likelihood estimation of a stochastic volatility model with high-frequency data. Ossandón and Bahamonde (2011) [16] proposed a novel estimation for GARCH models based on the extended Kalman filter (EKF). Zhu (2012) [22] put forward a mixed portmanteau test for ARMA-GARCH models based on a quasi-maximum likelihood estimator. Francq et al. (2011) [8] developed a two-stage non-Gaussian quasi-maximum likelihood estimation to correct the parameter estimates; this procedure allows the use of a generalized Gaussian likelihood and proposes a test that determines whether the more efficient quasi-MLE with a non-Gaussian density is required. Another notable achievement is the three-step non-Gaussian quasi-MLE approach of Fan et al. (2013) [6], in which the Student's t likelihood function is taken into consideration. Because the Pearson Type IV (PIV) distribution can capture a large range of asymmetry and leptokurtosis of the innovation error, Zhu and Li (2014) [23] proposed a novel Pearson-type QMLE of GARCH(p,q) models that captures not only heavy-tailed but also skewed innovations.
This article focuses on an adaptive estimation procedure that increases the efficiency of the estimator of the GARCH model, and proposes an adaptive QMLE procedure that aims to minimize the discrepancy between the true and the specified innovation distributions. The scale parameter η_f is constructed in the sense of the Kullback-Leibler information criterion (KLIC). The adaptive QMLE procedure can not only find the approximate degree of freedom of the innovation distribution, but also obtain the optimized quasi-maximum likelihood estimator after several iterations. This general estimation strategy does not rely on a particular model, so it may be used in other general models, and the idea of the adaptive QMLE can also be combined with other methods such as the GQMLE, NGQMLE and PQMLE. The simulation studies confirm that this adaptive QMLE procedure converges quickly, especially when the innovation is heavy-tailed. It performs well with high-frequency data, for which the empirical distribution of the innovations is often heavy-tailed.
This paper is organized as follows. In Section 2, we introduce the GARCH model and quasi-maximum likelihood estimation. In Section 3, we describe the assumptions and propositions of our new adaptive quasi-maximum likelihood estimation, and explain the details and proofs of the procedure. Simulation studies are provided in Section 4. Real data analyses are shown in Section 5. Section 6 concludes the paper.
2 Quasi-MLE in GARCH model
2.1 The GARCH Model
The common form of the GARCH(p,q) model was given in Section 1. Let θ = (ω, α′, β′)′ be the unknown parameter vector of the GARCH(p,q) model, where α = (α_1, …, α_p)′ and β = (β_1, …, β_q)′ are the heteroscedastic parameters, and {ε_t, −∞ < t < ∞} is the innovation sequence of the model. Θ ⊂ R_o^{1+p+q} is the parameter space, with R_o = [0, ∞). The following assumptions on the GARCH model are necessary:
Assumption 1.
(i) The GARCH process {u_t} is strictly stationary and ergodic.
(ii) For each θ ∈ Θ, α(z) and β(z) have no common root, α(1) ≠ 0, α_p + β_q ≠ 0 and ∑_{j=1}^{q} β_j < 1, where α(z) = ∑_{i=1}^{p} α_i z^i and β(z) = 1 − ∑_{j=1}^{q} β_j z^j.
(iii) ε_t is nondegenerate and i.i.d. with Eε_t = 0, Eε²_t = 1 and unknown density g(·).
The stationarity and ergodicity of GARCH models in Assumption 1(i) can be found in Bougerol and Picard (1992) [2]. The identifiability conditions for the GARCH(p,q) model are given in Berkes, Horváth and Kokoszka (2003) [11]. For a general GARCH model the conditional variance σ²_{t|t−1} cannot be expressed in terms of a finite number of the past observations u_{t−1}, u_{t−2}, …. In many studies the conditional variance σ̃²_t is therefore written as [7]:
\[
\tilde\sigma^2_t = \frac{\omega}{1-\sum_{j=1}^{q}\beta_j}
+ \sum_{i=1}^{p}\alpha_i u^2_{t-i}
+ \sum_{i=1}^{p}\alpha_i \sum_{k=1}^{\infty}\sum_{j_1=1}^{q}\cdots\sum_{j_k=1}^{q}\beta_{j_1}\cdots\beta_{j_k}\,u^2_{t-i-j_1-\cdots-j_k}
\tag{2}
\]
In this way, {σ̃²_t} in (2) is a function of the sample u*_{t−1} = {u_s, −∞ < s ≤ t − 1}.
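In practice the conditional variance is not computed from the infinite expansion (2) but recursively from the observed sample with a fixed starting value; the sketch below, for the GARCH(1,1) case, initializes the recursion with the sample variance, which is an implementation choice rather than part of the paper.

```python
import numpy as np

def cond_variance_garch11(u, omega, alpha, beta):
    """Recursively compute sigma^2_{t|t-1} for a GARCH(1,1) from the observed sample u."""
    n = len(u)
    sigma2 = np.empty(n)
    sigma2[0] = np.var(u)                      # initialization; its effect fades as t grows
    for t in range(1, n):
        sigma2[t] = omega + alpha * u[t - 1] ** 2 + beta * sigma2[t - 1]
    return sigma2
```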
2.2 Quasi-Maximum Likelihood Estimation
Fortunately, it is easy to derive the likelihood function of a GARCH model with normal errors. Under the assumption Eu²_t < ∞, the log-likelihood function of the GARCH model is as follows:
\[
L(\omega,\alpha,\beta) = -\frac{n}{2}\log(2\pi) - \frac{1}{2}\sum_{t=1}^{n}\Big\{\log\big(\sigma^2_{t|t-1}\big) + \frac{u_t^2}{\sigma^2_{t|t-1}}\Big\}
\tag{3}
\]
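A Gaussian QMLE can be obtained by numerically maximizing (3); the sketch below (reusing `cond_variance_garch11` from the previous sketch) uses a log-parameterization to keep the parameters positive and a Nelder-Mead search from SciPy, both of which are implementation choices not prescribed by the paper.

```python
import numpy as np
from scipy.optimize import minimize

def gaussian_neg_loglik(params, u):
    """Negative of the Gaussian log-likelihood (3) for a GARCH(1,1)."""
    omega, alpha, beta = np.exp(params)        # log-parameterization keeps omega, alpha, beta > 0
    sigma2 = cond_variance_garch11(u, omega, alpha, beta)
    return 0.5 * np.sum(np.log(2 * np.pi) + np.log(sigma2) + u ** 2 / sigma2)

def fit_gaussian_qmle(u):
    """Gaussian QMLE of (omega, alpha, beta); stationarity is not enforced in this sketch."""
    res = minimize(gaussian_neg_loglik, x0=np.log([0.05, 0.1, 0.8]),
                   args=(u,), method="Nelder-Mead")
    return np.exp(res.x)
```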
Under mild conditions in which ε_t is not required to be standard normal, Berkes, Horváth and Kokoszka (2003) [11] proved the consistency and asymptotic normality of the quasi-MLE. Apart from the normal distribution, Student's t distributions and generalized Gaussian distributions are considered frequently. The generalized quasi-maximum likelihood estimator of the GARCH model is given by:
\[
\hat\theta = \arg\max_{\theta}\ \frac{1}{2}\sum_{t=1}^{n}\Big\{-\log\big(\sigma_{t|t-1}\big) + \log g\Big(\frac{u_t}{\sigma_{t|t-1}}\Big)\Big\}
\tag{4}
\]
In this paper we take the innovation to follow a standardized t-distribution, so the true probability density function of {ε_t, −∞ < t < ∞}, that is, the g(·) in the formula above, is:
\[
g(x) = \frac{\Gamma\big((\nu+1)/2\big)}{(\pi\nu)^{1/2}\,\Gamma(\nu/2)}\Big(1+\frac{x^2}{\nu}\Big)^{-\frac{\nu+1}{2}}
\tag{5}
\]
where ν > 0 may be treated as a continuous parameter.
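For the t-based quasi-likelihood, the objective in (4) with g given by (5) can be evaluated directly; a hedged sketch follows, again reusing `cond_variance_garch11`, with the constant factor 1/2 in (4) dropped since it does not affect the maximizer.

```python
import numpy as np
from scipy.special import gammaln

def t_logpdf(x, nu):
    """Log of the Student's t density in (5) with nu degrees of freedom."""
    return (gammaln((nu + 1) / 2) - gammaln(nu / 2) - 0.5 * np.log(np.pi * nu)
            - (nu + 1) / 2 * np.log1p(x ** 2 / nu))

def t_quasi_loglik(params, u, nu):
    """Quasi log-likelihood of (4) with g replaced by the t(nu) density (constant 1/2 dropped)."""
    omega, alpha, beta = np.exp(params)
    sigma = np.sqrt(cond_variance_garch11(u, omega, alpha, beta))
    return np.sum(-np.log(sigma) + t_logpdf(u / sigma, nu))
```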
3 Adaptive Quasi-Maximum Likelihood Estimation
If the true innovation distribution cannot be correctly specified, Gaussian quasi-maximum likelihood estimation (GQMLE) can be inconsistent, as shown in Newey and Steigerwald (1997) [15]. Weiss (1984) and Lee and Hansen (1994) studied the asymptotic distribution of the GQMLE. Francq et al. (2011) [8] proposed a two-stage GQMLE that can improve the efficiency of the estimator. Based on a three-step quasi-maximum likelihood estimation, Fan et al. (2013) [6] derived the asymptotic theory of a non-Gaussian QMLE in which a heavy-tailed Student's t distribution is taken into consideration. These methods need to specify the degree of freedom ν of the t distribution or the parameter r of the generalized error distribution GED(r), and they adjust the estimator only once when the innovation distribution is heavy-tailed; hence they cannot fully capture the heavy-tail characteristic. We therefore put forward an adaptive QMLE procedure that chooses the quasi-likelihood function minimizing the divergence between the quasi innovation density f and the true innovation density g. This iterative procedure gains better efficiency than other methods when the innovation distribution is heavy-tailed or unknown.
3.1 KLIC and Scale Parameter
To measure the divergence between the true innovation density g and the specified likelihood density f, we use the Kullback-Leibler divergence:
\[
I(g;f) = \int [\log g(u)]\,g(u)\,du - \int [\log f(u)]\,g(u)\,du
\tag{6}
\]
The scale parameter η_f we use was proposed by White (1982) [20] and Fan et al. [6] as
\[
\eta_f = \arg\max_{\eta>0} E\Big\{-\log\eta + \log f\Big(\frac{\varepsilon}{\eta}\Big)\Big\},
\tag{7}
\]
where η_f can be computed by maximum likelihood estimation.
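Given standardized residuals and a candidate t density with fixed degrees of freedom, the sample analogue of (7) can be maximized by a one-dimensional search; the sketch below reuses `t_logpdf` from the earlier sketch, and the search bounds are arbitrary choices.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def eta_f(residuals, nu):
    """Sample analogue of (7): the eta maximizing mean(-log eta + log f(eps/eta))."""
    def neg_objective(eta):
        return -np.mean(-np.log(eta) + t_logpdf(residuals / eta, nu))
    res = minimize_scalar(neg_objective, bounds=(1e-3, 10.0), method="bounded")
    return res.x
```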
Let W(η) = E{−log η + log f(ε/η)}. To derive the consistency of θ̂, one further assumption is needed:
Assumption 2. The quasi-likelihood is chosen from the t-distribution family in (5) such that W(η) has a unique maximizer η_f > 0.
This assumption ensures that we can find, within the t-distribution family, the likelihood that best captures the heavy-tailed behaviour of the innovation. In other words, this assumption and Proposition 1 below determine the best degree of freedom in the adaptive QMLE, and the resulting likelihood function gives the adaptive QMLE better efficiency. The scale parameter η_f has a crucial property:
Proposition 1. If f ∝ exp(−x²/2) or f = g, then η_f = 1.
Proof. [6] Define the likelihood ratio function
\[
G(\eta) = E\Big[\log\Big(\frac{f(\varepsilon/\eta)}{\eta\,f(\varepsilon)}\Big)\Big],
\]
and suppose that G(η) has no local extrema. Since log(x) ≤ 2(√x − 1),
\[
E\Big[\log\Big(\frac{f(\varepsilon/\eta)}{\eta\,f(\varepsilon)}\Big)\Big]
\le 2E\Big[\sqrt{\frac{f(\varepsilon/\eta)}{\eta\,f(\varepsilon)}}-1\Big]
= 2\int_{-\infty}^{+\infty}\sqrt{\tfrac{1}{\eta}f\big(\tfrac{x}{\eta}\big)f(x)}\,dx-2
\le -\int_{-\infty}^{+\infty}\Big(\sqrt{\tfrac{1}{\eta}f\big(\tfrac{x}{\eta}\big)}-\sqrt{f(x)}\Big)^{2}dx
\le 0.
\]
Hence, when f = g, G(η) ≤ 0 = G(1) for every η > 0, so η_f = 1; the Gaussian case f ∝ exp(−x²/2) follows by a direct calculation using Eε²_t = 1.
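As a quick numerical illustration of Proposition 1 (assuming the `eta_f` helper sketched above), drawing the innovations directly from the t_5 density in (5), so that f = g, should give an estimated η_f close to 1, whereas a mismatched degree of freedom typically does not.

```python
import numpy as np

rng = np.random.default_rng(42)
eps = rng.standard_t(5, size=20000)   # innovations drawn from the t_5 density in (5), so f = g
print(eta_f(eps, nu=5))               # close to 1 by Proposition 1
print(eta_f(eps, nu=20))              # mismatched df: typically differs from 1
```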
3.2 Adaptive QMLE
Now we propose the adaptive quasi-maximum likelihood estimation, which can be used to estimate the parameters of the GARCH model. The method approximates the degree of freedom of the quasi-likelihood function based on Proposition 1, so the quasi-maximum likelihood estimation with this approximate degree of freedom gains efficiency significantly.
(a) First, we estimate θ̃^(0) (the superscript indexes the iteration) by GQMLE under the assumption of normality:
\[
\tilde\theta^{(0)} = \arg\max_{\theta}\ \frac{1}{2}\sum_{t=1}^{n}\Big\{-\log\big(\sigma_{t|t-1}\big) - \frac{u_t^2}{\sigma^2_{t|t-1}}\Big\}
\tag{8}
\]
(b) The {ε_t} needed to calculate η_f are replaced by the standardized residuals ε̃^(0)_t = u_t/σ̃_t(θ̃^(0)), computed with the Gaussian maximum likelihood estimator θ̃^(0) from step (a). Then η̃_f is obtained from
\[
\tilde\eta^{(1)}_f = \arg\max_{\eta>0} E\Big\{-\log\eta + \log f\Big(\frac{\tilde\varepsilon^{(0)}_t}{\eta}\Big)\Big\}
\tag{9}
\]
by changing the degree of freedom (df1) of the Student's t density f(·) until |η̃^(1)_f − 1| ≤ δ (where δ is chosen by the user).
In other words, this step uses Proposition 1 to find the density f_(df1)(·) that approximately matches ε̃^(0)_t. We can therefore obtain θ̃^(1) under the new density f_(df1), which is more reliable for the true innovation:
\[
\tilde\theta^{(1)} = \arg\max_{\theta}\ \sum_{t=1}^{T}\Big\{-\log\big(\sigma_{t|t-1}\big) + \log f_{(df1)}\Big(\frac{u_t}{\sigma_{t|t-1}}\Big)\Big\}
\tag{10}
\]
(c) Applying the same procedure as in (b), we obtain {ε̃^(1)_t = u_t/σ̃_{t|t−1}}, η̃^(2)_f from formula (9), and f_(df2)(·) satisfying |η̃^(2)_f − 1| ≤ δ. We then estimate θ̃^(2) with the likelihood function f_(df2), and so on.
(d) Finally, the adaptive estimator θ̂ = θ̃^(n) is obtained once |θ̃^(n) − θ̃^(n−1)| < λ.
Usually we set δ at about 0.1 and λ at less than 0.5; the smaller δ and λ are, the more iterations are needed.
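Putting steps (a)-(d) together, the following is a hedged sketch of the whole adaptive iteration; it reuses the helpers `fit_gaussian_qmle`, `cond_variance_garch11`, `t_quasi_loglik` and `eta_f` sketched earlier, and replaces the |η̃_f − 1| ≤ δ search by picking, on an illustrative grid, the degree of freedom whose η_f is closest to 1. The grid and the stopping constant λ are arbitrary choices.

```python
import numpy as np
from scipy.optimize import minimize

def adaptive_qmle(u, df_grid=(2, 2.5, 3, 4, 5, 7, 10, 15, 20), lam=0.05, max_iter=10):
    """Sketch of the adaptive QMLE: alternate between choosing df via eta_f and re-estimating theta."""
    theta = np.asarray(fit_gaussian_qmle(u))                # step (a): Gaussian QMLE start
    df_star = None
    for _ in range(max_iter):
        sigma = np.sqrt(cond_variance_garch11(u, *theta))
        resid = u / sigma                                   # step (b): standardized residuals
        etas = {df: eta_f(resid, df) for df in df_grid}     # scale parameter for each candidate df
        df_star = min(etas, key=lambda df: abs(etas[df] - 1.0))
        res = minimize(lambda p: -t_quasi_loglik(p, u, df_star),  # steps (b)-(c): t(df_star) QMLE
                       x0=np.log(theta), method="Nelder-Mead")
        theta_new = np.exp(res.x)
        if np.max(np.abs(theta_new - theta)) < lam:         # step (d): stopping rule
            return theta_new, df_star
        theta = theta_new
    return theta, df_star                                   # return last iterate if not converged
```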
The adaptive QMLE is consistent, as stated in the following theorem.
THEOREM 3.1. [6] Suppose that Assumptions 1(i)-(iii) and 2 hold. Then θ̂ → θ_0 in probability.
Remark 1. The adaptive QMLE needs a finite fourth moment for the innovation because we simply adopt the GQMLE in the first step; this finite fourth moment condition is essential for asymptotic normality. In future studies we will use alternative estimators in the first step to remove this condition. The condition Eε²_t = 1 ensures that Proposition 1 can be used in this procedure. This paper discusses the case Eε²_t < ∞; for the case Eε²_t = ∞, many estimators have been studied, such as Chen and Zhu (2014) [3], Hill (2014) [10], and Peng and Yao (2003) [17].
This procedure is helpful when the family of the innovation distribution is known. With the help of the KLIC, the adaptive QMLE increases the efficiency of the estimator and decreases the discrepancy between the true and the specified innovation densities, so it is a better method for handling various situations and a wide variety of data.
4 Simulation studies
We show the variation of η_f in Figure 1. Each line represents the variation of η_f when the distribution of {ε_t} in formula (9) is fixed as noted in the upper left of the figure. The horizontal axis is the degree of freedom of the quasi-likelihood function. We compare four Student's t innovation distributions and three generalized Gaussian distributions with different shape parameters. The figure shows that when the degree of freedom of the quasi-likelihood function is larger (smaller) than that of {ε_t}, then η_f > 1 (η_f < 1), and η_f is approximately equal to 1 when the specified quasi-likelihood equals the true innovation density. For example, on the bold line in Figure 1, when the innovation is t_2 and the quasi-likelihood is the Student's t density with 2 degrees of freedom, η_f is approximately equal to 1. The same pattern occurs on every line.
To show the advantages of the adaptive QMLE clearly, we consider an ordinary GARCH(1,1) model with true parameters (ω, α_1, β_1) = (0.02, 0.6, 0.3). With the bound |η̃^(2)_f − 1| ≤ 0.2, the adaptive estimator converges after several iterations, as shown in Table 1. The innovation distribution ranges from the thin-tailed t_20, which is approximately normal, to the heavy-tailed t_2. We also compare sample sizes of 500, 1000 and 2000. The "Step" column displays the order of the iteration; the "df" column gives the degree of freedom of the Student's t quasi-likelihood function found under the bound |η̃^(1)_f − 1| ≤ 0.2; "df = Gauss" means the Gaussian assumption on the innovation distribution. When the innovations are heavy-tailed, such as t_2 or t_3, the maximum likelihood estimator under the Gaussian assumption is intolerable, whereas the adaptive quasi-maximum likelihood estimators are clearly better. Moreover, the adaptive estimation procedure approximates the true degree of freedom and the true parameters.
In Table 2, we use the same GARCH(1,1) setting as in Table 1. The sample size varies among 250, 500 and 1000, and the simulation is repeated 500 times at each sample size. The innovation is a Student's t distribution with degrees of freedom ranging from heavy-tailed to thin-tailed. Three things need to be emphasized about Table 2. First, NGQMLE is the three-step estimation procedure proposed by Fan et al. (2013) [6]; its auxiliary innovation distribution is t_4. Second, GQMLE is the maximum likelihood estimation under the assumption that the innovations are normal. Third, MLE is the ordinary maximum likelihood estimation with the true innovation distribution used in the simulations.
In Table 2, the Gaussian quasi-maximum likelihood estimators are always poor when the innovations are heavy-tailed, such as t_2 or t_3. In particular, when the innovations are t_2 or t_3 the fourth moment does not exist, the RMSE of ω under the GQMLE is intolerable, and the relative RMSE ratios of the other estimators against the GQMLE are meaningless, so we do not display them in Table 2. Nevertheless, when the innovations are heavy-tailed the adaptive QMLE is better than the other two estimators, NGQMLE and GQMLE: the relative RMSE ratio of the A-QML estimator against the GQMLE is smaller than 1, and the relative RMSE ratios also show that the adaptive QMLE outperforms the NGQMLE. Table 2 further shows that most relative RMSE ratios of the A-QMLE against the GQMLE are close to those of the MLE, which means the A-QMLE can be a good alternative estimator when the innovation distribution is unknown. On the other hand, the approximate df of the true innovation, shown in the "dfB" column, is close to the exact degree of freedom. This evidence implies that the adaptive QMLE, no matter whether the innovation tail is heavy or thin, is an optimized estimator close to the maximum likelihood estimator with the true innovation distribution.
5 Application
In this section, we first summarize the daily returns of six indexes, namely the S&P500, FTSE, NASDAQ, CAC, DAX and HSI. The period considered runs from January 2, 2000 to March 31, 2014. Table 3 reports the number of observations, the mean, the standard deviation and the excess kurtosis of the daily returns. The excess kurtosis of all the indexes is larger than zero, which means that they are all heavy-tailed; therefore, when one wants to analyze stock returns with a GARCH model, the adaptive QMLE procedure is not only necessary but also helpful. Second, we use the adaptive QMLE procedure to estimate the parameters of the GARCH model. Table 4 displays the GARCH(1,1) estimates obtained by quasi-maximum likelihood estimation and by adaptive quasi-maximum likelihood estimation; the "dfB" column gives the approximate degree of freedom of the Student's t density. From the results in Table 4 we find that, although the adaptive QML estimator differs slightly from the quasi-maximum likelihood estimator, we obtain an approximate degree of freedom. These estimated degrees of freedom reflect the same heavy-tailed characteristics of the data seen in Table 3: the S&P500, NASDAQ and HSI have larger kurtosis, so their approximate degrees of freedom are smaller than those of the other three indexes; in other words, their tails are heavier.
6 Conclusions
This article focuses on improving the efficiency of the estimator of the GARCH model when the innovation distribution is unknown, and proposes the adaptive QMLE, which gains better efficiency. By using the condition η_f = 1, in the sense of the KLIC, to identify the quasi-likelihood function f, the adaptive QMLE is stable no matter whether the innovation tail is heavy or not.
Most importantly, without specifying the innovation distribution, the adaptive QMLE is very close to the MLE with the true innovation distribution. Hence it is helpful and accurate when the innovation distribution is unknown, especially when the innovation is heavy-tailed, and it is a general quasi-maximum likelihood estimation that can be used in more situations, such as finance or genetics. Possible extensions of the adaptive QMLE include introducing other innovation distributions and considering more models.
List of Figures and Tables

Figure 1: Variations of η_f across Student's t likelihood QMLE
[Figure: η plotted against the degree of freedom (2 to 10) of the Student's t quasi-likelihood; one curve for each true innovation distribution t_2, t_4, t_6 and t_10.]
Table 1: Some samples showing the convergence of the adaptive QMLE
Innov. Step    N = 500                 N = 1000                N = 2000
       S_i     ω     α     β     df    ω     α     β     df    ω     α     β     df
t2 S1 2.571 0.387 0.613 Gauss. 0.386 0.267 0.733 Gauss. 0.984 0.336 0.663 Gauss.
t2 S2 1.748 0.576 0.420 6 0.196 0.463 0.536 20 0.191 0.568 0.432 11
t2 S3 0.022 0.618 0.365 2.5 0.069 0.620 0.375 5 0.042 0.649 0.349 4
t2 S4 0.013 0.627 0.312 2.0 0.033 0.656 0.326 3 0.032 0.679 0.317 3
t2 S5 0.013 0.605 0.305 1.8 0.027 0.643 0.312 2.5 0.032 0.679 0.317 3
t2 S6 0.027 0.643 0.312 2.5
t3 S1 0.000 0.758 0.298 Gauss. 0.029 0.709 0.285 Gauss. 0.371 0.540 0.459 Gauss.
t3 S2 0.015 0.686 0.269 2.8 0.021 0.656 0.309 3.8 0.040 0.644 0.356 20
t3 S3 0.016 0.690 0.274 3 0.020 0.643 0.302 3.5 0.302 0.678 0.321 9
t3 S4 0.016 0.690 0.274 3 0.027 0.692 0.307 7
t3 S5 0.025 0.701 0.298 6
t3 S6 0.024 0.707 0.293 5.5
t5 S1 0.014 0.666 0.249 Gauss. 0.022 0.608 0.220 Gauss. 0.018 0.416 0.356 Gauss.
t5 S2 0.012 0.542 0.328 4 0.019 0.606 0.248 3 0.039 0.619 0.378 20
t5 S3 0.012 0.542 0.328 4 0.019 0.606 0.248 3 0.032 0.632 0.364 15
t5 S4 0.029 0.637 0.357 10
t5 S5 0.028 0.641 0.351 5
t10 S1 0.082 0.689 0.302 Gauss. 0.096 0.639 0.359 Gauss. 0.087 0.755 0.243 Gauss.
t10 S2 0.018 0.579 0.354 20 0.027 0.706 0.272 20 0.023 0.750 0.223 20
t10 S3 0.017 0.566 0.356 15 0.026 0.703 0.267 15 0.023 0.750 0.223 20
t10 S4 0.017 0.566 0.356 15 0.025 0.695 0.261 11
t10 S5 0.024 0.684 0.256 10
t10 S6 0.023 0.675 0.254 9
t20 S1 0.056 0.613 0.382 Gauss. 0.080 0.735 0.261 Gauss. 0.078 0.582 0.291 Gauss.
t20 S2 0.017 0.537 0.365 20 0.024 0.630 0.253 20 0.017 0.623 0.306 20
t20 S3 0.017 0.537 0.365 20 0.024 0.630 0.253 20 0.017 0.623 0.306 20
a True parameters (ω, α_1, β_1) = (0.02, 0.6, 0.3).
Table 2: Relative RMSE ratio comparison with Student's t innovation
Para   ε    T    dfB    A-tQMLE    NGQMLE    GQMLE    MLE
ω t5
250 7.807 0.084 0.094 0.157 0.059
500 7.555 0.064 0.055 0.139 0.044
1000 7.825 0.051 0.031 0.123 0.032
ω t10
250 11.27 0.148 0.282 0.060 0.130
500 11.961 0.126 0.107 0.055 0.087
1000 10.752 0.137 0.128 0.044 0.078
ω t20
250 13.292 0.158 0.180 0.055 0.148
500 14.151 0.141 0.679 0.048 0.123
1000 14.545 0.127 0.075 0.044 0.084
α t2
250 2.937 0.478 0.801 0.203 0.639
500 2.699 0.408 0.601 0.218 0.490
1000 2.644 0.339 0.495 0.254 0.351
α t3
250 4.544 0.701 0.726 0.160 0.828
500 4.010 0.584 0.667 0.140 0.733
1000 4.223 0.647 0.763 0.125 0.747
α t5
250 7.807 0.787 0.773 0.172 0.784
500 7.555 0.653 0.683 0.173 0.557
1000 7.825 0.596 0.561 0.151 0.502
α t10
250 11.27 0.741 0.800 0.228 0.552
500 11.961 0.665 0.749 0.224 0.394
1000 10.752 0.648 0.740 0.249 0.264
α t20
250 13.292 0.665 0.746 0.235 0.508
500 14.151 0.684 0.796 0.220 0.421
1000 14.545 0.667 0.781 0.226 0.293
β t2
250 2.937 0.253 0.757 0.305 0.213
500 2.699 0.172 0.642 0.317 0.136
1000 2.644 0.098 0.562 0.367 0.087
β t3
250 4.544 0.343 0.701 0.234 0.226
500 4.010 0.188 0.630 0.260 0.195
1000 4.223 0.152 0.679 0.258 0.155
β t5
250 7.807 0.530 0.682 0.184 0.184
500 7.555 0.358 0.528 0.171 0.353
1000 7.825 0.252 0.505 0.170 0.267
β t10
250 11.27 0.669 0.720 0.161 0.659
500 11.961 0.477 0.488 0.148 0.450
1000 10.752 0.288 0.359 0.156 0.270
β t20
250 13.292 0.682 0.732 0.166 0.666
500 14.151 0.521 0.566 0.150 0.497
1000 14.545 0.405 0.418 0.131 0.394
a The GQMLE column reports the RMSE of the GQMLE; the other columns report the relative RMSE ratio of each estimator against the GQMLE. Almost all ratios are less than 1, which means these three estimators outperform the GQMLE.
Table 3: Summary of six stock indexes
u_t of index    n    mean    sd    kurtosis
S&P500 3581 0.0002 0.0132 7.9423
FTSE 3596 0.0001 0.0125 6.1308
NASDAQ 3582 0.0007 0.0302 7.2389
CAC 3632 0.0002 0.0414 4.9996
DAX 3633 0.0019 0.0458 4.4939
HSI 3548 0.0019 0.0471 8.2166
a The data are daily returns of six stock market indexes from January 2, 2000 to March 31, 2014. The kurtosis reported is the sample excess kurtosis.
Table 4: QMLE and adaptive t-QMLE of GARCH(1,1) models
Index    Estimator    ω    α    β    dfB
S&P500
GQMLE 0.003 0.085 0.902
15
AtQMLE 0.007 0.083 0.905
FTSE
GQMLE 0.013 0.098 0.892
30
AtQMLE 0.021 0.082 0.891
NASDAQ
GQMLE 0.005 0.715 0.236
13
AtQMLE 0.006 0.139 0.824
CAC
GQMLE 0.0002 0.084 0.906
56
AtQMLE 0.0003 0.085 0.906
DAX
GQMLE 0.0002 0.085 0.905
50
AtQMLE 0.0002 0.088 0.899
HSI
GQMLE 0.0002 0.064 0.912
12
AtQMLE 0.0002 0.054 0.929
a The data are daily returns of six stock market indexes from January 2, 2000 to March 31, 2014.
References
[1] Tim Bollerslev. Generalized autoregressive conditional heteroskedasticity. Journal of Econometrics, 31(3):307-327, 1986.
[2] Philippe Bougerol and Nico Picard. Stationarity of GARCH processes and of some nonnegative time series. Journal of Econometrics, 52(1):115-127, 1992.
[3] Min Chen and Ke Zhu. Sign-based portmanteau test for ARCH-type models with heavy-tailed innovations. Working paper, 2014.
[4] Feike C. Drost, Chris A. J. Klaassen, and Bas J. M. Werker. Adaptive estimation in time-series models. The Annals of Statistics, 25(2):786-817, 1997.
[5] Robert F. Engle and Gloria Gonzalez-Rivera. Semiparametric ARCH models. Journal of Business & Economic Statistics, 9(4):345-359, 1991.
[6] Jianqing Fan, Lei Qi, and Dacheng Xiu. Quasi-maximum likelihood estimation of GARCH models with heavy-tailed likelihoods. Journal of Business & Economic Statistics, forthcoming, 2013.
[7] Jianqing Fan. Nonlinear Time Series: Nonparametric and Parametric Methods. Springer, 2003.
[8] Christian Francq, Guillaume Lepage, and Jean-Michel Zakoïan. Two-stage non-Gaussian QML estimation of GARCH models and testing the efficiency of the Gaussian QMLE. Journal of Econometrics, 165(2):246-257, 2011.
[9] Christian Francq and Jean-Michel Zakoïan. Maximum likelihood estimation of pure GARCH and ARMA-GARCH processes. Bernoulli, 10(4):605-637, 2004.
[10] Jonathan B. Hill. Robust estimation and inference for heavy tailed GARCH. Unpublished manuscript, Department of Economics, University of North Carolina, 2014.
[11] István Berkes, Lajos Horváth, and Piotr Kokoszka. GARCH processes: structure and estimation. Bernoulli, 9(2):201-227, 2003.
[12] David A. Hsieh. Modeling heteroscedasticity in daily foreign-exchange rates. Journal of Business & Economic Statistics, 7(3):307-317, 1989.
[13] Shiqing Ling. Self-weighted and local quasi-maximum likelihood estimators for ARMA-GARCH/IGARCH models. Journal of Econometrics, 140(2):849-873, 2007.
[14] Daniel B. Nelson. Conditional heteroskedasticity in asset returns: A new approach. Econometrica, pages 347-370, 1991.
[15] Whitney K. Newey and Douglas G. Steigerwald. Asymptotic bias for quasi-maximum-likelihood estimators in conditional heteroskedasticity models. Econometrica, pages 587-599, 1997.
[16] Sebastián Ossandón and Natalia Bahamonde. On the nonlinear estimation of GARCH models using an extended Kalman filter. In Proceedings of the World Congress on Engineering, volume 1, 2011.
[17] Liang Peng and Qiwei Yao. Least absolute deviations estimation for ARCH and GARCH models. Biometrika, pages 967-975, 2003.
[18] Yiguo Sun and Thanasis Stengos. Semiparametric efficient adaptive estimation of asymmetric GARCH models. Journal of Econometrics, 133(1):373-386, 2006.
[19] Andrew A. Weiss. Asymptotic theory for ARCH models: estimation and testing. Econometric Theory, pages 107-131, 1986.
[20] Halbert White. Maximum likelihood estimation of misspecified models. Econometrica, 50(1):1-25, 1982.
[21] Dacheng Xiu. Quasi-maximum likelihood estimation of volatility with high frequency data. Journal of Econometrics, 159(1):235-250, 2010.
[22] Ke Zhu. A mixed portmanteau test for ARMA-GARCH models by the quasi-maximum exponential likelihood estimation approach. Journal of Time Series Analysis, 2012.
[23] Ke Zhu and Wai Keung Li. A new Pearson-type QMLE for conditionally heteroskedastic models. Working paper, 2014.
More Related Content

What's hot

Principal component analysis in modelling
Principal component analysis in modellingPrincipal component analysis in modelling
Principal component analysis in modelling
harvcap
Ā 
Real time clustering of time series
Real time clustering of time seriesReal time clustering of time series
Real time clustering of time series
csandit
Ā 
gamdependence_revision1
gamdependence_revision1gamdependence_revision1
gamdependence_revision1Thibault Vatter
Ā 
On Modeling Murder Crimes in Nigeria
On Modeling Murder Crimes in NigeriaOn Modeling Murder Crimes in Nigeria
On Modeling Murder Crimes in Nigeria
Scientific Review SR
Ā 
Assessing Discriminatory Performance of a Binary Logistic Regression Model
Assessing Discriminatory Performance of a Binary Logistic Regression ModelAssessing Discriminatory Performance of a Binary Logistic Regression Model
Assessing Discriminatory Performance of a Binary Logistic Regression Model
sajjalp
Ā 
Advanced microeconometric project
Advanced microeconometric projectAdvanced microeconometric project
Advanced microeconometric project
LaurentCyrus
Ā 
Time Series - Auto Regressive Models
Time Series - Auto Regressive ModelsTime Series - Auto Regressive Models
Time Series - Auto Regressive Models
Bhaskar T
Ā 
Enhance interval width of crime forecasting with ARIMA model-fuzzy alpha cut
Enhance interval width of crime forecasting with ARIMA model-fuzzy alpha cutEnhance interval width of crime forecasting with ARIMA model-fuzzy alpha cut
Enhance interval width of crime forecasting with ARIMA model-fuzzy alpha cut
TELKOMNIKA JOURNAL
Ā 
Philippe Guicheteau (1998) - Bifurcation theory: a tool for nonlinear flight ...
Philippe Guicheteau (1998) - Bifurcation theory: a tool for nonlinear flight ...Philippe Guicheteau (1998) - Bifurcation theory: a tool for nonlinear flight ...
Philippe Guicheteau (1998) - Bifurcation theory: a tool for nonlinear flight ...
Project KRIT
Ā 
Risk Aggregation Inanoglu Jacobs 6 09 V1
Risk Aggregation Inanoglu Jacobs 6 09 V1Risk Aggregation Inanoglu Jacobs 6 09 V1
Risk Aggregation Inanoglu Jacobs 6 09 V1Michael Jacobs, Jr.
Ā 
Cointegration among biotech stocks
Cointegration among biotech stocksCointegration among biotech stocks
Cointegration among biotech stocks
Peter Zobel
Ā 
Fuzzy Inventory Model for Constantly Deteriorating Items with Power Demand an...
Fuzzy Inventory Model for Constantly Deteriorating Items with Power Demand an...Fuzzy Inventory Model for Constantly Deteriorating Items with Power Demand an...
Fuzzy Inventory Model for Constantly Deteriorating Items with Power Demand an...
iosrjce
Ā 
Bayesian Analysis Influences Autoregressive Models
Bayesian Analysis Influences Autoregressive ModelsBayesian Analysis Influences Autoregressive Models
Bayesian Analysis Influences Autoregressive Models
AI Publications
Ā 
A Fuzzy Arithmetic Approach for Perishable Items in Discounted Entropic Order...
A Fuzzy Arithmetic Approach for Perishable Items in Discounted Entropic Order...A Fuzzy Arithmetic Approach for Perishable Items in Discounted Entropic Order...
A Fuzzy Arithmetic Approach for Perishable Items in Discounted Entropic Order...
Waqas Tariq
Ā 
Building the Professional of 2020: An Approach to Business Change Process Int...
Building the Professional of 2020: An Approach to Business Change Process Int...Building the Professional of 2020: An Approach to Business Change Process Int...
Building the Professional of 2020: An Approach to Business Change Process Int...
Dr Harris Apostolopoulos EMBA, PfMP, PgMP, PMP, IPMO-E
Ā 
Qm0021 statistical process control
Qm0021 statistical process controlQm0021 statistical process control
Qm0021 statistical process control
smumbahelp
Ā 
Qm0021 statistical process control
Qm0021 statistical process controlQm0021 statistical process control
Qm0021 statistical process control
Study Stuff
Ā 
1100163YifanGuo
1100163YifanGuo1100163YifanGuo
1100163YifanGuoYifan Guo
Ā 
Non-life claims reserves using Dirichlet random environment
Non-life claims reserves using Dirichlet random environmentNon-life claims reserves using Dirichlet random environment
Non-life claims reserves using Dirichlet random environment
IJERA Editor
Ā 

What's hot (20)

Principal component analysis in modelling
Principal component analysis in modellingPrincipal component analysis in modelling
Principal component analysis in modelling
Ā 
Real time clustering of time series
Real time clustering of time seriesReal time clustering of time series
Real time clustering of time series
Ā 
gamdependence_revision1
gamdependence_revision1gamdependence_revision1
gamdependence_revision1
Ā 
Arellano bond
Arellano bondArellano bond
Arellano bond
Ā 
On Modeling Murder Crimes in Nigeria
On Modeling Murder Crimes in NigeriaOn Modeling Murder Crimes in Nigeria
On Modeling Murder Crimes in Nigeria
Ā 
Assessing Discriminatory Performance of a Binary Logistic Regression Model
Assessing Discriminatory Performance of a Binary Logistic Regression ModelAssessing Discriminatory Performance of a Binary Logistic Regression Model
Assessing Discriminatory Performance of a Binary Logistic Regression Model
Ā 
Advanced microeconometric project
Advanced microeconometric projectAdvanced microeconometric project
Advanced microeconometric project
Ā 
Time Series - Auto Regressive Models
Time Series - Auto Regressive ModelsTime Series - Auto Regressive Models
Time Series - Auto Regressive Models
Ā 
Enhance interval width of crime forecasting with ARIMA model-fuzzy alpha cut
Enhance interval width of crime forecasting with ARIMA model-fuzzy alpha cutEnhance interval width of crime forecasting with ARIMA model-fuzzy alpha cut
Enhance interval width of crime forecasting with ARIMA model-fuzzy alpha cut
Ā 
Philippe Guicheteau (1998) - Bifurcation theory: a tool for nonlinear flight ...
Philippe Guicheteau (1998) - Bifurcation theory: a tool for nonlinear flight ...Philippe Guicheteau (1998) - Bifurcation theory: a tool for nonlinear flight ...
Philippe Guicheteau (1998) - Bifurcation theory: a tool for nonlinear flight ...
Ā 
Risk Aggregation Inanoglu Jacobs 6 09 V1
Risk Aggregation Inanoglu Jacobs 6 09 V1Risk Aggregation Inanoglu Jacobs 6 09 V1
Risk Aggregation Inanoglu Jacobs 6 09 V1
Ā 
Cointegration among biotech stocks
Cointegration among biotech stocksCointegration among biotech stocks
Cointegration among biotech stocks
Ā 
Fuzzy Inventory Model for Constantly Deteriorating Items with Power Demand an...
Fuzzy Inventory Model for Constantly Deteriorating Items with Power Demand an...Fuzzy Inventory Model for Constantly Deteriorating Items with Power Demand an...
Fuzzy Inventory Model for Constantly Deteriorating Items with Power Demand an...
Ā 
Bayesian Analysis Influences Autoregressive Models
Bayesian Analysis Influences Autoregressive ModelsBayesian Analysis Influences Autoregressive Models
Bayesian Analysis Influences Autoregressive Models
Ā 
A Fuzzy Arithmetic Approach for Perishable Items in Discounted Entropic Order...
A Fuzzy Arithmetic Approach for Perishable Items in Discounted Entropic Order...A Fuzzy Arithmetic Approach for Perishable Items in Discounted Entropic Order...
A Fuzzy Arithmetic Approach for Perishable Items in Discounted Entropic Order...
Ā 
Building the Professional of 2020: An Approach to Business Change Process Int...
Building the Professional of 2020: An Approach to Business Change Process Int...Building the Professional of 2020: An Approach to Business Change Process Int...
Building the Professional of 2020: An Approach to Business Change Process Int...
Ā 
Qm0021 statistical process control
Qm0021 statistical process controlQm0021 statistical process control
Qm0021 statistical process control
Ā 
Qm0021 statistical process control
Qm0021 statistical process controlQm0021 statistical process control
Qm0021 statistical process control
Ā 
1100163YifanGuo
1100163YifanGuo1100163YifanGuo
1100163YifanGuo
Ā 
Non-life claims reserves using Dirichlet random environment
Non-life claims reserves using Dirichlet random environmentNon-life claims reserves using Dirichlet random environment
Non-life claims reserves using Dirichlet random environment
Ā 

Similar to GARCH

NONLINEAR EXTENSION OF ASYMMETRIC GARCH MODEL WITHIN NEURAL NETWORK FRAMEWORK
NONLINEAR EXTENSION OF ASYMMETRIC GARCH MODEL WITHIN NEURAL NETWORK FRAMEWORKNONLINEAR EXTENSION OF ASYMMETRIC GARCH MODEL WITHIN NEURAL NETWORK FRAMEWORK
NONLINEAR EXTENSION OF ASYMMETRIC GARCH MODEL WITHIN NEURAL NETWORK FRAMEWORK
cscpconf
Ā 
Nonlinear Extension of Asymmetric Garch Model within Neural Network Framework
Nonlinear Extension of Asymmetric Garch Model within Neural Network Framework Nonlinear Extension of Asymmetric Garch Model within Neural Network Framework
Nonlinear Extension of Asymmetric Garch Model within Neural Network Framework
csandit
Ā 
VOLATILITY FORECASTING - A PERFORMANCE MEASURE OF GARCH TECHNIQUES WITH DIFFE...
VOLATILITY FORECASTING - A PERFORMANCE MEASURE OF GARCH TECHNIQUES WITH DIFFE...VOLATILITY FORECASTING - A PERFORMANCE MEASURE OF GARCH TECHNIQUES WITH DIFFE...
VOLATILITY FORECASTING - A PERFORMANCE MEASURE OF GARCH TECHNIQUES WITH DIFFE...
ijscmcj
Ā 
Volatility forecasting a_performance_mea
Volatility forecasting a_performance_meaVolatility forecasting a_performance_mea
Volatility forecasting a_performance_mea
ijscmcj
Ā 
Dong Zhang's project
Dong Zhang's projectDong Zhang's project
Dong Zhang's projectDong Zhang
Ā 
An Efficient Genetic Algorithm for Solving Knapsack Problem.pdf
An Efficient Genetic Algorithm for Solving Knapsack Problem.pdfAn Efficient Genetic Algorithm for Solving Knapsack Problem.pdf
An Efficient Genetic Algorithm for Solving Knapsack Problem.pdf
Nancy Ideker
Ā 
Historical Simulation with Component Weight and Ghosted Scenarios
Historical Simulation with Component Weight and Ghosted ScenariosHistorical Simulation with Component Weight and Ghosted Scenarios
Historical Simulation with Component Weight and Ghosted Scenariossimonliuxinyi
Ā 
A Tabu Search Heuristic For The Generalized Assignment Problem
A Tabu Search Heuristic For The Generalized Assignment ProblemA Tabu Search Heuristic For The Generalized Assignment Problem
A Tabu Search Heuristic For The Generalized Assignment Problem
Sandra Long
Ā 
Garch Models in Value-At-Risk Estimation for REIT
Garch Models in Value-At-Risk Estimation for REITGarch Models in Value-At-Risk Estimation for REIT
Garch Models in Value-At-Risk Estimation for REIT
IJERDJOURNAL
Ā 
The Odd Generalized Exponential Log Logistic Distribution
The Odd Generalized Exponential Log Logistic DistributionThe Odd Generalized Exponential Log Logistic Distribution
The Odd Generalized Exponential Log Logistic Distribution
inventionjournals
Ā 
Abrigo and love_2015_
Abrigo and love_2015_Abrigo and love_2015_
Abrigo and love_2015_
Murtaza Khan
Ā 
Multinomial Logistic Regression.pdf
Multinomial Logistic Regression.pdfMultinomial Logistic Regression.pdf
Multinomial Logistic Regression.pdf
AlemAyahu
Ā 
Applications and Analysis of Bio-Inspired Eagle Strategy for Engineering Opti...
Applications and Analysis of Bio-Inspired Eagle Strategy for Engineering Opti...Applications and Analysis of Bio-Inspired Eagle Strategy for Engineering Opti...
Applications and Analysis of Bio-Inspired Eagle Strategy for Engineering Opti...
Xin-She Yang
Ā 
MUMS: Bayesian, Fiducial, and Frequentist Conference - Generalized Probabilis...
MUMS: Bayesian, Fiducial, and Frequentist Conference - Generalized Probabilis...MUMS: Bayesian, Fiducial, and Frequentist Conference - Generalized Probabilis...
MUMS: Bayesian, Fiducial, and Frequentist Conference - Generalized Probabilis...
The Statistical and Applied Mathematical Sciences Institute
Ā 
MUMS: Bayesian, Fiducial, and Frequentist Conference - Model Selection in the...
MUMS: Bayesian, Fiducial, and Frequentist Conference - Model Selection in the...MUMS: Bayesian, Fiducial, and Frequentist Conference - Model Selection in the...
MUMS: Bayesian, Fiducial, and Frequentist Conference - Model Selection in the...
The Statistical and Applied Mathematical Sciences Institute
Ā 
Metaheuristic Optimization: Algorithm Analysis and Open Problems
Metaheuristic Optimization: Algorithm Analysis and Open ProblemsMetaheuristic Optimization: Algorithm Analysis and Open Problems
Metaheuristic Optimization: Algorithm Analysis and Open Problems
Xin-She Yang
Ā 
Paper473
Paper473Paper473
Paper473
carlosceal
Ā 
SigOpt_Bayesian_Optimization_Primer
SigOpt_Bayesian_Optimization_PrimerSigOpt_Bayesian_Optimization_Primer
SigOpt_Bayesian_Optimization_PrimerIan Dewancker
Ā 

Similar to GARCH (20)

NONLINEAR EXTENSION OF ASYMMETRIC GARCH MODEL WITHIN NEURAL NETWORK FRAMEWORK
NONLINEAR EXTENSION OF ASYMMETRIC GARCH MODEL WITHIN NEURAL NETWORK FRAMEWORKNONLINEAR EXTENSION OF ASYMMETRIC GARCH MODEL WITHIN NEURAL NETWORK FRAMEWORK
NONLINEAR EXTENSION OF ASYMMETRIC GARCH MODEL WITHIN NEURAL NETWORK FRAMEWORK
Ā 
Nonlinear Extension of Asymmetric Garch Model within Neural Network Framework
Nonlinear Extension of Asymmetric Garch Model within Neural Network Framework Nonlinear Extension of Asymmetric Garch Model within Neural Network Framework
Nonlinear Extension of Asymmetric Garch Model within Neural Network Framework
Ā 
VOLATILITY FORECASTING - A PERFORMANCE MEASURE OF GARCH TECHNIQUES WITH DIFFE...
VOLATILITY FORECASTING - A PERFORMANCE MEASURE OF GARCH TECHNIQUES WITH DIFFE...VOLATILITY FORECASTING - A PERFORMANCE MEASURE OF GARCH TECHNIQUES WITH DIFFE...
VOLATILITY FORECASTING - A PERFORMANCE MEASURE OF GARCH TECHNIQUES WITH DIFFE...
Ā 
Volatility forecasting a_performance_mea
Volatility forecasting a_performance_meaVolatility forecasting a_performance_mea
Volatility forecasting a_performance_mea
Ā 
StatsModelling
StatsModellingStatsModelling
StatsModelling
Ā 
Dong Zhang's project
Dong Zhang's projectDong Zhang's project
Dong Zhang's project
Ā 
An Efficient Genetic Algorithm for Solving Knapsack Problem.pdf
An Efficient Genetic Algorithm for Solving Knapsack Problem.pdfAn Efficient Genetic Algorithm for Solving Knapsack Problem.pdf
An Efficient Genetic Algorithm for Solving Knapsack Problem.pdf
Ā 
Historical Simulation with Component Weight and Ghosted Scenarios
Historical Simulation with Component Weight and Ghosted ScenariosHistorical Simulation with Component Weight and Ghosted Scenarios
Historical Simulation with Component Weight and Ghosted Scenarios
Ā 
A Tabu Search Heuristic For The Generalized Assignment Problem
A Tabu Search Heuristic For The Generalized Assignment ProblemA Tabu Search Heuristic For The Generalized Assignment Problem
A Tabu Search Heuristic For The Generalized Assignment Problem
Ā 
Garch Models in Value-At-Risk Estimation for REIT
Garch Models in Value-At-Risk Estimation for REITGarch Models in Value-At-Risk Estimation for REIT
Garch Models in Value-At-Risk Estimation for REIT
Ā 
The Odd Generalized Exponential Log Logistic Distribution
The Odd Generalized Exponential Log Logistic DistributionThe Odd Generalized Exponential Log Logistic Distribution
The Odd Generalized Exponential Log Logistic Distribution
Ā 
Forecasting_GAMLSS
Forecasting_GAMLSSForecasting_GAMLSS
Forecasting_GAMLSS
Ā 
Abrigo and love_2015_
Abrigo and love_2015_Abrigo and love_2015_
Abrigo and love_2015_
Ā 
Multinomial Logistic Regression.pdf
Multinomial Logistic Regression.pdfMultinomial Logistic Regression.pdf
Multinomial Logistic Regression.pdf
Ā 
Applications and Analysis of Bio-Inspired Eagle Strategy for Engineering Opti...
Applications and Analysis of Bio-Inspired Eagle Strategy for Engineering Opti...Applications and Analysis of Bio-Inspired Eagle Strategy for Engineering Opti...
Applications and Analysis of Bio-Inspired Eagle Strategy for Engineering Opti...
Ā 
MUMS: Bayesian, Fiducial, and Frequentist Conference - Generalized Probabilis...
MUMS: Bayesian, Fiducial, and Frequentist Conference - Generalized Probabilis...MUMS: Bayesian, Fiducial, and Frequentist Conference - Generalized Probabilis...
MUMS: Bayesian, Fiducial, and Frequentist Conference - Generalized Probabilis...
Ā 
MUMS: Bayesian, Fiducial, and Frequentist Conference - Model Selection in the...
MUMS: Bayesian, Fiducial, and Frequentist Conference - Model Selection in the...MUMS: Bayesian, Fiducial, and Frequentist Conference - Model Selection in the...
MUMS: Bayesian, Fiducial, and Frequentist Conference - Model Selection in the...
Ā 
Metaheuristic Optimization: Algorithm Analysis and Open Problems
Metaheuristic Optimization: Algorithm Analysis and Open ProblemsMetaheuristic Optimization: Algorithm Analysis and Open Problems
Metaheuristic Optimization: Algorithm Analysis and Open Problems
Ā 
Paper473
Paper473Paper473
Paper473
Ā 
SigOpt_Bayesian_Optimization_Primer
SigOpt_Bayesian_Optimization_PrimerSigOpt_Bayesian_Optimization_Primer
SigOpt_Bayesian_Optimization_Primer
Ā 

GARCH

  • 1. Adaptive Quasi-Maximum Likelihood Estimation of GARCH models with Studentā€™s t Likelihood 1 Xiaorui Zhu 2, Li Xie3 Abstract This paper proposes an adaptive quasi-maximum likelihood estimation when forecast- ing the volatility of ļ¬nancial data with the generalized autoregressive conditional heteroscedas- ticity(GARCH) model. When the distribution of volatility data is unspeciļ¬ed or heavy-tailed, we worked out adaptive quasi-maximum likelihood estimation based on data by using the scale parameter Ī·f to identify the discrepancy between wrongly speciļ¬ed innovation density and the true innovation density. With only a few assumptions, this adaptive approach is consistent and asymptotically normal. Moreover, it gains better eļ¬ƒciency under the condition that innovation error is heavy-tailed. Finally, simulation studies and an application show its advantage. Keywords quasi likelihood, GARCH Model, adaptive estimator, heavy-tailed error JEL Classiļ¬cation: C13; C22 1 Introduction With the development of derivatives, volatility has been a crucial variable in not only modeling ļ¬nancial data, but also designing trading strategies and implementing risk man- agement. Among various models of analysis of volatility, GARCH(generalized autoregres- sive conditional heteroscedasticity) model is a well-known and useful one. It was proposed by Bollerslev(1986) as follows: ļ£± ļ£² ļ£³ ut = Ļƒt|tāˆ’1Īµt Ļƒ2 t|tāˆ’1 = Ļ‰ + āˆ‘p i=1 Ī±iu2 tāˆ’i + āˆ‘q j=1 Ī²jĻƒ2 tāˆ’j (1) Primarily, the estimation of ARCH/GARCH model is based on the maximum likeli- hood estimation(MLE) when the innovation subjects to conditional Gaussian distribution. However, if the distribution of innovation Īµt is not normal, as is prevalent in plenty of em- pirical data, quasi-maximum likelihood estimation would be more suitable. At ļ¬rst, plenty of literature discussed Gaussian quasi-maximum likelihood estimation when the innovation distribution is not normal. Weissā€™s(1986) [19] research showed that even under special con- dition that the data is un-normalized and have ļ¬nite fourth moments, the Gaussian-MLE is consistent asymptotically normal. After that, Bollerslev(1986) [1], Hsieh(1989) [12], and Nelson(1991) [14] proposed the issue of parameter estimation by generalized Gaussian quasi-maximum likelihood estimation(GQMLE) when the innovation distribution is not normal, and has also derived the consistency and eļ¬ƒciency of this method. Bougerol and Picard(1992) [2] discussed the necessary stationarity and ergodicity of GARCH models. To 1:We appreciate all the helpful suggestions from the editor and the reviewers, thoughtful comments from A.P.Gaorong Li and Yuyang Zhang, and ļ¬nancial support from the National Natural Science Foundation (Grant No.11171011) and the National Social Science Foundation(Grant No.13BGL007). 2:College of Applied Sciences, Beijing University of Technology, Pingleyuan 100, Chaoyang District, Bei- jing, 100124, China. E-mail: xiaorui.zhu@emails.bjut.edu.cn 3:College of Applied Sciences, Beijing University of Technology, Pingleyuan 100, Chaoyang District, Bei- jing, 100124, China. E-mail:xieli@bjut.edu.cn
  • 2. 2 get an estimation when the innovation distribution is unknown, Elie and Jeantheau(1995) proposed the Gaussian quasi-maximum likelihood estimator that is consistent and asymp- totically normal. Besides, there are also crucial achievements of Gaussian QMLE in recent years. Berkes, Horvath and Kokoszka (2003) [11] studied the structure of a GARCH(p,q) and proved the consistency and asymptotic normality of the QMLE under mild conditions. Strong consistency and asymptotic normality of QMLE were also proved in the study of Francq and Zakoian (2004) [9]. Other researches that involve improving Gaussian QMLE include: Engle and Gonzalez- Rivera(1991) [5] published a procedure that can improve the eļ¬ƒciency of GQMLE. Drost and Klaassen (1997) [4] put forward the adaptive estimation in ARCH model. Sun and Stengos (2006) [18] proposed adaptive two-step semi-parametric procedures on the condi- tion of symmetric error or asymmetric error separately. A Self-weighted and local QMLE for ARMA-GARCH models has been discussed in the study of Ling (2007) [13]. With the developing of quasi-maximum likelihood estimation, some non-gaussian QM- LEs were proposed to improve the estimator when the innovations are heavy-tailed or skewed. Xiu(2010) [21] discussed quasi-maximum likelihood estimation of stochastic volatil- ity model with high frequency data. OssandĀ“on and Bahamonde(2011) [16] proposed a novel estimation for GARCH models based on the Extended Kalman Filter(EKF). Zhu(2012) [22] put forward a mixed portmanteau test for ARMA-GARCH model by quasi-maximum likelihood estimator. Francq et al.(2011) [8] has developed a two-stage non-Gaussian quasi-maximum likelihood estimation to rectify the value of parameter es- timation. This procedure allows the use of generalized Gaussian likelihood and proposes a test that can determine whether the more eļ¬ƒcient Quasi-MLE is required with a non- Gaussian density. Other notable achievements lay in that the Studentā€™s t likelihood func- tion has been taken into consideration, which is called three-step non-Gaussian Quasi-MLE approach in the study of Fan et al.(2013) [6]. Because the Pearsonā€™s Type IV(PIV) distri- bution can capture a large range of the asymmetry and leptokurtosis of innovation error, Zhu and Li (2014) [23] have proposed a novel Pearson-type QMLE of GARCH(p,q) models to capture not only the heavy-tailed but also the skewed innovations. This article focuses on adaptive analyzing procedure which will increase eļ¬ƒciency of the estimator of the GARCH model, and proposes an adaptive procedure of QMLE aiming at minimizing the discrepancy between true and speciļ¬ed distribution of innovation. The scale parameter Ī·f is built in the sense of Kullback-Leibler Information Criterion (KLIC). With the Adaptive-QMLE procedure can not only ļ¬nd the approximate degree of freedom of innovation distribution, but also get the optimized quasi-maximum likelihood estimator after several iterations. This general estimation doesnā€™t rely on models, so it may be used in other general models. And the idea of Adaptive-QMLE can also be used in other methods such as GQMLE, NGQMLE and PQMLE. The simulation studies conļ¬rm that convergence rate of this adaptive QMLE procedure is very high, especially when the innovation is heavy-tailed. It performes well with high frequency data when the empirical distribution of innovations is often heavy-tailed. This paper is organized as follows. In Section 2, we introduce the GARCH model and quasi-maximum likelihood estimation. 
In Section 3, we describe the assumptions, propositions of our new adaptive quasi-maximum likelihood estimation, where we also explain the details and proofs of this procedure. Some simulation studies are provieded in Section 4. Real data analyses are shown in Section 5. The Section 6 concludes this paper.
  • 3. 3 2 Quasi-MLE in GARCH model 2.1 The GARCH Model Common type of GARCH(p,q) model has been shown in the Section 1. Let Īø = (Ļ‰, āƒ—Ī±ā€², āƒ—Ī²ā€²)ā€² be the unknown parameters of GARCH(p,q), where āƒ—Ī± = (Ī±1, Ā· Ā· Ā· , Ī±p)ā€², āƒ—Ī² = (Ī²1, Ā· Ā· Ā· , Ī²q)ā€² are the heteroscedastic parameters. {Īµt, āˆ’āˆž < t < āˆž} are innovation of model. Ī˜ āˆˆ R1+p+q o is the parameter space and Ro = [0, āˆž). The following assumptions for GARCH model are necessary: Assumption 1. (i)The GARCH process {ut} is strictly stationary and ergodic. (ii)For each Īø āˆˆ Ī˜, Ī±(z) and Ī²(z) have no common root, Ī±(1) Ģø= 0, Ī±p + Ī²q Ģø= 0 and āˆ‘q j=1 Ī²j < 1, where Ī±(z) = āˆ‘p i=1 Ī±izi and Ī²(z) = 1 āˆ’ āˆ‘q j=1 Ī²jzj. (iii)Īµt is a nondegenerate and i.i.d random variable with EĪµt = 0, EĪµ2 t = 1 and un- known density g(Ā·). The stationarity and ergodicity of GARCH models in Assumption 1(i) can be found in Bougerol and Picard(1992) [2]. The identiļ¬ability conditions for GARCH(p,q) are given in Berkes, Horvath and Kokoszka (2003) [11]. For a general GARCH model the conditional variance Ļƒ2 cannot be expressed in terms of a ļ¬nite number of the past observations utāˆ’1, utāˆ’2.... In many researches, conditional variance ĖœĻƒ2 t is [7]: ĖœĻƒ2 t = Ļ‰ 1 āˆ’ Ī£q j=1Ī±j + pāˆ‘ i=1 Ī±iu2 tāˆ’i + pāˆ‘ i=1 Ī±j āˆžāˆ‘ k=1 qāˆ‘ j1=1 Ā· Ā· Ā· qāˆ‘ jk=1 Ī²j1 Ā· Ā· Ā· Ī²jk u2 tāˆ’iāˆ’j1āˆ’Ā·Ā·Ā·āˆ’jk (2) By doing so, {ĖœĻƒ2 t } in (2) is a function of sample uāˆ— tāˆ’1 = {us, āˆ’āˆž < s ā‰¤ t āˆ’ 1}. 2.2 Quasi-Maximum Likelihood Estimation Fortunately, it is easy to derive the likelihood function of a GARCH model with normal error. Under the Eu2 t < āˆž assumption, the log-likelihood function of the GARCH model is as follow: L(Ļ‰, āƒ—Ī±, āƒ—Ī²) = āˆ’ n 2 log(2Ļ€) āˆ’ 1 2 nāˆ‘ t=1 {log(Ļƒ2 t|tāˆ’1) + u2 t Ļƒ2 t|tāˆ’1 } (3) Under mild conditions that Īµt isnā€™t speciļ¬ed as standard normal, Berkes, Horvath and Kokoszka (2003) [11] proved the consistency and asymptotic normality of quasi-MLE. Apart from normal distribution, studentā€™s t-distributions and generalized Gaussian dis- tributions are considered frequently. The deduction of the generalized quasi-maximum likelihood estimator of GARCH model is as follows: Ė†Īø = argmaxĪø 1 2 nāˆ‘ t=1 {āˆ’ log(Ļƒt|tāˆ’1) + log g( ut Ļƒt )} (4) In this paper, we consider innovation is standardized t-distribution, so the true prob- ability density function of {Īµt, āˆ’āˆž < t < āˆž} which is the g(Ā·) in the above formula is: g(x) = Ī“((Ī½ + 1)/2) (Ļ€Ī½)1/2Ī“(Ī½/2) (1 + x2 Ī½ )āˆ’ (Ī½+1) 2 (5) where:Ī½ > 0 may be treated as continuous parameter.
  • 4. 4 3 Adaptive Quasi Maximum likelihood Estimation If the true innovation distribution cannot be speciļ¬ed, gaussian quasi-maximum likeli- hood estimation(GQMLE) is always inconsistent as shown in Newey and Steigerwald(1997) [15]. Weiss(1984), Lee and Hansen(1994) study the asymptotic distributions of GQMLE. Francq et al.(2011) [8] proposes a two-stage GQMLE that can improve eļ¬ƒciency of esti- mator. Based on a three step quasi-maximum likelihood estimation, Fan et al.(2013) [6] derived the asymptotic theory of non-GQMLE when heavy-tailed studentā€™s t distribution is taken into consideration. These methods need to specify the degree of freedom Ī½(t distribution) or the parameter r of the Generalized Error Distribution(GED(r)). And they adjust estimator one time when the innovation distribution is heavy-tailed. However, they couldnā€™t totaly capture the heavy-tail characteristic. Therefore we put forward the adaptive QMLE procedure by choosing the optimized quasi likelihood function at mini- mal divergence between quasi innovation density f and the true innovation distribution density g. This iterative procedure will gain better eļ¬ƒciency than other methods when the distribution of innovation is heavy-tailed or unknown. 3.1 KLIC and Scale Parameter In order to measure the divergence between true innovation density g and speciļ¬ed likelihood function f, Kullback-Leibler divergence is necessary: I(g; f) = āˆ« [log g(u)]g(u)du āˆ’ āˆ« [log f(u)]g(u)du (6) The scale parameter Ī·f we use was proposed by White(1982) [20] and Fan et al. [6] as: Ī·f = arg maxĪ·>0E{āˆ’ log Ī· + log f( Īµ Ī· )}, (7) where Ī·f can be computed by using maximum likelihood estimation. Let W(Ī·) = E{āˆ’ log Ī· + log f(Īµ Ī· )}. In order to derive the consistency property of Ė†Īø, another assumption is needed as follows: Assumption 2. The quasi likelihood is chose from t-distribution family in (5) such that W(Ī·) has a unique maximmizer Ī·f > 0. This assumption helps our ļ¬nd the optimal likelihood within the t-distribution family that best captured the heavy-tailed characteristic of innovation than others. In other words, this assumption and the proposition 1 below will determine the best degree of freedom in adaptive-QMLE, and the best likelihood function will make adaptive-QMLE have better eļ¬ƒciency. And the scale parameter Ī·f had a crucial proposition: Proposition 1. If f āˆ exp(āˆ’x2/2) or f = g, then Ī·f = 1. Proof. [6] Deļ¬ne the likelihood ratio function G (Ī·) = E ( log ( f(Īµ/Ī·) Ī·Ā·f(Īµ) )) . Suppose G(Ī·) has no local extremal values. And since log(x) ā‰¤ 2( āˆš x āˆ’ 1), E ( log ( f (Īµ/Ī·) Ī· Ā· f (Īµ) )) ā‰¤ 2E (āˆš f (Īµ/Ī·) Ī· Ā· f (Īµ) āˆ’ 1 ) = 2 āˆ« +āˆž āˆ’āˆž āˆš 1 Ī· f ( x Ī· ) f (x) dx āˆ’ 2 ā‰¤ āˆ’ āˆ« +āˆž āˆ’āˆž (āˆš 1 Ī· f ( x Ī· ) āˆ’ āˆš f(x) )2 dx ā‰¤ 0 3.2 Adaptive QMLE
3.2 Adaptive QMLE

We now propose the adaptive quasi-maximum likelihood estimation procedure for the parameters of the GARCH model. Based on Proposition 1, the method approximates the degree of freedom of the quasi-likelihood, and quasi-maximum likelihood estimation with this approximate degree of freedom gains efficiency significantly. A schematic implementation of steps (a)-(d) is sketched at the end of this subsection.

(a) First, estimate θ̃^(0) (the superscript in parentheses indexes the iteration) by GQMLE under the normality assumption:

    \tilde{\theta}^{(0)} = \arg\max_{\theta}\; -\frac{1}{2}\sum_{t=1}^{n}\Big\{\log(\sigma_{t|t-1}^2) + \frac{u_t^2}{\sigma_{t|t-1}^2}\Big\}     (8)

(b) The {ε_t} needed to calculate η_f are replaced by the residuals ε̃_t^(0) = u_t / σ̃_{t|t−1}(θ̃^(0)) computed from the Gaussian QML estimator θ̃^(0) of step (a). Then η̃_f^(1) is obtained from

    \tilde{\eta}_f^{(1)} = \arg\max_{\eta > 0} \frac{1}{n}\sum_{t=1}^{n}\Big\{-\log\eta + \log f\Big(\frac{\tilde{\varepsilon}_t^{(0)}}{\eta}\Big)\Big\}     (9)

by changing the degree of freedom df_1 of the Student's t density f(·) until |η̃_f^(1) − 1| ≤ δ (δ is chosen by the user). In other words, this step finds the approximate density f_(df_1)(·) of ε̃_t^(0) in the sense of Proposition 1. We can then obtain θ̃^(1) under the new density f_(df_1), which is more reliable for the true innovations:

    \tilde{\theta}^{(1)} = \arg\max_{\theta} \sum_{t=1}^{n}\Big\{-\log(\sigma_{t|t-1}) + \log f_{(df_1)}\Big(\frac{u_t}{\sigma_{t|t-1}}\Big)\Big\}     (10)

(c) Applying the same procedure as in (b), we obtain the residuals {ε̃_t^(1) = u_t / σ̃_{t|t−1}}, the scale η̃_f^(2) from formula (9), and f_(df_2)(·) under the condition |η̃_f^(2) − 1| ≤ δ. Then estimate θ̃^(2) with the likelihood function f_(df_2), and so on.

(d) Finally, the adaptive estimator is θ̂ = θ̃^(n), where the iteration stops once |θ̃^(n) − θ̃^(n−1)| < λ.

Usually we set δ at about 0.1 and λ below 0.5; the smaller δ and λ are, the more iterations are needed. The adaptive QMLE is consistent:

THEOREM 3.1. [6] Suppose that Assumptions 1(i)-(iii) and 2 hold. Then θ̂ → θ_0 in probability.

Remark 1. The adaptive QMLE requires a finite fourth moment of the innovations because we simply adopt the GQMLE in the first step; the finite fourth moment condition is essential for asymptotic normality. In future studies we will use alternative estimators in the first step to remove this condition. The condition Eε_t^2 = 1 ensures that Proposition 1 can be used in this procedure. This paper discusses the case Eε_t^2 < ∞; when Eε_t^2 = ∞, many estimators have been studied, such as Chen and Zhu (2014) [3], Hill (2014) [10], and Peng and Yao (2003) [17].

This procedure is helpful when the family of the innovation distribution is known. With the help of the KLIC, the adaptive QMLE increases the efficiency of the estimator and decreases the discrepancy between the true innovation density and the specified innovation density. It is therefore a better method for handling varied situations and a wide variety of data.
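The following is a schematic sketch of steps (a)-(d) for a GARCH(1,1) model. It reuses cond_variance, log_t_density and eta_f_hat from the earlier sketches; the df grid, optimizer, parameter bounds, starting values and iteration cap are illustrative choices rather than the paper's specification.

```python
import numpy as np
from scipy.optimize import minimize

def fit_garch_t(u, nu, theta0=(0.05, 0.3, 0.3)):
    """Maximize the quasi-log-likelihood (4) with a Student-t quasi-density of
    df nu; nu = np.inf is treated as the Gaussian case of step (a), eq. (8)."""
    def neg_loglik(theta):
        omega, alpha, beta = theta
        sigma2 = cond_variance(u, omega, alpha, beta)
        z = u / np.sqrt(sigma2)
        if np.isinf(nu):
            return np.sum(0.5 * np.log(sigma2) + 0.5 * z ** 2)
        return np.sum(0.5 * np.log(sigma2) - log_t_density(z, nu))
    bounds = [(1e-6, None), (1e-6, 1.0), (1e-6, 1.0)]
    return minimize(neg_loglik, theta0, bounds=bounds, method="L-BFGS-B").x

def adaptive_qmle(u, df_grid=(2, 2.5, 3, 4, 5, 7, 10, 15, 20),
                  delta=0.1, lam=0.5, max_iter=10):
    theta = fit_garch_t(u, np.inf)                    # step (a): Gaussian QMLE
    df = np.inf
    for _ in range(max_iter):
        eps = u / np.sqrt(cond_variance(u, *theta))   # steps (b)/(c): residuals
        etas = {nu: eta_f_hat(eps, nu) for nu in df_grid}
        within = [nu for nu in etas if abs(etas[nu] - 1.0) <= delta]
        # prefer a df with |eta_f - 1| <= delta; otherwise take the closest one
        df = min(within or etas, key=lambda nu: abs(etas[nu] - 1.0))
        theta_new = fit_garch_t(u, df, theta0=theta)
        if np.max(np.abs(theta_new - theta)) < lam:   # step (d): stopping rule
            return theta_new, df
        theta = theta_new
    return theta, df
```

Under Proposition 1 the selected df should settle near the true degree of freedom, which is the behaviour illustrated in Table 1.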
4 Simulation Studies

Figure 1 shows the variation of η_f. Each line corresponds to a fixed distribution of {ε_t} in formula (9), as noted in the upper left, and the horizontal axis is the degree of freedom of the quasi-likelihood. We compare four Student's t distributions and three generalized Gaussian distributions with different shape parameters. When the degree of freedom of the quasi-likelihood is larger (smaller) than the degree of freedom of {ε_t}, we find η_f > 1 (η_f < 1), and η_f is approximately equal to 1 when the specified quasi-likelihood coincides with the true innovation density. For example, on the bold line in Figure 1, where the innovations are t2 and the quasi-likelihood is the Student's t density with two degrees of freedom, η_f is approximately equal to 1. The same pattern occurs on every line.

To show the advantages of the adaptive QMLE clearly, we consider an ordinary GARCH(1,1) model with true parameters (ω, α_1, β_1) = (0.02, 0.6, 0.3); a sketch of how such series can be generated is given at the end of this section. With the bound |η̃_f − 1| ≤ 0.2, the adaptive estimator converges after several iterations, as shown in Table 1. The innovation distribution ranges from the thin-tailed t20, which is approximately normal, to the heavy-tailed t2, and we compare sample sizes of 500, 1000 and 2000. The "Step" column displays the order of the iterations; the "df" column gives the degree of freedom of the Student's t quasi-likelihood found under the bound |η̃_f − 1| ≤ 0.2; "df = Gauss." indicates the Gaussian assumption on the innovation distribution. When the innovations are heavy-tailed, such as t2 or t3, the estimator based on the Gaussian assumption is intolerable, whereas the adaptive quasi-maximum likelihood estimators are markedly better, and the adaptive procedure approximates both the true degree of freedom and the true parameters.

In Table 2 we use the same GARCH(1,1) setting as in Table 1. The sample size varies among 250, 500 and 1000, and each configuration is replicated 500 times. The innovations follow Student's t distributions with degrees of freedom ranging from heavy-tailed to thin-tailed. Three things need to be emphasized in Table 2. First, NGQMLE is the three-step estimation procedure proposed by Fan et al. (2013) [6], with auxiliary innovation distribution t4. Second, GQMLE is maximum likelihood estimation under the assumption that the innovations are normal. Third, MLE is ordinary maximum likelihood estimation with the true innovation distribution, which is set at the beginning of the simulations.

In Table 2, the Gaussian quasi-maximum likelihood estimators are always poor when the innovations are heavy-tailed, such as t2 or t3. In particular, when the innovations are t2 or t3 the fourth moment does not exist and the RMSE of ω under GQMLE is intolerable, so the relative RMSE ratios of the other estimators against GQMLE are meaningless for that parameter and are not displayed in Table 2. Nevertheless, when the innovations are heavy-tailed the adaptive QMLE is better than the other two estimators, NGQMLE and GQMLE: in the table, the relative RMSE ratios of the A-QML estimator against GQMLE are smaller than 1.
Meanwhile, the relative RMSE ratios show that the adaptive QMLE also outperforms NGQMLE. Table 2 further shows that most relative RMSE ratios of A-QMLE against GQMLE are close to those of the MLE, which means that A-QMLE can serve as the best alternative estimator when the innovation distribution is unknown. On the other hand, the approximate degrees of freedom of the true innovations, reported in the "dfB" column, are close to the exact degrees of freedom. This evidence implies that the adaptive QMLE, no matter whether the tail of the innovation distribution is heavy or thin, is close to the maximum likelihood estimator based on the true innovation distribution.
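For completeness, the following is one possible way (assumed, not the authors' code) to generate series like those used in Tables 1 and 2: a GARCH(1,1) with (ω, α_1, β_1) = (0.02, 0.6, 0.3) and Student's t innovations rescaled to unit variance, which requires ν > 2; the function name, burn-in length and seed are illustrative.

```python
import numpy as np

def simulate_garch11_t(n, omega=0.02, alpha=0.6, beta=0.3, nu=3, burn=500, seed=0):
    """Simulate a GARCH(1,1) series with unit-variance Student-t innovations (nu > 2)."""
    rng = np.random.default_rng(seed)
    eps = rng.standard_t(nu, size=n + burn) / np.sqrt(nu / (nu - 2))  # E[eps^2] = 1
    u = np.empty(n + burn)
    sigma2 = omega / (1 - alpha - beta)        # start at the unconditional variance
    for t in range(n + burn):
        u[t] = np.sqrt(sigma2) * eps[t]
        sigma2 = omega + alpha * u[t] ** 2 + beta * sigma2
    return u[burn:]                            # discard the burn-in period
```

A series generated this way can be passed directly to the adaptive_qmle sketch of Section 3.2.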
5 Application

In this section we first summarize the daily returns of six stock indexes, namely the S&P500, FTSE, NASDAQ, CAC, DAX and HSI, over the period from January 2, 2000 to March 31, 2014. Table 3 reports the number of observations, the mean, the standard deviation and the excess kurtosis of the daily returns. The excess kurtosis of every index is greater than zero, which means that all of them are heavy-tailed. Hence, when one wants to analyze stock returns with a GARCH model, the adaptive QMLE procedure is not only necessary but also helpful. Second, we use the adaptive QMLE procedure to estimate the parameters of the GARCH model. Table 4 displays the GARCH(1,1) estimates obtained by quasi-maximum likelihood estimation and by adaptive quasi-maximum likelihood estimation; in Table 4, "dfB" denotes the approximate degree of freedom of the Student's t density. The results in Table 4 show that, although the adaptive QML estimates differ slightly from the quasi-maximum likelihood estimates, we obtain an approximate degree of freedom for each index. These estimated degrees of freedom reflect the same heavy-tailed characteristics of the data seen in Table 3: the S&P500, NASDAQ and HSI have larger kurtosis, so their approximate degrees of freedom are smaller than those of the other three indexes; in other words, their tails are heavier.

6 Conclusions

This article focuses on improving the efficiency of the estimator of the GARCH model when the innovation distribution is unknown, and proposes the adaptive QMLE, which gains better efficiency. By using η_f = 1 in the sense of the KLIC, which identifies the quasi-likelihood f, the adaptive QMLE is stable no matter whether the tail of the innovation distribution is heavy or not. Most importantly, without specifying the innovation distribution, the adaptive QMLE is very close to the MLE based on the true innovation distribution. Hence it is helpful and accurate when the innovation distribution is unknown, especially when the innovations are heavy-tailed, and it is a general quasi-maximum likelihood estimation that can be used in many situations, for example in finance or genetics. Possible extensions of the adaptive QMLE include introducing other innovation distributions and considering more models.
List of Figures and Tables

Figure 1: Variations of η_f across Student's-t likelihood QMLE.
[Figure: η_f (vertical axis, roughly 0.6 to 1.4) plotted against the degree of freedom of the quasi-likelihood (horizontal axis, 2 to 10), with one line for each innovation distribution t2, t4, t6, t10.]
Table 1: Some samples to show the convergence of Adaptive-QMLE

Innov. Step |         N = 500             |         N = 1000            |         N = 2000
            |   ω      α      β     df    |   ω      α      β     df    |   ω      α      β     df
t2     S1   | 2.571  0.387  0.613  Gauss. | 0.386  0.267  0.733  Gauss. | 0.984  0.336  0.663  Gauss.
t2     S2   | 1.748  0.576  0.420  6      | 0.196  0.463  0.536  20     | 0.191  0.568  0.432  11
t2     S3   | 0.022  0.618  0.365  2.5    | 0.069  0.620  0.375  5      | 0.042  0.649  0.349  4
t2     S4   | 0.013  0.627  0.312  2.0    | 0.033  0.656  0.326  3      | 0.032  0.679  0.317  3
t2     S5   | 0.013  0.605  0.305  1.8    | 0.027  0.643  0.312  2.5    | 0.032  0.679  0.317  3
t2     S6   |                             | 0.027  0.643  0.312  2.5    |
t3     S1   | 0.000  0.758  0.298  Gauss. | 0.029  0.709  0.285  Gauss. | 0.371  0.540  0.459  Gauss.
t3     S2   | 0.015  0.686  0.269  2.8    | 0.021  0.656  0.309  3.8    | 0.040  0.644  0.356  20
t3     S3   | 0.016  0.690  0.274  3      | 0.020  0.643  0.302  3.5    | 0.302  0.678  0.321  9
t3     S4   | 0.016  0.690  0.274  3      |                             | 0.027  0.692  0.307  7
t3     S5   |                             |                             | 0.025  0.701  0.298  6
t3     S6   |                             |                             | 0.024  0.707  0.293  5.5
t5     S1   | 0.014  0.666  0.249  Gauss. | 0.022  0.608  0.220  Gauss. | 0.018  0.416  0.356  Gauss.
t5     S2   | 0.012  0.542  0.328  4      | 0.019  0.606  0.248  3      | 0.039  0.619  0.378  20
t5     S3   | 0.012  0.542  0.328  4      | 0.019  0.606  0.248  3      | 0.032  0.632  0.364  15
t5     S4   |                             |                             | 0.029  0.637  0.357  10
t5     S5   |                             |                             | 0.028  0.641  0.351  5
t10    S1   | 0.082  0.689  0.302  Gauss. | 0.096  0.639  0.359  Gauss. | 0.087  0.755  0.243  Gauss.
t10    S2   | 0.018  0.579  0.354  20     | 0.027  0.706  0.272  20     | 0.023  0.750  0.223  20
t10    S3   | 0.017  0.566  0.356  15     | 0.026  0.703  0.267  15     | 0.023  0.750  0.223  20
t10    S4   | 0.017  0.566  0.356  15     | 0.025  0.695  0.261  11     |
t10    S5   |                             | 0.024  0.684  0.256  10     |
t10    S6   |                             | 0.023  0.675  0.254  9      |
t20    S1   | 0.056  0.613  0.382  Gauss. | 0.080  0.735  0.261  Gauss. | 0.078  0.582  0.291  Gauss.
t20    S2   | 0.017  0.537  0.365  20     | 0.024  0.630  0.253  20     | 0.017  0.623  0.306  20
t20    S3   | 0.017  0.537  0.365  20     | 0.024  0.630  0.253  20     | 0.017  0.623  0.306  20

a True parameters (ω, α_1, β_1) = (0.02, 0.6, 0.3).
Table 2: Relative RMSE ratio comparison with Student's t innovation

Para  ε     T      dfB      A-tQMLE  NGQMLE  GQMLE   MLE
ω     t5    250    7.807    0.084    0.094   0.157   0.059
            500    7.555    0.064    0.055   0.139   0.044
            1000   7.825    0.051    0.031   0.123   0.032
ω     t10   250    11.27    0.148    0.282   0.060   0.130
            500    11.961   0.126    0.107   0.055   0.087
            1000   10.752   0.137    0.128   0.044   0.078
ω     t20   250    13.292   0.158    0.180   0.055   0.148
            500    14.151   0.141    0.679   0.048   0.123
            1000   14.545   0.127    0.075   0.044   0.084
α     t2    250    2.937    0.478    0.801   0.203   0.639
            500    2.699    0.408    0.601   0.218   0.490
            1000   2.644    0.339    0.495   0.254   0.351
α     t3    250    4.544    0.701    0.726   0.160   0.828
            500    4.010    0.584    0.667   0.140   0.733
            1000   4.223    0.647    0.763   0.125   0.747
α     t5    250    7.807    0.787    0.773   0.172   0.784
            500    7.555    0.653    0.683   0.173   0.557
            1000   7.825    0.596    0.561   0.151   0.502
α     t10   250    11.27    0.741    0.800   0.228   0.552
            500    11.961   0.665    0.749   0.224   0.394
            1000   10.752   0.648    0.740   0.249   0.264
α     t20   250    13.292   0.665    0.746   0.235   0.508
            500    14.151   0.684    0.796   0.220   0.421
            1000   14.545   0.667    0.781   0.226   0.293
β     t2    250    2.937    0.253    0.757   0.305   0.213
            500    2.699    0.172    0.642   0.317   0.136
            1000   2.644    0.098    0.562   0.367   0.087
β     t3    250    4.544    0.343    0.701   0.234   0.226
            500    4.010    0.188    0.630   0.260   0.195
            1000   4.223    0.152    0.679   0.258   0.155
β     t5    250    7.807    0.530    0.682   0.184   0.184
            500    7.555    0.358    0.528   0.171   0.353
            1000   7.825    0.252    0.505   0.170   0.267
β     t10   250    11.27    0.669    0.720   0.161   0.659
            500    11.961   0.477    0.488   0.148   0.450
            1000   10.752   0.288    0.359   0.156   0.270
β     t20   250    13.292   0.682    0.732   0.166   0.666
            500    14.151   0.521    0.566   0.150   0.497
            1000   14.545   0.405    0.418   0.131   0.394

a The GQMLE column reports the RMSE of the GQMLE; the other estimator columns report the relative RMSE ratios against the GQMLE. Almost all ratios are less than 1, which means these three estimators are better than the GQMLE.
Table 3: Summary of six stock indexes

Index (u_t)   n      mean     sd       kurtosis
S&P500        3581   0.0002   0.0132   7.9423
FTSE          3596   0.0001   0.0125   6.1308
NASDAQ        3582   0.0007   0.0302   7.2389
CAC           3632   0.0002   0.0414   4.9996
DAX           3633   0.0019   0.0458   4.4939
HSI           3548   0.0019   0.0471   8.2166

a The data are the six daily stock market return series from January 2, 2000 to March 31, 2014. The kurtosis in the table is the sample excess kurtosis.
Table 4: QMLE and Adaptive-tQMLE of GARCH(1,1) models

Index     Estimator   ω        α       β       dfB
S&P500    GQMLE       0.003    0.085   0.902
          AtQMLE      0.007    0.083   0.905   15
FTSE      GQMLE       0.013    0.098   0.892
          AtQMLE      0.021    0.082   0.891   30
NASDAQ    GQMLE       0.005    0.715   0.236
          AtQMLE      0.006    0.139   0.824   13
CAC       GQMLE       0.0002   0.084   0.906
          AtQMLE      0.0003   0.085   0.906   56
DAX       GQMLE       0.0002   0.085   0.905
          AtQMLE      0.0002   0.088   0.899   50
HSI       GQMLE       0.0002   0.064   0.912
          AtQMLE      0.0002   0.054   0.929   12

a The data are the six daily stock market return series from January 2, 2000 to March 31, 2014.
References

[1] Tim Bollerslev. Generalized autoregressive conditional heteroskedasticity. Journal of Econometrics, 31(3):307–327, 1986.
[2] Philippe Bougerol and Nico Picard. Stationarity of GARCH processes and of some nonnegative time series. Journal of Econometrics, 52(1):115–127, 1992.
[3] Min Chen and Ke Zhu. Sign-based portmanteau test for ARCH-type models with heavy-tailed innovations. Working paper, 2014.
[4] Feike C. Drost, Chris A. J. Klaassen, and Bas J. M. Werker. Adaptive estimation in time-series models. The Annals of Statistics, 25(2):786–817, 1997.
[5] Robert F. Engle and Gloria Gonzalez-Rivera. Semiparametric ARCH models. Journal of Business & Economic Statistics, 9(4):345–359, 1991.
[6] Jianqing Fan, Lei Qi, and Dacheng Xiu. Quasi maximum likelihood estimation of GARCH models with heavy-tailed likelihoods. Journal of Business & Economic Statistics, just-accepted, 2013.
[7] Jianqing Fan and Qiwei Yao. Nonlinear Time Series: Nonparametric and Parametric Methods. Springer, 2003.
[8] Christian Francq, Guillaume Lepage, and Jean-Michel Zakoïan. Two-stage non-Gaussian QML estimation of GARCH models and testing the efficiency of the Gaussian QMLE. Journal of Econometrics, 165(2):246–257, 2011.
[9] Christian Francq and Jean-Michel Zakoïan. Maximum likelihood estimation of pure GARCH and ARMA-GARCH processes. Bernoulli, 10(4):605–637, 2004.
[10] Jonathan B. Hill. Robust estimation and inference for heavy tailed GARCH. Unpublished manuscript, Department of Economics, University of North Carolina, 2014.
[11] István Berkes, Lajos Horváth, and Piotr Kokoszka. GARCH processes: structure and estimation. Bernoulli, 9(2):201–227, 2003.
[12] David A. Hsieh. Modeling heteroscedasticity in daily foreign-exchange rates. Journal of Business & Economic Statistics, 7(3):307–317, 1989.
[13] Shiqing Ling. Self-weighted and local quasi-maximum likelihood estimators for ARMA-GARCH/IGARCH models. Journal of Econometrics, 140(2):849–873, 2007.
[14] Daniel B. Nelson. Conditional heteroskedasticity in asset returns: a new approach. Econometrica: Journal of the Econometric Society, pages 347–370, 1991.
[15] Whitney K. Newey and Douglas G. Steigerwald. Asymptotic bias for quasi-maximum-likelihood estimators in conditional heteroskedasticity models. Econometrica: Journal of the Econometric Society, pages 587–599, 1997.
[16] Sebastián Ossandón and Natalia Bahamonde. On the nonlinear estimation of GARCH models using an extended Kalman filter. In Proceedings of the World Congress on Engineering, volume 1, 2011.
[17] Liang Peng and Qiwei Yao. Least absolute deviations estimation for ARCH and GARCH models. Biometrika, pages 967–975, 2003.
[18] Yiguo Sun and Thanasis Stengos. Semiparametric efficient adaptive estimation of asymmetric GARCH models. Journal of Econometrics, 133(1):373–386, 2006.
[19] Andrew A. Weiss. Asymptotic theory for ARCH models: estimation and testing. Econometric Theory, pages 107–131, 1986.
[20] Halbert White. Maximum likelihood estimation of misspecified models. Econometrica, 50(1):1–25, 1982.
[21] Dacheng Xiu. Quasi-maximum likelihood estimation of volatility with high frequency data. Journal of Econometrics, 159(1):235–250, 2010.
[22] Ke Zhu. A mixed portmanteau test for ARMA-GARCH models by the quasi-maximum exponential likelihood estimation approach. Journal of Time Series Analysis, 2012.
[23] Ke Zhu and Wai Keung Li. A new Pearson-type QMLE for conditionally heteroskedastic models. Working paper, 2014.