Volatility Forecasting - A Performance Measure of Garch Techniques With Diffe...ijscmcj
Volatility forecasting is an interesting and challenging topic in current financial instruments, as volatility is directly associated with profits. There are many risks and rewards directly associated with volatility, so forecasting it has become an indispensable topic in finance. The GARCH distributions play an important role in risk measurement and option pricing. The main motive of this paper is to measure the performance of GARCH techniques for forecasting volatility using different distribution models. We have used 9 variations of distribution models to forecast the volatility of a stock entity. The different GARCH distribution models observed in this paper are Std, Norm, SNorm, GED, SSTD, SGED, NIG, GHYP and JSU. Volatility is forecasted 10 days in advance and the values are compared with the actual values to find the best distribution model for volatility forecasting. From the results obtained, it has been observed that the GARCH model with the GED distribution has outperformed all other models.
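None of the abstracts above include code, but the GARCH(1,1) machinery they rely on is compact. The sketch below is illustrative and not taken from the paper (the parameter values and return series are made up): it filters the conditional variance through a return series and iterates the variance forecast 10 days ahead, matching the paper's forecast horizon. Changing the innovation distribution (Norm, GED, JSU, ...) alters how the parameters are estimated, not this recursion.

```python
import numpy as np

def garch11_variance(returns, omega, alpha, beta):
    """Filter the GARCH(1,1) conditional variance through a return series."""
    sigma2 = np.empty(len(returns))
    sigma2[0] = np.var(returns)  # initialise with the sample variance
    for t in range(1, len(returns)):
        sigma2[t] = omega + alpha * returns[t - 1] ** 2 + beta * sigma2[t - 1]
    return sigma2

def garch11_forecast(last_r, last_sigma2, omega, alpha, beta, horizon=10):
    """Iterate the variance forecast `horizon` days ahead:
    E[sigma2_{t+h}] = omega + (alpha + beta) * E[sigma2_{t+h-1}] for h >= 2."""
    forecasts = []
    s2 = omega + alpha * last_r ** 2 + beta * last_sigma2  # one-step-ahead
    for _ in range(horizon):
        forecasts.append(s2)
        s2 = omega + (alpha + beta) * s2
    return np.array(forecasts)

rng = np.random.default_rng(0)
r = rng.standard_normal(500) * 0.01  # placeholder returns, not real stock data
s2 = garch11_variance(r, omega=1e-6, alpha=0.08, beta=0.9)
fc = garch11_forecast(r[-1], s2[-1], 1e-6, 0.08, 0.9, horizon=10)
print(len(fc))  # 10 forecasted variances
```

Under alpha + beta < 1, the iterated forecast converges toward the long-run variance omega / (1 - alpha - beta), which is why persistence (alpha + beta close to 1) matters so much in practice.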
Investigation of Parameter Behaviors in Stationarity of Autoregressive and Mo...BRNSS Publication Hub
The most important assumption about time series and econometric data is stationarity. Therefore, this study focuses on the behaviour of some parameters in the stationarity of autoregressive (AR) and moving average (MA) models. Simulation studies were conducted using the R statistical software to investigate the parameter values at different orders (p of AR and q of MA) and different sample sizes. Once the stationarity status at each p and q was determined, parameters such as the mean, variance, autocorrelation function (ACF) and partial autocorrelation function (PACF) were computed. The study concluded that the absolute values of the ACF and PACF of AR and MA models increase as the parameter values increase but decrease as their orders increase, tending to zero at higher lag orders. This is most clearly observed at the large sample size (n = 300). However, their values decline as the sample size increases when compared by order across the sample sizes. Furthermore, it was observed that the mean values of first-order AR and MA models increased with increases in the parameter values but decreased as the sample size increased, tending to zero at large sample sizes, as did the variances.
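As a minimal illustration of the decay behaviour the abstract describes (the study used R; this is a Python sketch with made-up settings, not the study's code), one can simulate a stationary AR(1) and watch its sample ACF shrink toward zero with increasing lag:

```python
import numpy as np

def simulate_ar1(phi, n, seed=0):
    """Simulate a stationary AR(1) process x_t = phi * x_{t-1} + e_t."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.standard_normal()
    return x

def acf(x, nlags):
    """Sample autocorrelation function up to lag `nlags`."""
    x = x - x.mean()
    denom = np.dot(x, x)
    return np.array([np.dot(x[:-k], x[k:]) / denom if k else 1.0
                     for k in range(nlags + 1)])

x = simulate_ar1(phi=0.8, n=300)  # n = 300, the "large sample" in the study
rho = acf(x, nlags=10)
# For a stationary AR(1), the theoretical ACF is phi**k, decaying toward zero.
print(rho[:4])
```

With phi closer to 1 the decay is slower; with a larger sample the sample ACF tracks the theoretical phi**k curve more closely, which is the pattern the study reports.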
Mathematical models employing an autoregressive integrated moving average (ARIMA) have found very wide application following the work of Box and Jenkins in 1970, especially in time series analysis. ARIMA models have been very successful in financial forecasting, forming the basis of such things as predicting how much gas prices will rise. However, there is no mathematical requirement that the data be a time series: only equally spaced intervals for the independent variable are necessary. This can be achieved by binning data into standard ranges, such as income by $10,000 intervals. This paper reviews the fundamental statistical concepts of ARIMA models and applications of non-temporal ARIMA models in statistical research. Examples and applications are given in biostatistics, meteorology, and econometrics as well as astrostatistics.
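The binning step described above can be sketched in a few lines; the income figures and the $10,000 bin edges are hypothetical:

```python
import numpy as np

# Hypothetical incomes; binning into $10,000 intervals makes the "index"
# equally spaced, the only structural requirement for a non-temporal ARIMA.
incomes = np.array([12_500, 48_200, 33_100, 8_700, 27_300, 55_900, 41_000])
edges = np.arange(0, 70_000, 10_000)   # $10k bin edges: 0, 10k, ..., 60k
bins = np.digitize(incomes, edges)     # 1-based bin index per observation
mean_by_bin = {b: incomes[bins == b].mean() for b in np.unique(bins)}
print(sorted(mean_by_bin))
```

The per-bin means then form an equally spaced sequence to which ARIMA machinery can be applied, with the bin index playing the role normally played by time.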
Motivated by the problem of computing investment portfolio weightings, we investigate various methods of clustering as alternatives to traditional mean-variance approaches. Such methods can have significant benefits from a practical point of view, since they remove the need to invert a sample covariance matrix, which can suffer from estimation error and will almost certainly be non-stationary. The general idea is to find groups of assets which share similar return characteristics over time and treat each group as a single composite asset. We then apply inverse volatility weightings to these new composite assets. In the course of our investigation we devise a method of clustering based on triangular potentials, and we present associated theoretical results as well as various examples based on synthetic data.
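The inverse-volatility weighting applied to the composite assets is straightforward; the sketch below (synthetic data with illustrative volatilities, not the paper's method in full) shows why no covariance matrix inversion is needed:

```python
import numpy as np

def inverse_volatility_weights(returns):
    """Weight each (composite) asset by the inverse of its sample volatility,
    normalised to sum to one -- no covariance matrix inversion required."""
    vols = returns.std(axis=0, ddof=1)
    inv = 1.0 / vols
    return inv / inv.sum()

rng = np.random.default_rng(42)
# Three hypothetical composite assets with increasing volatility
rets = rng.standard_normal((250, 3)) * np.array([0.01, 0.02, 0.04])
w = inverse_volatility_weights(rets)
print(np.round(w, 3))  # weights decrease as volatility increases
```

Only the diagonal of the covariance structure (the individual volatilities) is estimated, which is exactly what makes the approach robust to the estimation error and non-stationarity of the full sample covariance matrix.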
This paper examines the modelling and forecasting of murder crimes using Auto-Regressive Integrated Moving Average (ARIMA) models. Twenty-nine years of data obtained from the Nigeria Information Resource Center were used to make predictions. Among the most effective approaches for analyzing time series data is the method propounded by Box and Jenkins, the Autoregressive Integrated Moving Average (ARIMA). The augmented Dickey-Fuller test for a unit root was applied to the data set to investigate stationarity; the data set was found to be non-stationary and was therefore transformed using first-order differencing, and the resulting stationarity was confirmed with time series plots. Statistical analysis was performed using the GRETL software package, from which ARIMA(0, 1, 0) was found to be the best and adequate model for murder crimes. Forecasted values suggest that murder would be slightly on the increase.
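An ARIMA(0, 1, 0) model with drift is simply a random walk with drift, so its forecasts have a closed form. The sketch below uses made-up yearly counts, not the Nigerian data:

```python
import numpy as np

def arima_010_forecast(y, horizon):
    """Forecast from an ARIMA(0,1,0) model with drift: the differenced series
    is white noise around a constant, so the h-step-ahead forecast is
    y_T + h * mean(diff(y))."""
    drift = np.diff(y).mean()
    return y[-1] + drift * np.arange(1, horizon + 1)

y = np.array([40., 42., 45., 44., 48., 51., 53.])  # illustrative yearly counts
print(arima_010_forecast(y, 3))
```

A positive mean difference yields a gently rising forecast line, which is consistent with the paper's conclusion that the series would be "slightly on the increase".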
Assessing Discriminatory Performance of a Binary Logistic Regression Modelsajjalp
The evaluation of a fitted binary logistic regression model is very important in assessing the appropriateness of the model for specific purposes. This study proposes to assess the discriminatory performance of a binary logistic regression model, i.e. its ability to correctly classify cases and non-cases. The discriminatory performance is measured using two approaches. The first uses the fitted model to predict which subjects are cases and non-cases, summarised by the parameters sensitivity and specificity. The alternative approach is based on the receiver operating characteristic (ROC) curve for the fitted model, with the area under the curve (AUC) as the measure of discriminatory performance. The value of sensitivity is observed to be greater than the value of 1 - specificity, which signifies suitable discrimination at the chosen cut point. The area under the curve indicates evidence of reasonable discrimination by the fitted model.
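Both approaches can be computed directly. The sketch below uses hypothetical outcomes and fitted probabilities (not the study's data): it computes sensitivity and specificity at a 0.5 cut point, and the AUC via the rank (Mann-Whitney) formulation:

```python
import numpy as np

def sens_spec(y_true, p_hat, cut=0.5):
    """Sensitivity and specificity at a given probability cut point."""
    pred = (p_hat >= cut).astype(int)
    tp = np.sum((pred == 1) & (y_true == 1))
    tn = np.sum((pred == 0) & (y_true == 0))
    fn = np.sum((pred == 0) & (y_true == 1))
    fp = np.sum((pred == 1) & (y_true == 0))
    return tp / (tp + fn), tn / (tn + fp)

def auc(y_true, p_hat):
    """Area under the ROC curve via the rank (Mann-Whitney) formulation:
    the probability that a random case scores higher than a random non-case
    (ties count one half)."""
    pos = p_hat[y_true == 1]
    neg = p_hat[y_true == 0]
    wins = ((pos[:, None] > neg[None, :]).sum()
            + 0.5 * (pos[:, None] == neg[None, :]).sum())
    return wins / (len(pos) * len(neg))

y = np.array([1, 1, 1, 0, 0, 0, 0])          # hypothetical case / non-case labels
p = np.array([0.9, 0.7, 0.4, 0.6, 0.3, 0.2, 0.1])  # hypothetical fitted probabilities
se, sp = sens_spec(y, p)
print(round(se, 3), round(sp, 3), round(auc(y, p), 3))
```

Here sensitivity (2/3) exceeds 1 - specificity (1/4) at the 0.5 cut point, the same criterion the abstract uses as evidence of suitable discrimination.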
This paper is a methodological exercise presenting the results obtained from estimating the growth convergence equation using different methodologies.
A dynamic balanced panel is estimated using OLS, Within-Group, Anderson-Hsiao, First Difference, GMM with endogenous instruments and GMM with predetermined instruments. An unbalanced panel is also estimated for OLS, WG and FD.
Results are discussed in light of Monte Carlo studies.
Time series basic concepts and the ARIMA family of models. There is an associated video session along with code on GitHub: https://github.com/bhaskatripathi/timeseries-autoregressive-models
https://drive.google.com/file/d/1yXffXQlL6i4ufQLSpFFrJgymhHNXL1Mf/view?usp=sharing
Enhance interval width of crime forecasting with ARIMA model-fuzzy alpha cutTELKOMNIKA JOURNAL
With qualified data or information, better decisions can be made. The interval width of a forecast is one of the data values that assists the decision-making process in regard to crime prevention. However, in time series forecasting, especially with the ARIMA model, the amount of historical data available can affect the forecasting result, including the interval width of the forecast value. This study proposes a combination technique in order to obtain a better interval width for crime forecasting. The proposed combination of the ARIMA model and the Fuzzy Alpha Cut (FAC) is presented, using three alpha values: 0.3, 0.5 and 0.7. The experimental results show that ARIMA-FAC with alpha = 0.5 is appropriate. The overall results show that the interval width of crime forecasts with ARIMA-FAC is better than the interval width obtained from the 95% CI of the ARIMA model.
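An alpha-cut of a triangular fuzzy number is the interval of values whose membership is at least alpha. The sketch below uses a hypothetical point forecast and spread (not the paper's data) to show how larger alpha values narrow the interval, including the three alpha values the paper compares:

```python
def alpha_cut(a, b, c, alpha):
    """Alpha-cut of a triangular fuzzy number (a, b, c): the interval of all
    values with membership >= alpha. At alpha = 1 it collapses to the peak b."""
    return (a + alpha * (b - a), c - alpha * (c - b))

# Hypothetical point forecast b = 120 crimes with fuzzy support [100, 150]
for alpha in (0.3, 0.5, 0.7):  # the alpha values compared in the paper
    lo, hi = alpha_cut(100, 120, 150, alpha)
    print(alpha, lo, hi)
```

The tuning question the paper addresses is exactly this trade-off: a higher alpha gives a narrower, more decisive interval, but one that is more likely to miss the realised value.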
Philippe Guicheteau (1998) - Bifurcation theory: a tool for nonlinear flight ...Project KRIT
Philippe Guicheteau, "Bifurcation theory: a tool for nonlinear flight dynamics", Phil. Trans. R. Soc. Lond. A (1998) 356, 2181-2201.
This paper presents a survey of some applications of bifurcation theory in flight dynamics at ONERA (France). After describing basic nonlinear phenomena due to aerodynamics and gyroscopic torque, the theory is applied to a real combat aircraft, and its validation in flight tests is shown. Then, nonlinear problems connected with the introduction of control laws to stabilize unstable dynamic systems and transient motions are addressed.
Fuzzy Inventory Model for Constantly Deteriorating Items with Power Demand an...iosrjce
IOSR Journal of Mathematics (IOSR-JM) is a double-blind peer-reviewed international journal that provides rapid publication (within a month) of articles in all areas of mathematics and its applications. The journal welcomes publication of high-quality papers on theoretical developments and practical applications in mathematics. Original research papers, state-of-the-art reviews, and high-quality technical notes are invited for publication.
The models, principles and steps of Bayesian time series analysis and forecasting have been established extensively over the past fifty years. In order to estimate the parameters of an autoregressive (AR) model, we develop Markov chain Monte Carlo (MCMC) schemes for inference in AR models. Our interest is to propose a new prior distribution placed directly on the AR parameters of the model. Thus, we revisit the stationarity conditions to determine a flexible prior for the AR model parameters, and we propose an MCMC procedure to estimate the coefficients of the AR(p) model. In order to set out the Bayesian steps, we determined the prior distribution with the purpose of applying MCMC, and we advocate the use of a prior distribution placed directly on the parameters. In this thesis, a set of new sufficient stationarity conditions has been proposed for autoregressive models of any lag order. We motivated the new methodology by considering the autoregressive models AR(2) and AR(3), and through simulation we studied the sufficiency and necessity of the proposed stationarity conditions. We additionally draw the parameter space of the AR(3) model for the stationary region of Barndorff-Nielsen and Schou (1973) and for our newly suggested conditions. A new prior distribution has been proposed, placed directly on the parameters of the AR(p) model. This is motivated by the priors proposed for AR(1), AR(2), ..., AR(6), which take advantage of the range of the AR parameters. We then develop a Metropolis step within Gibbs sampling for estimation. This scheme is illustrated using simulated data for the AR(2), AR(3) and AR(4) models and extended to models with higher lag order. The thesis compares the new proposed prior distribution with the prior distributions obtained from the correspondence relationship between partial autocorrelations and parameters discussed by Barndorff-Nielsen and Schou (1973).
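A minimal Metropolis sampler with a prior supported on the stationarity region can be sketched for AR(1). This is an illustration under simplifying assumptions (known unit innovation variance, flat prior on (-1, 1)), not the thesis's prior or its conditions for general lag order:

```python
import numpy as np

def ar1_log_likelihood(phi, x):
    """Conditional Gaussian log-likelihood of AR(1) with unit innovation variance."""
    resid = x[1:] - phi * x[:-1]
    return -0.5 * np.sum(resid ** 2)

def metropolis_ar1(x, n_iter=5000, step=0.05, seed=1):
    """Random-walk Metropolis for the AR(1) coefficient with a flat prior on
    the stationarity region (-1, 1): proposals outside it are rejected outright,
    so every draw respects stationarity by construction."""
    rng = np.random.default_rng(seed)
    phi = 0.0
    ll = ar1_log_likelihood(phi, x)
    draws = []
    for _ in range(n_iter):
        prop = phi + step * rng.standard_normal()
        if -1.0 < prop < 1.0:                  # prior support = stationarity region
            ll_prop = ar1_log_likelihood(prop, x)
            if np.log(rng.uniform()) < ll_prop - ll:
                phi, ll = prop, ll_prop
        draws.append(phi)
    return np.array(draws)

rng = np.random.default_rng(0)
x = np.zeros(400)
for t in range(1, 400):
    x[t] = 0.6 * x[t - 1] + rng.standard_normal()  # simulated data, true phi = 0.6
draws = metropolis_ar1(x)
print(round(draws[2000:].mean(), 2))  # posterior mean, near the true 0.6
```

For AR(p) with p > 1 the stationarity region is no longer an interval, which is exactly why the thesis needs explicit sufficient conditions before a prior can be placed directly on the coefficients.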
A Fuzzy Arithmetic Approach for Perishable Items in Discounted Entropic Order...Waqas Tariq
This paper applies a fuzzy arithmetic approach to the system cost for perishable items with instant deterioration in the discounted entropic order quantity (EnOQ) model. In the traditional crisp system cost, some costs may belong to uncertain factors, so it is necessary to extend the system cost to also treat vague costs. We introduce a new concept, which we call entropy, and show that the total payoff satisfies the optimization property. We show how special cases of this problem reduce to exact results, and how the post-deterioration discounted entropic order quantity model is a generalization of the optimization. The model is demonstrated by analysis, which reveals important characteristics of the discounted structure. Further numerical experiments are conducted to evaluate the relative performance of the fuzzy and crisp cases in the EnOQ and EOQ models separately.
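The entropic and fuzzy machinery is not reproduced here, but the crisp EOQ baseline that the EnOQ model generalises, and the way interval-valued (vague) costs widen the crisp optimum, can be sketched. All figures are hypothetical:

```python
import math

def eoq(demand, order_cost, holding_cost):
    """Classic crisp economic order quantity: Q* = sqrt(2 * D * K / h)."""
    return math.sqrt(2 * demand * order_cost / holding_cost)

def fuzzy_eoq_interval(demand, order_cost, h_lo, h_hi):
    """A crude interval version: if the holding cost is only known to lie in
    [h_lo, h_hi], the optimal quantity lies in the induced interval.
    EOQ is decreasing in h, so the bounds swap."""
    return eoq(demand, order_cost, h_hi), eoq(demand, order_cost, h_lo)

print(round(eoq(1000, 50, 2), 2))  # crisp optimum
print(tuple(round(q, 2) for q in fuzzy_eoq_interval(1000, 50, 1.5, 2.5)))
```

The paper's fuzzy arithmetic does this more carefully (membership functions rather than plain intervals, and an entropy term in the cost), but the qualitative effect is the same: cost vagueness turns a single optimal order quantity into a range.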
CRAM (Change Risk Assessment Model) is a novel modelling approach that can significantly contribute to the missing formality of business models, especially in the area of change risk assessment.
Project management has long established the need for risk management techniques to be utilised in the succinct definition of the risks associated with projects and in agreement on countervailing actions, with the aim of reducing scope creep and increasing the probability of on-time and in-budget delivery.
Uncontrolled changes, regardless of size and complexity, can pose risks of any magnitude to projects and affect project success or even an organisation's coherence.
Non-life claims reserves using Dirichlet random environmentIJERA Editor
The purpose of this paper is to propose a stochastic extension of the Chain-Ladder model in a Dirichlet random environment to calculate the provisions for disaster payments. We study Dirichlet processes centered around the distribution of continuous-time stochastic processes such as a Brownian motion or a continuous-time Markov chain. We then consider the problem of parameter estimation for a Markov-switched geometric Brownian motion (GBM) model. We assume that the prior distribution of the unobserved Markov chain driving the drift and volatility parameters of the GBM is a Dirichlet process, and we propose an estimation method based on Gibbs sampling.
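The deterministic Chain-Ladder model that the paper extends stochastically can be sketched on a toy run-off triangle (all figures are illustrative):

```python
import numpy as np

# A small cumulative run-off triangle (rows = accident years, columns =
# development years); np.nan marks the unobserved lower-right part.
tri = np.array([
    [100., 180., 200., 210.],
    [110., 190., 215., np.nan],
    [120., 205., np.nan, np.nan],
    [130., np.nan, np.nan, np.nan],
])

# Chain-Ladder development factors: column-sum ratios over the rows
# where both adjacent development years are observed.
n = tri.shape[1]
factors = []
for j in range(n - 1):
    mask = ~np.isnan(tri[:, j + 1])
    factors.append(tri[mask, j + 1].sum() / tri[mask, j].sum())

# Project the missing cells forward and read off the reserve per accident year.
full = tri.copy()
for i in range(full.shape[0]):
    for j in range(n - 1):
        if np.isnan(full[i, j + 1]):
            full[i, j + 1] = full[i, j] * factors[j]
latest = np.array([tri[i, ~np.isnan(tri[i])][-1] for i in range(tri.shape[0])])
reserves = full[:, -1] - latest
print(np.round(reserves, 1))
```

The paper's contribution is to randomise this picture: instead of fixed development factors, the future development is governed by a Dirichlet random environment, and Gibbs sampling recovers the parameters.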
NONLINEAR EXTENSION OF ASYMMETRIC GARCH MODEL WITHIN NEURAL NETWORK FRAMEWORKcscpconf
The importance of volatility for all market participants has led to the development and application of various econometric models. The most popular models for modelling volatility are GARCH-type models, because they can account for the excess kurtosis and asymmetric effects of financial time series. Since the standard GARCH(1,1) model usually indicates high persistence in the conditional variance, empirical research turned to the GJR-GARCH model and revealed its superiority in fitting the asymmetric heteroscedasticity in the data. In order to capture both asymmetry and nonlinearity in the data, the goal of this paper is to develop a parsimonious NN model as an extension of the GJR-GARCH model and to determine whether GJR-GARCH-NN outperforms the GJR-GARCH model.
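The GJR-GARCH(1,1) recursion differs from plain GARCH(1,1) only by an indicator-weighted asymmetry term. A sketch with made-up parameters and returns (not the paper's fitted model):

```python
import numpy as np

def gjr_garch_variance(returns, omega, alpha, gamma, beta):
    """GJR-GARCH(1,1) conditional variance: negative shocks receive the extra
    asymmetry loading gamma through the indicator I[r_{t-1} < 0]."""
    sigma2 = np.empty(len(returns))
    sigma2[0] = np.var(returns)
    for t in range(1, len(returns)):
        neg = 1.0 if returns[t - 1] < 0 else 0.0
        sigma2[t] = (omega + (alpha + gamma * neg) * returns[t - 1] ** 2
                     + beta * sigma2[t - 1])
    return sigma2

rng = np.random.default_rng(0)
r = rng.standard_normal(250) * 0.01  # placeholder returns
s2 = gjr_garch_variance(r, omega=1e-6, alpha=0.03, gamma=0.09, beta=0.9)

# A negative shock of the same magnitude implies a larger next-period variance:
s2_after_neg = 1e-6 + (0.03 + 0.09) * 0.02 ** 2 + 0.9 * 1e-4
s2_after_pos = 1e-6 + 0.03 * 0.02 ** 2 + 0.9 * 1e-4
print(s2_after_neg > s2_after_pos)  # True
```

The NN extension proposed in the paper replaces this fixed parametric response to past shocks with a learned nonlinear one, while keeping the same asymmetric treatment of negative news.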
Motivated by the problem of computing investment portfolio weightings we investigate various methods of clustering as alternatives to traditional mean-variance approaches. Such methods can have significant benefits from a practical point of view since they remove the need to invert a sample covariance matrix, which can suffer from estimation error and will almost certainly be non-stationary. The general idea is to find groups of assets which share similar return
characteristics over time and treat each group as a single composite asset. We then apply inverse volatility weightings to these new composite assets. In the course of our investigation we devise a method of clustering based on triangular potentials and we present associated theoretical results as well as various examples based on synthetic data.
This paper examines the modelling and forecasting Murder crimes using Auto-Regressive Integrated Moving Average models (ARIMA). Twenty-nine years data obtained from Nigeria Information Resource Center were used to make predictions. Among the most effective approaches for analyzing time series data is the method propounded by Box and Jenkins, the Autoregressive Integrated Moving Average (ARIMA). The augmented Dickey-Fuller test for unit root was applied to the data set to investigate for Stationarity, the data set was found to be non-stationary hence transformed using first-order differencing to make them Stationary. The Stationarities were confirmed with time series plots. Statistical analysis was performed using GRETL software package from which, ARIMA (0, 1, 0) was found to be the best and adequate model for Murder crimes. Forecasted values suggest that Murder would slightly be on the increase.
Assessing Discriminatory Performance of a Binary Logistic Regression Modelsajjalp
Ā
The evaluation of fitted binary logistic regression model is very important in assessing the appropriateness of a model for specific purposes. The studyproposesto assess the discriminatory performance of a binary logistic regression model to correctly classify between the cases and non-cases. The discriminatory performance of binary logistic regression model is measured using two approaches. The first approach is the use of fitted binary logistic regression model to correctly predict the subjects that are cases and non-cases,with the help of the parameters sensitivity and specificity. The alternative approach is basedon receiver operatingcharacteristic(ROC)curvefor the fitted binary logistic regression model and then determining the area under the curve (AUC) as a measure of discriminatory performance. The value of sensitivity is observed to be greater than the value of 1-specificity, which signifies suitable discrimination for the mentioned cut point. The area under the curve indicates that there is evidence of reasonable discrimination reported bythe fitted model.
This paper is a methodological exercices presenting the results obtained from the estimation of the growth convergence equation using different methodologies.
A dynamic balanced panel data is estimated using: OLS, WithinGroup, HsiaoAnderson, First Difference, GMM with endogenous and GMM with predetermined instruments. An unbalanced panel is also realized for OLS, WG and FD.
Results are discused in light of Monte Carlo studies.
Time Series basic concepts and ARIMA family of models. There is an associated video session along with code in github: https://github.com/bhaskatripathi/timeseries-autoregressive-models
https://drive.google.com/file/d/1yXffXQlL6i4ufQLSpFFrJgymhHNXL1Mf/view?usp=sharing
Enhance interval width of crime forecasting with ARIMA model-fuzzy alpha cutTELKOMNIKA JOURNAL
Ā
With qualified data or information a better decision can be made. The interval width of forecasting
is one of data values to assist in the selection decision making process in regards to crime prevention.
However, in time series forecasting, especially the use of ARIMA model, the amount of historical data
available can affect forecasting result including interval width forecasting value. This study proposes a
combination technique, in order to get get a better interval width crime forecasting value. The propose
combination technique between ARIMA model and Fuzzy Alpha Cut are presented. The use of variation
alpha values are used, they are 0.3, 0.5, and 0.7. The experimental results have shown the use of
ARIMA-FAC with alpha=0.5 is appropriate. The overall results obtained have shown the interval width
crime forecasting with ARIMA-FAC is better than interval width crime forecasting with 95% CI
ARIMA model.
Philippe Guicheteau (1998) - Bifurcation theory: a tool for nonlinear flight ...Project KRIT
Ā
Philippe Guicheteau "Bifurcation theory: a tool for nonlinear flight dynamics", Phil.Trans.R.Soc.Lond.A (1998) 356, 2181-2201
This paper presents a survey of some applications of bifurcation theory in flight dynamics at ONERA (France). After describing basic nonlinear phenomena due to aerodynamics and gyroscopic torque, the theory is applied to a real combat aircraft, and its validation in flight tests is shown. Then, nonlinear problems connected with the introduction of control laws to stabilize unstable dynamic systems and transient motions are addressed.
Fuzzy Inventory Model for Constantly Deteriorating Items with Power Demand an...iosrjce
Ā
IOSR Journal of Mathematics(IOSR-JM) is a double blind peer reviewed International Journal that provides rapid publication (within a month) of articles in all areas of mathemetics and its applications. The journal welcomes publications of high quality papers on theoretical developments and practical applications in mathematics. Original research papers, state-of-the-art reviews, and high quality technical notes are invited for publications.
The models, principles and steps of Bayesian time series analysis and forecasting have been established extensively during the past fifty years. In order to estimate parameters of an autoregressive (AR) model we develop Markov chain Monte Carlo (MCMC) schemes for inference of AR model. It is our interest to propose a new prior distribution placed directly on the AR parameters of the model. Thus, we revisit the stationarity conditions to determine a ļ¬exible prior for AR model parameters. A MCMC procedure is proposed to estimate coeļ¬cients of AR(p) model. In order to set Bayesian steps, we determined prior distribution with the purpose of applying MCMC. We advocate the use of prior distribution placed directly on parameters. We have proposed a set of suļ¬cient stationarity conditions for autoregressive models of any lag order. In this thesis, a set of new stationarity conditions have been proposed for the AR model. We motivated the new methodology by considering the autoregressive model of AR(2) and AR(3). Additionally, through simulation we studied suļ¬ciency and necessity of the proposed conditions of stationarity. The researcher, additionally draw parameter space of AR(3) model for stationary region of Barndorļ¬-Nielsen and Schou (1973) and our new suggested condition. A new prior distribution has been proposed placed directly on the parameters of the AR(p) model. This is motivated by priors proposed for the AR(1), AR(2),..., AR(6), which take advantage of the range of the AR parameters. We then develop a Metropolis step within Gibbs sampling for estimation. This scheme is illustrated using simulated data, for the AR(2), AR(3) and AR(4) models and extended to models with higher lag order. The thesis compared the new proposed prior distribution with the prior distributions obtained from the correspondence relationship between partial autocorrelations and parameters discussed by Barndorļ¬-Nielsen and Schou (1973).
A Fuzzy Arithmetic Approach for Perishable Items in Discounted Entropic Order...Waqas Tariq
Ā
This paper uses fuzzy arithmetic approach to the system cost for perishable items with instant deterioration for the discounted entropic order quantity model. Traditional crisp system cost observes that some costs may belong to the uncertain factors. It is necessary to extend the system cost to treat also the vague costs. We introduce a new concept which we call entropy and show that the total payoff satisfies the optimization property. We show how special case of this problem reduce to perfect results, and how post deteriorated discounted entropic order quantity model is a generalization of optimization. It has been imperative to demonstrate this model by analysis, which reveals important characteristics of discounted structure. Further numerical experiments are conducted to evaluate the relative performance between the fuzzy and crisp cases in EnOQ and EOQ separately.
CRAM (Change Risk Assessment Model) is a novel model approach which can significantly contribute to the missing formality of business models especially in the change(s) risk assessment area.
Ā
Project Management has long established the need for risk management techniques to be utilised in the succinct definition of associated risks in projects and agreement on countervailing actions as an aim to reduce scope creep, increase the probability of on-time and in-budget delivery.
Uncontrolled changes, regardless of size and complexity, can certainly pose as risks, of any magnitude, to projects and affect project success or even an organisationās coherence.
Dear students get fully solved assignments
Send your semester & Specialization name to our mail id :
ā help.mbaassignments@gmail.com ā
or
Call us at : 08263069601
Dear students get fully solved assignments by professionals
Send your semester & Specialization name to our mail id :
stuffstudy5@gmail.com
or
call us at : 098153-33456
Non-life claims reserves using Dirichlet random environmentIJERA Editor
Ā
The purpose of this paper is to propose a stochastic extension of the Chain-Ladder model in a Dirichlet random
environment to calculate the provesions for disaster payement. We study Dirichlet processes centered around the
distribution of continuous-time stochastic processes such as a Brownian motion or a continuous time Markov
chain. We then consider the problem of parameter estimation for a Markov-switched geometric Brownian
motion (GBM) model. We assume that the prior distribution of the unobserved Markov chain driving by the
drift and volatility parameters of the GBM is a Dirichlet process. We propose an estimation method based on
Gibbs sampling.
NONLINEAR EXTENSION OF ASYMMETRIC GARCH MODEL WITHIN NEURAL NETWORK FRAMEWORKcscpconf
Ā
The importance of volatility for all market participants has led to the development and application of various econometric models. The most popular models for modelling volatility are GARCH-type models, because they can account for the excess kurtosis and asymmetric effects of financial time series. Since the standard GARCH(1,1) model usually indicates high persistence in the conditional variance, empirical research turned to the GJR-GARCH model and revealed its superiority in fitting the asymmetric heteroscedasticity in the data. In order to capture both asymmetry and nonlinearity in the data, the goal of this paper is to develop a parsimonious NN model as an extension of the GJR-GARCH model and to determine whether GJR-GARCH-NN outperforms the GJR-GARCH model.
VOLATILITY FORECASTING - A PERFORMANCE MEASURE OF GARCH TECHNIQUES WITH DIFFE... (ijscmcj)
Volatility forecasting is an interesting and challenging topic in current financial instruments, as it is directly associated with profits. There are many risks and rewards directly associated with volatility; hence forecasting volatility has become an indispensable topic in finance. GARCH distributions play an important role in risk measurement and option pricing. The main motive of this paper is to measure the performance of GARCH techniques for forecasting volatility using different distribution models. We have used 9 variations of distribution models to forecast the volatility of a stock entity. The different GARCH distribution models observed in this paper are Std, Norm, SNorm, GED, SSTD, SGED, NIG, GHYP and JSU. Volatility is forecasted 10 days in advance and the values are compared with the actual values to find the best distribution model for volatility forecasting. From the results obtained, it has been observed that GARCH with the GED distribution has outperformed all other models.
The International Journal of Soft Computing, Mathematics and Control (IJSCMC) is a Quarterly peer-reviewed and refereed open access journal that publishes articles which contribute new results in all areas of Soft Computing, Pure, Applied and Numerical Mathematics and Control. The focus of this new journal is on all theoretical and numerical methods on soft computing, mathematics and control theory with applications in science and industry. The goal of this journal is to bring together researchers and practitioners from academia and industry to focus on latest topics of soft computing, pure, applied and numerical mathematics and control engineering, and establishing new collaborations in these areas.
Authors are solicited to contribute to this journal by submitting articles that illustrate new algorithms, theorems, modeling results, research results, projects, surveying works and industrial experiences that describe significant advances in Soft Computing, Mathematics and Control Engineering.
Garch Models in Value-At-Risk Estimation for REIT (IJERDJOURNAL)
Abstract:- In this study we investigate volatility forecasting of REIT, from January 03, 2007 to November 18, 2016, using four GARCH models (GARCH, EGARCH, GARCH-GJR and APARCH). We examine the performance of these GARCH-type models respectively, and backtesting procedures are conducted to analyze model adequacy. The empirical results show that for estimating the volatility of REIT, the EGARCH, GARCH-GJR and APARCH models are all adequate; among them, the GARCH-GJR model especially outperforms the others.
The Odd Generalized Exponential Log Logistic Distribution (inventionjournals)
We propose a new lifetime model, called the odd generalized exponential log logistic distribution (OGELLD). We obtain some of its mathematical properties, and some structural properties of the new distribution are studied. The maximum likelihood method is used for estimating the model parameters, and the Fisher's information matrix is derived. We illustrate the usefulness of the proposed model by applications to real lifetime data.
We approach the screening problem - i.e. detecting which inputs of a computer model significantly impact the output - from a formal Bayesian model selection point of view. That is, we place a Gaussian process prior on the computer model and consider the $2^p$ models that result from assuming that each of the subsets of the $p$ inputs affect the response. The goal is to obtain the posterior probabilities of each of these models. In this talk, we focus on the specification of objective priors on the model-specific parameters and on convenient ways to compute the associated marginal likelihoods. These two problems that normally are seen as unrelated, have challenging connections since the priors proposed in the literature are specifically designed to have posterior modes in the boundary of the parameter space, hence precluding the application of approximate integration techniques based on e.g. Laplace approximations. We explore several ways of circumventing this difficulty, comparing different methodologies with synthetic examples taken from the literature.
Authors: Gonzalo Garcia-Donato (Universidad de Castilla-La Mancha) and Rui Paulo (Universidade de Lisboa)
Adaptive Quasi-Maximum Likelihood Estimation of GARCH Models with Student's t Likelihood
Xiaorui Zhu, Li Xie
Abstract  This paper proposes an adaptive quasi-maximum likelihood estimation for forecasting the volatility of financial data with the generalized autoregressive conditional heteroscedasticity (GARCH) model. When the distribution of the volatility data is unspecified or heavy-tailed, we work out an adaptive quasi-maximum likelihood estimation based on the data, using the scale parameter η_f to identify the discrepancy between a wrongly specified innovation density and the true innovation density. Under only a few assumptions, this adaptive approach is consistent and asymptotically normal. Moreover, it gains better efficiency when the innovation error is heavy-tailed. Finally, simulation studies and an application show its advantage.
Keywords  quasi likelihood, GARCH model, adaptive estimator, heavy-tailed error
JEL Classification: C13; C22
1 Introduction
With the development of derivatives, volatility has become a crucial variable not only in modeling financial data, but also in designing trading strategies and implementing risk management. Among the various models for the analysis of volatility, the GARCH (generalized autoregressive conditional heteroscedasticity) model is a well-known and useful one. It was proposed by Bollerslev (1986) as follows:
    u_t = σ_{t|t−1} ε_t
    σ²_{t|t−1} = ω + Σ_{i=1}^{p} α_i u²_{t−i} + Σ_{j=1}^{q} β_j σ²_{t−j}        (1)
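As a concrete illustration of model (1), the following sketch simulates a GARCH(1,1) path with Gaussian innovations. The parameter values (ω, α₁, β₁) = (0.02, 0.6, 0.3) match those used in the simulation study later in the paper; the function name and burn-in convention are our own choices.

```python
import numpy as np

def simulate_garch(n, omega, alpha, beta, burn=500, seed=0):
    """Simulate u_t = sigma_t * eps_t with N(0,1) innovations, model (1) with p=q=1."""
    rng = np.random.default_rng(seed)
    eps = rng.standard_normal(n + burn)
    u = np.zeros(n + burn)
    # start the recursion at the unconditional variance omega / (1 - alpha - beta)
    sig2 = np.full(n + burn, omega / (1 - alpha - beta))
    for t in range(1, n + burn):
        sig2[t] = omega + alpha * u[t - 1] ** 2 + beta * sig2[t - 1]
        u[t] = np.sqrt(sig2[t]) * eps[t]
    return u[burn:], sig2[burn:]

u, sig2 = simulate_garch(2000, omega=0.02, alpha=0.6, beta=0.3)
```

Discarding a burn-in segment lets the process forget the arbitrary starting variance before the sample is used.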
Primarily, the estimation of ARCH/GARCH models is based on maximum likelihood estimation (MLE) when the innovation follows a conditional Gaussian distribution. However, if the distribution of the innovation ε_t is not normal, as is prevalent in plenty of empirical data, quasi-maximum likelihood estimation is more suitable. At first, a large literature discussed Gaussian quasi-maximum likelihood estimation when the innovation distribution is not normal. Weiss's (1986) [19] research showed that even when the data are non-normal but have finite fourth moments, the Gaussian MLE is consistent and asymptotically normal. After that, Bollerslev (1986) [1], Hsieh (1989) [12], and Nelson (1991) [14] addressed parameter estimation by generalized Gaussian quasi-maximum likelihood estimation (GQMLE) when the innovation distribution is not normal, and also derived the consistency and efficiency of this method. Bougerol and Picard (1992) [2] discussed the necessary stationarity and ergodicity of GARCH models. To
1: We appreciate all the helpful suggestions from the editor and the reviewers, thoughtful comments from A.P. Gaorong Li and Yuyang Zhang, and financial support from the National Natural Science Foundation (Grant No. 11171011) and the National Social Science Foundation (Grant No. 13BGL007).
2: College of Applied Sciences, Beijing University of Technology, Pingleyuan 100, Chaoyang District, Beijing, 100124, China. E-mail: xiaorui.zhu@emails.bjut.edu.cn
3: College of Applied Sciences, Beijing University of Technology, Pingleyuan 100, Chaoyang District, Beijing, 100124, China. E-mail: xieli@bjut.edu.cn
get an estimate when the innovation distribution is unknown, Elie and Jeantheau (1995) proposed a Gaussian quasi-maximum likelihood estimator that is consistent and asymptotically normal. Besides, there have also been crucial achievements of Gaussian QMLE in recent years. Berkes, Horváth and Kokoszka (2003) [11] studied the structure of GARCH(p,q) and proved the consistency and asymptotic normality of the QMLE under mild conditions. Strong consistency and asymptotic normality of the QMLE were also proved in the study of Francq and Zakoïan (2004) [9].
Other research on improving the Gaussian QMLE includes the following. Engle and Gonzalez-Rivera (1991) [5] published a procedure that can improve the efficiency of the GQMLE. Drost and Klaassen (1997) [4] put forward adaptive estimation in the ARCH model. Sun and Stengos (2006) [18] proposed adaptive two-step semi-parametric procedures under symmetric and asymmetric error conditions separately. A self-weighted and local QMLE for ARMA-GARCH models was discussed in the study of Ling (2007) [13].
With the development of quasi-maximum likelihood estimation, some non-Gaussian QMLEs have been proposed to improve the estimator when the innovations are heavy-tailed or skewed. Xiu (2010) [21] discussed quasi-maximum likelihood estimation of a stochastic volatility model with high frequency data. Ossandón and Bahamonde (2011) [16] proposed a novel estimation for GARCH models based on the extended Kalman filter (EKF). Zhu (2012) [22] put forward a mixed portmanteau test for ARMA-GARCH models via a quasi-maximum likelihood estimator. Francq et al. (2011) [8] developed a two-stage non-Gaussian quasi-maximum likelihood estimation to rectify the parameter estimates; this procedure allows the use of a generalized Gaussian likelihood and proposes a test that can determine whether the more efficient quasi-MLE with a non-Gaussian density is required. Another notable achievement is that the Student's t likelihood function has been taken into consideration in the three-step non-Gaussian quasi-MLE approach of Fan et al. (2013) [6]. Because Pearson's Type IV (PIV) distribution can capture a large range of the asymmetry and leptokurtosis of the innovation error, Zhu and Li (2014) [23] proposed a novel Pearson-type QMLE of GARCH(p,q) models to capture not only heavy-tailed but also skewed innovations.
This article focuses on an adaptive analysis procedure that increases the efficiency of the estimator of the GARCH model, and proposes an adaptive QMLE procedure aimed at minimizing the discrepancy between the true and the specified innovation distribution. The scale parameter η_f is built in the sense of the Kullback-Leibler Information Criterion (KLIC). The adaptive QMLE procedure can not only find the approximate degree of freedom of the innovation distribution, but also obtain the optimized quasi-maximum likelihood estimator after several iterations. This general estimation does not rely on particular models, so it may be used in other general models, and the idea of the adaptive QMLE can also be used in other methods such as GQMLE, NGQMLE and PQMLE. The simulation studies confirm that the convergence rate of this adaptive QMLE procedure is very high, especially when the innovation is heavy-tailed. It performs well with high frequency data, where the empirical distribution of the innovations is often heavy-tailed.
This paper is organized as follows. In Section 2, we introduce the GARCH model and quasi-maximum likelihood estimation. In Section 3, we describe the assumptions and propositions of our new adaptive quasi-maximum likelihood estimation, and also explain the details and proofs of this procedure. Some simulation studies are provided in Section 4. Real data analyses are shown in Section 5. Section 6 concludes the paper.
2 Quasi-MLE in GARCH model
2.1 The GARCH Model
The common form of the GARCH(p,q) model was shown in Section 1. Let θ = (ω, α′, β′)′ be the unknown parameters of GARCH(p,q), where α = (α_1, ..., α_p)′ and β = (β_1, ..., β_q)′ are the heteroscedastic parameters, and {ε_t, −∞ < t < ∞} are the innovations of the model. Θ ⊂ R_o^{1+p+q} is the parameter space, with R_o = [0, ∞). The following assumptions for the GARCH model are necessary:
Assumption 1.
(i) The GARCH process {u_t} is strictly stationary and ergodic.
(ii) For each θ ∈ Θ, α(z) and β(z) have no common root, α(1) ≠ 0, α_p + β_q ≠ 0 and Σ_{j=1}^{q} β_j < 1, where α(z) = Σ_{i=1}^{p} α_i z^i and β(z) = 1 − Σ_{j=1}^{q} β_j z^j.
(iii) ε_t is a nondegenerate i.i.d. random variable with Eε_t = 0, Eε²_t = 1 and unknown density g(·).
The stationarity and ergodicity of GARCH models in Assumption 1(i) can be found in Bougerol and Picard (1992) [2]. The identifiability conditions for GARCH(p,q) are given in Berkes, Horváth and Kokoszka (2003) [11]. For a general GARCH model the conditional variance σ²_t cannot be expressed in terms of a finite number of past observations u_{t−1}, u_{t−2}, .... In many studies the conditional variance σ̂²_t is taken as [7]:

    σ̂²_t = ω / (1 − Σ_{j=1}^{q} β_j) + Σ_{i=1}^{p} α_i u²_{t−i} + Σ_{i=1}^{p} α_i Σ_{k=1}^{∞} Σ_{j1=1}^{q} ··· Σ_{jk=1}^{q} β_{j1} ··· β_{jk} u²_{t−i−j1−···−jk}        (2)

By doing so, {σ̂²_t} in (2) is a function of the sample u_{t−1} = {u_s, −∞ < s ≤ t − 1}.
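The infinite expansion (2) is not computable from a finite sample; in practice the conditional variances are obtained by running the GARCH recursion forward from an initial value. A minimal sketch of that recursion for general GARCH(p,q) (the helper name and the sample-variance initialization are our own choices, a common but not unique convention):

```python
import numpy as np

def cond_var(u, omega, alphas, betas):
    """Recursive conditional variances of a GARCH(p,q) sample,
    the finite-sample counterpart of the infinite expansion (2)."""
    p, q = len(alphas), len(betas)
    n = len(u)
    sig2 = np.full(n, np.var(u))  # initialize pre-sample variances
    for t in range(max(p, q), n):
        sig2[t] = (omega
                   + sum(alphas[i] * u[t - 1 - i] ** 2 for i in range(p))
                   + sum(betas[j] * sig2[t - 1 - j] for j in range(q)))
    return sig2
```

For a stationary parameter set the influence of the initialization dies out geometrically, which is why the truncation is harmless in practice.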
2.2 Quasi-Maximum Likelihood Estimation
Fortunately, it is easy to derive the likelihood function of a GARCH model with normal errors. Under the assumption Eu²_t < ∞, the log-likelihood function of the GARCH model is as follows:

    L(ω, α, β) = −(n/2) log(2π) − (1/2) Σ_{t=1}^{n} { log(σ²_{t|t−1}) + u²_t / σ²_{t|t−1} }        (3)
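The log-likelihood (3) can be evaluated by combining the variance recursion with the Gaussian density. A sketch for the GARCH(1,1) case (the function name and the sample-variance initialization are our own choices); maximizing it over (ω, α, β) with any numerical optimizer yields the Gaussian (Q)MLE:

```python
import numpy as np

def gaussian_loglik(params, u):
    """Gaussian (quasi) log-likelihood (3) of a GARCH(1,1) sample."""
    omega, alpha, beta = params
    n = len(u)
    sig2 = np.empty(n)
    sig2[0] = np.var(u)  # common initialization for the recursion
    for t in range(1, n):
        sig2[t] = omega + alpha * u[t - 1] ** 2 + beta * sig2[t - 1]
    return -0.5 * n * np.log(2 * np.pi) - 0.5 * np.sum(np.log(sig2) + u ** 2 / sig2)
```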
Under mild conditions in which ε_t is not specified as standard normal, Berkes, Horváth and Kokoszka (2003) [11] proved the consistency and asymptotic normality of the quasi-MLE. Apart from the normal distribution, Student's t-distributions and generalized Gaussian distributions are frequently considered. The generalized quasi-maximum likelihood estimator of the GARCH model is derived as follows:

    θ̂ = argmax_θ (1/2) Σ_{t=1}^{n} { −log(σ_{t|t−1}) + log g(u_t / σ_t) }        (4)
In this paper we consider the innovation to be standardized t-distributed, so the true probability density function g(·) of {ε_t, −∞ < t < ∞} in the above formula is:

    g(x) = [Γ((ν+1)/2) / ((πν)^{1/2} Γ(ν/2))] (1 + x²/ν)^{−(ν+1)/2}        (5)

where ν > 0 may be treated as a continuous parameter.
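The density (5) is straightforward to evaluate directly from the gamma function; a hypothetical helper (the name is our own). For ν = 1 it reduces to the Cauchy density, whose value at 0 is 1/π:

```python
import math

def t_density(x, nu):
    """Student-t density g(x) of eq. (5) with nu > 0 degrees of freedom."""
    c = math.gamma((nu + 1) / 2) / (math.sqrt(math.pi * nu) * math.gamma(nu / 2))
    return c * (1 + x * x / nu) ** (-(nu + 1) / 2)
```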
3 Adaptive Quasi-Maximum Likelihood Estimation
If the true innovation distribution cannot be specified, Gaussian quasi-maximum likelihood estimation (GQMLE) can be inconsistent, as shown in Newey and Steigerwald (1997) [15]. Weiss (1984) and Lee and Hansen (1994) studied the asymptotic distributions of the GQMLE. Francq et al. (2011) [8] proposed a two-stage GQMLE that can improve the efficiency of the estimator. Based on a three-step quasi-maximum likelihood estimation, Fan et al. (2013) [6] derived the asymptotic theory of the non-Gaussian QMLE when the heavy-tailed Student's t distribution is taken into consideration. These methods need to specify the degree of freedom ν (t distribution) or the parameter r of the generalized error distribution (GED(r)), and they adjust the estimator only once when the innovation distribution is heavy-tailed; hence they cannot totally capture the heavy-tail characteristic. We therefore put forward the adaptive QMLE procedure, which chooses the optimized quasi likelihood function at minimal divergence between the quasi innovation density f and the true innovation density g. This iterative procedure gains better efficiency than other methods when the distribution of the innovation is heavy-tailed or unknown.
3.1 KLIC and Scale Parameter
In order to measure the divergence between the true innovation density g and the specified likelihood function f, the Kullback-Leibler divergence is needed:

    I(g; f) = ∫ [log g(u)] g(u) du − ∫ [log f(u)] g(u) du        (6)

The scale parameter η_f we use was proposed by White (1982) [20] and Fan et al. [6] as:

    η_f = argmax_{η>0} E{ −log η + log f(ε/η) }        (7)

where η_f can be computed using maximum likelihood estimation.
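In practice the expectation in (7) is replaced by a sample mean over residuals, and η_f is found by a one-dimensional search. A grid-search sketch under our own naming and grid-range choices; with a Gaussian quasi density, the maximizer reduces analytically to the root mean square of the residuals, which makes the result easy to sanity-check:

```python
import numpy as np

def eta_f(eps, logf, grid=None):
    """Grid-search approximation of the scale parameter in (7):
    argmax over eta of  -log(eta) + mean(logf(eps / eta))."""
    if grid is None:
        grid = np.linspace(0.3, 3.0, 2701)  # step 0.001
    vals = np.array([-np.log(h) + np.mean(logf(eps / h)) for h in grid])
    return grid[np.argmax(vals)]

# Gaussian quasi log-density; here eta_f reduces to sqrt(mean(eps^2))
norm_logpdf = lambda x: -0.5 * np.log(2 * np.pi) - 0.5 * x ** 2
```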
Let W(η) = E{ −log η + log f(ε/η) }. In order to derive the consistency of θ̂, another assumption is needed:
Assumption 2. The quasi likelihood is chosen from the t-distribution family in (5) such that W(η) has a unique maximizer η_f > 0.
This assumption helps us find, within the t-distribution family, the likelihood that best captures the heavy-tailed characteristic of the innovation. In other words, this assumption and Proposition 1 below determine the best degree of freedom in the adaptive QMLE, and the best likelihood function gives the adaptive QMLE better efficiency. The scale parameter η_f has a crucial property:
Proposition 1. If f ∝ exp(−x²/2) or f = g, then η_f = 1.
Proof. [6] Define the likelihood ratio function G(η) = E[ log( f(ε/η) / (η f(ε)) ) ]. Suppose G(η) has no local extreme values. Since log(x) ≤ 2(√x − 1),

    E[ log( f(ε/η) / (η f(ε)) ) ] ≤ 2 E[ √( f(ε/η) / (η f(ε)) ) − 1 ]
                                  = 2 ∫_{−∞}^{+∞} √( (1/η) f(x/η) f(x) ) dx − 2
                                  = −∫_{−∞}^{+∞} ( √( (1/η) f(x/η) ) − √( f(x) ) )² dx
                                  ≤ 0
3.2 Adaptive QMLE
Now we propose the adaptive quasi-maximum likelihood estimation, which can be used to estimate the parameters of a GARCH model. This method approximates the degree of freedom of the quasi likelihood function based on Proposition 1, so that quasi-maximum likelihood estimation with the approximated degree of freedom gains efficiency significantly.
(a) First, we estimate θ̂^(0) (the number in the upper right corner is the serial number of the estimator) with GQMLE under the assumption of normality:

    θ̂^(0) = argmax_θ (1/2) Σ_{t=1}^{n} { −log(σ_{t|t−1}) − u²_t / σ²_{t|t−1} }        (8)
(b) The {ε_t} needed to calculate η_f are replaced by the residuals ε̂^(0)_t = u_t / σ̂_t(θ̂^(0)), computed with the Gaussian maximum likelihood estimator θ̂^(0) from step (a). Then η̂_f is obtained from

    η̂^(1)_f = argmax_{η>0} E{ −log η + log f( ε̂^(0)_t / η ) }        (9)

by changing the degree of freedom (df1) of the Student's t density function f(·) above until |η̂^(1)_f − 1| ≤ δ (δ can be set).
In other words, this step finds the approximate f_(df1)(·) of ε̂^(0)_t under Proposition 1. Therefore, we can obtain θ̂^(1) under the new density function f_(df1), which is more reliable for the true innovation:

    θ̂^(1) = argmax_θ Σ_{t=1}^{T} ( −log(σ_{t|t−1}) + log f_(df1)( u_t / σ_{t|t−1} ) )        (10)
(c) Apply the same procedure as in (b) to get {ε̂^(1)_t = u_t / σ̂_{t|t−1}}, η̂^(2)_f from formula (9), and f_(df2)(·) under the condition |η̂^(2)_f − 1| ≤ δ. Then estimate θ̂^(2) with the likelihood function f_(df2), and so on.
(d) Finally, obtain the adaptive estimator θ̂ = θ̂^(n) once |θ̂^(n) − θ̂^(n−1)| < λ.
Usually we set δ at about 0.1 and λ at less than 0.5. The smaller δ and λ are, the more iterations are needed.
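Steps (a)-(d) can be sketched end to end. This is only an illustration under simplifying assumptions: it restricts to GARCH(1,1), searches a small fixed set of candidate degrees of freedom, and uses scipy's ordinary (non-standardized) Student t density rather than the standardized t of (5); all function names are our own.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import t as student_t

def _cond_var(u, omega, alpha, beta):
    """GARCH(1,1) conditional-variance recursion, sample-variance start."""
    sig2 = np.empty(len(u))
    sig2[0] = np.var(u)
    for s in range(1, len(u)):
        sig2[s] = omega + alpha * u[s - 1] ** 2 + beta * sig2[s - 1]
    return sig2

def fit_garch11(u, logf):
    """Quasi-MLE (4) of GARCH(1,1) under an innovation log-density logf."""
    def nll(p):
        omega, alpha, beta = p
        if omega <= 1e-8 or alpha < 0 or beta < 0 or alpha + beta >= 0.999:
            return np.inf  # crude stationarity/positivity constraints
        sig = np.sqrt(_cond_var(u, omega, alpha, beta))
        return -np.sum(-np.log(sig) + logf(u / sig))
    return minimize(nll, x0=[0.05, 0.1, 0.5], method="Nelder-Mead").x

def adaptive_qmle(u, dfs=(3, 4, 5, 6, 8, 10, 20), max_iter=4, lam=0.5):
    """Steps (a)-(d): Gaussian QMLE first, then repeatedly pick the
    Student-t df whose eta_f is closest to 1 and re-estimate."""
    norm_logf = lambda x: -0.5 * np.log(2 * np.pi) - 0.5 * x ** 2
    theta, df = fit_garch11(u, norm_logf), None          # step (a)
    grid = np.linspace(0.3, 3.0, 271)
    for _ in range(max_iter):
        eps = u / np.sqrt(_cond_var(u, *theta))          # residuals, step (b)
        def eta(d):
            vals = [-np.log(h) + np.mean(student_t.logpdf(eps / h, d)) for h in grid]
            return grid[int(np.argmax(vals))]
        df = min(dfs, key=lambda d: abs(eta(d) - 1.0))   # |eta_f - 1| minimal
        theta_new = fit_garch11(u, lambda x: student_t.logpdf(x, df))
        if np.max(np.abs(theta_new - theta)) < lam:      # step (d) stopping rule
            return theta_new, df
        theta = theta_new                                # step (c): iterate
    return theta, df
```

The discrete set of candidate degrees of freedom replaces the paper's continuous tuning of df against the δ bound; a finer set or a root-finder on η̂_f(df) − 1 would follow the text more closely.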
The adaptive QMLE is consistent, as follows:
THEOREM 3.1. [6] Suppose that Assumptions 1(i)-(iii) and 2 hold. Then θ̂ →_p θ_0.
Remark 1. The adaptive QMLE needs a finite fourth moment for the innovation because we simply adopt GQMLE in the first step. The finite fourth moment condition is essential to obtain asymptotic normality; in future studies, we will use alternative estimators in the first step to remove this condition. The condition Eε²_t = 1 ensures that Proposition 1 can be used in this procedure. This paper discusses the case Eε²_t < ∞; for Eε²_t = ∞, many estimators have been studied in this context, such as Chen and Zhu (2014) [3], Hill (2014) [10], and Peng and Yao (2003) [17].
This procedure is helpful when the family of the innovation distribution is known. With the help of the KLIC, the adaptive QMLE increases the efficiency of the estimator and decreases the discrepancy between the true and the specified innovation density. So it is a better method for handling various situations and a wide variety of data.
4 Simulation studies
We show the variation of η_f in Figure 1. Each line represents the variation of η_f when the distribution of {ε_t} in formula (9) is fixed as noted in the upper left. The horizontal axis represents the degree of freedom of the likelihood function. We compare four types of Student's t distribution and three types of generalized Gaussian distribution with different shape parameters. It is shown that if the degree of freedom of the quasi likelihood function is larger (or smaller) than the degree of freedom of {ε_t}, then η_f > 1 (or η_f < 1). There is no doubt that η_f is approximately equal to 1 when the specified quasi likelihood function equals the true innovation density. For example, on the bold line in Figure 1, when the innovation is t2 and the quasi likelihood function is a Student's t density with 2 degrees of freedom, η_f is approximately equal to 1. The same occurs on each line.
In order to show the advantages of the adaptive QMLE clearly, we consider the ordinary GARCH(1,1) model with true parameters (ω, α_1, β_1) = (0.02, 0.6, 0.3). With the bound |η̂^(2)_f − 1| ≤ 0.2, the adaptive estimator converges after several iterations, as shown in Table 1. The innovation distribution ranges from the thin-tailed t20, which is approximately a normal distribution, to the heavy-tailed t2. At the same time, we compare situations with sample sizes of 500, 1000 and 2000. The 'Step' column displays the order of iteration. The 'df' column is the degree of freedom of the Student's t quasi likelihood function found with the bound |η̂^(1)_f − 1| ≤ 0.2; 'df=Gauss' means the Gaussian assumption on the innovation distribution. When the innovations are heavy-tailed, such as t2 or t3, the maximum likelihood estimator under the Gaussian assumption is intolerable. However, from the results in the table we can see that the adaptive quasi-maximum likelihood estimators are better than the maximum likelihood estimators, and the adaptive estimation procedure approximates the true degree of freedom and the true parameters.
In Table 2, we use the same GARCH(1,1) setting as in Table 1. The sample size varies among 250, 500 and 1000, and the simulation is repeated 500 times at each sample size. The innovation is set as a Student's t distribution with various degrees of freedom, from heavy-tailed to thin-tailed. Three things need to be emphasized in Table 2. First, NGQMLE is the three-step estimation procedure proposed by Fan et al. (2013) [6]; the auxiliary innovation distribution of NGQMLE is t4. Second, GQMLE represents maximum likelihood estimation under the assumption that the innovation distribution is normal. Third, MLE is the ordinary maximum likelihood estimation with the true innovation distribution set at the beginning of the simulations.
In Table 2, the Gaussian quasi-maximum likelihood estimators are always terrible when the innovations are heavy-tailed, such as t2 or t3. In particular, when the innovations are t2 or t3 a fourth moment does not exist and the RMSE of ω under GQMLE is intolerable, so the relative RMSE ratios of the other estimators against GQMLE are meaningless and we do not display them in Table 2. But we can find that when the innovations are heavy-tailed, the adaptive QMLE is better than the other two estimators, NGQMLE and GQMLE: in the table, the relative RMSE ratio of the A-QML estimator against GQMLE is smaller than 1. Meanwhile, the relative RMSE ratios show that the adaptive QMLE is better than NGQMLE. It is also clearly shown in Table 2 that most relative RMSE ratios of A-QMLE against GQMLE are close to the relative RMSE ratios of MLE, which means A-QMLE can be the best alternative estimator when the innovation is unknown. On the other hand, the approximate df of the true innovation, shown in the 'dfB' column, is close to the exact degree of freedom. This evidence implies that the adaptive QMLE, no matter whether
the tail of innovation is heavy or thin, is an optimized estimator which is close to the
maximum likelihood estimator with true innovation distribution.
5 Application
In this section we first summarize the daily returns of six indexes, namely the S&P500, FTSE, NASDAQ, CAC, DAX and HSI. The time period considered runs from January 2, 2000 to March 31, 2014. Table 3 shows the number of samples and the mean, standard deviation and excess kurtosis of the daily returns. We can find that the excess kurtosis of all the indexes is bigger than zero, which means that they are all heavy-tailed. So when one wants to analyze stock returns with a GARCH model, the adaptive QMLE procedure is not only necessary but also helpful. Second, we use this adaptive QMLE procedure to estimate the parameters of the GARCH model. Table 4 displays the GARCH(1,1) estimators from quasi-maximum likelihood estimation and adaptive quasi-maximum likelihood estimation. In Table 4, 'dfB' represents the approximate degree of freedom of the Student's t density. From the results in Table 4 we can find that although the adaptive QML estimator differs a little from the quasi-maximum likelihood estimator, we obtain an approximate degree of freedom. These estimated degrees of freedom imply the same heavy-tailed characteristic of the data as Table 3: the S&P500, NASDAQ and HSI have larger kurtosis, so their three approximate degrees of freedom are smaller than those of the other three indexes; in other words, their tails are heavier.
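The excess kurtosis reported in Table 3 is the standard moment ratio m₄/m₂² − 3, which is zero for a normal population and positive for heavy tails. A small helper for computing it from a return series (the function name is our own):

```python
import numpy as np

def excess_kurtosis(x):
    """Sample excess kurtosis m4 / m2^2 - 3, as reported in Table 3."""
    z = np.asarray(x, dtype=float)
    z = z - z.mean()
    m2 = np.mean(z ** 2)
    m4 = np.mean(z ** 4)
    return m4 / m2 ** 2 - 3.0
```

Applied to each index's daily returns over the stated period, this statistic reproduces the kind of positive values listed in Table 3.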
6 Conclusions
This article focuses on improving the efficiency of the estimator of the GARCH model when the innovation distribution is unknown, and proposes the adaptive QMLE, which gains better efficiency. By using η_f = 1 in the sense of the KLIC, which identifies the quasi likelihood function f, the adaptive QMLE is stable no matter whether the tail of the innovation is heavy or not.
Most importantly, without specifying the distribution of the innovation, the adaptive QMLE is very close to the MLE with the true innovation distribution. Hence it is very helpful and accurate when we do not know the distribution of the innovation, especially when the innovation is heavy-tailed. So it is a general quasi-maximum likelihood estimation that can be used in more situations, such as the financial field or genetics. Possible extensions of the adaptive QMLE include introducing other innovation distributions and considering more models.
List of Figures and Tables
Figure 1: Variations of η_f across Student's t likelihood QMLE
[Figure: η_f plotted against the degree of freedom of the quasi likelihood (x-axis 2 to 10, y-axis 0.6 to 1.4), with one line for each innovation distribution t2, t4, t6, t10.]
Table 3: Summary of six stock indexes

Index    n     mean    sd      kurtosis
S&P500   3581  0.0002  0.0132  7.9423
FTSE     3596  0.0001  0.0125  6.1308
NASDAQ   3582  0.0007  0.0302  7.2389
CAC      3632  0.0002  0.0414  4.9996
DAX      3633  0.0019  0.0458  4.4939
HSI      3548  0.0019  0.0471  8.2166

a The data are six daily stock market returns, from January 2, 2000 to March 31, 2014. The kurtosis in the table is the sample excess kurtosis.
Table 4: QMLE and adaptive t-QMLE of GARCH(1,1) models

Index    Estimator  ω       α      β      dfB
S&P500   GQMLE      0.003   0.085  0.902  15
         AtQMLE     0.007   0.083  0.905
FTSE     GQMLE      0.013   0.098  0.892  30
         AtQMLE     0.021   0.082  0.891
NASDAQ   GQMLE      0.005   0.715  0.236  13
         AtQMLE     0.006   0.139  0.824
CAC      GQMLE      0.0002  0.084  0.906  56
         AtQMLE     0.0003  0.085  0.906
DAX      GQMLE      0.0002  0.085  0.905  50
         AtQMLE     0.0002  0.088  0.899
HSI      GQMLE      0.0002  0.064  0.912  12
         AtQMLE     0.0002  0.054  0.929

a The data are six daily stock market returns, from January 2, 2000 to March 31, 2014.
References
[1] Tim Bollerslev. Generalized autoregressive conditional heteroskedasticity. Journal of Econometrics, 31(3):307-327, 1986.
[2] Philippe Bougerol and Nico Picard. Stationarity of GARCH processes and of some nonnegative time series. Journal of Econometrics, 52(1):115-127, 1992.
[3] Min Chen and Ke Zhu. Sign-based portmanteau test for ARCH-type models with heavy-tailed innovations. Working paper, 2014.
[4] Feike C. Drost, Chris A. J. Klaassen, and Bas J. M. Werker. Adaptive estimation in time-series models. The Annals of Statistics, 25(2):786-817, 1997.
[5] Robert F. Engle and Gloria Gonzalez-Rivera. Semiparametric ARCH models. Journal of Business & Economic Statistics, 9(4):345-359, 1991.
[6] Jianqing Fan, Lei Qi, and Dacheng Xiu. Quasi-maximum likelihood estimation of GARCH models with heavy-tailed likelihoods. Journal of Business & Economic Statistics, 2013 (just accepted).
[7] Jianqing Fan and Qiwei Yao. Nonlinear Time Series: Nonparametric and Parametric Methods. Springer, 2003.
[8] Christian Francq, Guillaume Lepage, and Jean-Michel Zakoïan. Two-stage non-Gaussian QML estimation of GARCH models and testing the efficiency of the Gaussian QMLE. Journal of Econometrics, 165(2):246-257, 2011.
[9] Christian Francq and Jean-Michel Zakoïan. Maximum likelihood estimation of pure GARCH and ARMA-GARCH processes. Bernoulli, 10(4):605-637, 2004.
[10] Jonathan B. Hill. Robust estimation and inference for heavy tailed GARCH. Unpublished manuscript, Department of Economics, University of North Carolina, 2014.
[11] István Berkes, Lajos Horváth, and Piotr Kokoszka. GARCH processes: structure and estimation. Bernoulli, 9(2):201-227, 2003.
[12] David A. Hsieh. Modeling heteroscedasticity in daily foreign-exchange rates. Journal of Business & Economic Statistics, 7(3):307-317, 1989.
[13] Shiqing Ling. Self-weighted and local quasi-maximum likelihood estimators for ARMA-GARCH/IGARCH models. Journal of Econometrics, 140(2):849-873, 2007.
[14] Daniel B. Nelson. Conditional heteroskedasticity in asset returns: a new approach. Econometrica: Journal of the Econometric Society, pages 347-370, 1991.
[15] Whitney K. Newey and Douglas G. Steigerwald. Asymptotic bias for quasi-maximum-likelihood estimators in conditional heteroskedasticity models. Econometrica: Journal of the Econometric Society, pages 587-599, 1997.
[16] Sebastián Ossandón and Natalia Bahamonde. On the nonlinear estimation of GARCH models using an extended Kalman filter. In Proceedings of the World Congress on Engineering, volume 1, 2011.
[17] Liang Peng and Qiwei Yao. Least absolute deviations estimation for ARCH and GARCH models. Biometrika, pages 967-975, 2003.
[18] Yiguo Sun and Thanasis Stengos. Semiparametric efficient adaptive estimation of asymmetric GARCH models. Journal of Econometrics, 133(1):373-386, 2006.
[19] Andrew A. Weiss. Asymptotic theory for ARCH models: estimation and testing. Econometric Theory, pages 107-131, 1986.
[20] Halbert White. Maximum likelihood estimation of misspecified models. Econometrica, 50(1):1-25, 1982.
[21] Dacheng Xiu. Quasi-maximum likelihood estimation of volatility with high frequency data. Journal of Econometrics, 159(1):235-250, 2010.
[22] Ke Zhu. A mixed portmanteau test for ARMA-GARCH models by the quasi-maximum exponential likelihood estimation approach. Journal of Time Series Analysis, 2012.
[23] Ke Zhu and Wai Keung Li. A new Pearson-type QMLE for conditionally heteroskedastic models. Working paper, 2014.