The document compares 11 time series models for fitting daily stock return data from the KLCI before and after the 1997 Asian financial crisis using two methods: 1) ranking models based on log likelihood, SBC, and AIC values, and 2) principal component analysis of these criteria. For the pre-crisis period, both methods identify GARCH(1,2) as the best fitting model and ARCH(1) as the worst, but disagree on intermediate models. PCA avoids information loss from ranking and better classifies models by performance level.
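The two comparison approaches the abstract describes can be sketched in a few lines of code. The snippet below is only an illustration of rank aggregation versus PCA applied to a matrix of fit criteria; the model list and the Log L, SBC and AIC values are hypothetical placeholders, not the paper's estimates.

```python
# Minimal sketch (not the paper's code): ordering candidate models by the first
# principal component of their fit criteria versus by simple rank sums.
# All numbers below are made-up placeholders.
import numpy as np

models = ["ARCH(1)", "GARCH(1,1)", "GARCH(1,2)", "GARCH(2,1)"]
# columns: log likelihood (higher is better), SBC, AIC (lower is better)
criteria = np.array([
    [2100.0, -4180.0, -4196.0],
    [2135.0, -4243.0, -4264.0],
    [2142.0, -4250.0, -4274.0],
    [2138.0, -4241.0, -4266.0],
])
scores = criteria.copy()
scores[:, 0] = -scores[:, 0]          # flip Log L so "smaller is better" everywhere

# method 1: rank each criterion separately, then sum the ranks
ranks = scores.argsort(axis=0).argsort(axis=0) + 1
rank_order = np.argsort(ranks.sum(axis=1))

# method 2: PCA on the standardized criteria, order models by the first PC score
z = (scores - scores.mean(axis=0)) / scores.std(axis=0, ddof=1)
eigval, eigvec = np.linalg.eigh(np.corrcoef(z, rowvar=False))
pc1 = z @ eigvec[:, -1]                                   # eigh sorts eigenvalues ascending
pc1 *= np.sign(np.corrcoef(pc1, scores[:, 2])[0, 1])      # align: larger score = worse fit
pca_order = np.argsort(pc1)

print("rank-based order:", [models[i] for i in rank_order])
print("PCA-based order :", [models[i] for i in pca_order])
```

Unlike the integer ranks, the PC scores keep the relative distances between models, which is why the abstract argues PCA avoids the information loss inherent in ranking.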
Machiwal, D. & Jha, M.K. (2012). Stochastic modelling of time series. In A... (SandroSnchezZamora)
This document provides an overview of stochastic modelling and different stochastic processes that are commonly used, including:
- Purely random (white noise) processes where data points are independent and identically distributed
- Autoregressive (AR) processes where each data point is modeled as a linear combination of previous data points plus noise
- Moving average (MA) processes where each data point is modeled as a linear combination of previous noise terms plus a constant
- Autoregressive moving average (ARMA) processes which combine AR and MA processes
- Autoregressive integrated moving average (ARIMA) processes which explicitly include differencing to make time series stationary
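For readers who want to see the processes listed above concretely, here is a minimal simulation sketch; the coefficient values are arbitrary choices for illustration only.

```python
# Illustrative sketch of the basic processes above, simulated with numpy.
import numpy as np

rng = np.random.default_rng(0)
n = 500
eps = rng.standard_normal(n)          # purely random (white noise) process

# AR(1): x_t = phi * x_{t-1} + eps_t
phi = 0.7
ar = np.zeros(n)
for t in range(1, n):
    ar[t] = phi * ar[t - 1] + eps[t]

# MA(1): y_t = mu + eps_t + theta * eps_{t-1}
mu, theta = 0.0, 0.4
ma = mu + eps + theta * np.concatenate(([0.0], eps[:-1]))

# ARMA(1,1) combines both recursions; ARIMA additionally differences the series
# (e.g. with np.diff) until it is stationary before an ARMA model is fitted.
```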
Measuring the volatility in Ghana's gross domestic product (GDP) rate using t... (Alexander Decker)
This document summarizes a study that analyzed volatility in Ghana's GDP growth rate using GARCH models. The study found that GDP volatility exhibited characteristics such as clustering and leverage effects, and that a GARCH(1,1) model provided a reasonably good fit to quarterly GDP data. Volatility and leverage effects were found to have increased significantly. The best-fitting models for GDP volatility were the seasonal ARIMA(1,1,1)(0,0,1)_12 and ARIMA(1,1,2)(0,0,1)_12 models.
1) This paper proposes an adaptive quasi-maximum likelihood estimation approach for GARCH models when the distribution of volatility data is unspecified or heavy-tailed.
2) The approach works by using a scale parameter η_f to identify the discrepancy between the wrongly specified innovation density and the true innovation density.
3) Simulation studies and an application show that the adaptive approach gains better efficiency compared to other methods, especially when the innovation error is heavy-tailed.
1. Dimensional analysis and the concept of similitude allow experiments on scale models to be used to study full-scale systems. Dimensional analysis uses the Buckingham pi theorem to determine the minimum number of dimensionless groups needed to describe a phenomenon in terms of the variables involved.
2. For a model to accurately simulate a prototype system, the dimensionless pi groups that describe the phenomenon must be equal between the model and prototype. This establishes the modeling laws or similarity requirements that a model must satisfy.
3. Common dimensionless groups in fluid mechanics include the Reynolds number, Froude number, Strouhal number, and Weber number. These groups arise frequently in analyzing experimental data from fluid mechanics problems.
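As a small worked illustration of the similarity requirement (with made-up values, not data from the document), matching the Reynolds number between a model and a prototype looks like this:

```python
# Reynolds-number similarity: rho * V * L / mu must match between model and prototype.
def reynolds(rho, velocity, length, mu):
    """Reynolds number Re = rho * V * L / mu."""
    return rho * velocity * length / mu

re_prototype = reynolds(rho=1000.0, velocity=2.0, length=5.0, mu=1.0e-3)
# A 1:10 scale model in the same fluid must run 10x faster to keep Re equal.
re_model = reynolds(rho=1000.0, velocity=20.0, length=0.5, mu=1.0e-3)
print(re_prototype, re_model)   # both 1.0e7
```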
Novel analysis of transition probabilities in randomized k sat algorithm (ijfcstjournal)
This document summarizes a research paper that proposes a new analysis of transition probabilities in randomized k-SAT algorithms. Specifically:
- It shows the probability of correctly flipping a literal in 2-SAT and 3-SAT approaches 2/3 and 4/7 respectively, using Karnaugh maps to analyze all possible variable combinations.
- It extends this analysis to general k-SAT, showing the transition probability of the Markov chain in randomized k-SAT algorithms approaches 0.5.
- Using this result, it determines the probability and complexity of finding a satisfying assignment for randomized k-SAT, showing values within a polynomial factor of (0.9272)^n and (1.0785)^n for satisf
Performance Assessment of Polyphase Sequences Using Cyclic Algorithm (rahulmonikasharma)
Polyphase sequences (P1, P2, Px, Frank) exist for square-integer lengths and have good autocorrelation properties that are useful in several applications, unlike Barker and binary sequences, which exist only for certain lengths and exhibit at most a two-digit merit factor. The integrated sidelobe level (ISL) is often used to quantify the autocorrelation quality of a given polyphase sequence. In this paper, we apply a cyclic algorithm (CA) that minimizes an ISL-related metric and thereby improves the merit factor, which is central to applications such as RADAR, SONAR and communications. A number of examples for integer lengths illustrate the performance of the P1, P2, Px and Frank sequences when the cyclic algorithm is applied; the CA(Px) sequence exhibits the best merit factor among all the polyphase sequences considered.
Stress-Strength Reliability of type II compound Laplace distribution (IRJET Journal)
The document discusses estimating the reliability (R) of a stress-strength model where the strength (Y) and stress (X) variables follow a type II compound Laplace distribution. R is defined as the probability that stress is less than strength (Pr(X<Y)).
It first provides background on stress-strength models and reliability (R) in engineering. It then defines the type II compound Laplace distribution and derives an expression for R when X and Y follow this distribution. Maximum likelihood estimation is used to estimate the three parameters of the distribution. Simulation studies validate the estimation algorithm. Applications of stress-strength models are discussed in fields like engineering, medicine, and genetics.
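A generic way to picture the quantity R = Pr(X < Y) is a Monte Carlo estimate. The sketch below uses ordinary Laplace samples purely as stand-ins; it does not implement the type II compound Laplace distribution or the maximum likelihood estimators of the paper.

```python
# Generic Monte Carlo sketch of stress-strength reliability R = Pr(X < Y).
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
stress = rng.laplace(loc=2.0, scale=1.0, size=n)     # X: stress
strength = rng.laplace(loc=4.0, scale=1.0, size=n)   # Y: strength
r_hat = np.mean(stress < strength)
print(f"estimated R = Pr(X < Y) ~= {r_hat:.3f}")
```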
The differential evolution (DE) algorithm has been applied as a powerful tool to find optimum switching angles for selective harmonic elimination pulse width modulation (SHEPWM) inverters. However, DE's performance is very dependent on its control parameters. Conventional DE generally uses either a trial-and-error mechanism or a tuning technique to determine appropriate values of the control parameters; the disadvantage of this process is that it is very time consuming. In this paper, an adaptive control parameter is proposed in order to speed up the DE algorithm in optimizing SHEPWM switching angles precisely. The proposed adaptive control parameter is shown to enhance the convergence process of the DE algorithm without requiring initial guesses. The results for both negative and positive modulation index (M) also indicate that the proposed adaptive DE is superior to conventional DE in generating SHEPWM switching patterns.
A Fuzzy Inventory Model with Perishable and Aging Items (IJERA Editor)
A parametric multi-period inventory model for perishable items is considered in this paper. Each item in the stock perishes within a given period of time with some uncertainty. A model is derived for the recursive unnormalized conditional distributions of {} given the information accumulated about the inventory-level and surviving-items processes.
Phenomenological Decomposition Heuristics for Process Design Synthesis of Oil... (Alkis Vazacopoulos)
The processing of a raw material is a phenomenon that varies its quantity and quality along a specific network and logics and logistics to transform it into final products. To capture the production framework in a mathematical programming model, a full space formulation integrating discrete design variables and quantity-quality relations gives rise to large scale non-convex mixed-integer nonlinear models, which are often difficult to solve. In order to overcome this problem, we propose a phenomenological decomposition heuristic to solve separately in a first stage the quantity and logic variables in a mixed-integer linear model, and in a second stage the quantity and quality variables in a nonlinear programming formulation. By considering different fuel demand scenarios, the problem becomes a two-stage stochastic programming model, where nonlinear models for each demand scenario are iteratively restricted by the process design results. Two examples demonstrate the tailor-made decomposition scheme to construct the complex oil-refinery process design in a quantitative manner.
In this paper, we propose four algorithms for L1-norm computation of regression parameters, two of which are more efficient for simple and multiple regression models. We start with restricted simple linear regression and the corresponding derivation and computation of the weighted median problem, for which a computing function is coded. After discussing the m-parameter model, we expand the algorithm to unrestricted simple linear regression, and two crude and efficient algorithms are proposed. The procedures are then generalized to the m-parameter model by presenting two new algorithms, of which Algorithm 4 is selected as the more efficient. Various properties of these algorithms are discussed.
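The weighted median subproblem mentioned above has a compact formulation. The sketch below is a generic illustration rather than any of the paper's four algorithms; it uses the standard fact that the L1-optimal slope of a line through the origin is a weighted median of the ratios y_i/x_i with weights |x_i|.

```python
# Sketch of the weighted median subproblem that appears in L1 (least absolute
# deviations) fitting of a restricted simple regression.
import numpy as np

def weighted_median(values, weights):
    """Return a value v minimizing sum_i weights[i] * |values[i] - v|."""
    order = np.argsort(values)
    v, w = np.asarray(values, float)[order], np.asarray(weights, float)[order]
    cum = np.cumsum(w)
    idx = np.searchsorted(cum, 0.5 * w.sum())
    return v[idx]

# Slope of y = b*x under the L1 criterion: minimizing sum |y_i - b*x_i|
# equals minimizing sum |x_i| * |y_i/x_i - b|, a weighted median problem.
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 1.9, 3.3, 3.8])
b = weighted_median(y / x, np.abs(x))
print("L1 slope estimate:", b)
```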
Solving Method of H-Infinity Model Matching Based on the Theory of the Model ... (Radita Apriana)
High-order H∞ model matching problems have traditionally been solved using H∞ control theory, which is difficult. In this paper, model reduction theory is used to solve the high-order H∞ model matching problem, and a new method for solving the H∞ model matching problem based on model reduction theory is proposed. Simulation results show that the method has better applicability and achieves the expected performance.
THE NEW HYBRID COAW METHOD FOR SOLVING MULTI-OBJECTIVE PROBLEMS (ijfcstjournal)
In this article, the hybrid COAW algorithm, which combines the Cuckoo Optimization Algorithm with the simple additive weighting method, is presented to solve multi-objective problems. The cuckoo algorithm is an efficient and structured method for solving nonlinear continuous problems. The Pareto frontiers created by the proposed COAW algorithm are accurate and well dispersed; the method finds Pareto frontiers quickly and identifies their beginning and end points properly. To validate the proposed algorithm, several experimental problems were analyzed, and the results indicate the effectiveness of the COAW algorithm for solving multi-objective problems.
1) The document discusses the relationship between transforming entangled quantum states via local operations and classical communication (LOCC) and the theory of majorization from linear algebra.
2) Nielsen's theorem states that one entangled state can be transformed into another via LOCC if and only if the vector of eigenvalues of one state is majorized by the vector of eigenvalues of the other state.
3) The proof of Nielsen's theorem relies on five key properties, including that any two-way classical communication in an LOCC protocol can be simulated by a one-way communication protocol.
Designed to construct a statistical model describing the impact of two or more quantitative factors on a dependent variable. The fitted model may be used to make predictions, including confidence limits and/or prediction limits. Residuals may also be plotted and influential observations identified.
ESTIMATE OF THE HEAD PRODUCED BY ELECTRICAL SUBMERSIBLE PUMPS ON GASEOUS PETR... (ijaia)
This paper reports the successful development of an exact and efficient radial basis function network (RBFN) model to estimate the head of gaseous petroleum fluids (GPFs) in electrical submersible pumps (ESPs). The head of GPFs in ESPs is currently often estimated using empirical models. Overfitting, and the consequent lack of model generality, is a potentially serious issue. In addition, the available data series is fairly small, comprising the results of 110 experiments. All these limitations were considered in the RBFN design process, and highly accurate RBFNs were developed and cross-validated.
GRADIENT OMISSIVE DESCENT IS A MINIMIZATION ALGORITHM (ijscai)
This article presents a promising new gradient-based backpropagation algorithm for multi-layer feedforward networks. The method requires no manual selection of global hyperparameters and is capable of dynamic local adaptations using only first-order information at a low computational cost. Its semi-stochastic nature makes it fit for mini-batch training and robust to different architecture choices and data distributions. Experimental evidence shows that the proposed algorithm improves training in terms of both convergence rate and speed as compared with other well-known techniques.
The document reviews optimal speed car-following models. It discusses macroscopic and microscopic traffic models, with a focus on microscopic optimal speed models. The optimal speed model defines a desired speed that is a function of headway distance and helps model traffic flow situations. The document also proposes enhancements to the optimal speed model, including a weighting factor dependent on relative speed and spacing to improve braking reactivity. In conclusion, it evaluates optimal speed models and their ability to realistically model traffic dynamics while avoiding collisions.
Short-Term Load Forecasting Using ARIMA Model For Karnataka State Electrical ... (IJERDJOURNAL)
ABSTRACT: Short-term load forecasting is a key issue for reliable and economic operation of power systems. This paper aims to develop short-term electric load forecasting ARIMA Model for Karnataka Electrical Load pattern based on Stochastic Time Series Analysis. The logical and organised procedures for model development using Autocorrelation Function and Partial Autocorrelation Function make ARIMA Model particularly attractive. The methodology involves Initial Model Development Phase, Parameter Estimation Phase and Forecasting Phase. To confirm the effectiveness, the proposed model is developed and tested using the historical data of Karnataka Electrical Load pattern (2016). The forecasting error of ARIMA Model is computed and results have shown favourable forecasting accuracy.
A New Method to Solving Generalized Fuzzy Transportation Problem-Harmonic Mea... (AI Publications)
Transportation Problem is one of the models in the Linear Programming problem. The objective of this paper is to transport the item from the origin to the destination such that the transport cost should be minimized, and we should minimize the time of transportation. To achieve this, a new approach using harmonic mean method is proposed in this paper. In this proposed method transportation costs are represented by generalized trapezoidal fuzzy numbers. Further comparative studies of the new technique with other existing algorithms are established by means of sample problems.
Nonlinear Extension of Asymmetric Garch Model within Neural Network Framework (csandit)
The importance of volatility for all market participants has led to the development and application of various econometric models. The most popular models for modelling volatility are GARCH-type models because they can account for the excess kurtosis and asymmetric effects of financial time series. Since the standard GARCH(1,1) model usually indicates high persistence in the conditional variance, empirical research has turned to the GJR-GARCH model and revealed its superiority in fitting the asymmetric heteroscedasticity in the data. In order to capture both asymmetry and nonlinearity in the data, the goal of this paper is to develop a parsimonious NN model as an extension to the GJR-GARCH model and to determine whether GJR-GARCH-NN outperforms the GJR-GARCH model.
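For reference, the GJR-GARCH(1,1) conditional-variance recursion that the abstract builds on can be written out directly; the parameter values in this sketch are arbitrary illustrations, not estimates from the paper.

```python
# Sketch of the GJR-GARCH(1,1) conditional-variance recursion:
# sigma2_t = omega + (alpha + gamma * 1[eps_{t-1} < 0]) * eps_{t-1}^2 + beta * sigma2_{t-1}
import numpy as np

def gjr_garch_variance(eps, omega=1e-6, alpha=0.05, gamma=0.08, beta=0.90):
    sigma2 = np.empty_like(eps)
    sigma2[0] = eps.var()                            # crude initialization
    for t in range(1, len(eps)):
        leverage = gamma * (eps[t - 1] < 0)          # asymmetric (leverage) term
        sigma2[t] = omega + (alpha + leverage) * eps[t - 1] ** 2 + beta * sigma2[t - 1]
    return sigma2

returns = np.random.default_rng(3).normal(scale=0.01, size=1000)   # toy returns
sigma2 = gjr_garch_variance(returns - returns.mean())
```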
This document discusses the assignment problem and provides an overview of the Hungarian algorithm for solving assignment problems. It begins by defining the assignment problem and describing it as a special case of the transportation problem. It then provides details on the Hungarian algorithm, including the key theorems and steps involved. An example problem of assigning salespeople to cities is presented and solved using the Hungarian algorithm to find the optimal assignment with minimum total cost. The document concludes that the Hungarian algorithm provides an efficient solution for minimizing assignment problems.
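A quick way to reproduce the flavour of the salespeople-to-cities example is to call an off-the-shelf solver; SciPy's linear_sum_assignment solves the assignment problem with a Hungarian-style algorithm. The cost matrix below is made up for illustration.

```python
# Toy assignment problem: assign 4 salespeople to 4 cities at minimum total cost.
import numpy as np
from scipy.optimize import linear_sum_assignment

cost = np.array([        # cost[i, j]: cost of sending salesperson i to city j
    [9, 11, 14, 11],
    [6, 15, 13, 13],
    [12, 13, 6, 8],
    [11, 9, 10, 12],
])
rows, cols = linear_sum_assignment(cost)
print(list(zip(rows, cols)), "total cost:", cost[rows, cols].sum())
```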
This document summarizes a study that models crude oil prices using a Lévy process. The study finds that a MA(8) model best fits the time series properties of oil price returns. However, there is also evidence of GARCH effects. Therefore, the best overall model is a GARCH(1,1) with errors modeled by a Johnson SU distribution. This hybrid Lévy-GARCH process captures the temporal, spectral and distributional properties of the crude oil price data set.
This document presents a time series model for the exchange rate between the Euro (EUR) and the Egyptian Pound (EGP) using a GARCH model. The author analyzes the time series data of the exchange rate for 2008 and finds that it exhibits volatility clustering where large changes tend to follow large changes. An ARCH or GARCH model is needed to capture the changing conditional variances over time. The author estimates several GARCH models and selects the GARCH(1,2) model based on statistical significance of coefficients and AIC values. Diagnostic tests show that the GARCH(1,2) model adequately captures the heteroskedasticity in the data. The fitted model is then used to predict future exchange rates
Principal component analysis - application in finance (Igor Hlivka)
Principal component analysis is a useful multivariate time series method for examining the drivers of changes in an entire dataset. The main advantage of PCA is dimensionality reduction: large sets of data are transformed into a few principal factors that explain the majority of the variability in the group. PCA has found many applications in finance, in both risk and yield curve analytics.
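A minimal dimensionality-reduction sketch along these lines, using simulated returns rather than any dataset from the document:

```python
# Extract the leading principal components from a (simulated) panel of returns.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
returns = rng.normal(scale=0.01, size=(250, 10))   # 250 days x 10 assets

pca = PCA(n_components=3)
factors = pca.fit_transform(returns)               # factor scores, 250 x 3
print("variance explained:", pca.explained_variance_ratio_)
```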
This paper analyzes the swap rates issued by the China Inter-bank Offered Rate (CHIBOR) and selects the one-year FR007 daily data from January 1st, 2019 to June 30th, 2019 as a sample. To fit the data, we conduct Monte Carlo simulation with several typical continuous short-term swap rate models such as the Merton model, the Vasicek model, the CIR model, etc. These models contain both linear and nonlinear forms, and each has both drift and diffusion terms. After empirical analysis, we obtain the parameter values in the Euler-Maruyama scheme and the relevant statistical characteristics of each model. The results show that most of the short-term swap rate models can fit the swap rates and reflect the change of trend, while the CKLSO model performs best.
- The document is a final report for a stock trading prediction challenge that aimed to predict auction volume for 900 stocks over 350 days.
- The authors tested several machine learning models including K-nearest neighbors, random forests, stacking, and neural networks. They extracted features like principal components, wavelet transforms, and stock embeddings.
- Their best performing model was a hybrid neural network approach, with one network for initial predictions and a second network to correct errors, which outperformed the benchmark model for the challenge.
This document discusses ARIMA (autoregressive integrated moving average) models for time series forecasting. It covers the basic steps for identifying and fitting ARIMA models, including plotting the data, identifying possible AR or MA components using the autocorrelation function (ACF) and partial autocorrelation function (PACF), estimating model parameters, checking the residuals to validate the model fit, and choosing the best model. An example analyzes quarterly US GNP data to demonstrate these steps.
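A hedged sketch of that identify/fit/check cycle with statsmodels on a toy series (the GNP example itself is not reproduced here):

```python
# Identify/fit/check an ARIMA model on a toy nonstationary series.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.stats.diagnostic import acorr_ljungbox

rng = np.random.default_rng(7)
y = np.cumsum(rng.normal(size=300))          # toy series with a unit root

model = ARIMA(y, order=(1, 1, 1))            # p, d, q suggested by ACF/PACF plots
res = model.fit()
print(res.summary())
print(acorr_ljungbox(res.resid, lags=[10]))  # residuals should look like white noise
```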
Option pricing under multiscale stochastic volatility (FGV Brazil)
The stochastic volatility model proposed by Fouque, Papanicolaou, and Sircar (2000) explores a fast and a slow time-scale fluctuation of the volatility process to end up with a parsimonious way of capturing the volatility smile implied by close to the money options. In this paper, we test three different models of these authors using options on the S&P 500. First, we use model independent statistical tools to demonstrate the presence of a short time-scale, on the order of days, and a long time-scale, on the order of months, in the S&P 500 volatility. Our analysis of market data shows that both time-scales are statistically significant. We also provide a calibration method using observed option prices as represented by the so-called term structure of implied volatility. The resulting approximation is still independent of the particular details of the volatility model and gives more flexibility in the parametrization of the implied volatility surface. In addition, to test the model’s ability to price options, we simulate options prices using four different specifications for the Data generating Process. As an illustration, we price an exotic option.
Ongoing Master Thesis by Cristina Tessari and Caio Almeida;
EPGE Brazilian School of Economics and Finance.
http://www.fgv.br/epge/en
On Selection of Periodic Kernels Parameters in Time Series Prediction (cscpconf)
The paper analyses the parameters of periodic kernels. Periodic kernels can be used for the prediction task, performed as a typical regression problem; the prediction of real time series is carried out on the basis of the Periodic Kernel Estimator (PerKE). Since periodic kernels require their parameters to be set, it is necessary to analyse the influence of these parameters on prediction quality. The paper describes a simple methodology, based on grid search, for finding values of the parameters of periodic kernels. Two different error measures are considered as measures of prediction quality and lead to comparable results. The methodology was tested on benchmark and real datasets and proved to give satisfactory results.
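The grid-search idea is generic and easy to sketch; the parameter names and error function below are stand-ins, not the PerKE estimator or the paper's error measures.

```python
# Generic grid search: scan a parameter grid and keep the setting with the
# smallest validation error.
import itertools
import numpy as np

def validation_error(period, bandwidth):
    # Placeholder error measure; a real study would run the kernel predictor
    # on held-out data and return e.g. RMSE or MAE.
    return (period - 12) ** 2 + (bandwidth - 0.5) ** 2

grid = itertools.product([6, 12, 24], np.linspace(0.1, 1.0, 10))
best = min(grid, key=lambda params: validation_error(*params))
print("best (period, bandwidth):", best)
```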
The Use of ARCH and GARCH Models for Estimating and Forecasting Volatility-ru... (Ismet Kale)
This document discusses volatility modeling using ARCH and GARCH models. It first provides background on ARCH and GARCH models, noting they were developed to model characteristics of financial time series data like volatility clustering and fat tails. It then describes the specific ARCH and GARCH models that will be used in the study, including the ARCH, GARCH, EGARCH, GJR, APARCH, IGARCH, FIGARCH and FIAPARCH models. The document aims to apply these models to daily stock index data from the IMKB 100 to analyze and forecast volatility, and better understand risk in the Turkish market.
MFBLP Method Forecast for Regional Load Demand System (CSCJournals)
Load forecasting plays an important role in the planning and operation of a power system, and the accuracy of the forecast value is necessary for economically efficient operation and effective control. This paper describes a modified forward backward linear predictor (MFBLP) method for solving the regional load demand of New South Wales (NSW), Australia. The method is designed and simulated based on actual load data of New South Wales, Australia. The accuracy of the discussed method is reported and a comparison with previous methods is also given.
An Enhance PSO Based Approach for Solving Economical Ordered Quantity (EOQ) P... (IJMER)
Meta-heuristic approaches can provide a sufficiently good solution to an optimization problem, especially with incomplete or imperfect information and with lower computational complexity, particularly for numerical solutions. This paper presents an enhanced PSO (Particle Swarm Optimization) technique for solving the EOQ problem. Although PSO performs well, it may require additional iterations to converge, or sometimes many repetitions for complex problems. To overcome these problems, an enhanced PSO is presented which uses double chaotic maps to perform irregular velocity updates, forcing the particles to search a greater space for the best global solution. Finally, both algorithms are compared on the EOQ problem considering deteriorating items, shortages and partial backlogging. The simulation results show that the proposed enhanced PSO converges quickly and finds a solution much closer than PSO.
The Impact of Interest Rate Adjustment Policy (Shuai Yuan)
The document analyzes the impact of China's interest rate adjustment in November 2014 on the volatility of closing and overnight returns of the Shanghai Composite Index, compared to the NASDAQ Index in the US. It finds that the interest rate reduction in China increased volatility in China's stock market, while having no significant impact on volatility in the US stock market. Various GARCH models with normal and t distributions are estimated and compared to analyze the impact on return volatility before and after the rate adjustment.
- The document analyzes forecasting volatility for the MSCI Emerging Markets Index using a Stochastic Volatility model solved with Kalman Filtering. It derives the Stochastic Differential Equations for the model and puts them into State Space form solved with a Kalman Filter.
- Descriptive statistics on the daily returns of the MSCI Emerging Markets Index ETF from 2011-2016 show a mean close to 0, standard deviation of 0.01428, negative skewness, and kurtosis close to a normal distribution. The model will be evaluated against a GARCH model.
This document provides a tutorial on the RESTART rare event simulation technique. RESTART introduces nested sets of states (C1 to CM) defined by importance thresholds. When the process enters a set Ci, Ri simulation retrials are performed within that set until exiting. This oversamples high importance regions to estimate rare event probabilities more efficiently than crude simulation. The document analyzes RESTART's efficiency, deriving formulas for the variance of its estimator and gain over crude simulation. Guidelines are also provided for choosing an optimal importance function to maximize efficiency. Examples applying the guidelines to queueing networks and reliable systems are presented.
The document describes using the Runge-Kutta numerical method to analyze the Ramsey-Cass-Koopmans model of economic growth. It shows that the 4th order Runge-Kutta method approximates solutions to differential equations with increasing accuracy as the step size decreases. When applied to the Ramsey-Cass-Koopmans model, the method generates phase diagrams showing different trajectories for capital and consumption depending on initial conditions. The analysis confirms the Runge-Kutta method provides a reliable way to approximate the dynamics of the Ramsey-Cass-Koopmans model economy.
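The fourth-order Runge-Kutta step the document relies on is short enough to show directly. The sketch below applies it to a simplified Ramsey-Cass-Koopmans system with Cobb-Douglas technology; the parameter values and initial conditions are illustrative only.

```python
# RK4 applied to a simplified Ramsey-Cass-Koopmans system:
#   dk/dt = k^alpha - c - (n + delta) * k
#   dc/dt = (c / theta) * (alpha * k^(alpha-1) - delta - rho)
import numpy as np

alpha, delta, rho, theta, n = 0.33, 0.05, 0.03, 2.0, 0.01

def rhs(state):
    k, c = state
    dk = k**alpha - c - (n + delta) * k
    dc = (c / theta) * (alpha * k**(alpha - 1) - delta - rho)
    return np.array([dk, dc])

def rk4_step(state, h):
    k1 = rhs(state)
    k2 = rhs(state + 0.5 * h * k1)
    k3 = rhs(state + 0.5 * h * k2)
    k4 = rhs(state + h * k3)
    return state + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

state = np.array([1.0, 0.5])                 # initial capital and consumption
trajectory = [state.copy()]
for _ in range(200):
    state = rk4_step(state, h=0.1)
    trajectory.append(state.copy())
```

Plotting the capital and consumption columns of the trajectory against each other gives the phase-diagram view the document describes, with different initial conditions producing different paths.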
VOLATILITY FORECASTING - A PERFORMANCE MEASURE OF GARCH TECHNIQUES WITH DIFFE... (ijscmcj)
Volatility forecasting is an interesting and challenging topic for current financial instruments, as it is directly associated with profits; many risks and rewards depend on volatility, so forecasting it has become an indispensable topic in finance. GARCH distributions play an important role in risk measurement and option pricing. The main motive of this paper is to measure the performance of GARCH techniques for forecasting volatility using different distribution models. We use 9 variations of distribution models to forecast the volatility of a stock entity; the GARCH distribution models examined are Std, Norm, SNorm, GED, SSTD, SGED, NIG, GHYP and JSU. Volatility is forecast 10 days ahead and the values are compared with the actual values to find the best distribution model for volatility forecasting. From the results obtained, it is observed that the GARCH model with the GED distribution outperformed all other models.
Matematika, 2004, Volume 20, No. 1, pp. 31–41
© Department of Mathematics, UTM.
Principal Component Analysis in Modelling
Stock Market Returns
Kassim Haron & Maiyastri
Department of Mathematics, Universiti Putra Malaysia, 43400 UPM Serdang, Selangor, Malaysia
Abstract In this study, an alternative method based on Principal Component Analysis (PCA) is sought for comparing the performance of several GARCH models in fitting the KLCI daily rate of return series before and after the 1997 Asian financial crisis. The results are then compared with those of a known method based on the ranks of the Log Likelihood (Log L), Schwarz's Bayesian Criterion (SBC) and Akaike Information Criterion (AIC) values. The best and the worst fitting models identified by the two methods are exactly the same for both periods, although some disagreement exists among the intermediate models. We also find that the proposed method has a clear edge over its rival: because PCA uses the actual values of the three criteria, it avoids the ranking method's inability to specify exactly the relative position of each competing model. A further advantage is that the method classifies the models into several distinct groups, ordered so that each group consists of models with nearly the same level of fitting ability, the two extreme groups representing the best and the worst models respectively.
Keywords GARCH models, Returns, Fitting, Rank, Principal component
Abstrak In this study, an alternative method for comparing the performance of several GARCH models in fitting the KLCI daily rate of return series before and after the 1997 Asian financial crisis using Principal Component Analysis (PCA) is sought. A comparison is then made with the results obtained from a known method based on the ranks of the Log Likelihood (Log L), Schwarz's Bayesian Criterion (SBC) and Akaike Information Criterion (AIC) values. The best and the worst fitting models identified by the two methods are found to be exactly the same for the two periods, although some disagreement exists among the intermediate models. We also find that the proposed method has a clear advantage over its rival because PCA uses the actual values of the three criteria, so the ranking method's inability to specify exactly the relative position of each competing model can be avoided. Another advantage is that this method also classifies the models into several distinct groups, ordered so that each group consists of models with nearly the same level of fitting ability. The two extreme classes of models are identified as representing the best and the worst groups respectively.
Keywords GARCH models, Returns, Fitting, Rank, Principal component
1 Introduction
Financial time series data such as stock returns, inflation rates and foreign exchange rates have non-normal characteristics: they are leptokurtic and skewed. In addition, they exhibit changes in variance over time, so the assumption of constant variance (homoscedasticity) is inappropriate. The variability in financial data could very well be due to the volatility of financial markets, which are known to be sensitive to factors such as rumours, political upheavals and changes in government monetary and fiscal policies.
Engle [4] introduced the Autoregressive Conditional Heteroscedasticity (ARCH) process to cope with the changing variance. Bollerslev [1] proposed the Generalized ARCH (GARCH) model, which has a more flexible lag structure because the error variance can be modelled by an Autoregressive Moving Average (ARMA) type of process; such a model can be effective in removing excess kurtosis. Nelson [11] proposed the class of exponential GARCH (EGARCH) models, which can capture the asymmetry and skewness of stock market return series. Several researchers, such as Franses and Van Dijk [6], Choo [3] and Gokcan [7], have shown that models with a small lag, like GARCH(1,1), are sufficient to cope with the changing variance. Nevertheless, due to the high volatility of the rate of returns of the KLCI, higher order lag models such as GARCH(1,2), GARCH(2,1) and GARCH(2,2) are also included in our study. In all, we compare the performance of eleven competing time series models for fitting the rate of return data: ARCH(1), ARCH(2), GARCH(1,1), GARCH(1,2), GARCH(2,1), GARCH(2,2), EGARCH(1,1), GARCH-NNG(1,1), SGARCH(1,1), IGARCH(1,1) and GARCH-M(1,1).
Franses and Van Dijk [6] and Choo [3] chose the best models for fitting time series data by comparing the ranks of the values of three goodness-of-fit statistics, namely the Log Likelihood (Log L), Schwarz's Bayesian Criterion (SBC) and the Akaike Information Criterion (AIC). We notice that such a method has one weakness: using an ordinal-scale measurement, that is the rank, instead of the actual values of the criteria causes some loss of information. Therefore, in this paper an alternative method capable of overcoming this problem is sought and used to compare the performance of potential models for fitting the return series in both periods. We propose the use of Principal Component Analysis (PCA) to produce a new set of variables, called the principal components, formed as linear combinations of the three statistics. By looking at the component values together with the PCA plots, one can classify the relative performance of the competing models. Finally, the results from the two methods are studied and compared.
2 Data Description
The data used in this study are the daily rates of return of the KLCI (Kuala Lumpur Composite Index) recorded from week 1 of January 1989 to week 4 of December 2000. In the fourth quarter of 1997, the financial crisis that hit the Asian region badly hurt the performance of most stock markets, including the KLSE (Kuala Lumpur Stock Exchange). From the plot in Figure 1, we can see that starting from September 1997 the rate of returns of the KLCI was noticeably volatile.
For this reason, we shall divide the data into two periods:
Period I : From January 1989 to September 1997
Period II : From October 1997 to December 2000.
Figure 1: Daily rate of returns of the KLCI from January 1989 to December 2000
Some descriptive statistics for the daily returns of the KLCI are presented in Table 1.
Table 1: Summary statistics of the rate of daily returns of the KLCI
As seen from Table 1, the distribution of the daily rate of returns in Period I is negatively skewed and leptokurtic. For Period II, that is after the financial crisis, the standard deviation of the data is about twice as large as in Period I; according to Gokcan [7], this indicates that the rate of returns in Period II is more volatile than in Period I. In Period II the data have a positively skewed and leptokurtic distribution.
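To make the descriptive comparison concrete, the sketch below computes the kind of summary statistics reported in Table 1 (mean, standard deviation, skewness and excess kurtosis) for two return series. The series used here are randomly generated stand-ins, not the actual KLCI data, and the function name `summarize` is purely illustrative.

```python
import numpy as np
from scipy import stats

def summarize(returns):
    """Summary statistics of the kind reported in Table 1 (illustrative only)."""
    return {
        "mean": np.mean(returns),
        "std": np.std(returns, ddof=1),
        "skewness": stats.skew(returns),
        "excess kurtosis": stats.kurtosis(returns),  # > 0 indicates a leptokurtic distribution
    }

# 'period1' and 'period2' would hold the KLCI daily returns before/after the crisis;
# here they are normal stand-ins with different dispersion.
rng = np.random.default_rng(1)
period1 = rng.normal(0.0, 0.01, 2000)
period2 = rng.normal(0.0, 0.02, 800)
print(summarize(period1))
print(summarize(period2))
```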
3 The Models
The daily rate of returns ri of the KLCI are calculated using the following formula:
ri = log
It
It−1
, t = 1, 2, . . ., T
where It denotes the reading on the composite index at the close of tth
trading day. As
noted earlier, the rate of daily returns of the KLCI displays a changing variance over time.
There are many ways to describe the changes in variance and one of them is by considering
the Autoregressive Conditional Heteroscedasticity (ARCH) model.
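As a minimal illustration of this formula, the following Python snippet computes daily log returns from a short index series; the closing values shown are hypothetical, not actual KLCI readings.

```python
import numpy as np

# Hypothetical daily closing values I_1, ..., I_T of a composite index
index = np.array([1100.0, 1092.5, 1105.3, 1098.7, 1110.2])

# r_t = log(I_t / I_{t-1}); differencing the log series gives T-1 returns
returns = np.diff(np.log(index))
print(returns)
```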
The ARCH regression model for the series $r_t$ can be written as $\phi_m(B)r_t = \mu + \varepsilon_t$ for the model with intercept, and $\phi_m(B)r_t = \varepsilon_t$ for the non-intercept model, with
$$ \phi_m(B) = 1 - \phi_1 B - \phi_2 B^2 - \cdots - \phi_m B^m, $$
where $B$ is the backward shift operator defined by $B^k X_t = X_{t-k}$. The parameter $\mu$ reflects a constant term (intercept), which in practice is typically estimated to be close or equal to zero. The order $m$ is usually 0 or small, indicating that there is usually little opportunity to forecast $r_t$ from its own past; in other words, $r_t$ itself has essentially no autoregressive structure.
The conditional distribution of the series of disturbances following the ARCH process can be written as
$$ \varepsilon_t \mid \Phi_\tau \sim N(0, h_t), \qquad (1) $$
where $\Phi_\tau$ denotes all available information at time $\tau < t$. The conditional variance $h_t$ is
$$ h_t = \omega + \sum_{i=1}^{q} \alpha_i \varepsilon_{t-i}^2, \qquad (2) $$
with $\varepsilon_t = \sqrt{h_t}\, e_t$, $e_t \sim N(0, 1)$.
Bollerslev [1] introduced the Generalized ARCH($p, q$) or GARCH($p, q$) model, in which the conditional variance $h_t$ is given by
$$ h_t = \omega + \sum_{i=1}^{q} \alpha_i \varepsilon_{t-i}^2 + \sum_{j=1}^{p} \beta_j h_{t-j}, \qquad p \ge 0,\ q > 0,\ \omega > 0,\ \alpha_i \ge 0,\ \beta_j \ge 0. \qquad (3) $$
If the parameters are constrained such that
$$ \sum_{i=1}^{q} \alpha_i + \sum_{j=1}^{p} \beta_j < 1, $$
we have a weakly stationary GARCH($p, q$) or SGARCH($p, q$) process, since the mean, variance and autocovariances are finite and constant over time. If
$$ \sum_{i=1}^{q} \alpha_i + \sum_{j=1}^{p} \beta_j = 1, $$
we have the Integrated GARCH($p, q$) or IGARCH($p, q$) model.
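The recursion in equation (3) and the stationarity condition above can be sketched in a few lines of Python. The function name `garch_variance` and the parameter values are hypothetical, chosen only to illustrate a weakly stationary GARCH(1,1); they are not estimates from this paper.

```python
import numpy as np

def garch_variance(eps, omega, alpha, beta):
    """Conditional variance h_t from equation (3) for a GARCH(p=len(beta), q=len(alpha))
    model, given a residual series eps; initialised at the sample variance."""
    q, p = len(alpha), len(beta)
    T = len(eps)
    h = np.full(T, eps.var())
    for t in range(max(p, q), T):
        h[t] = (omega
                + sum(alpha[i] * eps[t - 1 - i] ** 2 for i in range(q))   # ARCH terms
                + sum(beta[j] * h[t - 1 - j] for j in range(p)))          # GARCH terms
    return h

# Hypothetical GARCH(1,1) parameters
omega, alpha, beta = 1e-6, [0.10], [0.85]
print("weakly stationary (SGARCH):", sum(alpha) + sum(beta) < 1)
print("integrated (IGARCH):", np.isclose(sum(alpha) + sum(beta), 1.0))

eps = np.random.default_rng(0).normal(scale=0.01, size=500)  # stand-in residuals
h = garch_variance(eps, omega, alpha, beta)
```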
Nelson [11] proposed the class of exponential GARCH or EGARCH models, in which $h_t$ is defined by
$$ \ln(h_t) = \omega + \sum_{i=1}^{q} \alpha_i\, g(e_{t-i}) + \sum_{j=1}^{p} \beta_j \ln(h_{t-j}), $$
where
$$ g(e_t) = \theta e_t + \gamma|e_t| - \gamma E|e_t|. $$
The coefficient of the second term in $g(e_t)$ is set to 1 ($\gamma = 1$) in this formulation. Unlike the
linear GARCH model there are no restrictions on the parameters to ensure non-negativity of
the conditional variances. The EGARCH model allows good news (positive return shocks) and bad news (negative return shocks) to have a different impact on volatility, whereas the linear GARCH model does not. If $\theta = 0$, a positive return shock has the same effect on volatility as a negative return shock of the same magnitude; if $\theta < 0$, a positive return shock actually reduces volatility; and if $\theta > 0$, a positive return shock increases volatility. The conditional variance $h_t$ follows equation (3) and we write the model as EGARCH($p, q$).
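A small sketch of Nelson's news-impact function makes this asymmetry explicit: for shocks of the same size, the sign of θ determines whether good and bad news contribute differently to ln(h_t). The θ values below are illustrative, not estimates from the paper, and the function name is hypothetical.

```python
import numpy as np

def egarch_news_impact(e, theta, gamma=1.0):
    """g(e_t) = theta*e_t + gamma*(|e_t| - E|e_t|), with E|e_t| = sqrt(2/pi)
    for a standard normal shock e_t."""
    return theta * e + gamma * (np.abs(e) - np.sqrt(2.0 / np.pi))

shock = 1.0                        # one-standard-deviation shock
for theta in (-0.3, 0.0, 0.3):     # illustrative asymmetry parameters
    good = egarch_news_impact(+shock, theta)   # positive return shock
    bad = egarch_news_impact(-shock, theta)    # negative return shock of the same size
    print(f"theta={theta:+.1f}: g(+1)={good:+.3f}, g(-1)={bad:+.3f}")
```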
In the GARCH-in-Mean or GARCH-M model, the GARCH effects appear in the mean of the process: $\varepsilon_t = \sqrt{h_t}\, e_t$ with $e_t \sim N(0, 1)$, and $r_t = \mu + \delta\sqrt{h_t} + \varepsilon_t$ for the model with intercept or $r_t = \delta\sqrt{h_t} + \varepsilon_t$ for the non-intercept model. Engle and Mustafa [5] reported significant test statistics for ARCH models, especially for stock returns. For the GARCH($p, q$) specification, Bollerslev [2] suggested adopting low orders for the lag lengths $p$ and $q$; typical examples are GARCH(1,1), GARCH(1,2) and GARCH(2,1).
4 Research Method
The parameters of the models considered in this study are estimated by the maximum likelihood method. The log likelihood function for the ARCH and GARCH models can be written as
$$ L = -\frac{T}{2}\log(2\pi) - \frac{1}{2}\sum_{t=1}^{T}\log(h_t) - \frac{1}{2}\sum_{t=1}^{T}\frac{\varepsilon_t^2}{h_t}, $$
where $T$ is the total number of daily rates of return, $\varepsilon_t = r_t$, and $h_t$ is the conditional variance. However, when estimating the GARCH-M($p, q$) model we take $\varepsilon_t = r_t - \delta\sqrt{h_t}$.
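A direct translation of this log likelihood into Python is sketched below; in practice one would maximise it over the model parameters, for example by passing its negative to a numerical optimiser such as scipy.optimize.minimize. The function name is illustrative.

```python
import numpy as np

def gaussian_log_likelihood(eps, h):
    """L = -(T/2) log(2*pi) - (1/2) sum log(h_t) - (1/2) sum eps_t^2 / h_t."""
    T = len(eps)
    return (-0.5 * T * np.log(2.0 * np.pi)
            - 0.5 * np.sum(np.log(h))
            - 0.5 * np.sum(eps ** 2 / h))

# eps_t = r_t for the ARCH/GARCH models (or r_t - delta*sqrt(h_t) for GARCH-M);
# h would come from the conditional variance recursion of the chosen model.
```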
The procedures for performing the ranking and the proposed PCA methods are as
follows.
(a) The Ranking Method
In this method, the values of the three goodness-of-fit statistics, namely the Log Likelihood (Log L), Schwarz's Bayesian Criterion (SBC) and Akaike's Information Criterion (AIC), are first calculated for each rival model. The values of AIC and SBC are computed as follows:
AIC = −2 ln(L) + 2k
SBC = −2 ln(L) + ln(T)k
where k is the number of free parameters and T is the number of residuals that can be
computed for the time series. The ranks of the models based on the calculated values are
then determined for each statistic. Finally, the ranks of the models are averaged across the
statistics and the models with the smallest and the largest averages are taken to be the best
and the worst fit models respectively.
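The ranking procedure can be sketched as follows. The Log L values, parameter counts and sample size below are hypothetical placeholders rather than figures from Tables 3 or 5, and tied ranks would in practice be averaged as in the paper (for example with scipy.stats.rankdata).

```python
import numpy as np

def aic(logL, k):
    return -2.0 * logL + 2.0 * k

def sbc(logL, k, T):
    return -2.0 * logL + np.log(T) * k

# Hypothetical fitted models: (name, log likelihood, number of free parameters k)
models = [("ARCH(1)", 2510.3, 2), ("GARCH(1,1)", 2552.8, 3), ("GARCH(1,2)", 2556.1, 4)]
T = 2000  # number of residuals

logL = np.array([m[1] for m in models])
aics = np.array([aic(L, k) for _, L, k in models])
sbcs = np.array([sbc(L, k, T) for _, L, k in models])

# Rank 1 = best: largest Log L, smallest AIC, smallest SBC (simple ranks, no tie handling)
rank = lambda x: x.argsort().argsort() + 1
avg_rank = np.vstack([rank(-logL), rank(aics), rank(sbcs)]).mean(axis=0)
for (name, _, _), r in zip(models, avg_rank):
    print(f"{name}: average rank {r:.2f}")
```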
(b) The PCA Method
The principal components ${}_{n}Y_{q} = (y_1, y_2, \ldots, y_q)$ are obtained by multiplying the data matrix ${}_{n}X_{p}$ by ${}_{p}A_{q}$ (Johnson and Wichern [10]), where ${}_{p}A_{q}$ is the matrix resulting from the Singular Value Decomposition (SVD) of ${}_{n}X_{p}$, that is
$$ {}_{n}X_{p} = {}_{n}U_{q}\,{}_{q}L_{q}\,{}_{q}A_{p}', $$
where ${}_{q}L_{q}$ is the $q \times q$ diagonal matrix of the non-zero characteristic roots (eigenvalues) of $X'X$, and ${}_{p}A_{q} = (a_1, a_2, \ldots, a_q)$, in which $a_j$, $j = 1, \ldots, q$, are the characteristic vectors (eigenvectors) of $X'X$.
PCA produces a new set of variables, called the principal components, formed as linear combinations of the old variables; in this case, the components are linear combinations of the Log L, SBC and AIC statistics. The components are ordered by importance in such a way that the first component contains more information than the second, the second more than the third, and so on. The classification of models is obtained from the PCA plots.
5 Results and Discussion
(a) Period I
Table 2 shows the parameter estimates and their t-ratios. All parameter estimates, with the exception of β1 for GARCH(2,2) and θ for EGARCH(1,1), are significant at the 5% level.
Results of the goodness-of-fit test are presented in Table 3.
Since the proportion of variation explained by the first component in the PCA is found to be 99.8%, only the first component is used to determine model performance. Figure 2 shows the resulting PCA plot together with the component values.
The results presented in Table 3 and Figure 2 clearly suggest that GARCH(1,2) is the best fitting model whilst ARCH(1) is the worst under both methods. However, the performances of the intermediate models display some degree of disagreement between the two methods. Figure 2 also shows that the models GARCH(1,2), GARCH(2,1) and GARCH(2,2) have almost the same level of performance and may therefore form a group consisting of models which are better than the others. They are followed by GARCH-M(1,1), GARCH(1,1), GARCH-NNG(1,1) and SGARCH(1,1), which form the second group. The worst models, which belong to the ARCH family, namely ARCH(2) and ARCH(1), form groups that are clearly separated from the others. In other words, the GARCHs with lag 2 form the best group, followed by the GARCHs with lag 1, with the exception of EGARCH(1,1) and IGARCH(1,1). Also note that, if we were to use the ranking method alone for choosing the models, we would not be able to identify which models exhibit about the same level of performance and which are clearly different.
(b) Period II
Table 4 shows the parameter estimates and their t-ratios. All parameter estimates, with the exception of β1 for GARCH(2,2), β2 for GARCH(2,1) and δ for GARCH-M(1,1), are significant at the 5% level. Results of the goodness-of-fit test are presented in Table 5.
Table 2: Estimation results of the daily rate of returns for Period I
Table 3: Performance by ranking the average rank of the goodness-of-fit
statistics values for Period I
Figure 2: PCA plot for Period I
Table 4: Estimation result of the daily rate of returns for Period II
Since the proportion of variation explained by the first component in the PCA is found to be 99.6%, only the first component is used to determine model performance. Figure 3 shows the resulting PCA plot together with the component values.
The results presented in Table 5 and Figure 3 clearly suggest that SGARCH(1,1) is the best fitting model whilst ARCH(1) is the worst under both methods. However, the performances of the intermediate models again display some degree of disagreement between the two methods. Figure 3 shows that the GARCH family models are separated from the ARCH models in such a way that the GARCHs are better than the ARCHs. Generally, the GARCH family can be divided into three groups, with the best consisting of SGARCH(1,1) and IGARCH(1,1), followed by a second group made up of GARCH(1,1), GARCH-NNG(1,1) and EGARCH(1,1). The rest of the models form the third group.
Results from the two periods reveal that the ranking method has one weakness. By using ordinal-scale measurements, that is the ranks of the three criteria rather than their actual values, one tends to lose information about the relative positions of the models concerned. This is obvious in Table 3, where two models, GARCH(2,1) and GARCH(2,2), have a tied rank of 2.5, whereas the PCA method firmly singles out the former as the second and the latter as the third best model.
6 Conclusions
This study managed to come out with an alternative method for selecting the best model
from a set of competing GARCH models for fitting the stock market return series. The
PCA method identified exactly the same best and worst fit models as the ranking method
for the two periods. However, as a whole, the models occupying the intermediate positions
differ in the two methods. The proposed method is seen to be superior and should be
preferred because PCA uses actual values of the three goodness-of-fit statistics and hence
the inability to exactly specify the relative position of each of the competing models as faced
by the ranking method may be avoided. Another advantage is this method also enables
models to be classified into several distinct groups ordered in such a way that each group
is made up of models with about the same level of fitting ability. The two extreme classes
of models are identified to represent the best and the worst groups respectively.
References
[1] T. Bollerslev, Generalized autoregressive conditional heteroscedasticity, Journal of Econometrics, 31 (1986), 307–327.
[2] T. Bollerslev, R. Y. Chou & K. F. Kroner, ARCH modeling in finance, Journal of Econometrics, 52 (1992), 5–59.
[3] W. C. Choo et al., Performance of GARCH models in forecasting stock market volatility, Journal of Forecasting, 18 (1999), 333–334.
[4] R. F. Engle, Autoregressive conditional heteroscedasticity with estimates of the variance of United Kingdom inflation, Econometrica, 50(4) (1982), 987–1008.
[5] R. F. Engle & C. Mustafa, Implied ARCH models from options prices, Journal of Econometrics, 52 (1992), 289–311.
Figure 3: PCA plot for Period II
Table 5: Performance by ranking the average rank of the goodness-of-fit
statistics values for Period II
[6] P. H. Franses & R. Van Dijk, Forecasting stock market volatility using (non-linear) GARCH models, Journal of Forecasting, 15 (1996), 229–235.
[7] S. Gokcan, Forecasting volatility of emerging stock markets: Linear versus non-linear GARCH models, Journal of Forecasting, 19 (2000), 499–504.
[8] J. D. Hamilton, Time Series Analysis, Princeton University Press, Princeton, New Jersey, 1994.
[9] A. C. Harvey, Time Series Models, Harvester Wheatsheaf, New York, 1993.
[10] R. A. Johnson & D. W. Wichern, Applied Multivariate Statistical Analysis, Prentice Hall, New Jersey, 1988.
[11] D. B. Nelson, Conditional heteroscedasticity in asset returns: A new approach, Econometrica, 59(2) (1991), 347–370.