This document summarizes research on comparing the accuracy of long-horizon forecasts from multivariate cointegrated systems versus univariate models that ignore cointegration. The main findings are:
1) When accuracy is measured using standard trace mean squared error, imposing cointegration provides no benefit over univariate models at long horizons.
2) Both multivariate and univariate long-horizon forecasts satisfy the cointegrating relationships exactly.
3) The cointegrating combinations of forecast errors from both approaches have finite variance at long horizons.
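As a rough illustration of the first finding, the following Monte Carlo sketch (Python, using numpy and statsmodels; the data-generating process, sample size, horizon, model orders, and replication count are illustrative assumptions, not the paper's design) compares the trace MSE of long-horizon forecasts from a VECM that imposes cointegration with univariate ARIMA forecasts that ignore it:

import numpy as np
from statsmodels.tsa.vector_ar.vecm import VECM
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
T, h, reps = 200, 40, 20   # estimation sample, forecast horizon, Monte Carlo replications

def simulate(n):
    # Bivariate common-trend system: x and y share one random walk, so x - y is stationary.
    trend = np.cumsum(rng.normal(size=n))
    x = trend + rng.normal(size=n)
    y = trend + rng.normal(size=n)
    return np.column_stack([x, y])

mse_system, mse_univariate = 0.0, 0.0
for _ in range(reps):
    data = simulate(T + h)
    train, future = data[:T], data[T:]

    # System forecast that imposes the cointegrating restriction
    system_fc = VECM(train, k_ar_diff=1, coint_rank=1).fit().predict(steps=h)

    # Univariate forecasts that ignore cointegration (one ARIMA per series)
    univariate_fc = np.column_stack([
        ARIMA(train[:, j], order=(0, 1, 1)).fit().forecast(steps=h) for j in range(2)
    ])

    mse_system += np.mean(np.sum((future - system_fc) ** 2, axis=1)) / reps
    mse_univariate += np.mean(np.sum((future - univariate_fc) ** 2, axis=1)) / reps

print("trace MSE over the horizon, cointegrated system:", round(mse_system, 2))
print("trace MSE over the horizon, univariate models:  ", round(mse_univariate, 2))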
This document introduces a new class of information-theoretic divergence measures called the K and L divergences. Unlike existing measures like Kullback-Leibler divergence, the new measures do not require probability distributions to be absolutely continuous. The K and L divergences also have desirable properties like being bounded above by the variational distance. A generalized form, the Jensen-Shannon divergence, can measure differences between multiple distributions and provides both upper and lower bounds for classification error probability.
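A minimal numerical sketch of the Jensen-Shannon idea (Python with numpy; the two example distributions are made up) shows that, unlike the Kullback-Leibler divergence, it remains finite and bounded even when the distributions have disjoint support:

import numpy as np

def kl(p, q):
    # Kullback-Leibler divergence with the convention 0 * log(0/q) = 0
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def js(p, q):
    # Jensen-Shannon divergence: average KL of p and q to their midpoint
    m = 0.5 * (p + q)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

p = np.array([0.5, 0.5, 0.0, 0.0])   # supported on the first two outcomes
q = np.array([0.0, 0.0, 0.5, 0.5])   # supported on the last two outcomes
print(js(p, q))   # ln 2 ~ 0.693, the maximum possible value (in nats)
print(js(p, p))   # 0.0 for identical distributions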
This document provides an overview of distributed lag models. It defines distributed lag models as models where the current value of a dependent variable is predicted based on current and past values of an explanatory variable. It discusses finite and infinite distributed lag models. Methods for estimating distributed lag models like ad hoc estimation and the Koyck model are described. The Koyck model specifies an exponential decline in lag weights. Problems with estimation like multicollinearity, serial correlation, and heteroscedasticity are also summarized.
This 10-hour class is intended to give students the basis for solving statistical problems empirically. Talk 1 serves as an introduction to the statistical software R and presents how to calculate basic measures such as the mean, variance, correlation, and Gini index. Talk 2 shows how the central limit theorem and the law of large numbers work empirically. Talk 3 presents point estimation, confidence intervals, and hypothesis tests for the most important parameters. Talk 4 introduces the linear regression model and Talk 5 the bootstrap. Talk 5 also presents a simple example of a Markov chain.
All the talks are supported by scripts written in R.
Granger Causality Test: A Useful Descriptive Tool for Time Series Data (IJMER)
The interdependence of one or more variables on another has long been recognized, since it was first observed that one variable tends to move, or regress, toward another, following the work of Galton (1886), Pearson and Lee (1903), Kendall and Stuart (1961), Johnston and DiNardo (1997), Gujarati (2004), and others. In light of this dependence over time, the researcher uses Granger causality as an effective tool for testing predictive causality between Nigerian GDP and money supply, to establish the type of causality that exists between the two time series under consideration and which one statistically predicts the other.
The study aimed to test the nature of causality between GDP and money supply for the Federal Republic of Nigeria over a period of thirty years, using data sourced from the Central Bank of Nigeria Statistical Bulletin. After meeting the conditions of the Granger causality test, namely ensuring stationarity of the variables under consideration, including enough lags in the prescribed model before estimation (the test is sensitive to the number of lags introduced), and assuming the disturbance terms in the various models are uncorrelated, the analysis indicates a bilateral relationship between Nigerian GDP and money supply: Nigerian GDP Granger-causes money supply and vice versa. Based on this result, Nigerian GDP and money supply can be successfully modeled with a vector autoregressive model, since changes in one variable have a significant effect on the other.
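For readers who want to reproduce the mechanics of such a test, here is a hedged sketch using simulated series (not the Nigerian GDP and money-supply data) and the grangercausalitytests function from statsmodels; the lag order and data-generating process are illustrative choices:

import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(1)
T = 200
money = np.zeros(T)
gdp = np.zeros(T)
for t in range(1, T):
    money[t] = 0.5 * money[t - 1] + rng.normal()
    gdp[t] = 0.4 * gdp[t - 1] + 0.3 * money[t - 1] + rng.normal()  # money helps predict gdp

# Column order matters: the test asks whether the SECOND column Granger-causes the FIRST.
data = np.column_stack([gdp, money])
results = grangercausalitytests(data, maxlag=2)
f_stat, p_value, _, _ = results[2][0]["ssr_ftest"]
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")   # a small p-value: money Granger-causes gdp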
This document outlines the generalised method of moments (GMM) estimation technique. It begins with the basic principles of GMM, including that it uses theoretical relations that parameters should satisfy to choose parameter estimates. It then discusses estimating GMM, hypothesis testing with GMM, and extensions such as using GMM with dynamic stochastic general equilibrium (DSGE) models. The document provides details on how population moments relate to sample moments, and how method of moments estimation and instrumental variables estimation can both be viewed as special cases of GMM. It concludes by explaining how the generalized method of moments estimator works by minimizing a weighted distance between sample and population moments.
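A toy sketch of that last idea, under assumed moment conditions for a mean-and-variance problem (Python with numpy and scipy; the identity weighting matrix and the three moments are simplifying choices, not a general-purpose GMM implementation):

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
x = rng.normal(loc=1.5, scale=2.0, size=5000)

def gbar(theta):
    # Sample averages of the moment functions:
    # E[x - mu] = 0, E[(x - mu)^2 - sigma2] = 0, E[(x - mu)^3] = 0
    mu, sigma2 = theta
    return np.array([(x - mu).mean(),
                     ((x - mu) ** 2 - sigma2).mean(),
                     ((x - mu) ** 3).mean()])

def objective(theta):
    g = gbar(theta)
    return g @ g        # quadratic form with an identity weighting matrix

est = minimize(objective, x0=np.array([0.0, 1.0]), method="Nelder-Mead")
print("GMM estimates (mu, sigma2):", est.x)   # close to (1.5, 4.0)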
1) The document discusses the limiting behavior of the "probability of claiming superiority" (PST) in Bayesian clinical trials as the sample size increases.
2) The main result is that under certain conditions, the PST (also called average power) converges to the prior probability that the alternative hypothesis is true.
3) The two key assumptions for this limiting result are: 1) the posterior distribution is "π-consistent", and 2) the prior probability of the boundary of the alternative hypothesis set is zero.
This document discusses the Koyck transformation approach to modeling distributed lag structures. It begins by introducing distributed lag models, which allow the effect of a causal variable to be spread over multiple time periods. It then describes the Koyck transformation technique, which simplifies an infinite distributed lag model into an estimable autoregressive model by assuming the lag coefficients decline geometrically. This involves lagging the model by one period, multiplying by the decay parameter λ, and subtracting to isolate the impact of the causal variable in the current period. The Koyck approach allows estimation of distributed lag models using standard regression methods.
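A small simulation sketch of the Koyck logic (Python with numpy; the parameter values are arbitrary, and the plain OLS step ignores the moving-average error that the transformation induces):

import numpy as np

rng = np.random.default_rng(3)
T, alpha, beta, lam = 500, 2.0, 0.8, 0.6
x = rng.normal(size=T)

# Geometric distributed lag built recursively: s_t = x_t + lam * s_{t-1}
s = np.zeros(T)
for t in range(T):
    s[t] = x[t] + (lam * s[t - 1] if t > 0 else 0.0)
y = alpha + beta * s + 0.3 * rng.normal(size=T)

# Koyck-transformed regression: y_t on a constant, x_t and y_{t-1}
X = np.column_stack([np.ones(T - 1), x[1:], y[:-1]])
coef, *_ = np.linalg.lstsq(X, y[1:], rcond=None)
const, beta_hat, lam_hat = coef
print("beta ~", round(beta_hat, 2), " lambda ~", round(lam_hat, 2),
      " alpha ~", round(const / (1 - lam_hat), 2))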
The document provides an overview of time series econometrics concepts including:
1) Time series econometrics analyzes the dynamic structure and interrelationships over time in economic data. It examines stationary and non-stationary stochastic processes.
2) A time series is stationary if its mean, variance, and autocovariance remain constant over time. A random walk process is a type of non-stationary process where the variable fluctuates around a stochastic trend.
3) The document discusses key time series econometrics models and techniques including unit root tests, vector autoregressive models, causality tests, cointegration, and error correction models.
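As a quick illustration of the stationarity and unit-root ideas in points 2 and 3, the following sketch (Python with statsmodels; the series and parameters are simulated for illustration) applies the augmented Dickey-Fuller test to a stationary AR(1) process and to a random walk:

import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(4)
T = 500
shocks = rng.normal(size=T)

stationary = np.zeros(T)
for t in range(1, T):
    stationary[t] = 0.5 * stationary[t - 1] + shocks[t]   # mean-reverting

random_walk = np.cumsum(shocks)                            # stochastic trend

for name, series in [("stationary AR(1)", stationary), ("random walk", random_walk)]:
    stat, pvalue, *_ = adfuller(series)
    print(f"{name:18s}  ADF stat = {stat:6.2f}   p-value = {pvalue:.3f}")
    # small p-value -> reject the unit-root null; large p-value -> cannot reject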
Probabilistic project failure modes follow specific distribution functions. One common choice is the Poisson distribution, which implies that the longer you wait, the more likely it is that something will happen.
The document discusses calculating and plotting the allocation of entropy for bipartite and tripartite quantum systems. It provides tables of entropy calculations for a bipartite system of |0> and |1> states, and checks that the results satisfy the subadditivity inequality. It also outlines the methodology to perform similar calculations and plots for other systems to visualize the convex cones of entropy allocation.
Generalized Method of Moments (GMM) is an estimation procedure that allows economic models to be specified without strong distributional assumptions. GMM is broadly applicable as it nests other estimation methods and allows estimation in systems with more moment conditions than unknown parameters. The document introduces GMM, discusses specification, estimation, inference and testing. It also provides three examples of GMM estimation: a consumption asset pricing model, linear factor models, and a stochastic volatility model.
International Refereed Journal of Engineering and Science (IRJES)
The core vision of IRJES is to disseminate new knowledge and technology for the benefit of all, ranging from academic research and professional communities to industry professionals, across a range of topics in computer science and engineering. It also provides a venue for high-caliber researchers, practitioners, and PhD students to present ongoing research and development in these areas.
Causal set theory is an approach to quantum gravity that represents spacetime as a locally finite partially ordered set of points with causal relations. It is a minimalist approach that does not assume an underlying spacetime continuum. There are two main methods to reconstruct a manifold from a causal set: 1) extracting manifold properties like dimension from causal sets that can be embedded in a manifold, and 2) sprinkling points randomly into an existing manifold to produce an embedded causal set. To study dynamics, an action must be defined on causal sets that reproduces the Einstein-Hilbert action in the continuum limit. Several proposals have been made to define nonlocal operators on causal sets that approach the d'Alembertian operator in the limit. Overall causal set
The document discusses various econometric modeling techniques including regression equations, cointegration, error correction models, vector autoregressive (VAR) modeling, and vector error correction models (VECM). It explains that regression equations can produce spurious results if the data are non-stationary, and that cointegration exists if the residuals from the regression equation are stationary. Error correction models specify the short-run dynamics that maintain the long-run equilibrium between cointegrated variables. VAR models express current values of variables as functions of past values, while VECMs are VARs in first differences that incorporate the long-run cointegrating relationships between variables.
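A compact sketch of the residual-based cointegration check described here (Python with statsmodels; the simulated series and the use of ordinary ADF p-values rather than Engle-Granger critical values are simplifications):

import numpy as np
import statsmodels.api as sm
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(5)
T = 400
trend = np.cumsum(rng.normal(size=T))          # shared stochastic trend
x = trend + rng.normal(size=T)
y = 2.0 + 1.5 * trend + rng.normal(size=T)     # cointegrated with x

ols = sm.OLS(y, sm.add_constant(x)).fit()      # long-run (cointegrating) regression
resid = ols.resid

stat, pvalue, *_ = adfuller(resid)
print(f"ADF on residuals: stat = {stat:.2f}, p = {pvalue:.3f}")
# A small p-value suggests stationary residuals, i.e. y and x are cointegrated.
# (Strictly, the Engle-Granger test uses its own critical values for residual-based tests.)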
Lecture notes on Johansen cointegration (Moses Sichei)
This document discusses the Johansen cointegration procedure and error correction models. It provides an example where there are 3 variables (short-term interest rate, 3-year interest rate, and 10-year interest rate) that are cointegrated with 2 cointegrating relationships. The error correction form of the vector autoregression is shown, with the 2 cointegrating vectors entering each equation. Restrictions can be tested on the coefficients of the cointegrating vectors (beta) using likelihood ratio tests. This allows testing of economic theory restrictions on the long-run relationships between the variables.
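A hedged sketch of the Johansen trace test on a similar three-rate setup (Python with statsmodels' coint_johansen; the simulated series, lag choice, and deterministic specification are illustrative assumptions):

import numpy as np
from statsmodels.tsa.vector_ar.vecm import coint_johansen

rng = np.random.default_rng(6)
T = 500
level = np.cumsum(rng.normal(size=T))                    # one common stochastic trend
short = level + rng.normal(scale=0.5, size=T)
three_yr = 0.5 + level + rng.normal(scale=0.5, size=T)
ten_yr = 1.0 + level + rng.normal(scale=0.5, size=T)
rates = np.column_stack([short, three_yr, ten_yr])       # cointegrating rank should be 2

res = coint_johansen(rates, det_order=0, k_ar_diff=1)
for r, (trace_stat, crit) in enumerate(zip(res.lr1, res.cvt[:, 1])):
    decision = "reject" if trace_stat > crit else "fail to reject"
    print(f"H0: rank <= {r}: trace = {trace_stat:7.2f}, 95% cv = {crit:7.2f} -> {decision}")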
This paper proposes a method to jointly match multiple 3D meshes by maximizing pairwise feature affinities and cycle consistency across models. It formulates the matching problem as a low-rank matrix recovery problem and uses nuclear norm relaxation for rank minimization. An alternating minimization algorithm is used to efficiently solve the optimization problem. Experimental results show the method provides an order of magnitude speed-up compared to state-of-the-art algorithms based on semi-definite programming, while achieving competitive performance. It also introduces a distortion term to the pairwise matching to help match reflexive sub-parts of models distinctly.
Sparse data formats and efficient numerical methods for uncertainties in nume... (Alexander Litvinenko)
Description of methodologies and overview of numerical methods, which we used for modeling and quantification of uncertainties in numerical aerodynamics
This document presents a discrete valuation methodology for swing options using a forest model approach. It develops numerical implementations of swing options on one-factor and two-factor mean-reverting underlying processes using binomial trees. It establishes convergence via finite difference methods and considers qualitative properties and sensitivity analysis. The methodology values swing options as a system of coupled European options and allows for various discrete models of the underlying process.
Errors in the Discretized Solution of a Differential Equation (ijtsrd)
We study the error in the derivatives of an unknown function. We construct the discretized problem. The local truncation and global errors are discussed. The solution of the discretized problem is constructed. The analytical and discretized solutions are compared, and the two solution graphs are described using MATLAB software. Wai Mar Lwin | Khaing Khaing Wai, "Errors in the Discretized Solution of a Differential Equation," published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-3, Issue-5, August 2019. URL: https://www.ijtsrd.com/papers/ijtsrd27937.pdf Paper URL: https://www.ijtsrd.com/mathemetics/applied-mathamatics/27937/errors-in-the-discretized-solution-of-a-differential-equation/wai-mar-lwin
The document discusses the spectral gap problem in quantum many-body physics. It proves that the spectral gap problem, which is to determine if a quantum system described by a Hamiltonian is gapped or gapless, is undecidable in general. Specifically:
1) The spectral gap problem is algorithmically undecidable, meaning there is no algorithm that can determine if an arbitrary Hamiltonian describes a gapped or gapless system.
2) The spectral gap problem is axiomatically independent, meaning there are Hamiltonians for which the presence or absence of a spectral gap cannot be determined from the axioms of mathematics.
3) The proof constructs families of Hamiltonians with translationally invariant, nearest-neighbor interactions
This document presents a framework for analyzing the convergence of Galerkin approximations for a class of noncoercive operators. It begins by introducing assumptions on the operators and establishing well-posedness of the continuous problem. It then analyzes a "GAP" condition on the finite element discretization that is sufficient for stability and quasi-optimal convergence. Finally, it discusses two applications of the theory: Maxwell's equations with variable coefficients, and a boundary integral formulation for electromagnetic wave propagation.
IJRET: International Journal of Research in Engineering and Technology is an international peer-reviewed online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together scientists, academicians, field engineers, scholars and students of related fields of Engineering and Technology.
Quantum-Like Bayesian Networks using Feynman's Path Diagram Rules (Catarina Moreira)
- The document discusses building a quantum probabilistic model to make automatic predictions in situations that violate the Sure Thing Principle, such as in two-stage gambling games.
- It develops a quantum-like Bayesian network approach using Feynman's path diagrams and represents random variables as vectors to calculate quantum parameters based on cosine similarity.
- When applied to experimental data on two-stage gambling games, the model was able to predict outcomes with an overall error of 4.16%, showing potential for the quantum-like Bayesian network approach.
This document discusses fuzzy logic control of vehicle speed based on sensed obstacles. It proposes an automated fuzzy control system that can control a vehicle's speed and apply the brakes based on the angle and distance of any obstacles detected. This system aims to provide accident-free driving by taking over manual control of speed and braking during times of stress or tension when human control may falter. The system uses fuzzy logic to smoothly vary the vehicle's speed based on the obstacle parameters in order to stop safely before any potential collisions.
This document summarizes seemingly unrelated regression (SURE) models. SURE models can handle multiple regression equations that may appear unrelated but are actually linked through correlated error terms. The key points are:
1) SURE models account for correlations between error terms in different regression equations, even if the equations do not share explanatory variables.
2) Estimation of SURE models involves generalized least squares to account for error term correlations between equations.
3) Estimating the error term covariance matrix Σ is important for SURE models, and can be done using restricted or unrestricted residuals from the separate equations.
The document discusses the Fundamental Theorem of Calculus, which has two parts. Part 1 establishes the relationship between differentiation and integration, showing that the derivative of an antiderivative is the integrand. Part 2 allows evaluation of a definite integral by evaluating the antiderivative at the bounds. Examples are given of using both parts to evaluate definite integrals. The theorem unified differentiation and integration and was fundamental to the development of calculus.
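For instance, a standard worked example of both parts (illustrative, not taken from the document itself): by Part 2,
\[
  \int_0^1 x^2 \, dx = \left[ \frac{x^3}{3} \right]_0^1 = \frac{1}{3} - 0 = \frac{1}{3},
\]
while by Part 1, \( \frac{d}{dx} \int_0^x t^2 \, dt = x^2 \).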
Cointegration, Causality and Fisher Effect in Nigeria (Alexander Decker)
This document empirically investigates the relationship between expected inflation and nominal interest rates in Nigeria from 1970 to 2011. It examines whether there is a long-run relationship between inflation and interest rates using cointegration techniques. The results indicate the existence of a long-run partial Fisher effect in Nigeria, with a positive and significant relationship between inflation and interest rates. There is also evidence of unidirectional causality running from inflation to interest rates. The paper recommends that monetary authorities maintain interest rates at reasonably low levels to prevent inflation from rising too high and negatively impacting savings, investment, and economic growth.
The document describes the error correction model (ECM) version of Granger causality testing for determining the causal relationship between two non-stationary time series variables. It involves first testing for cointegration between the variables using the Johansen test or Engle-Granger approach. If cointegrated, the ECM version estimates an error correction model and performs Granger causality tests to examine short-run, long-run, and strong causality. The procedure and hypotheses for each test are provided along with the method for calculating the relevant F-statistics.
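A minimal single-equation sketch of this ECM-based procedure (Python with pandas and statsmodels; the simulated data, single cointegrating regression, and lag choices are illustrative assumptions):

import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
T = 400
x = np.cumsum(rng.normal(size=T))               # I(1) series
y = 1.0 + 0.8 * x + rng.normal(size=T)          # cointegrated with x

# Step 1: cointegrating regression; its residual is the error-correction term (ECT).
ect = sm.OLS(y, sm.add_constant(x)).fit().resid

# Step 2: ECM for d(y) with lagged differences and the lagged ECT.
df = pd.DataFrame({"dy": np.diff(y), "dx": np.diff(x), "ect_lag": ect[:-1]})
df["dy_lag1"] = df["dy"].shift(1)
df["dx_lag1"] = df["dx"].shift(1)
df["dx_lag2"] = df["dx"].shift(2)
df = df.dropna()

ecm = smf.ols("dy ~ dy_lag1 + dx_lag1 + dx_lag2 + ect_lag", data=df).fit()
print(ecm.f_test("dx_lag1 = 0, dx_lag2 = 0"))   # short-run causality from x to y
print(ecm.t_test("ect_lag = 0"))                # long-run causality via the error-correction term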
1. The document discusses modelling macroeconomic relationships in Pakistan using time series data in Eviews.
2. It presents a basic model relating GDP to consumption (E), investment (I), and net exports (X-M) to be estimated using OLS regression.
3. However, the document notes that OLS may produce spurious results with non-stationary time series data, so it introduces unit root and cointegration testing to determine the appropriate estimation technique.
The document summarizes the Toda-Yamamoto augmented Granger causality test.
[1] The test allows checking for causality between integrated variables of different orders without needing to determine cointegration. It involves estimating a VAR model with maximal order of integration lags added.
[2] The test procedure involves determining the order of integration (d), selecting the optimal lag length (k), setting the null and alternative hypotheses of no causality and causality, and calculating an F-statistic to test for causality.
[3] If the F-statistic exceeds the critical value, the null of no causality is rejected, indicating causality between the variables.
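A hedged single-equation sketch of the Toda-Yamamoto idea (Python with pandas and statsmodels; d, k, and the data-generating process are assumed for illustration, and only the y-equation of the augmented VAR is estimated):

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(8)
T, k, d = 400, 2, 1                                    # lag length k, order of integration d
x = np.cumsum(rng.normal(size=T))                      # I(1) driver
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.3 * y[t - 1] + 0.5 * x[t - 1] + rng.normal()

df = pd.DataFrame({"y": y, "x": x})
for lag in range(1, k + d + 1):                        # include k + d lags of each variable
    df[f"y_lag{lag}"] = df["y"].shift(lag)
    df[f"x_lag{lag}"] = df["x"].shift(lag)
df = df.dropna()

rhs = " + ".join([f"y_lag{i}" for i in range(1, k + d + 1)] +
                 [f"x_lag{i}" for i in range(1, k + d + 1)])
res = smf.ols(f"y ~ {rhs}", data=df).fit()

hypothesis = ", ".join(f"x_lag{i} = 0" for i in range(1, k + 1))   # test only the first k lags
print(res.f_test(hypothesis))   # rejection suggests x Granger-causes y in the Toda-Yamamoto sense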
These days much of the data being generated takes the form of time series, from climate data to users' posts on social media, stock prices, neurological data, and so on. Discovering the temporal dependence between different time series is an important task in time series analysis. It finds applications in varied fields ranging from advertising on social media, finding influencers, marketing, and stock markets to psychology and climate science. Identifying such networks of dependencies is the subject of this report.
In this report we examine how this problem has been studied in the field of econometrics. We also study three different approaches for building causal networks between time series and then see how this knowledge has been used in three completely different fields. Finally, some important issues are presented, along with areas in which this work can be extended for further research.
1. The document discusses Granger causality testing within the context of bivariate analysis of stationary time series.
2. It defines Granger causality as the situation in which the past of one time series improves the prediction of another beyond what the other series' own past already provides, and describes three main tests for Granger causality between two stationary time series: the direct Granger test, the Sims test, and the modified Sims test.
3. The direct Granger test involves regressing each variable on lagged values of itself and the other variable, and using an F-test to examine if including lags of the other variable improves predictions compared to only using own lags.
This document provides an overview of an upcoming presentation on error correction models and their application to agricultural economics research. It outlines the major topics to be covered, including concepts and definitions related to cointegration and error correction models, Johansen's cointegration test, the Engle-Granger two-step error correction model, and a case study on market integration of arecanut in Karnataka state using an error correction model approach. Tables and figures are included to illustrate key concepts like order of integration, cointegration, and the residual-based test for cointegration.
MA/MPhil/PhD Economics entrance - Microeconomics: intertemporal optimization (Naresh Sehdev)
Intertemporal optimization in advanced microeconomics, and in growth models using Hamiltonian optimization techniques; a quantitative economics numerical problem (Indian statistical entrance, 20 marks).
Time series analysis, modeling and applications (Springer)
- The document proposes a novel composition forecasting model based on the Choquet integral with respect to a completed extensional L-measure and M-density fuzzy measure.
- It compares the performance of this model to other composition forecasting models, including ones based on extensional L-measure, L-measure, lambda-measure, and P-measure, as well as ridge regression and multiple linear regression models.
- Experimental results on grain production time series data show that the proposed Choquet integral composition forecasting model with completed extensional L-measure and M-density outperforms the other models.
This document proposes generalized additive models (GAMs) to model conditional dependence structures between random variables. Specifically, it develops a GAM framework where a dependence or concordance measure between two variables is modeled as a parametric, non-parametric, or semi-parametric function of explanatory variables. It derives the root-n consistency and asymptotic normality of the maximum penalized log-likelihood estimator for the proposed GAMs. It also discusses details of the estimation procedure and selection of smoothing parameters.
This document discusses several methods for temporal disaggregation, which is the process of estimating higher frequency data (e.g. monthly or daily) from observed lower frequency data (e.g. quarterly or yearly). It describes the Chow Lin method, which uses a linear model and regression to distribute errors among estimated high frequency values. It also discusses extensions by Fernandez and Litterman that allow for non-stationary errors by modeling the error process as a random walk or AR(1) process. The key steps of each method are outlined.
A Monte Carlo strategy for structured multiple-step-ahead time series prediction (Gianluca Bontempi)
The document proposes a Monte Carlo approach called SMC (Structured Monte Carlo) for multiple-step-ahead time series forecasting that takes into account the structural dependencies between predictions. It generates samples using a direct forecasting approach and weights them based on how well they satisfy dependencies identified by an iterated approach. Experiments on three benchmark datasets show the SMC approach achieves more accurate forecasts as measured by SMAPE than iterated, direct, or other comparison methods for most prediction horizons tested.
This document presents a novel approach for combining individual realized volatility measures to form new estimators of asset price variability. It analyzes 30 different realized measures estimated from high frequency IBM stock price data from 1996-2007. It finds that a simple equally-weighted average of the realized measures is not outperformed by any individual measure and that combining measures provides benefits by incorporating information from different estimators. Optimal linear and multiplicative combination estimators are estimated and none of the individual measures are found to encompass all the information in other measures, further supporting the use of combination estimators.
Affine cascade models for term structure dynamics of sovereign yield curves (LAURAMICHAELA)
Rafael Serrano, professor at the Universidad del Rosario.
Abstract:
In the first part of the talk, I will present an introduction to stochastic affine short-rate models for the term structure of yield curves. In the second part, I will focus on a recursive affine cascade with persistent factors, for which the number of parameters, under certain specifications, is invariant to the size of the state space and converges to a stochastic limit as the number of factors goes to infinity. The cascade construction thereby overcomes dimensionality difficulties associated with general affine models. We contrast two specifications of the model using a linear Kalman filter for a panel of Colombian sovereign yields.
The tensor language provides a unifying approach that simplifies notation, leading to compact modeling of multi-way information objects in many fields of knowledge, as well as a framework for thought. Using this language, a generic system is modeled that connects to its environment through its boundaries.
Panel data combines cross-sectional and time-series data by observing the same cross-sectional units (e.g. firms, countries) over time. This allows for more data variation and better study of dynamic changes. The document discusses fixed and random effects models for panel data, the Hausman test for choosing between them, and evaluating models for autocorrelation and heteroskedasticity.
This document discusses multiple linear regression analysis. It begins by introducing the basic multiple regression model that includes more than one predictor variable. It then discusses the assumptions of multiple regression including adequate sample size, absence of outliers and multicollinearity, and normality, linearity and homoscedasticity of residuals. The document provides an example of predicting house prices using living area and distance from the city center as predictor variables. It shows how to check assumptions, interpret the regression output and make predictions using the fitted model.
The document discusses implementing the Heath-Jarrow-Morton (HJM) model for modeling interest rate dynamics using Monte Carlo simulation. It describes:
1) Using principal component analysis to analyze the yield curve and estimate volatility functions for a multi-factor HJM model from historical yield curve data.
2) Calculating the covariance matrix from differenced historical yield curve data and factorizing it to obtain eigenvalues and eigenvectors via numerical methods.
3) Deriving the stochastic differential equation for the risk-neutral forward rate curve under the HJM model using no-arbitrage arguments to obtain drift and volatility terms.
The Vasicek model is one of the earliest stochastic models for modeling the term structure of interest rates. It represents the movement of interest rates as a function of market risk, time, and the equilibrium value the rate tends to revert to. This document discusses parameter estimation techniques for the Vasicek one-factor model using least squares regression and maximum likelihood estimation on historical interest rate data. It also covers simulating the term structure and pricing zero-coupon bonds under the Vasicek model. The two-factor Vasicek model is introduced as an extension of the one-factor model.
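A short sketch of the least-squares estimation step for the one-factor model (Python with numpy; the parameter values, time step, and Euler discretization are illustrative assumptions):

import numpy as np

rng = np.random.default_rng(9)
kappa, theta, sigma, dt, n = 2.0, 0.05, 0.02, 1 / 252, 20000

# Simulate dr = kappa*(theta - r) dt + sigma dW with a simple Euler scheme
r = np.empty(n)
r[0] = 0.03
for t in range(n - 1):
    r[t + 1] = r[t] + kappa * (theta - r[t]) * dt + sigma * np.sqrt(dt) * rng.normal()

# The discretized model r_{t+1} = a + b*r_t + eps is an AR(1) regression,
# with kappa = (1 - b)/dt, theta = a/(1 - b), sigma = sd(eps)/sqrt(dt).
X = np.column_stack([np.ones(n - 1), r[:-1]])
(a, b), *_ = np.linalg.lstsq(X, r[1:], rcond=None)
resid = r[1:] - X @ np.array([a, b])

kappa_hat = (1 - b) / dt
theta_hat = a / (1 - b)
sigma_hat = resid.std(ddof=2) / np.sqrt(dt)
print(kappa_hat, theta_hat, sigma_hat)   # roughly 2.0, 0.05, 0.02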
Control theorists have extensively studied the stability, as well as the relative stability, of LTI systems. In this research paper we attempt to answer the question: how unstable is an unstable system? To that end, we define instability exponents based on the impulse response and use them to characterize unstable systems.
The aim of this paper is to reinterpret the accelerationist Phillips curve by studying the effect of the higher-order derivatives of acceleration. We show that complex dynamic behavior emerges when dealing with jerk and jounce displacements of price settings, whose simple unfolding leads to a three-dimensional vector field that generates a double-scroll chaotic attractor.
Financial Time Series Analysis Based On Normalized Mutual Information Functions (IJCI JOURNAL)
A method for analyzing the predictability of future values of financial time series is described. The method is based on normalized mutual information functions. Using these functions allows the analysis to dispense with any restrictions on the distributions of the parameters and on the correlations between parameters. A comparative analysis of the predictability of financial time series from the Tel Aviv 25 stock exchange has been carried out.
Introduction to financial forecasting in investment analysis (Springer)
This document provides an overview of regression analysis and forecasting models. It discusses how regression analysis can be used to analyze the relationship between independent and dependent variables and make forecasts. Specifically, it explains simple linear regression, involving one independent variable, and multiple regression, involving more than one independent variable. It also covers topics such as estimating the regression line, hypothesis testing of regression coefficients, and using regression to estimate a security's beta.
Effective properties of heterogeneous materials (Springer)
1) The document discusses the multipole expansion method (MEM) for analyzing the microstructure and effective properties of composite materials.
2) MEM reduces boundary value problems for heterogeneous materials to systems of linear algebraic equations. It expresses fields like temperature and stress as expansions of basis functions related to inclusion geometry.
3) MEM has been applied to analyze conductivity and elasticity problems in composites with spherical, spheroidal, circular, and elliptic inclusions. It provides analytical solutions for local fields and exact expressions for effective properties involving only dipole moments.
L-8 VECM Formulation, Hypothesis Testing, and Forecasting - KH.pptx (RiyadhJack)
This lecture discusses vector error correction models (VECM), comparing the single-equation Engle-Granger approach to the multivariate Johansen approach. It outlines the VECM formulation and discusses hypothesis testing and properties including cointegration, weak exogeneity, and Granger causality. An example analyzing the relationship between short-term and long-term interest rates is used to illustrate the methodology.
Preliminary Research on Multi-Dimensional Panel Data Modeling (Parthasarathi E)
This document provides an overview of statistical modelling using multidimensional panel data. It begins by discussing basic panel data models that make restrictive assumptions, then relaxes those assumptions to develop more robust models. It extends basic models to a three-dimensional panel data framework. Finally, it describes some modelling issues that arise when using multidimensional vintage panel data, specifically relating to whether a two-dimensional or three-dimensional approach is best, dealing with too many dummy variables in a two-dimensional model, specifying the error structure in a three-dimensional model, and implementing estimation for a three-dimensional panel model.
This document discusses advanced tools for risk management and asset pricing. It contains an assignment analyzing credit default swap (CDS) term structures for three companies - Danone, Carrefour and Tesco. The assignment involves:
1) Deriving hazard rate term structures from CDS data assuming piecewise constant and constant default intensities. Constant intensities overestimate short-term but underestimate long-term default probabilities.
2) Performing sensitivity analysis on how hazard rates change with default frequency, recovery rates, and maturity. Higher frequencies and recovery rates increase hazard rates.
3) Calculating cumulative default probabilities from hazard rates.
4) Deriving portfolio loss distributions under a Gaussian copula model for different correlation parameters.
Christoffersen, P. and Diebold, F.X. (1998), "Cointegration and Long-Horizon Forecasting," Journal of Business and Economic Statistics, 16, 450-458.
Cointegration and Long-Horizon Forecasting
Peter F. Christoffersen
Research Department, International Monetary Fund, Washington, DC 20431
(pchristoffersen@imf.org)
Francis X. Diebold
Department of Economics, University of Pennsylvania, Philadelphia, PA 19104
and NBER (fdiebold@mail.sas.upenn.edu)
Abstract: We consider the forecasting of cointegrated variables, and we show that
at long horizons nothing is lost by ignoring cointegration when forecasts are
evaluated using standard multivariate forecast accuracy measures. In fact, simple
univariate Box-Jenkins forecasts are just as accurate. Our results highlight a
potentially important deficiency of standard forecast accuracy measures—they fail
to value the maintenance of cointegrating relationships among variables— and we
suggest alternatives that explicitly do so.
KEY WORDS: Prediction, Loss Function, Integration, Unit Root
2. -2-
1. INTRODUCTION
Cointegration implies restrictions on the low-frequency dynamic behavior of
multivariate time series. Thus, imposition of cointegrating restrictions has immediate
implications for the behavior of long-horizon forecasts, and it is widely believed that
imposition of cointegrating restrictions, when they are in fact true, will produce superior
long-horizon forecasts. Stock (1995, p. 1), for example, provides a nice distillation of the
consensus belief when he asserts that “If the variables are cointegrated, their values are
linked over the long run, and imposing this information can produce substantial
improvements in forecasts over long horizons.” The consensus belief stems from the
theoretical result that long-horizon forecasts from cointegrated systems satisfy the
cointegrating relationships exactly, and the related result that only the cointegrating
combinations of the variables can be forecast with finite long-horizon error variance.
Moreover, it appears to be supported by a number of independent Monte Carlo analyses
(e.g., Engle and Yoo 1987; Reinsel and Ahn 1992; Clements and Hendry 1993; Lin and Tsay 1996).
This paper grew out of an attempt to reconcile the popular intuition sketched
above, which seems sensible, with a competing conjecture, which also seems sensible.
Forecast enhancement from exploiting cointegration comes from using information in the
current deviations from the cointegrating relationships. That is, knowing whether and by
how much the cointegrating relations are violated today is valuable in assessing where the
variables will go tomorrow, because deviations from cointegrating relations tend to be
eliminated. However, although the current value of the error-correction term clearly
provides information about the likely near-horizon evolution of the system, it seems
unlikely that it provides information about the long-horizon evolution of the system,
because the long-horizon forecast of the error-correction term is always zero. (The
error-correction term, by construction, is covariance stationary with a zero mean.) From
this perspective, it seems unlikely that cointegration could be exploited to improve long-
horizon forecasts.
Motivated by this apparent paradox, we provide a precise characterization of the
implications of cointegration for long-horizon forecasting. Our work is closely related to
important earlier contributions of Clements and Hendry (1993, 1994, 1995) and Banerjee,
Dolado, Galbraith and Hendry (1993, pp.278-285), who compare forecasts from a true
VAR to forecasts from a misspecified VAR in differences, whereas we compare the true
forecasts to exact forecasts from correctly-specified but univariate representations. We
focus explicitly and exclusively on forecasting, and we obtain a number of new theoretical
results which sharpen the interpretation of existing Monte Carlo results. Moreover, our
motivation is often very different. Rather than focusing, for example, on loss functions
invariant to certain linear transformations of the data, we take the opposite view that loss
functions—like preferences—are sovereign, and explore in detail how the effects of
imposing cointegration on long-horizon forecasts vary fundamentally with the loss
function adopted. In short, our results and theirs are highly complementary.
3. -3-
We proceed as follows. In Section 2 we show that, contrary to popular belief,
nothing is lost by ignoring cointegration when long-horizon forecasts are evaluated using
standard accuracy measures; in fact, even univariate Box-Jenkins forecasts are equally
accurate. In Section 3 we illustrate our results with a simple bivariate cointegrated system.
In Section 4, we address a potentially important deficiency of standard forecast accuracy
measures highlighted by our analysis—they fail to value the maintenance of cointegrating
relationships among variables—and we suggest alternative accuracy measures that
explicitly do so. In Section 5, we consider forecasting from models with estimated
parameters, and we use our results to clarify the interpretation of a number of well-known
Monte Carlo studies. We conclude in Section 6.
2. MULTIVARIATE AND UNIVARIATE FORECASTS
OF COINTEGRATED VARIABLES
In this section we establish notation, recall standard results on multivariate
forecasts of cointegrated variables, add new results on univariate forecasts of cointegrated
variables, and compare the two. First, let us establish some notation.
Assume that the Nx1 vector process of interest is generated by

(1 - L)\,x_t = \mu + C(L)\,\varepsilon_t,

where μ is a constant drift term, C(L) is an NxN matrix lag-operator polynomial of possibly infinite order, and ε_t is a vector of i.i.d. innovations. Then, under regularity conditions, the existence of r linearly independent cointegrating vectors is equivalent to rank(C(1)) = N - r, and the cointegrating vectors are given by the rows of the rxN matrix α, where αC(1) = αμ = 0. That is, z_t = αx_t is an r-dimensional stationary zero-mean time series. We will assume that the system is in fact cointegrated, with 0 < rank(C(1)) < N. For future reference, note that following Stock and Watson (1988) we can use the decomposition C(L) = C(1) + (1 - L)C^*(L), where C^*_j = -\sum_{i=j+1}^{\infty} C_i, to write the system in "common-trends" form,

x_t = \mu t + C(1)\,\xi_t + C^*(L)\,\varepsilon_t,

where \xi_t = \sum_{i=1}^{t} \varepsilon_i.
We will compare the accuracy of two forecasts of a multivariate cointegrated
system that are polar extremes in terms of cointegrating restrictions imposed—first,
forecasts from the multivariate model, and second, forecasts from the implied univariate
models. Both forecasting models are correctly specified from aunivariate perspective, but
4. -4-
one imposes the cointegrating restrictions and allows for correlated error terms across
equations, and the other does not.
We will make heavy use of a ubiquitous forecast accuracy measure, mean squared error, the multivariate version of which is

\mathrm{MSE} = E(e_{t+h}' K e_{t+h}),

where K is an NxN positive definite symmetric matrix and e_{t+h} is the vector of h-step-ahead forecast errors. MSE of course depends on the weighting matrix K. It is standard to set K = I, in which case

\mathrm{MSE} = E(e_{t+h}' e_{t+h}) = \mathrm{trace}(\Sigma_h),

where \Sigma_h = \mathrm{var}(e_{t+h}). We call this the "trace MSE" accuracy measure. To compare the accuracy of two forecasts, say 1 and 2, it is standard to examine the ratio \mathrm{trace}(\Sigma_h^1)/\mathrm{trace}(\Sigma_h^2), which we call the "trace MSE ratio."
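To make the accuracy measure concrete, here is a minimal sketch (ours, not the authors') of how the trace MSE and the trace MSE ratio could be computed from simulated or realized forecast errors; the function names and the array layout are illustrative assumptions.

```python
import numpy as np

def trace_mse(errors):
    """Trace MSE at a single horizon: E(e'e), estimated from an (R x N) array of
    h-step-ahead forecast errors (R forecast origins or replications, N variables)."""
    errors = np.atleast_2d(errors)
    return np.mean(np.sum(errors**2, axis=1))

def trace_mse_ratio(errors_1, errors_2):
    """Trace MSE ratio of forecast 1 relative to forecast 2 at the same horizon."""
    return trace_mse(errors_1) / trace_mse(errors_2)
```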
2.1 Forecasts From the Multivariate Cointegrated System
Now we review standard results (required by our subsequent analysis) on
multivariate forecasting in cointegrated systems. For expanded treatments, see Engle and
Yoo (1987) and Lin and Tsay (1996).
From the moving average representation, we can unravel the process recursively from time t+h to time 1 and write

x_{t+h} = (t+h)\mu + \sum_{i=1}^{t}\Big(\sum_{j=0}^{t+h-i} C_j\Big)\varepsilon_i + \sum_{i=1}^{h}\Big(\sum_{j=0}^{h-i} C_j\Big)\varepsilon_{t+i},

from which the h-step-ahead forecasts are easily calculated as

\hat{x}_{t+h} = (t+h)\mu + \sum_{i=1}^{t}\Big(\sum_{j=0}^{t+h-i} C_j\Big)\varepsilon_i.

From the fact that

\lim_{h\to\infty}\sum_{j=0}^{t+h-i} C_j = C(1),

we get that

\lim_{h\to\infty} \alpha\hat{x}_{t+h} = 0,

so that the cointegrating relationship is satisfied exactly by the long-horizon system forecasts. This is the sense in which long-horizon forecasts from cointegrated systems preserve the long-run multivariate relationships exactly.
We define the h-step-ahead forecast error from the multivariate system as

\hat{e}_{t+h} = x_{t+h} - \hat{x}_{t+h}.

The forecast errors from the multivariate system satisfy

\hat{e}_{t+h} = \sum_{i=1}^{h}\Big(\sum_{j=0}^{h-i} C_j\Big)\varepsilon_{t+i},

so the variance of the h-step-ahead forecast error is

\mathrm{var}[\hat{e}_{t+h}] = \sum_{i=1}^{h}\Big(\sum_{j=0}^{h-i} C_j\Big)\,\Omega\,\Big(\sum_{j=0}^{h-i} C_j\Big)',

where Ω is the variance of ε_t.

From the definition of \hat{e}_{t+h} we can also see that the system forecast errors satisfy

\hat{e}_{t+h} = \hat{e}_{t+h-1} + \sum_{i=1}^{h} C_{h-i}\,\varepsilon_{t+i} = \hat{e}_{t+h-1} + C(L)\,\varepsilon_{t+h},

where the last equality holds if we take ε_j = 0 for all j ≤ t. That is, when we view the system forecast error process as a function of the forecast horizon, h, it has the same stochastic structure as the original process, x_t, and therefore is integrated and cointegrated. Consequently, the variance of the h-step-ahead forecast errors from the cointegrated system is of order h, that is, increasing at the rate h,

\mathrm{var}[\hat{e}_{t+h}] = O(h).

In contrast, the cointegrating combinations of the system forecast errors, just as the error-correction process z_t, will have finite variance for large h,

\lim_{h\to\infty} \mathrm{var}[\alpha\hat{e}_{t+h}] = Q < \infty,

where the matrix Q is a constant function of the stationary component of the forecast error. Although individual series can only be forecast with increasingly wide confidence intervals, the cointegrating combination has a confidence interval of finite width, even as the forecast horizon goes to infinity.
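The two order-of-magnitude results above are easy to see in a small simulation. The sketch below is our illustration, anticipating the bivariate example of Section 3 with γ = 1 and μ = 0; it draws the future shocks directly, so the forecast errors are exact, and the error variances grow roughly linearly in h while the variance of the cointegrating combination stays flat.

```python
import numpy as np

rng = np.random.default_rng(0)
R = 50_000                                   # Monte Carlo replications

# Illustrative DGP: x_t = x_{t-1} + eps_t,  y_t = x_t + v_t, so alpha = (-1, 1).
for h in (1, 5, 20, 80):
    eps = rng.standard_normal((R, h))        # future eps_{t+1}, ..., eps_{t+h}
    v_h = rng.standard_normal(R)             # v_{t+h}
    e_x = eps.sum(axis=1)                    # system (and univariate) error for x
    e_y = eps.sum(axis=1) + v_h              # system error for y
    print(f"h={h:3d}  var(e_x)={e_x.var():7.2f}  var(e_y)={e_y.var():7.2f}  "
          f"var(e_y - e_x)={(e_y - e_x).var():5.2f}")
```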
2.2 Forecasts from the Implied Univariate Representations
Now consider ignoring the multivariate features of the system, forecasting instead
using the implied univariate representations. We can use Wold’s decomposition theorem
and write for any series (the n-th, say),
(1 - L)\,x_{n,t} = \mu_n + \sum_{j=0}^{\infty}\theta_{n,j}\,u_{n,t-j},

where θ_{n,0} = 1 and u_{n,t} is white noise. It follows from this expression that the univariate time-t forecast for period t+h is

\tilde{x}_{n,t+h} = h\mu_n + x_{n,t} + \Big(\sum_{i=1}^{h}\theta_{n,i}\Big)u_{n,t} + \Big(\sum_{i=2}^{h+1}\theta_{n,i}\Big)u_{n,t-1} + \ldots

Using obvious notation we can write

\tilde{x}_{n,t+h} = h\mu_n + x_{n,t} + \tilde{\theta}_n(L)\,u_{n,t},

and stacking the N series we have

\tilde{x}_{t+h} = h\mu + x_t + \Theta(L)\,u_t,

where Θ(L) is a diagonal matrix polynomial with the individual \tilde{\theta}_n(L)'s on the diagonal.
Now let us consider the errors from the univariate forecasts. We will rely on the
following convenient orthogonal decomposition
\tilde{e}_{t+h} = x_{t+h} - \tilde{x}_{t+h} = (x_{t+h} - \hat{x}_{t+h}) + (\hat{x}_{t+h} - \tilde{x}_{t+h}) = \hat{e}_{t+h} + (\hat{x}_{t+h} - \tilde{x}_{t+h}).

Recall that the system forecast is

\hat{x}_{t+h} = \mu(t+h) + \sum_{i=1}^{t}\Big(\sum_{j=0}^{t+h-i} C_j\Big)\varepsilon_i \approx \mu(t+h) + C(1)\,\xi_t,

where the approximation holds as h gets large. Using the univariate forecasts, the decomposition for \tilde{e}_{t+h}, and the approximate long-horizon system forecast, we get

\tilde{e}_{t+h} \approx \hat{e}_{t+h} + \big(\mu(t+h) + C(1)\,\xi_t\big) - \big(x_t + \mu h + \Theta(L)\,u_t\big).

Now insert the common-trends representation for x_t to get

\tilde{e}_{t+h} \approx \hat{e}_{t+h} + \mu(t+h) + C(1)\,\xi_t - \big(\mu t + C(1)\,\xi_t + C^*(L)\,\varepsilon_t + \mu h + \Theta(L)\,u_t\big),

and finally cancel terms to get

\tilde{e}_{t+h} \approx \hat{e}_{t+h} - \big(C^*(L)\,\varepsilon_t + \Theta(L)\,u_t\big).

Notice that the ε's are serially uncorrelated and the u_t's only depend on current and past ε_t's; thus, \hat{e}_{t+h} is orthogonal to the terms in the parenthesis. Notice also that the term inside the parenthesis is just a sum of stationary series and is therefore stationary; furthermore, its variance is constant as the forecast horizon h changes. We can therefore write the long-horizon variance of the univariate forecast errors as

\mathrm{var}(\tilde{e}_{t+h}) = \mathrm{var}(\hat{e}_{t+h}) + O(1) = O(h) + O(1) = O(h),

which is of the same order of magnitude as the variance of the system forecast errors. Furthermore, since the dominating terms in the numerator and denominator are identical, the trace MSE ratio goes to one, as formalized in the following proposition:
Proposition 1

\lim_{h\to\infty}\frac{\mathrm{trace}\big(\mathrm{var}(\tilde{e}_{t+h})\big)}{\mathrm{trace}\big(\mathrm{var}(\hat{e}_{t+h})\big)} = 1.
When comparing accuracy using the trace MSE ratio, the univariate forecasts
perform as well as the cointegrated system forecasts as the horizon gets large. This is the
opposite of the folk wisdom—it turns out that imposition of cointegrating restrictions
helps at short, but not long, horizons. Quite simply, when accuracy is evaluated with the
trace MSE ratio, there is no long-horizon benefit from imposing cointegration; all that
matters is getting the level of integration right.
Proposition 1 provides the theoretical foundation for the results of Hoffman and
Rasche (1996), who find in an extensive empirical application that imposing cointegration
does little to enhance long-horizon forecast accuracy, and Brandner and Kunst (1990),
who suggest that when in doubt about how many unit roots to impose in a multivariate
long-horizon forecasting model, it is less harmful to impose too many than to impose too
few. A similar result can be obtained by taking the ratio of Clements and Hendry’s (1995)
formulas for the MSE at horizon h from the system forecasts and the MSE of forecasts
that they construct that correspond approximately to those from a misspecified VAR in
differences.
Now let us consider the variance of cointegrating combinations of univariate
forecast errors. Above we recounted the Engle-Yoo (1987) result that the cointegrating
combinations of the system forecast errors have finite variance as the forecast horizon gets
large. Now we want to look at the same cointegrating combinations of the univariate forecast errors. From our earlier derivations it follows that

\alpha\tilde{e}_{t+h} = \alpha\hat{e}_{t+h} - \alpha\big(C^*(L)\,\varepsilon_t + \Theta(L)\,u_t\big).

Again we can rely on the orthogonality of \hat{e}_{t+h} to the terms in the parenthesis. The first term, \alpha\hat{e}_{t+h}, has finite variance, as discussed above. So too do the terms in the parenthesis, because they are linear combinations of stationary processes. Thus we have

Proposition 2

\mathrm{var}(\alpha\tilde{e}_{t+h}) = Q + \alpha\,\mathrm{var}\big(C^*(L)\,\varepsilon_t + \Theta(L)\,u_t\big)\,\alpha' = O(1).
The cointegrating combinations of the long-horizon errors from the univariate forecasts,
which completely ignore cointegration, also have finite variance. Thus, it is in fact not
imposition of cointegration on the forecasting system that yields the finite variance of the
cointegrating combination of the errors; rather it is the cointegration property inherent in
the system itself, which is partly inherited by the correctly specified univariate forecasts.
3. A SIMPLE EXAMPLE
In this section, we illustrate the results from Section 2 in a simple multivariate
system. Consider the bivariate cointegrated system,

x_t = \mu + x_{t-1} + \varepsilon_t
y_t = \gamma x_t + v_t,
where the disturbances are orthogonal at all leads and lags. The moving average
representation is
(1 - L)\begin{pmatrix} x_t \\ y_t \end{pmatrix} = \begin{pmatrix} \mu \\ \gamma\mu \end{pmatrix} + \begin{pmatrix} 1 & 0 \\ \gamma & 1-L \end{pmatrix}\begin{pmatrix} \varepsilon_t \\ v_t \end{pmatrix} = \begin{pmatrix} \mu \\ \gamma\mu \end{pmatrix} + C(L)\begin{pmatrix} \varepsilon_t \\ v_t \end{pmatrix},

and the error-correction representation is

(1 - L)\begin{pmatrix} x_t \\ y_t \end{pmatrix} = \begin{pmatrix} \mu \\ \gamma\mu \end{pmatrix} + \begin{pmatrix} 0 \\ 1 \end{pmatrix}\big(\gamma x_{t-1} - y_{t-1}\big) + \begin{pmatrix} \varepsilon_t \\ \gamma\varepsilon_t + v_t \end{pmatrix}.
The system’s simplicity allows us to compute exact formulae that correspond to the
qualitative results derived in the previous section.
3.1 Univariate Representations
Let us first derive the implied univariate representations for x and y. The univariate
representation for x is of course a random walk with drift, exactly as given in the first
equation of the system,
x_t = \mu + x_{t-1} + \varepsilon_t.
Derivation of the univariate representation for y is a bit more involved. From the moving-
average representation of the system, rewrite the process for y as a univariate two-shock
process,

y_t = \gamma\mu + y_{t-1} + \gamma\varepsilon_t + (1 - L)v_t = \gamma\mu + y_{t-1} + z_t,

where z_t \equiv \gamma\varepsilon_t + (1 - L)v_t. The autocovariance structure of z_t is

\gamma_z(0) = \gamma^2\sigma_\varepsilon^2 + 2\sigma_v^2
\gamma_z(1) = \gamma_z(-1) = -\sigma_v^2
\gamma_z(\tau) = 0, \quad |\tau| \ge 2.
The only nonzero autocorrelation at a positive lag is therefore the first,
\rho_z(1) = \frac{-\sigma_v^2}{\gamma^2\sigma_\varepsilon^2 + 2\sigma_v^2} = \frac{-1}{\gamma^2 q + 2},

where q \equiv \sigma_\varepsilon^2/\sigma_v^2 is the signal-to-noise ratio. This is exactly the autocorrelation structure of an MA(1) process, so we write z_t = u_t + \theta u_{t-1}. To find the value of θ, we match autocorrelations at lag 1, yielding

\frac{\theta}{1 + \theta^2} = \frac{-1}{\gamma^2 q + 2}.

This gives a second-order polynomial in θ, with invertible solution

\theta = \frac{1}{2}\Big[\sqrt{\gamma^4 q^2 + 4\gamma^2 q} - \gamma^2 q - 2\Big].

Although suppressed in the notation, θ will be a function of q = \sigma_\varepsilon^2/\sigma_v^2 and γ throughout. Finally, we find the variance of the univariate innovation by matching the variances, yielding

(1 + \theta^2)\,\sigma_u^2 = \gamma^2\sigma_\varepsilon^2 + 2\sigma_v^2 = \sigma_v^2(\gamma^2 q + 2),

or

\sigma_u^2 = \frac{\sigma_v^2(\gamma^2 q + 2)}{1 + \theta^2}.
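As a numerical check on the moment matching above, the short sketch below (ours; the helper name is arbitrary) recovers θ and σ_u² from γ, q and σ_v², and verifies that the implied MA(1) reproduces the lag-1 autocorrelation -1/(γ²q + 2).

```python
import numpy as np

def implied_ma1(gamma, q, sigma_v2=1.0):
    """MA(1) parameter theta and innovation variance sigma_u^2 implied for y,
    by matching the lag-1 autocorrelation and the variance of z_t."""
    a = gamma**2 * q + 2.0                      # gamma^2 q + 2
    theta = 0.5 * (np.sqrt(a**2 - 4.0) - a)     # invertible root of theta^2 + a*theta + 1 = 0
    sigma_u2 = sigma_v2 * a / (1.0 + theta**2)  # variance matching
    return theta, sigma_u2

theta, sigma_u2 = implied_ma1(gamma=1.0, q=1.0)
# lag-1 autocorrelation of u_t + theta*u_{t-1} should equal -1/(gamma^2 q + 2) = -1/3
print(theta, sigma_u2, theta / (1 + theta**2))
```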
3.2 Forecasts From the Multivariate Cointegrated System
First consider forecasting from the multivariate cointegrated system. Write the time
t+h values in terms of time t values and future innovations as
x_{t+h} = \mu h + x_t + \sum_{i=1}^{h}\varepsilon_{t+i}

y_{t+h} = \gamma(\mu h + x_t) + \gamma\sum_{i=1}^{h}\varepsilon_{t+i} + v_{t+h}.

The h-step-ahead forecasts are

\hat{x}_{t+h} = \mu h + x_t
\hat{y}_{t+h} = \gamma(\mu h + x_t),

and the h-step-ahead forecast errors are

\hat{e}_{x,t+h} = \sum_{i=1}^{h}\varepsilon_{t+i}
\hat{e}_{y,t+h} = \gamma\sum_{i=1}^{h}\varepsilon_{t+i} + v_{t+h}.

Note that the forecast errors follow the same stochastic process as the original system (aside from the drift term),

(1 - L)\hat{e}_{t+h} = \begin{pmatrix} \varepsilon_{t+h} \\ \gamma\varepsilon_{t+h} + (1 - L)v_{t+h} \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ \gamma & 1-L \end{pmatrix}\begin{pmatrix} \varepsilon_{t+h} \\ v_{t+h} \end{pmatrix} = C(L)\begin{pmatrix} \varepsilon_{t+h} \\ v_{t+h} \end{pmatrix}.

Finally, the corresponding forecast error variances are

\mathrm{var}(\hat{e}_{x,t+h}) = h\sigma_\varepsilon^2
\mathrm{var}(\hat{e}_{y,t+h}) = \gamma^2 h\sigma_\varepsilon^2 + \sigma_v^2.

Both forecast error variances are O(h). As for the variance of the cointegrating combination, we have

\mathrm{var}[\hat{e}_{y,t+h} - \gamma\hat{e}_{x,t+h}] = \mathrm{var}\Big[\gamma\sum_{i=1}^{h}\varepsilon_{t+i} + v_{t+h} - \gamma\sum_{i=1}^{h}\varepsilon_{t+i}\Big] = \sigma_v^2,

for all h, because there are no short-run dynamics. Similarly, the forecasts satisfy the cointegrating relationship at all horizons, not just in the limit. That is,

\hat{y}_{t+h} - \gamma\hat{x}_{t+h} = 0, \quad h = 1, 2, \ldots
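For reference, the closed-form system forecasts and error variances just derived are trivial to code; the sketch below is ours and simply transcribes the formulas (note that the forecasts satisfy the cointegrating relation by construction).

```python
def system_forecast(x_t, mu, gamma, h):
    """Exact h-step-ahead forecasts from the bivariate cointegrated system,
    given the time-t value of x and known parameters."""
    x_hat = mu * h + x_t
    y_hat = gamma * x_hat            # y_hat - gamma * x_hat = 0 at every horizon
    return x_hat, y_hat

def system_error_variances(h, gamma, sigma_eps2, sigma_v2):
    """Closed-form forecast error variances and the variance of the
    cointegrating combination of the system forecast errors."""
    var_ex = h * sigma_eps2
    var_ey = gamma**2 * h * sigma_eps2 + sigma_v2
    var_coint = sigma_v2             # var(e_y - gamma * e_x), constant in h
    return var_ex, var_ey, var_coint
```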
3.3 Forecasts From the Implied Univariate Representations
Now consider forecasting from the implied univariate models. Immediately, the
univariate forecast for x is the same as the system forecast,
\tilde{x}_{t+h} = \mu h + x_t.

Thus,

\tilde{e}_{x,t+h} = \hat{e}_{x,t+h} = \sum_{i=1}^{h}\varepsilon_{t+i},

so that

\mathrm{var}(\tilde{e}_{x,t+h}) = \mathrm{var}(\hat{e}_{x,t+h}) = h\sigma_\varepsilon^2 = O(h).
To form the univariate forecast for y, write
y_{t+h} = \gamma\mu h + y_t + \sum_{i=1}^{h} z_{t+i} = \gamma\mu h + y_t + u_{t+1} + \theta u_t + \sum_{i=2}^{h} z_{t+i}.

The time-t forecast for period t+h is

\tilde{y}_{t+h} = \gamma\mu h + y_t + \theta u_t,

and the corresponding forecast error is

\tilde{e}_{y,t+h} = u_{t+1} + \sum_{i=2}^{h} z_{t+i} = (1 + \theta)\sum_{i=1}^{h-1} u_{t+i} + u_{t+h},

yielding the forecast error variance

\mathrm{var}(\tilde{e}_{y,t+h}) = \big[(1 + \theta)^2(h - 1) + 1\big]\sigma_u^2.
Notice in particular that the univariate forecast error variance is O(h), as is the system
forecast error variance.
Now let us compute the variance of the cointegrating combination of univariate
forecast errors. We have
\mathrm{var}[\tilde{e}_{y,t+h} - \gamma\tilde{e}_{x,t+h}] = \mathrm{var}(\tilde{e}_{y,t+h}) + \gamma^2\,\mathrm{var}(\tilde{e}_{x,t+h}) - 2\gamma\,\mathrm{cov}(\tilde{e}_{y,t+h}, \tilde{e}_{x,t+h}).

The second variance term is simply h\sigma_\varepsilon^2. To evaluate the first variance term we write

\mathrm{var}(\tilde{e}_{y,t+h}) = \big[(1 + \theta)^2(h - 1) + 1\big]\sigma_u^2 = \big[(1 + \theta)^2 h - \theta(2 + \theta)\big]\sigma_u^2.

Substituting for \sigma_u^2, and using the facts that

\frac{(1 + \theta)^2}{1 + \theta^2} = \frac{\gamma^2 q}{\gamma^2 q + 2} \quad \text{and} \quad \frac{-\theta}{1 + \theta^2} = \frac{1}{\gamma^2 q + 2},

we get

\mathrm{var}(\tilde{e}_{y,t+h}) = \frac{(1 + \theta)^2}{1 + \theta^2}\,h\,\sigma_v^2(\gamma^2 q + 2) - \frac{\theta(2 + \theta)}{1 + \theta^2}\,\sigma_v^2(\gamma^2 q + 2) = \big[\gamma^2 q h + 2 + \theta\big]\sigma_v^2.

To evaluate the covariance term, use the fact that

\hat{y}_{t+h} - \tilde{y}_{t+h} = (\gamma\mu h + \gamma x_t) - (\gamma\mu h + y_t + \theta u_t) = -v_t - \theta u_t,

and the decomposition result from Section 2 to write

\tilde{e}_{y,t+h} = \hat{e}_{y,t+h} + (\hat{y}_{t+h} - \tilde{y}_{t+h}) = \gamma\sum_{i=1}^{h}\varepsilon_{t+i} + v_{t+h} - v_t - \theta u_t.

Now recall the formula for the forecast error of x and the fact that future values of ε are uncorrelated with future and current values of v, and with current values of u, so that

\mathrm{cov}(\tilde{e}_{y,t+h}, \tilde{e}_{x,t+h}) = E\Big[\Big(\gamma\sum_{i=1}^{h}\varepsilon_{t+i} + v_{t+h} - v_t - \theta u_t\Big)\Big(\sum_{i=1}^{h}\varepsilon_{t+i}\Big)\Big] = \gamma h\sigma_\varepsilon^2.

Armed with these results we have that

\mathrm{var}[\tilde{e}_{y,t+h} - \gamma\tilde{e}_{x,t+h}] = (2 + \theta)\sigma_v^2 < \infty \quad \forall h,
which of course accords with our general result derived earlier that the variance of the
cointegrating combination of univariate forecast errors is finite.
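The algebra above is easy to verify by simulation. The sketch below (ours) draws the univariate innovations u directly and checks that the simulated variance of the univariate y forecast error matches both closed forms; the cointegrating-combination variance (2 + θ)σ_v² then follows analytically as derived above.

```python
import numpy as np

rng = np.random.default_rng(1)
gamma, q, sigma_v2 = 1.0, 1.0, 1.0
a = gamma**2 * q + 2.0
theta = 0.5 * (np.sqrt(a**2 - 4.0) - a)            # invertible MA(1) root from Section 3.1
sigma_u2 = sigma_v2 * a / (1.0 + theta**2)

h, R = 40, 200_000
u = rng.normal(scale=np.sqrt(sigma_u2), size=(R, h))          # u_{t+1}, ..., u_{t+h}
e_y_tilde = (1 + theta) * u[:, :-1].sum(axis=1) + u[:, -1]    # univariate y forecast error

print(e_y_tilde.var())                                        # simulated variance
print(((1 + theta)**2 * (h - 1) + 1) * sigma_u2)              # [(1+theta)^2 (h-1) + 1] * sigma_u^2
print((gamma**2 * q * h + 2 + theta) * sigma_v2)              # equivalently [gamma^2 q h + 2 + theta] * sigma_v^2
```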
3.4 Forecast Accuracy Comparison
Finally, compare the forecast error variances from the multivariate and univariate
representations. Of course x has the same representation in both, so the comparison hinges
on y. We must compare
\mathrm{var}(\hat{e}_{y,t+h}) = \gamma^2 h\sigma_\varepsilon^2 + \sigma_v^2 = \sigma_v^2\big[\gamma^2 q h + 1\big]

to

\mathrm{var}(\tilde{e}_{y,t+h}) = \big[\gamma^2 q h + 2 + \theta\big]\sigma_v^2.

Thus,

\mathrm{var}(\tilde{e}_{y,t+h}) - \mathrm{var}(\hat{e}_{y,t+h}) = (1 + \theta)\sigma_v^2.

The error variance of the univariate forecast is greater than that of the system forecast, but it grows at the same rate.

Assembling all of the results, we have immediately that

\frac{\mathrm{trace}\big(\mathrm{var}(\tilde{e}_{t+h})\big)}{\mathrm{trace}\big(\mathrm{var}(\hat{e}_{t+h})\big)} = \frac{\mathrm{var}(\tilde{e}_{x,t+h}) + \mathrm{var}(\tilde{e}_{y,t+h})}{\mathrm{var}(\hat{e}_{x,t+h}) + \mathrm{var}(\hat{e}_{y,t+h})} = \frac{(1 + \gamma^2)q h + 2 + \theta}{(1 + \gamma^2)q h + 1}.

In Figure 1 we show the values of this ratio as h gets large, for q = γ = 1. Note in particular the speed with which the limiting result,

\lim_{h\to\infty}\frac{\mathrm{trace}\big(\mathrm{var}(\tilde{e}_{t+h})\big)}{\mathrm{trace}\big(\mathrm{var}(\hat{e}_{t+h})\big)} = 1,

obtains.
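A quick numerical pass over the closed-form ratio (our sketch, using q = γ = 1 as in Figure 1) shows how fast the limit is reached:

```python
import numpy as np

gamma, q = 1.0, 1.0
a = gamma**2 * q + 2.0
theta = 0.5 * (np.sqrt(a**2 - 4.0) - a)       # MA(1) parameter from Section 3.1

for h in (1, 2, 5, 10, 50, 100, 500):
    ratio = ((1 + gamma**2) * q * h + 2 + theta) / ((1 + gamma**2) * q * h + 1)
    print(f"h = {h:4d}   trace MSE ratio = {ratio:.4f}")     # approaches 1 as h grows
```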
In closing this section, we note that in spite of the fact that the trace MSE ratio approaches 1, the ratio of the variances of the cointegrating combinations of the forecast errors does not approach 1 in this simple model; rather,

\frac{\mathrm{var}[\tilde{e}_{y,t+h} - \gamma\tilde{e}_{x,t+h}]}{\mathrm{var}[\hat{e}_{y,t+h} - \gamma\hat{e}_{x,t+h}]} = 2 + \theta > 1, \quad \forall\, h, \gamma, q.
This observation turns out to hold quite generally, and it forms the basis for an alternative
class of accuracy measures, to which we now turn.
4. ACCURACY MEASURES AND COINTEGRATION
4.1 Accuracy Measures I: Trace MSE
We have seen that long-horizon univariate forecasts of cointegrated variables
(which ignore cointegrating restrictions) are just as accurate as their system counterparts
(which explicitly impose cointegrating restrictions), when accuracy is evaluated using the
standard trace MSE criterion. So on traditional grounds there is no reason to prefer long-
horizon forecasts from the cointegrated system.
One might argue, however, that the system forecasts are nevertheless more
appealing because “... the forecasts of levels of co-integrated variables will ‘hang
together’ in a way likely to be viewed as sensible by an economist, whereas forecasts
produced in some other way, such as by a group of individual, univariate Box-Jenkins
models, may well not do so” (Granger and Newbold 1986, p. 226). But as we have seen,
univariate Box-Jenkins forecasts do hang together if the variables are cointegrated—the
cointegrating combinations, and only the cointegrating combinations, of univariate
forecast errors have finite variance.
4.2 Accuracy Measures II: Trace MSE in Forecasting the Cointegrating
Combinations of Variables
The long-horizon system forecasts, however, do a better job of satisfying the
cointegrating restrictions than do the univariate forecasts—the long-horizon system
forecasts always satisfy the cointegrating restrictions, whereas the long-horizon univariate
forecasts do so only on average. That is what is responsible for our earlier result in our
bivariate system that, although the cointegrating combinations of both the univariate and
system forecast errors have finite variance, the variance of the cointegrating combination
of the univariate errors is larger.
Such effects are lost on standard accuracy measures like trace MSE, however,
because the loss functions that underlie them do not explicitly value maintaining the
multivariate long-run relationships of long-horizon forecasts. The solution is obvious—if
we value maintenance of the cointegrating relationship, then so too should the loss
functions underlying our forecast accuracy measures. One approach, in the spirit of
Granger (1996), is to focus on forecasting the cointegrating combinations of the variables,
and to evaluate forecasts in terms of the variability of the cointegrating combinations of
the errors, αe_{t+h}.
Accuracy measures based on cointegrating combinations of the forecast errors
require that the cointegrating vector be known. Fortunately, such is often the case.
Horvath and Watson (1995, pp. 984-985) [see also Watson (1994) and Zivot (1996)], for
example note that
“Economic models often imply that variables are cointegrated with simple
and known cointegrating vectors. Examples include the neoclassical growth
model, which implies that income, consumption, investment, and the capital
stock will grow in a balanced way, so that any stochastic growth in one of
the series must be matched by corresponding growth in the others. Asset
pricing models with stable risk premia imply corresponding stable
differences in spot and forward prices, long- and short-term interest rates,
and the logarithms of stock prices and dividends. Most theories of
international trade imply long-run purchasing power parity, so that long-run
movements in nominal exchange rates are matched by countries’ relative
price levels. Certain monetarist propositions are centered around the
stability of velocity, implying cointegration among the logarithms of
money, prices and income. Each of these theories has distinct implications
for the properties of economic time series under study: First, the series are
cointegrated, and second, the cointegrating vector takes on a specific value.
For example, balanced growth implies that the logarithms of income and
consumption are cointegrated and that the cointegrating vector takes on the
value of (1, -1).”
Thus, although the assumption of a known cointegrating vector certainly involves a loss of
generality, it is nevertheless legitimate in a variety of empirically- and economically-
relevant cases. This is fortunate because of problems associated with identification of
cointegrating vectors in estimated systems, as stressed in Wickens (1996). We will
maintain the assumption of a known cointegration vector throughout this paper, reserving
for subsequent work an exploration of the possibility of analysis using consistent estimates
of cointegrating vectors.
Interestingly, evaluation of accuracy in terms of the trace MSE of the cointegrating
combinations of forecast errors is a special case of the general mean squared error
measure. To see this, consider the general N-variate case with r cointegrating
relationships, and consider again the mean squared error,
E(e_{t+h}' K e_{t+h}) = E\,\mathrm{trace}(e_{t+h}' K e_{t+h}) = E\,\mathrm{trace}(K e_{t+h} e_{t+h}') = \mathrm{trace}(K\Sigma_h),

where \Sigma_h is the variance of e_{t+h}. Evaluating accuracy in terms of trace MSE of the cointegrating combinations of the forecast errors amounts to evaluating

E\big[(\alpha e_{t+h})'(\alpha e_{t+h})\big] = \mathrm{trace}\,E\big[(\alpha e_{t+h})(\alpha e_{t+h})'\big] = \mathrm{trace}(K\Sigma_h),

where K = α'α. Thus the trace MSE of the cointegrating combinations of the forecast errors is in fact a particular variant of MSE formulated on the raw forecast errors, E(e'Ke) = trace(K\Sigma_h), where the weighting matrix K = α'α is of (deficient) rank r (< N), the cointegrating rank of the system.
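The equivalence is easy to confirm numerically: weighting the raw errors by K = α'α gives exactly the trace MSE of the cointegrating combinations. The snippet below is an illustrative check with arbitrary made-up numbers, not anything estimated from data.

```python
import numpy as np

rng = np.random.default_rng(2)
N, r, R = 4, 2, 10_000
alpha = rng.standard_normal((r, N))        # stand-in for known cointegrating vectors (rows)
errors = rng.standard_normal((R, N))       # stand-in h-step forecast errors

K = alpha.T @ alpha                        # weighting matrix of rank r < N

mse_weighted = np.mean([e @ K @ e for e in errors])            # E(e' K e)
mse_coint = np.mean(np.sum((errors @ alpha.T)**2, axis=1))     # E[(alpha e)'(alpha e)]
print(mse_weighted, mse_coint)             # identical up to floating-point error
```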
4.3 Accuracy Measures III: Trace MSE from the Triangular Representation
The problem with the traditional E(eKe) approach with K = I is that, although it
values small MSE, it fails to value the long-run forecasts’ hanging together correctly.
Conversely, a problem with the E(eKe) approach with K = is that it values only the
long-run forecasts’ hanging together correctly, whereas both pieces seem clearly relevant.
The challenge is to incorporate both pieces into an overall accuracy measure in a natural
way, and an attractive approach for doing so follows from the triangular representation of
cointegrated systems exploited by Campbell and Shiller (1987) and Phillips (1991).
Clements and Hendry (1995) provide a numerical example that illustrates the appeal of the
triangular representation for forecasting. Below we provide a theoretical result that
establishes the general validity of the triangular approach for distinguishing between naive
univariate and fully specified system forecasts.
From the fact that α has rank r, it is possible to rewrite the system so that the N left-hand-side variables are the r error-correction terms followed by the differences of the N - r integrated but not cointegrated variables. That is, we rewrite the system in terms of

\begin{pmatrix} x_{1t} - \gamma x_{2t} \\ (1 - L)x_{2t} \end{pmatrix},

where the variables have been rearranged and partitioned into x_t = (x_{1t}', x_{2t}')', where α = (I_r, -γ), and the variables in x_{2t} are integrated but not cointegrated. We then evaluate accuracy in terms of the trace MSE of forecasts from the triangular system,

E\left[\begin{pmatrix} e_{1,t+h} - \gamma e_{2,t+h} \\ (1 - L)e_{2,t+h} \end{pmatrix}'\begin{pmatrix} e_{1,t+h} - \gamma e_{2,t+h} \\ (1 - L)e_{2,t+h} \end{pmatrix}\right] = E\big[e_{t+h}'\,K(L)'K(L)\,e_{t+h}\big],

which we denote trace MSEtri. Notice that the trace MSEtri accuracy measure is also of E(e'Ke) form, with

K = K(L)'K(L), \qquad K(L) = \begin{pmatrix} I_r & -\gamma \\ 0 & (1 - L)I_{N-r} \end{pmatrix}.
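One way to implement the trace MSEtri measure, assuming the h-step and (h-1)-step forecast errors are available as arrays and the cointegrating vectors have the known form α = (I_r, -γ), is sketched below; the interface and names are our own invention, not the authors'.

```python
import numpy as np

def trace_mse_tri(errors, errors_prev, x1_idx, gamma):
    """Trace MSE from the triangular representation at one horizon.

    errors      : (R x N) h-step-ahead forecast errors
    errors_prev : (R x N) (h-1)-step-ahead forecast errors (for the (1-L) part)
    x1_idx      : indices of the r 'x1' variables; the rest form the 'x2' block
    gamma       : (r x (N-r)) block in the known cointegrating vectors alpha = (I_r, -gamma)
    """
    x2_idx = [j for j in range(errors.shape[1]) if j not in x1_idx]
    x1 = errors[:, x1_idx]
    x2, x2_prev = errors[:, x2_idx], errors_prev[:, x2_idx]
    coint_part = x1 - x2 @ np.atleast_2d(gamma).T    # e_1 - gamma * e_2
    diff_part = x2 - x2_prev                         # (1 - L) e_2 across horizons
    return np.mean(np.sum(coint_part**2, axis=1) + np.sum(diff_part**2, axis=1))
```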
Recall Proposition 1, which says that under trace MSE, long-horizon forecast
accuracy from the cointegrated system is no better than that from univariate models. We
now show that under trace MSEtri, long-horizon forecast accuracy from the cointegrated
system is always better than that from univariate models.
Proposition 3

\lim_{h\to\infty}\frac{\widetilde{\mathrm{trace\ MSE}}_{tri}}{\widehat{\mathrm{trace\ MSE}}_{tri}} > 1.

Proof: Consider a cointegrated system in triangular form, that is, a system such that α = [I_r, -γ]. We need to show that for large h,

\sum_{i=1}^{r}\mathrm{var}[\alpha_i\hat{e}_{t+h}] + \sum_{j=r+1}^{N}\mathrm{var}[(1 - L)\hat{e}_{j,t+h}] < \sum_{i=1}^{r}\mathrm{var}[\alpha_i\tilde{e}_{t+h}] + \sum_{j=r+1}^{N}\mathrm{var}[(1 - L)\tilde{e}_{j,t+h}]

and

\sum_{i=1}^{r}\mathrm{var}[\alpha_i\hat{e}_{t+h}] + \sum_{j=r+1}^{N}\mathrm{var}[(1 - L)\hat{e}_{j,t+h}] < \infty.

To establish the first inequality it is sufficient to show that

\sum_{i=1}^{r}\mathrm{var}[\alpha_i\hat{e}_{t+h}] < \sum_{i=1}^{r}\mathrm{var}[\alpha_i\tilde{e}_{t+h}].

We showed earlier that for large h,

\mathrm{var}(\alpha\tilde{e}_{t+h}) = Q + \alpha\,\mathrm{var}\big(C^*(L)\,\varepsilon_t + \Theta(L)\,u_t\big)\,\alpha' = (Q + \alpha S\alpha'),

where Q \equiv \mathrm{var}(\alpha\hat{e}_{t+h}) and S \equiv \mathrm{var}\big(C^*(L)\,\varepsilon_t + \Theta(L)\,u_t\big), from which it follows that

\sum_{i=1}^{r}\mathrm{var}[\alpha_i\tilde{e}_{t+h}] - \sum_{i=1}^{r}\mathrm{var}[\alpha_i\hat{e}_{t+h}] = \mathrm{trace}(\alpha S\alpha') > 0,

because S is positive definite. To establish the second inequality, recall that

\hat{e}_{t+h} = \hat{e}_{t+h-1} + \sum_{i=1}^{h} C_{h-i}\,\varepsilon_{t+i} = \hat{e}_{t+h-1} + C(L)\,\varepsilon_{t+h},

so that

\mathrm{var}[(1 - L)\hat{e}_{t+h}] = \Big(\sum_{j=0}^{h-1} C_j\Big)\,\Omega\,\Big(\sum_{j=0}^{h-1} C_j\Big)' \to C(1)\,\Omega\,C(1)' \quad \text{as } h\to\infty.

Let C_{N-r}(1) be the last N - r rows of C(1); then altogether we have

\sum_{i=1}^{r}\mathrm{var}[\alpha_i\hat{e}_{t+h}] + \sum_{j=r+1}^{N}\mathrm{var}[(1 - L)\hat{e}_{j,t+h}] \to \mathrm{trace}(Q) + \mathrm{trace}\big(C_{N-r}(1)\,\Omega\,C_{N-r}(1)'\big) < \infty,
and the proof is complete.
4.4 The Bivariate Example, Revisited
In our simple bivariate example all we have to do to put the system in the
triangular form sketched above is to switch x and y in the autoregressive representation,
yielding
\begin{pmatrix} 1 & -\gamma \\ 0 & 1-L \end{pmatrix}\begin{pmatrix} y_t \\ x_t \end{pmatrix} = \begin{pmatrix} 0 \\ \mu \end{pmatrix} + \begin{pmatrix} v_t \\ \varepsilon_t \end{pmatrix}.

For the system forecasts we have

\widehat{\mathrm{trace\ MSE}}_{tri} = E\left[\begin{pmatrix} \hat{e}_{y,t+h} - \gamma\hat{e}_{x,t+h} \\ (1 - L)\hat{e}_{x,t+h} \end{pmatrix}'\begin{pmatrix} \hat{e}_{y,t+h} - \gamma\hat{e}_{x,t+h} \\ (1 - L)\hat{e}_{x,t+h} \end{pmatrix}\right] = \sigma_v^2 + \sigma_\varepsilon^2.

For the univariate forecasts we have

\widetilde{\mathrm{trace\ MSE}}_{tri} = E\left[\begin{pmatrix} \tilde{e}_{y,t+h} - \gamma\tilde{e}_{x,t+h} \\ (1 - L)\tilde{e}_{x,t+h} \end{pmatrix}'\begin{pmatrix} \tilde{e}_{y,t+h} - \gamma\tilde{e}_{x,t+h} \\ (1 - L)\tilde{e}_{x,t+h} \end{pmatrix}\right] = (2 + \theta)\sigma_v^2 + \sigma_\varepsilon^2.

Thus we see that the trace MSEtri ratio does not approach one as the horizon increases; in particular, it is constant and above one for all h,

\frac{\widetilde{\mathrm{trace\ MSE}}_{tri}}{\widehat{\mathrm{trace\ MSE}}_{tri}} = \frac{1 + (2 + \theta)q^{-1}}{1 + q^{-1}} = 1 + \frac{(1 + \theta)q^{-1}}{1 + q^{-1}} > 1, \quad \forall h.

In Figure 2 we plot the trace MSEtri ratio vs. h, for γ = 1 and σ_ε² = σ_v² = q = 1. In this case, the ratio is simply a constant (> 1) for all h, since the system contains no short-run dynamics.
In summary, although the long-horizon performances of the system and univariate
forecasts seem identical under the conventional trace MSE ratio, they differ under the
trace MSEtri ratio. The system forecast is superior to the univariate forecast under trace
MSEtri, because the system forecast is accurate in the conventional “small MSE” sense
and it hangs together correctly, i.e. it makes full use of the information in the cointegrating
relationship. We stress that abandoning MSE and adopting MSE marks a change of loss
tri
function, and thus preferences. If the forecaster’s loss function truly is trace MSE then
using trace MSEtri might not make sense. On the other hand trace MSE is often adopted
without much thought, and an underlying theme of our analysis is precisely that thought
should be given to the choice of loss function.
5. UNDERSTANDING EARLIER MONTE CARLO STUDIES
Here we clarify the interpretation of earlier influential Monte Carlo work, in
particular Engle and Yoo (1987), as well as Reinsel and Ahn (1992), Clements and
Hendry (1993), and Lin and Tsay (1996), among others. We do so by performing a Monte
Carlo analysis of our own, which reconciles our theoretical results and the apparently
conflicting Monte Carlo results reported in the literature, and we show how the existing
Monte Carlo analyses have been misinterpreted. Throughout, we use our simple bivariate
system (which is very similar to the one used by Engle and Yoo), with parameters set to
γ = 1, μ = 0, and σ_ε² = σ_v² = 1. We use a sample size of 100 and perform 4000 Monte Carlo
replications. In keeping with our earlier discussion, we assume a known cointegrating
vector, but we estimate all other parameters. This simple design allows us to make our
point forcefully and with a minimum of clutter, and the results are robust to changes in
parameter values and sample size.
Let us first consider an analog of our theoretical results, except that we now
estimate parameters instead of assuming them known. In Figure 3 we plot the trace MSE
ratio and the trace MSEtri ratio against the forecast horizon, h. Using estimated parameters
changes none of the theoretical results reached earlier under the assumption of known
parameters. Use of the trace MSE ratio obscures the long-horizon benefits of imposing
cointegration, whereas use of trace MSEtri reveals those benefits clearly.
How then can we reconcile our results with those of Engle and Yoo (1987) and the
many subsequent authors who conclude that imposing cointegration produces superior
long-horizon forecasts? The answer is two-part: Engle and Yoo make a different and
harder-to-interpret comparison than we do, and they misinterpret the outcome of their
Monte Carlo experiments.
First consider the forecast comparison. We have thus far compared forecasts from
univariate models (which impose integration) to forecasts from the cointegrated system
(which impose both integration and cointegration). Thus a comparison of the forecasting
results isolates the effects of imposing cointegration. Engle and Yoo, in contrast, compare
forecasts from a VAR in levels (which impose neither integration nor cointegration) to
forecasts from the cointegrated system (which impose both integration and cointegration).
Thus differences in forecasting performance in the Engle-Yoo setup cannot necessarily be
attributed to the imposition of cointegration—instead, they may simply be due to
imposition of integration, irrespective of whether cointegration is imposed.
Now consider the interpretation of the results. The VAR in levels is of course
integrated, but estimating the system in levels entails estimating the unit root. Although
many estimators are consistent, an exact finite-sample unit root is a zero-probability event.
Unfortunately, even a slight and inevitable deviation of the estimated root from unity
pollutes forecasts from the estimated model, and the pollution increases with h. This in
turn causes the MSE ratio to increase in h when comparing a levels VAR forecast to a
system forecast or any other forecast that explicitly imposes unit roots. The problem is
exacerbated by bias of the Dickey-Fuller-Hurwicz type; see Stine and Shaman (1989),
Pope (1990), Abadir (1993) and Abadir, Hadri and Tzavalis (1996) for detailed
treatments.
It is no surprise that forecasts from the VAR estimated in levels perform poorly,
with performance worsening with horizon, as shown in Figure 4. It is tempting to attribute
the poor performance of the VAR in levels to its failure to impose cointegration, as do
Engle and Yoo. The fact is, however, that the VAR in levels performs poorly because it
fails to impose integration, not because it fails to impose cointegration—estimation of the
cointegrated system simply imposes the correct level of integration a priori. To see this,
consider Figure 5, in which we compare the forecasts from an estimated VAR in
differences to the forecasts from the estimated cointegrated system. At long horizons, the
forecasts from the VAR in differences, which impose integration but completely ignore
cointegration, perform just as well. In contrast, if we instead evaluate forecast accuracy
with the trace MSEtri ratio that we have advocated, the forecasts from the VAR in
differences compare poorly at all horizons to those from the cointegrated system, as
shown in Figure 6.
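A compact Monte Carlo in the spirit of this comparison can be sketched as follows (our code, not the authors'): it pits a first-order VAR in differences against an error-correction model with the cointegrating vector (1, -γ) treated as known, both estimated by least squares on the bivariate example with Gaussian innovations, and tabulates the trace MSE ratio of the differenced-VAR forecasts relative to the error-correction forecasts at several horizons. The error-correction equations include only a constant and the lagged error-correction term, which matches the example's lack of short-run dynamics.

```python
import numpy as np

rng = np.random.default_rng(42)
T, R, horizons = 100, 2000, (1, 5, 10, 20, 40)
mu, gamma = 0.0, 1.0
H = max(horizons)
se_ecm = {h: [] for h in horizons}      # squared-error sums, error-correction model
se_dvar = {h: [] for h in horizons}     # squared-error sums, VAR(1) in differences

for _ in range(R):
    # Simulate T + H observations: x a random walk, y = gamma*x + v.
    eps, v = rng.standard_normal(T + H), rng.standard_normal(T + H)
    x = np.cumsum(mu + eps)
    y = gamma * x + v

    xs, ys = x[:T], y[:T]                       # estimation sample
    dx, dy = np.diff(xs), np.diff(ys)
    z = ys[:-1] - gamma * xs[:-1]               # lagged error-correction term (known coint. vector)

    # Error-correction model, equation by equation: d(x_t), d(y_t) on [1, z_{t-1}].
    Z = np.column_stack([np.ones(T - 1), z])
    bx = np.linalg.lstsq(Z, dx, rcond=None)[0]
    by = np.linalg.lstsq(Z, dy, rcond=None)[0]

    # VAR(1) in differences: [d(x_t), d(y_t)] on [1, d(x_{t-1}), d(y_{t-1})].
    W = np.column_stack([np.ones(T - 2), dx[:-1], dy[:-1]])
    B = np.linalg.lstsq(W, np.column_stack([dx[1:], dy[1:]]), rcond=None)[0]

    # Iterate both models forward from the end of the estimation sample.
    cx, cy = xs[-1], ys[-1]                     # error-correction forecasts
    px, py = xs[-1], ys[-1]                     # differenced-VAR forecasts
    pdx, pdy = dx[-1], dy[-1]
    for h in range(1, H + 1):
        zc = cy - gamma * cx
        cx, cy = cx + bx[0] + bx[1] * zc, cy + by[0] + by[1] * zc
        fdx, fdy = np.array([1.0, pdx, pdy]) @ B
        px, py, pdx, pdy = px + fdx, py + fdy, fdx, fdy
        if h in se_ecm:
            se_ecm[h].append((x[T - 1 + h] - cx) ** 2 + (y[T - 1 + h] - cy) ** 2)
            se_dvar[h].append((x[T - 1 + h] - px) ** 2 + (y[T - 1 + h] - py) ** 2)

for h in horizons:
    print(f"h={h:3d}  trace MSE ratio (diff VAR / ECM) = {np.mean(se_dvar[h]) / np.mean(se_ecm[h]):.3f}")
```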
In the simple bivariate system, we are restricted to studying models with exactly
one unit root and one cointegration relationship. It is also of interest to examine richer
systems; conveniently, the literature already contains relevant (but unnoticed) evidence,
which is entirely consistent with our theoretical results. Reinsel and Ahn (1992) and Lin
and Tsay (1996), in particular, provide Monte Carlo evidence on the comparative
forecasting performance of competing estimated models. Both study a four-variable
VAR(2), with two unit roots and two cointegrating relationships. Their results clearly
suggest that under the trace MSE accuracy measure, one need only worry about imposing
enough unit roots on the system. Imposing three (one too many) unit roots is harmless at
any horizon, and imposing four unit roots (two too many, so that the VAR is in
differences) is harmless at long horizons. As long as one imposes enough unit roots, at
least two in this case, the trace MSE ratio will invariably go to one as the horizon
increases.
6. SUMMARY AND CONCLUDING REMARKS
First, we have shown that imposing cointegration does not improve long-horizon
forecast accuracy when forecasts of cointegrated variables are evaluated using the
standard trace MSE ratio. Ironically enough, although cointegration implies restrictions on
low-frequency dynamics, imposing cointegration is helpful for short- but not long-horizon
forecasting, in contrast to the impression created in the literature. Imposition of
cointegration on an estimated system, when the system is in fact cointegrated, helps the
accuracy of long-horizon forecasts relative to those from systems estimated in levels with
no restrictions, but that is because of the imposition of integration, not cointegration.
Univariate forecasts in differences do just as well! We hasten to add, of course, that the
result is conditional on the assumption that the univariate representations of all variables
do in fact contain unit roots. Differencing a stationary variable with roots close to unity
has potentially dire consequences for long-horizon forecasting, as argued forcefully by
Lin and Tsay (1996).
Second, we have shown that the variance of the cointegrating combination of the
long-horizon forecast errors is finite regardless of whether cointegration is imposed. The
variance of the error in forecasting the cointegrating combination is smaller, however, for
the cointegrated system forecast errors. This suggests that accuracy measures that value
the preservation of long-run relationships should be defined, in part, on the cointegrating
combinations of the forecast errors. We explored one such accuracy measure based on the
triangular representation of the cointegrated system.
Third, we showed that our theoretical results are entirely consistent with several
well-known Monte Carlo analyses, whose interpretation we clarified. The existing Monte
Carlo results are correct, but their widespread interpretation is not. Imposition of
integration, not cointegration, is responsible for the repeated finding that the long-horizon
forecasting performance of cointegrated systems is better than that of VARs in levels.
We hasten to add that the message of this paper is not that cointegration is of no
value in forecasting. First, even under the conventional trace MSE accuracy measure,
imposing cointegration does improve forecasts. Our message is simply that under the
conventional accuracy measure it does so at short and moderate, not long, horizons, in
contrast to the folk wisdom. Second, in our view, imposing cointegration certainly may be
of value in long-horizon forecasting—the problem is simply that standard forecast
accuracy measures do not reveal it.
The upshot is that in forecast evaluation we need to think hard about what
characteristics make a good forecast good, and how best to measure those characteristics.
In that respect this paper is in the tradition of our earlier work, such as Diebold and
Mariano (1995), Diebold and Lopez (1996), and Christoffersen and Diebold (1996, 1998),
in which we argue the virtues of tailoring accuracy measures in applied forecasting to the
specifics of the problem at hand. Seemingly omnibus measures such as trace MSE,
although certainly useful in many situations, are inadequate in others.
In closing, we emphasize that the particular alternative to trace MSE that we
examine in this paper, trace MSEtri, is but one among many possibilities, and we look
forward to exploring variations in future research. The key insight, it seems to us, is that if
we value preservation of cointegrating relationships in long-horizon forecasts, then so too
should our accuracy measures, and trace MSEtri is a natural loss function that does so.
Interestingly, it is possible to process the trace MSE differently to obtain an
accuracy measure that ranks the system forecasts as superior to the univariate forecasts,
even as the forecast horizon goes to infinity. One obvious candidate is the trace MSE
difference, as opposed to the trace MSE ratio. It follows from the results of Section 2 that
the trace MSE difference is positive and does not approach zero as the forecast horizon
grows. As stressed above, however, it seems more natural to work with alternatives to
trace MSE that explicitly value preservation of cointegrating relationships, rather than
simply processing the trace MSE differently. As the forecast horizon grows, the trace
MSE difference becomes negligible relative to either the system or the univariate trace
MSE, so that the trace MSE difference would appear to place too little value on preserving
cointegrating relationships.
ACKNOWLEDGMENTS
We thank the Co-Editor (Ruey Tsay), an Associate Editor, and two referees for
detailed and constructive comments. Helpful discussion was also provided by Dave
DeJong, Rob Engle, Clive Granger, Bruce Hansen, Dennis Hoffman, Laura Kodres, Jim
Stock, Charlie Thomas, Ken Wallis, Chuck Whiteman, Mike Wickens, Tao Zha, and
participants at the July 1996 NBER/NSF conference on Forecasting and Empirical
Methods in Macroeconomics. All remaining inadequacies are ours alone. We thank the
International Monetary Fund, the National Science Foundation, the Sloan Foundation and
the University of Pennsylvania Research Foundation for support. The views in this article
do not necessarily represent those of the International Monetary Fund.
Figure 1. Trace MSE Ratio of Univariate vs. System Forecasts Plotted Against Forecast Horizon, Bivariate System with Cointegration Parameter γ = 1 and Signal-to-Noise Ratio q = 1

Figure 2. Trace MSEtri Ratio of Univariate vs. System Forecasts Plotted Against Forecast Horizon, Bivariate System with Cointegration Parameter γ = 1 and Signal-to-Noise Ratio q = 1

Figure 3. Trace MSE Ratio and Trace MSEtri Ratio of Univariate vs. System Forecasts Plotted Against Forecast Horizon, Bivariate System with Estimated Parameters

Figure 4. Trace MSE Ratio of Levels VAR vs. Cointegrated System Forecasts Plotted Against Forecast Horizon, Bivariate System with Estimated Parameters

Figure 5. Trace MSE Ratio of Differenced VAR vs. Cointegrated System Forecasts Plotted Against Forecast Horizon, Bivariate System with Estimated Parameters

Figure 6. Trace MSEtri Ratio of Differenced VAR vs. Cointegrated System Forecasts Plotted Against Forecast Horizon, Bivariate System with Estimated Parameters
References
Abadir, K.M. (1993), "OLS Bias in a Nonstationary Autoregression," Econometric Theory, 9, 81-93.
Abadir, K.M., Hadri, K. and Tzavalis, E. (1996), “The Influence of VAR Dimensions on
Estimator Biases,” Economics Discussion Paper 96/14, University of York.
Banerjee, A., Dolado, J., Galbraith, J.W. and Hendry, D.F. (1993), Co-integration, Error-
correction, and the Econometric Analysis of Non-stationary Data. Oxford: Oxford
University Press.
Brandner, P. and Kunst, R.M. (1990), “Forecasting Vector Autoregressions - The
Influence of Cointegration: A Monte Carlo Study,” Research Memorandum No.
265, Institute for Advanced Studies, Vienna.
Campbell, J.Y. and Shiller, R.J. (1987), "Cointegration and Tests of Present Value
Models," Journal of Political Economy, 95, 1062-1088.
Christoffersen, P.F. and Diebold, F.X. (1996), "Further Results on Forecasting and Model
Selection Under Asymmetric Loss," Journal of Applied Econometrics, 11, 561-
571.
Christoffersen, P.F. and Diebold, F.X. (1998), "Optimal Prediction Under Asymmetric Loss," Econometric Theory, in press.
Clements, M.P. and Hendry, D.F. (1993), “On the Limitations of Comparing Mean Square
Forecast Errors,” Journal of Forecasting, 12, 617-637.
Clements, M.P. and Hendry, D.F. (1994), "Towards a Theory of Economic Forecasting," in C.P. Hargreaves (ed.), Nonstationary Time Series and Cointegration. Oxford: Oxford University Press.
Clements, M.P. and Hendry, D.F. (1995), "Forecasting in Cointegrated Systems," Journal of Applied Econometrics, 10, 127-146.
Diebold, F.X. and Lopez, J. (1996), "Forecast Evaluation and Combination," in G.S.
Maddala and C.R. Rao (eds.), Handbook of Statistics. Amsterdam: North-Holland,
241-268.
Diebold, F.X. and Mariano, R.S. (1995), "Comparing Predictive Accuracy," Journal of
Business and Economic Statistics, 13, 253-265.
Engle, R.F. and Yoo, B.S. (1987), “Forecasting and Testing in Cointegrated Systems,”
Journal of Econometrics, 35, 143-159.
Granger, C.W.J. (1996), “Can We Improve the Perceived Quality of Economic
Forecasts?,” Journal of Applied Econometrics, 11, 455-473.
Granger, C.W.J., and Newbold, P. (1986), Forecasting Economic Time Series, Second
Edition. New York: Academic Press.
Hoffman, D.L. and Rasche, R.H. (1996), "Assessing Forecast Performance in a
Cointegrated System," Journal of Applied Econometrics, 11, 495-517.
Horvath, M.T.K. and Watson, M.W. (1995), “Testing for Cointegration When Some of
the Cointegrating Vectors are Known,” Econometric Theory, 11, 984-1014.
Lin, J.-L. and Tsay, R.S. (1996), “Cointegration Constraints and Forecasting: An
Empirical Examination,” Journal of Applied Econometrics, 11, 519-538.
Phillips, P.C.B. (1991), "Optimal Inference in Cointegrated Systems," Econometrica, 59,
283-306.
Pope, A.L. (1990), “Biases of Estimators in Multivariate Non-Gaussian Autoregressions,”
Journal of Time Series Analysis, 11, 249-258.
Reinsel, G.C. and Ahn, S.K. (1992), “Vector Autoregressive Models with Unit Roots and
Reduced Rank Structure: Estimation, Likelihood Ratio Test, and Forecasting,”
Journal of Time Series Analysis, 13, 353-375.
Stine, R.A. and Shaman, P. (1989), “A Fixed Point Characterization for Bias of
Autoregressive Estimators,” Annals of Statistics, 17, 1275-1284.
Stock, J.H. (1995), “Point Forecasts and Prediction Intervals for Long Horizon Forecasts,”
Manuscript, J.F.K. School of Government, Harvard University.
Stock, J.H. and Watson, M.W. (1988), "Testing for Common Trends," Journal of the
American Statistical Association, 83, 1097-1107.
Watson, M.W. (1994), “Vector Autoregressions and Cointegration,” in R.F. Engle and D.
McFadden (eds.), Handbook of Econometrics, Vol. IV, Chapter 47. Amsterdam:
North-Holland.
Wickens, M.R. (1996), “Interpreting Cointegrating Vectors and Common Stochastic
Trends,” Journal of Econometrics, 74, 255-271.
Zivot, E. (1996), “The Power of Single Equation Tests for Cointegration when the
Cointegrating Vector is Prespecified,” Manuscript, Department of Economics,
University of Washington.