This document discusses vector autoregressive (VAR) and vector error correction (VECM) models. It provides information on when to use a restricted VAR (VECM) model versus an unrestricted VAR model based on the results of cointegration tests. If trace statistics are greater than critical values and probability values are less than 0.05, this indicates cointegration between the variables and a restricted VAR (VECM) should be used. The document also discusses estimating and interpreting VECM models, including examining impulse response functions and variance decompositions.
ders 7.2 VECM 1.pptx
1. Assoc Prof Dr Ergin Akalpler
VECM - Restricted VAR Model
Impulse Response and Variance Decomposition
2. VAR Model
A vector autoregressive (VAR) model comprises multiple time series and is a useful tool for forecasting. It can be considered the multivariate extension of the autoregressive (AR) part of an ARIMA model.
3. VAR Model
A VAR model involves multiple endogenous variables and therefore has more than one equation.
Each equation uses lags of all the variables as explanatory variables, and possibly a deterministic trend.
VAR models are usually applied to stationary series, so the original series are first-differenced; because of that, there is always a possibility of losing information about the long-run relationship among integrated series.
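The slides do not name any software, but the idea can be illustrated with a minimal sketch in Python's statsmodels; the file name "macro_data.csv" and the data it holds are hypothetical placeholders, not the author's dataset.

```python
# Minimal sketch: a VAR estimated on first-differenced series with statsmodels.
import pandas as pd
from statsmodels.tsa.api import VAR

# Hypothetical level data (e.g. columns RGDP, CPI, NIR, REER), indexed by date
df = pd.read_csv("macro_data.csv", index_col=0, parse_dates=True)

d_df = df.diff().dropna()                 # first differences to obtain stationary series

model = VAR(d_df)
results = model.fit(maxlags=8, ic="aic")  # lag length chosen by AIC
print(results.summary())
```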
4. VAR model
Differencing the series to make them stationary is
one solution, but at the cost of ignoring possibly
important (“long run”) relationships between the
levels. A better solution is to test whether the levels
regressions are trustworthy ("cointegration").
5. VAR Model
The usual approach is to use Johansen’s method for
testing whether or not cointegration exists. If the answer is
“yes” then a vector error correction model (VECM),
which combines levels and differences, can be estimated
instead of a VAR in levels. So, we shall check whether a
VECM is able to outperform a VAR for the series we have.
6. How to determine Restricted VAR –VECM- or
Unrestricted VAR
If all variables become stationary after first differencing, they are integrated of the same order, I(1).
Null hypothesis (ADF test): the variable has a unit root (is non-stationary)
Alternative hypothesis: the variable is stationary
If the variables are cointegrated and have a long-run association, we run a restricted VAR (that is, a VECM).
But if the variables are not cointegrated, we cannot run a VECM; rather, we run an unrestricted VAR.
7. What is the difference between VAR and
VECM model?
Through a VECM we can interpret both long-run and short-run relationships. We need to determine the number of cointegrating relationships. The advantage of a VECM over a VAR is that the VAR implied by the VECM representation has more efficient coefficient estimates.
When to use VAR/VECM?
You should use a VECM if 1) your variables are nonstationary and 2) you find a common stochastic trend between the variables (cointegration).
8. UNRESTRICTED VAR
After performing the cointegration test, the results show the following:
Trace statistic < trace critical value (TCV)
Null: there is no cointegration
Alt: there is cointegration
When the trace statistic is less than the TCV, we cannot reject the null hypothesis; there is no cointegration.
Probability values are greater than 0.05.
9. RESTRICTED VAR -VECM
After performing the cointegration test, the results show the following:
Trace statistic > trace critical value (TCV)
Null: there is no cointegration
Alt: there is cointegration
When the trace statistic is greater than the TCV, we can reject the null hypothesis; there is cointegration.
Probability values are less than 0.05.
10. According to Engle and Granger (1987), two I(1) series are said to be cointegrated if there exists some linear combination of the two which produces a stationary series [I(0)].
Non-stationary series that are cointegrated may diverge in the short run, but they must be linked together in the long run.
Moreover, Engle and Granger (1987) proved that if a set of series is cointegrated, there always exists a generating mechanism, called an "error correction model", which forces the variables to move closely together over time, while allowing a wide range of short-run dynamics.
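As a rough illustration of the Engle-Granger idea, the sketch below builds two artificial I(1) series that share a common random-walk trend and tests their residual-based cointegration with statsmodels; the series names and numbers are purely illustrative.

```python
# Engle-Granger style cointegration check on simulated data.
import numpy as np
from statsmodels.tsa.stattools import coint

rng = np.random.default_rng(0)
common_trend = np.cumsum(rng.normal(size=500))       # shared stochastic trend
income = common_trend + rng.normal(size=500)
consumption = 0.8 * common_trend + rng.normal(size=500)

# coint() regresses one series on the other, then tests the residuals for a unit root
t_stat, p_value, crit_values = coint(consumption, income)
print(f"EG t-statistic = {t_stat:.2f}, p-value = {p_value:.3f}")
# A p-value below 0.05 suggests the two series are cointegrated
```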
11. Introduction
The basics of the vector autoregressive model.
We lay the foundation for getting started with this crucial multivariate time
series model and cover the important details including:
•What a VAR model is.
•Who uses VAR models.
•Basic types of VAR models.
•How to specify a VAR model.
•Estimation and forecasting with VAR models.
12. To determine whether a VAR model in levels is possible, we transform the VAR model in levels into a VECM in differences (with error correction terms), to which the Johansen test for cointegration is applied.
In other words, we take the following four steps (a code sketch of this decision rule follows the list):
1. construct a VECM in differences (with error correction terms)
2. apply the Johansen test to the VECM in differences to find the number of cointegrating relationships, r ("None" or "At most r" in the test output)
3. if r = 0, estimate a VAR in differences
4. if r > 0, estimate a VECM in differences or a VAR in levels (at least one cointegrating equation exists)
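A minimal sketch of this rank-based decision rule, assuming Python's statsmodels; the file name, lag length, and deterministic terms are assumptions for illustration only.

```python
# Decide between an unrestricted VAR (in differences) and a restricted VAR (VECM)
# based on the Johansen cointegration rank.
import pandas as pd
from statsmodels.tsa.api import VAR
from statsmodels.tsa.vector_ar.vecm import VECM, select_coint_rank

df = pd.read_csv("macro_data.csv", index_col=0, parse_dates=True)  # hypothetical level data

# Johansen trace test for the cointegration rank r
rank_test = select_coint_rank(df, det_order=0, k_ar_diff=2,
                              method="trace", signif=0.05)
print(rank_test.summary())

if rank_test.rank == 0:
    # no cointegration: unrestricted VAR in first differences
    results = VAR(df.diff().dropna()).fit(maxlags=2)
else:
    # at least one cointegrating relation: restricted VAR (VECM)
    results = VECM(df, k_ar_diff=2, coint_rank=rank_test.rank,
                   deterministic="ci").fit()
print(results.summary())
```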
13. The form of the model depends on the number of cointegrating relationships in the following way.
None, r = 0 (no cointegration)
In the case of no cointegration, since all variables are non-stationary in levels, the VECM reduces to a VAR model in growth (differenced) variables.
At most 1, r = 1 (one cointegrating vector)
At most 2, r = 2 (two cointegrating vectors)
At most 3, r = 3 (full cointegration)
In the case of full cointegration, since all variables are stationary, the VECM reduces to a VAR model in level variables.
14. Johansen Test for Cointegration
The cointegration rank equals the number of non-zero eigenvalues of the long-run impact matrix, and the Johansen test provides inference on this number. There are two tests for the number of cointegrating relationships.
The first test is the trace test, whose statistic is \lambda_{trace}(r) = -T \sum_{i=r+1}^{k} \ln(1 - \hat{\lambda}_i), with hypotheses
H0 : number of cointegrating vectors ≤ r
H1 : number of cointegrating vectors ≥ r + 1
The second test is the maximum eigenvalue test, whose statistic is \lambda_{max}(r) = -T \ln(1 - \hat{\lambda}_{r+1}), with hypotheses
H0 : there are r cointegrating vectors
H1 : there are r + 1 cointegrating vectors
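A sketch of how the trace statistics can be compared with their critical values using statsmodels' coint_johansen; the data file and lag length are hypothetical.

```python
# Johansen cointegration test: trace statistics vs. 5% critical values.
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import coint_johansen

df = pd.read_csv("macro_data.csv", index_col=0, parse_dates=True)  # hypothetical level data

jres = coint_johansen(df, det_order=0, k_ar_diff=2)

# lr1 holds the trace statistics; cvt holds (90%, 95%, 99%) critical values
for r, (stat, cv) in enumerate(zip(jres.lr1, jres.cvt[:, 1])):
    decision = "reject" if stat > cv else "do not reject"
    print(f"H0: rank <= {r}: trace = {stat:.2f}, 5% CV = {cv:.2f} -> {decision}")

# The maximum eigenvalue test works the same way via jres.lr2 and jres.cvm
```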
15. RESTRICTED VAR (VECM)
Assess the selection of the optimal lag length in a VAR
Evaluate the use of impulse response functions with a VAR
Assess the importance of variations on the standard VAR
Critically appraise the use of VARs with financial models.
Assess the uses of VECMs
16. Let's start with the RESTRICTED VAR (VECM); recall the guideline:
After performing the cointegration test, the results show the following:
Trace statistic > trace critical value (TCV)
Null: there is no cointegration
Alt: there is cointegration
When the trace statistic is greater than the TCV, we can reject the null hypothesis; there is cointegration.
Probability values are less than 0.05.
17. How to Estimate Multivariate Cointegration and VECMs
1) Test the variables for stationarity using the usual ADF tests.
2) If all the variables are I(1), include them in the cointegrating relationship.
3) Use the AIC or SIC to determine the number of lags in the cointegration test (the order of the VAR).
4) Use the trace and maximal eigenvalue tests to determine the number of cointegrating vectors present.
5) When the trace statistic is greater than the TCV, we can reject the null hypothesis: there is at least one cointegrating equation, and our variables have a long-run association; in the long run they move together.
(A code sketch of steps 1-3 follows this list.)
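A minimal sketch of the ADF tests and lag-order selection, again assuming statsmodels and a hypothetical data file.

```python
# Steps 1 and 3: unit-root tests and VAR lag-order selection.
import pandas as pd
from statsmodels.tsa.stattools import adfuller
from statsmodels.tsa.vector_ar.vecm import select_order

df = pd.read_csv("macro_data.csv", index_col=0, parse_dates=True)  # hypothetical level data

# Step 1: ADF test on each level series (H0: the series has a unit root)
for col in df.columns:
    stat, pval = adfuller(df[col])[:2]
    print(f"{col}: ADF statistic = {stat:.2f}, p-value = {pval:.3f}")

# Step 3: choose the lag order of the underlying VAR by information criteria
lag_order = select_order(df, maxlags=8, deterministic="ci")
print(lag_order.summary())   # reports the AIC, BIC, FPE, and HQIC selections
```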
18. How to Estimate Multivariate Cointegration and VECMs, cont. 1
1) This implies we can run a restricted VAR (VECM), because the trace and maximum eigenvalue statistics are greater than their critical values and there is at least one cointegrating equation.
2) We reject the null hypothesis, and the probability values are also less than 0.05.
3) (In the opposite case, we run an unrestricted VAR.)
4) We estimate the vector error correction model and then write out the equations of our model.
5) From the equations we derive the residuals of the cointegrating equation for the dependent variables.
6) We use the least squares method to find the long-run effects of the variables.
(A code sketch of the VECM estimation follows this list.)
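A sketch of the VECM estimation with statsmodels; the cointegration rank of 1, the lag length, and the data are assumptions for illustration.

```python
# Estimate the VECM and inspect the long-run vector and adjustment coefficients.
import pandas as pd
from statsmodels.tsa.vector_ar.vecm import VECM

df = pd.read_csv("macro_data.csv", index_col=0, parse_dates=True)  # hypothetical level data

# coint_rank=1 assumes one cointegrating equation was found in the Johansen test
vecm_res = VECM(df, k_ar_diff=2, coint_rank=1, deterministic="ci").fit()
print(vecm_res.summary())

# alpha: loading (speed-of-adjustment) coefficients on the error correction term
# beta:  the cointegrating (long-run) vector
print("alpha:\n", vecm_res.alpha)
print("beta:\n", vecm_res.beta)
```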
19. How to Estimate Multivariate Cointegration and VECMs, cont. 2
1) The first coefficient (on the error correction term) indicates the speed of adjustment towards, or away from, the long-run equilibrium.
2) A negative coefficient sign is good: it pulls the whole system back towards equilibrium. The p-value must be less than 0.05 for significance.
3) If the absolute t-value is greater than 2, the coefficient is significant.
4) We then perform a Wald test for short-run causality.
5) From the OLS table we go to the coefficient diagnostics to perform the Wald test.
6) We use the following null hypothesis for the Wald test:
7) C(3) = C(4) = 0
8) P-values must be less than 0.05 for significance.
20. What is the Wald test?
The Wald statistic captures the short-run causality between variables, while the statistics on the lagged error correction terms capture the strength of the long-run causality effect.
Short-run Granger causalities are determined by the Wald statistic for the joint significance of the lagged coefficients of the series.
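The C(3)=C(4)=0 notation above is EViews-style; an equivalent joint restriction test can be sketched in statsmodels, where the file "ecm_data.csv", the variable names, and the lag structure are all hypothetical.

```python
# Wald-type (F) test of a joint zero restriction on short-run coefficients in an ECM.
import pandas as pd
import statsmodels.formula.api as smf

# ecm_data is assumed to contain the differenced series, their lags, and the
# lagged error-correction term (ect_lag) from the cointegrating equation
ecm_data = pd.read_csv("ecm_data.csv")

ols_res = smf.ols(
    "d_cpi ~ d_cpi_lag1 + d_nir_lag1 + d_nir_lag2 + ect_lag", data=ecm_data
).fit()

# Joint null (analogous to C(3)=C(4)=0): the lagged NIR terms have no
# short-run effect on CPI
restriction_test = ols_res.f_test("d_nir_lag1 = 0, d_nir_lag2 = 0")
print(restriction_test)   # p-value < 0.05 would indicate short-run causality
```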
21. Vector error correction models (VECMs) are the basic VAR with an error correction term incorporated into the model; as with bivariate cointegration, multivariate cointegration implies an appropriate VECM can be formed.
The reason for the error correction term is the same as with the standard error correction model: it measures any movement away from the long-run equilibrium.
VECMs are often used alongside a multivariate test for cointegration, such as the Johansen test: having found evidence of cointegration among some I(1) variables, we can then assess the short-run dynamics and potential Granger causality with a VECM.
22. The finding that many macro time series may contain a unit root has spurred
the development of the theory of non-stationary time series analysis.
Engle and Granger (1987) pointed out that a linear combination of two or more
non-stationary series may be stationary.
If such a stationary, or I(0), linear combination exists, the non-stationary (with
a unit root), time series are said to be cointegrated.
The stationary linear combination is called the cointegrating equation and may
be interpreted as a long-run equilibrium relationship between the variables.
For example, consumption and income are likely to be cointegrated. If they
were not, then in the long-run consumption might drift above or below
income, so that consumers were irrationally spending or piling up savings.
23. A vector error correction (VEC) model is a restricted VAR that has
cointegration restrictions built into the specification, so that it is
designed for use with nonstationary series that are known to be
cointegrated.
The VEC specification restricts the long-run behavior of the
endogenous variables to converge to their cointegrating
relationships while allowing a wide range of short-run dynamics.
The cointegration term is known as the error correction term
since the deviation from long-run equilibrium is corrected
gradually through a series of partial short-run adjustments.
24. VECMs
Vector Error Correction Models (VECM) are the basic VAR,
with an error correction term incorporated into the model.
The reason for the error correction term is the same as with
the standard error correction model, it measures any
movement away from the long-run equilibrium.
These are often used as part of a multivariate test for
cointegration, such as the Johansen ML (maximum likelihood) test.
25. VECMs
However, there are a number of differing approaches to modelling VECMs, for instance how many lags should appear on the error correction term; usually just one, regardless of the order of the VAR.
The error correction term also becomes more difficult to interpret, as it is not obvious which variable it affects following a shock.
26. VECM
The most basic VECM is the following first-
order VECM:
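The equation itself does not appear in this copy of the slides. A standard first-order bivariate VECM, which is presumably what the slide shows (written here for two generic series y and x), is:

```latex
\Delta y_t = \alpha_1 \,(y_{t-1} - \beta x_{t-1}) + \varepsilon_{1t}
\Delta x_t = \alpha_2 \,(y_{t-1} - \beta x_{t-1}) + \varepsilon_{2t}
```

Here (y_{t-1} − βx_{t-1}) is the lagged error correction term, and α1 and α2 are the speed-of-adjustment coefficients discussed above.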
27. VECM
First we test if the variables are stationary, i.e. I(0).
If not, they are assumed to have a unit root and are
I(1).
If a set of variables are all I(1), they should not be
estimated using OLS as there may be one or more
long-run equilibrium relationships,
i.e. cointegration. We can estimate how many cointegrating vectors exist between the variables using the Johansen technique.
28. VECM
If a set of variables is found to have one or more
cointegration vectors, a suitable estimation technique is a
VECM (Vector Error Correction Model) that adjusts for both
short-term changes in variables and deviations from
equilibrium.
29. Granger causality
Granger causality tests whether a variable is “helpful”
for forecasting the behavior of another variable.
It’s important to note that Granger causality only allows
us to make inferences about forecasting capabilities --
not about true causality.
30. Granger-causality statistics
As we previously discussed, Granger-causality statistics test whether
one variable is statistically significant when predicting another variable.
The Granger-causality statistics are F-statistics that test if the
coefficients of all lags of a variable are jointly equal to zero in the
equation for another variable.
As the p-value of the F-statistic decreases, the evidence that one variable is relevant for predicting another variable increases.
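A sketch of a pairwise Granger-causality F-test with statsmodels; the column names, differencing, and lag length are assumptions for illustration.

```python
# Pairwise Granger causality: do lags of NIR help predict CPI?
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

df = pd.read_csv("macro_data.csv", index_col=0, parse_dates=True)  # hypothetical data

# grangercausalitytests checks whether lags of the SECOND column predict the FIRST
pair = df[["CPI", "NIR"]].diff().dropna()
gc_res = grangercausalitytests(pair, maxlag=4)
# For each lag length, a small F-test p-value (< 0.05) suggests NIR Granger-causes CPI
```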
31. Granger causality
The Granger causality test is used when the variables are cointegrated.
Engle and Granger (1987) warned that if the variables are stationary only after first differencing and cointegration exists, applying a plain VAR (without the error correction term) to the analysis will be spurious.
The outcome of the stationarity test using the ADF test revealed that our variables are I(1).
32. For example, in the Granger-causality test of X on Y, if the p-
value is 0.02
we would say that X does help predict Y at the 5% level.
However, if the p-value is 0.3
we would say that there is no evidence that X helps predict Y.
33. Impulse Response and Variance
decomposition
Impulse responses are the relevant tools for interpreting the relationships between the variables.
Variance decompositions examine how important each of
the shocks is as a component of the overall
(unpredictable) variance of each of the variables over
time.
35. The impulse response function traces the dynamic path of the variables in the system in response to shocks to other variables in the system. This is done by (see the code sketch after this list):
• Estimating the VAR model.
• Implementing a one-unit increase in the error of one of the variables in the model, while holding the other errors equal to zero.
• Predicting the impacts h periods ahead of the error shock.
• Plotting the forecasted impacts, along with the one-standard-deviation confidence intervals.
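A minimal sketch of these steps with statsmodels; the data, differencing, and lag length are illustrative assumptions.

```python
# Impulse response analysis for an estimated VAR.
import pandas as pd
from statsmodels.tsa.api import VAR

df = pd.read_csv("macro_data.csv", index_col=0, parse_dates=True)  # hypothetical data
results = VAR(df.diff().dropna()).fit(maxlags=2)

irf = results.irf(10)      # impulse responses traced 10 periods ahead
irf.plot(orth=True)        # orthogonalized shocks, plotted with error bands
```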
36. The results show the impulse responses (IR) of the dependent variables. Only the IR function for NIR is illustrated in the table, and, as seen in the table, only NIR has a positive response to CPI; by contrast, all the other variables have a negative response to NIR.
In the impulse responses, positive values have a positive effect and negative values a negative effect on the dependent variable (here CPI).
Response of DCPI:
Period    RGDP        DCPI        DNIR        DREER
1 -3.870022 10.52160 0.000000 0.000000
2 4.350339 0.388418 0.635650 -3.964539
3 2.581088 -0.057747 1.343376 -0.210536
4 -1.406336 0.760648 0.709599 -0.485223
5 -1.189040 0.131412 0.477037 -0.098667
6 0.043845 -0.346002 0.243500 0.050212
7 0.401353 -0.000346 0.078936 0.053059
8 -0.003204 0.089603 0.006877 -0.037810
9 -0.052022 0.019648 -0.044851 -0.027014
10 -0.032278 -0.017211 0.007166 0.004444
Impulse response sample estimation and interpretation
37. Variance decomposition estimation and interpretation
The table illustrates the variance decomposition results for CPI.
RGDP and REER affect CPI more than NIR does.
Higher values indicate a larger contribution than smaller values.
Variance decomposition of DCPI:
Period    S.E.        RGDP        DCPI        DNIR        DREER
1 11.21076 11.91672 88.08328 0.000000 0.000000
2 12.68381 21.07330 68.90575 0.251152 9.769804
3 13.01512 23.94694 65.44426 1.303893 9.304905
4 13.14111 24.63526 64.53047 1.570594 9.263682
5 13.20444 25.21040 63.92289 1.686082 9.180623
6 13.21138 25.18501 63.92429 1.718280 9.172418
7 13.21782 25.25268 63.86205 1.720173 9.165098
8 13.21818 25.25131 63.86316 1.720107 9.165417
9 13.21840 25.25202 63.86125 1.721200 9.165528
10 13.21845 25.25241 63.86091 1.721216 9.165466
38. Forecast error decomposition separates the forecast error variance into
proportions attributed to each variable in the model.
Intuitively, this measure helps us judge how much of an impact one
variable has on another variable in the VAR model and how intertwined
our variables' dynamics are.
For example, if X is responsible for 85% of the forecast error variance of Y, it explains a large amount of the forecast variation in Y.
However, if X is only responsible for 20% of the forecast error variance
of Y, much of the forecast error variance of Y is left unexplained by X.
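A sketch of the forecast error variance decomposition (FEVD) with statsmodels, under the same illustrative assumptions as the earlier code blocks.

```python
# Forecast error variance decomposition for an estimated VAR.
import pandas as pd
from statsmodels.tsa.api import VAR

df = pd.read_csv("macro_data.csv", index_col=0, parse_dates=True)  # hypothetical data
results = VAR(df.diff().dropna()).fit(maxlags=2)

fevd = results.fevd(10)    # decomposition over a 10-period horizon
print(fevd.summary())      # share of each variable's forecast error variance by shock
fevd.plot()
```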
40. How can the possible structural shocks be identified?
Short-run restrictions?
Long-run restrictions?
Sign restrictions?
Available conventions: for example, the exchange rate.
An exchange rate shock (moving from a flexible regime to a peg) should increase crisis probability;
a capital account liberalization shock (moving from less to more free capital flows) should increase crisis probability.
What are their effects on output?