This chapter reviews the literature on macroeconomic modeling and forecasting. It discusses the development of structural models based on Keynesian theory from the 1930s to the 1970s, which were popularized by the Cowles Commission. These models included consumption, investment, income, and price equations. The chapter evaluates the forecasting performance of early large-scale models, finding that most forecast errors were reasonable out to eight quarters ahead. However, the models struggled during the economic turbulence of the 1970s, missing turning points. While structural models have conceptual ties to theory, atheoretical models may serve as an alternative when assessing large shocks, as economic cycles are not necessarily systematic.
The output gap, the difference between the actual and potential levels of output, is a critical factor for estimating the inflationary pressures in an economy. If the main target of a central bank is ensuring and maintaining price stability, estimating the output gap with minimum error is crucial for the efficiency of monetary policy. In this study, we estimated the output gap in Turkey for the 2002-2014 period by using four different methods. Two of these estimation methods are purely statistical (linear trend and Hodrick-Prescott (HP) filtering), while the others incorporate relations suggested by economic theory (a multivariate structural model and a structural vector autoregression (SVAR) model). By using empirical decision criteria common in the literature, we conclude that the SVAR model produces the most reliable output gap estimates for explaining inflationary pressures in Turkey. However, we also found that the Hodrick-Prescott filtering method is the second-best methodology in the output gap estimation process.
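The simplest of the four methods, the linear trend, can be sketched as follows. This is a hedged, illustrative example, not the study's actual implementation: potential output is an OLS time trend fitted to log output, and the output gap is the percentage deviation of actual output from that trend. The GDP figures are hypothetical index values, not Turkish data.

```python
import math

def linear_trend_gap(output):
    """Output gap (%) as deviation of log output from an OLS time trend."""
    y = [math.log(v) for v in output]
    n = len(y)
    t_mean = (n - 1) / 2
    y_mean = sum(y) / n
    # OLS slope and intercept of log output on a time index 0..n-1
    b = sum((t - t_mean) * (yi - y_mean) for t, yi in enumerate(y)) \
        / sum((t - t_mean) ** 2 for t in range(n))
    a = y_mean - b * t_mean
    # Gap: 100 * (log actual - log trend), roughly a percentage deviation
    return [100.0 * (yi - (a + b * t)) for t, yi in enumerate(y)]

gdp = [100.0, 103.0, 105.0, 109.0, 112.0]  # hypothetical index values
print([round(g, 2) for g in linear_trend_gap(gdp)])
```

Because OLS residuals sum to zero, the estimated gaps average out to zero over the sample, one reason purely statistical detrending can mask persistent over- or under-heating.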
Economic Indicators and Stock Market Performance: An Empirical Case of India (IAEME Publication)
This document summarizes the proceedings from the 2nd International Conference on Current Trends in Engineering and Management held in Mysore, India in July 2014. It examines the relationship between various economic indicators (GDP, inflation, exchange rates, etc.) and stock market performance in India from 1998-2014. Using correlation and regression analysis, it finds that stock market performance as measured by the BSE Sensex is positively correlated with GDP, gross domestic savings, and gross capital formation. The regression model shows that these economic indicators explain about 77% of the variability in stock market performance.
This document presents research analyzing net migration between Portuguese regions from 1996-2002 at the NUTS II level and 1991 and 2001 at the NUTS III level. Theoretical models of migration determinants are discussed, and empirical analysis is conducted using statistical data from the Portuguese National Institute of Statistics. Regression analysis finds that at the NUTS II level, real output growth positively impacts migration while unemployment and agricultural employment negatively impact it. At the NUTS III level, amenities like housing availability are also important determinants of migration.
This document summarizes a study examining the relationship between labor share and unemployment in major OECD countries from 1972 to 2008. It analyzes whether the relationship has changed in a way that could indicate weakened bargaining power for labor. The study uses panel data and statistical methods for non-stationary panels to estimate wage curves and dynamic equations modeling how labor share adjusts to unemployment. Preliminary results suggest labor share declines in most OECD countries cannot be fully explained by rising unemployment and likely reflect weakened bargaining power for labor unions. The nature of the relationship may also differ between countries with varying wage-setting institutions and bargaining coordination.
To analyze the factors affecting the price volatility of stocks, microeconomic and macroeconomic elements must be considered. This paper selects elements consistent with the daily data of stock prices to build GARCH family models. External variables such as global oil prices, the consumer price index, short-term interest rates and the exchange rate between the United States Dollar and the Euro are examined. The GARCH models are developed in order to analyze and forecast the stock prices of the companies in the DAX 30, Germany's most important stock exchange barometer. The volatility of the residual of the mean function is the key point in the GARCH approach. This financial application can be extended to analyze other specific shares or stock indexes in any stock market in the world. It is therefore necessary to understand the operating procedures of their pricing for risk management, profitability strategies, cost minimization and, in addition, to construct the optimal portfolio depending on the investor's preferences.
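The core of the GARCH family mentioned above is the conditional-variance recursion. The sketch below shows a GARCH(1,1) recursion with illustrative parameters (omega, alpha, beta are assumed values, not estimates from DAX data); in practice these would be fitted by maximum likelihood.

```python
def garch_variance(returns, omega=0.05, alpha=0.1, beta=0.85):
    """Conditional variance h_t = omega + alpha*r_{t-1}^2 + beta*h_{t-1}."""
    # Initialize at the unconditional variance omega / (1 - alpha - beta)
    h = [omega / (1.0 - alpha - beta)]
    for r in returns[:-1]:
        h.append(omega + alpha * r * r + beta * h[-1])
    return h

daily_returns = [0.4, -1.2, 0.3, 2.1, -0.5]  # hypothetical % returns
print(garch_variance(daily_returns))
```

A large squared return (the 2.1 above) pushes next-period variance up, and the beta term then lets that shock decay only gradually, which is the volatility-clustering behaviour GARCH is designed to capture.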
An Application of Tobit Regression on Socio Economic Indicators in Gujarat (ijtsrd)
The use of statistical measurement frameworks to study human behaviour in a social environment is known as social statistics. In this study, the researcher examined socio-economic indicators in Gujarat, such as education, health and employment, using Tobit regression as a statistical tool. Most of the sub-indicators were found to have a positive impact in the Tobit regression model. Dr. Mahesh Vaghela "An Application of Tobit Regression on Socio Economic Indicators in Gujarat" Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-5 | Issue-6, October 2021, URL: https://www.ijtsrd.com/papers/ijtsrd46309.pdf Paper URL: https://www.ijtsrd.com/mathemetics/statistics/46309/an-application-of-tobit-regression-on-socio-economic-indicators-in-gujarat/dr-mahesh-vaghela
This study addresses the connection between reorganization and unemployment in the labour market. Reorganization of regional labour markets measured by simultaneous gross migration flows lowers the unemployment rate, based on evidence from a panel of Finnish regions. However, reorganization is shown to be unrelated to long-term unemployment.
Predicting Stock Returns with Macroeconomic Indicators on BIST 100 (Zekeriya Bildik, CMA)
This document describes a study that aimed to predict stock returns on the BIST 100 index in Turkey using macroeconomic indicators. Regression analysis was used to analyze relationships between monthly changes in five BIST indexes (total, service, financial, industrial, technology) and eight macroeconomic variables over 74 months from 2012-2018. Final multiple regression models were developed for each index that maximized predictive power while controlling for multicollinearity between predictors. The models found several macroeconomic indicators had significant predictive relationships with subsequent changes in the BIST indexes.
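One common way to control for multicollinearity between predictors, as the study above describes, is the variance inflation factor (VIF). The sketch below is hypothetical, not the study's code: with exactly two predictors, the R-squared from regressing one on the other equals their squared Pearson correlation, so the VIF reduces to 1/(1-r^2). The variable names and values are invented for illustration.

```python
def pearson_r(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

def vif_pair(x, y):
    """Variance inflation factor for one of two candidate predictors."""
    r2 = pearson_r(x, y) ** 2
    return 1.0 / (1.0 - r2)

# Hypothetical monthly changes in two macro variables, e.g. CPI and an FX rate
cpi = [0.9, 1.1, 0.7, 1.3, 0.8, 1.2]
fx = [0.50, 0.60, 0.35, 0.70, 0.50, 0.65]
print(round(vif_pair(cpi, fx), 2))  # VIF well above the common threshold of 10
```

A VIF above roughly 10 is a conventional signal to drop or combine one of the predictors before fitting the final multiple regression.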
This document discusses and compares two approaches to measuring the contribution of investment-specific technological progress to economic growth: quantitative theory and traditional growth accounting. Quantitative theory uses an explicit structural model to define technological progress impulses, propagation mechanisms, and functional forms so that model predictions can be matched to data; it finds that investment-specific technological progress accounts for 58% of postwar US growth. Traditional growth accounting takes a less structural approach without explicit functional forms or parameters, but arrives at very different results regarding investment-specific technological progress. The document argues that quantitative theory provides the better measure, since it has a clear economic interpretation of the growth contribution from investment-specific technological progress alone, unlike traditional growth accounting.
Analysis of Demand and Supply Commodities Originally A Region (Case Study; Pr...) (QUESTJOURNAL)
This document analyzes the demand and supply structures of commodities in South Sulawesi province, Indonesia using input-output tables. It finds that 14 commodities have significant value in terms of both demand and supply, including rice, cocoa, fish, nickel, fertilizers, chemicals, oil products, cement, machinery, transportation equipment, buildings, trade services, and public services. Rice, nickel, and trade services are identified as essential commodities for the regional economy. The analysis provides information on commodity demand and supply structures that can be used for development planning in South Sulawesi.
The nature of co-movement between total output and employment during the 1990s indicates that the relationship between employment growth and economic activity has been peculiar in Finland. This has been reflected, for example, in the developments of aggregate labour productivity. In particular, the years from 1992 to 1994 were exceptional. During that period productivity growth was very rapid, and, what is important, the trend of aggregate labour productivity shifted upwards. By only analysing the relationship between total output and employment it is impossible to say what happened during the period between 1992 and 1994. In this paper the relationship is analysed by utilizing industry-level data. The analysis shows that the rapid growth in aggregate productivity and the upward shift in the productivity trend mainly reflect similar developments in manufacturing, particularly in the metal industry. Even though the investigation is based on the use of industry-level data, it is still aggregative, which makes the interpretation of the results less clearcut. The existing studies which are based on the use of micro-level (e.g. plant-level) data support the interpretation which emphasizes the role of business restructuring and labour reallocation within manufacturing as the causes of rapid productivity growth and the upward shift in the trend productivity. The analysis is based on the estimation of simple structural VAR models.
The subject of this paper is an analysis of the electricity market in Poland. The period of 2008–2015 came under close scrutiny, whereby emphasis was laid on the trends in electricity generation and demand, while taking into account the country's economic development. In addition, the text mentions the forecasted demand for electricity in 2030, and electricity prices. As regards electricity prices, both qualitative and quantitative forecasts have been presented. In the latter case, the results of the author's own forecast have been presented; these were obtained with the aid of selected methods applied for the analysis of the dynamics of economic phenomena (the exponential and linear trend models). In order to make the research problem more specific, the text addresses the following research questions: (1) Is it possible to point out any special characteristics of the structure and operation of the electricity market in Poland? (2) Is it possible to point out a characteristic trend in the changes in demand for electricity in Poland? (3) Is it possible to point out a characteristic trend in the changes in electricity prices in Poland?
1. This document appears to be an assignment sheet for a high school economics class covering chapters 1-3 of the textbook. It includes terms and short answer questions from each chapter.
2. The assignment requires students to define key economic terms and concepts. It also asks students to provide examples, explain models and graphs, and describe economic relationships.
3. The assignment is due on February 12, 2010 and no late work will be accepted. It covers topics like scarcity, opportunity costs, markets, demand and supply, and production possibilities.
In the paper, structural change in the Finnish manufacturing industries is studied by means of the theory of the aggregation of production functions and longitudinal plant-level data for the period from 1980 until 2005. The nature of structural change in twelve industries is characterised by examination of the invariance of the aggregate production functions over time. Aggregate production functions need not be estimated because, according to the theory of the aggregation of production functions, the invariance can be analysed by the investigation of the stability of the capacity density functions which describe the distribution of value added in the industries. Even though the shapes of aggregate production functions alter over time in most industries, there are differences in timing and in the degree of turbulence across industries. The analysis confirms the result obtained earlier that in some industries, for example in the paper industry, the late 1980s marked the beginning of a period of relatively strong structural change. The food industry and the manufacture of communications equipment are examples of industries in which the 1990s was a period of turbulence.
This document discusses time series analysis and its key components. It begins by defining a time series as a sequence of data points measured over successive time periods. The four main components of a time series are identified as: 1) Trend - the long-term pattern of increase or decrease, 2) Seasonal variations - repeating patterns over 12 months, 3) Cyclical variations - fluctuations lasting more than a year, and 4) Irregular variations - unpredictable fluctuations. Two common methods for measuring trends are introduced as the moving average method and least squares method. Formulas and examples are provided for calculating trend values using these techniques.
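The two trend-measurement techniques named above can be sketched briefly. This is a minimal illustration with hypothetical annual sales figures: the moving average method replaces each observation with the mean of k surrounding periods, and the least squares method fits a straight line y = a + b*t to the series.

```python
def moving_average_trend(series, k=3):
    """Centred k-period moving average; shorter than the input by k-1."""
    half = k // 2
    return [sum(series[i - half:i + half + 1]) / k
            for i in range(half, len(series) - half)]

def least_squares_trend(series):
    """Fitted values of an OLS straight line y = a + b*t, t = 0..n-1."""
    n = len(series)
    t_mean = (n - 1) / 2
    y_mean = sum(series) / n
    b = sum((t - t_mean) * (y - y_mean) for t, y in enumerate(series)) \
        / sum((t - t_mean) ** 2 for t in range(n))
    a = y_mean - b * t_mean
    return [a + b * t for t in range(n)]

sales = [21, 22, 23, 25, 24, 22, 25, 26, 27, 26]  # hypothetical annual data
print(moving_average_trend(sales, k=3))
print([round(v, 2) for v in least_squares_trend(sales)])
```

The moving average smooths out irregular variation but loses observations at both ends, while the least squares line uses every observation and can be extrapolated, which is why the two methods are usually taught together.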
Statistics is the science of collecting, analyzing, and interpreting numerical data. It has evolved from early uses by governments to understand populations for taxation and military purposes. Modern statistics developed in the 18th-19th centuries and saw rapid growth in the 20th century with advances in computing. Statistics has two main branches: descriptive statistics, which involves data presentation, and inferential statistics, which uses data analysis to make estimates and test hypotheses. Statistics is widely used across many fields including business, economics, mathematics, and banking to facilitate decision making.
The document outlines the basic framework of the theory of economic policy, including the Tinbergen-Theil approach. It discusses Tinbergen's fixed targets approach and Theil's flexible target approach. It also mentions rational expectations and the Lucas critique, as well as the policy game approach. The document provides definitions and classifications that are important to the theory of economic policy, such as exogenous and endogenous variables, target variables, and irrelevant variables. It displays the basic framework using a scheme and discusses the model of the economic system and preferences of the policy-maker.
Statistics as a subject (field of study):
Statistics is defined as the science of collecting, organizing, presenting, analyzing and interpreting numerical data to make decisions on the basis of such analysis. (Singular sense)
Statistics as numerical data:
Statistics is defined as aggregates of numerically expressed facts (figures) collected in a systematic manner for a predetermined purpose. (Plural sense) In this course, we shall be mainly concerned with statistics as a subject, that is, as a field of study.
The Effects of European Regional Policy - An Empirical Evaluation of Objectiv... (Christoph Schulze)
The European Union provides funds to disadvantaged regions to promote economic growth and convergence (in terms of per capita income) among regions within Europe.
In this study, I apply Propensity Score Matching to NUTS 3 data for the operational period of 2007–2013 to evaluate European structural policy. I find that results for Objective 1 policy are not robust to changes within the control group, leading to both positive and negative estimates of structural policy effects. Findings from the evaluation of Objective 2 policy suggest success in terms of fighting unemployment and long-term unemployment. Programs aiming at reducing youth unemployment, in turn, did not succeed; in fact, treated regions showed significantly higher rates of youth unemployment.
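The matching step behind such an evaluation can be sketched as one-to-one nearest-neighbour matching on propensity scores. This is a hedged illustration, not the study's procedure: the scores below are invented, whereas in practice they would come from a logit model of treatment status (e.g. receiving Objective 1 funds) on regional covariates.

```python
def match_nearest(treated, controls):
    """Pair each treated unit's score with the closest control score."""
    pairs = []
    available = list(controls)
    for t in treated:
        c = min(available, key=lambda s: abs(s - t))
        pairs.append((t, c))
        available.remove(c)  # matching without replacement
    return pairs

# Hypothetical propensity scores for treated and control regions
treated_scores = [0.71, 0.55, 0.63]
control_scores = [0.40, 0.57, 0.69, 0.74, 0.50]
print(match_nearest(treated_scores, control_scores))
```

After matching, outcome differences (e.g. unemployment rates) between each treated region and its matched control are averaged to estimate the treatment effect; the sensitivity to the control group that the study reports corresponds to how these pairs change when the pool of controls changes.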
This document discusses limitations and applications of statistics. It begins by covering limitations of statistics, such as it only dealing with quantitative data and groups/aggregates, and possible errors in statistical analysis. It then covers many fields that statistics can be applied to, such as actuarial science, biostatistics, econometrics, environmental statistics, epidemiology, and others. It concludes with sample multiple choice questions related to limitations and applications of statistics.
Statistics is the study of collecting, organizing, analyzing, and presenting data. It has a long history dating back to 1749. Statistical activities often use probability models and require probability theory. Key concepts in statistics like experimental design and statistical inference have impacted many fields. Statistics is used in many areas including business, education, psychology, health, engineering, and more. Descriptive statistics describes data while inferential statistics makes conclusions about populations from samples.
The document provides an overview of statistics as an academic subject. It discusses the origin and evolution of the term "statistics", from its initial use in the political sciences to refer to information about states, to its modern definition as a field of academic study involving the collection, organization, analysis, and interpretation of data. It also lists the learning objectives and outcomes of the session, which are to provide a broad overview of statistics and discuss techniques for collecting data, distinguishing between primary and secondary sources. Finally, it begins discussing the first topic of the session on the origin of statistics and the meaning and definition of the term.
The aim of the article is to analyse key labour productivity indicators of manufacturing, i.e. working efficiency, in the European Union (EU), their theoretical bases, and the regularities of their changes. We use regression analysis. Knowledge of the regularities of labour productivity changes allows predicting future changes and making optimal business decisions. The basis is gross domestic product (GDP) analysis. We analyse labour productivity by turnover and gross value added per person employed for manufacturing in total and partly by countries, as well as GDP per capita. Taking this publication and the previous works of the authors as a basis, conclusions and suggestions are drawn.
This chapter introduces basic concepts for handling economic data. It discusses four key areas: (1) common types of data like time series, cross-sectional, and panel data; (2) sources of economic data; (3) graphs for presenting data; and (4) descriptive statistics for summarizing data. It also covers important topics like qualitative vs. quantitative data, data transformations between levels and growth rates, and the use of price indices.
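The transformation between levels and growth rates mentioned above can be sketched in a few lines. The GDP figures are hypothetical quarterly values used only to illustrate the arithmetic.

```python
def growth_rates(levels):
    """Percentage change from each period to the next."""
    return [100.0 * (levels[i] / levels[i - 1] - 1.0)
            for i in range(1, len(levels))]

gdp = [100.0, 102.0, 104.04, 103.0]  # hypothetical quarterly levels
print(growth_rates(gdp))
```

Note that the transformed series is one observation shorter than the original, a detail that matters when aligning growth-rate data with other time series.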
The paper proposes two econometric models of inflation for Azerbaijan: one eclectic, based on monthly data, and another based on quarterly data that takes into account disequilibrium in the money market. The inflation regression based on monthly data showed that consumer price dynamics are explained by money growth (the more money, the higher the inflation), exchange rate behaviour (appreciation drives disinflation), commodity price dynamics ("imported" inflation) and administrative changes in regulated prices. For the quarterly model, a nominal money demand equation (with inflation, real non-oil GDP and the nominal interest rate on foreign currency deposits as predictors) and a money supply equation were estimated, and the error-correction mechanism from the money demand equation was included in the inflation equation. It is shown that disequilibrium in the money market (supply higher than demand) drives inflation together with money supply growth, nominal exchange rate depreciation and administrative changes in prices. No cost-push variables appeared to be significant in this equation specification. Both models give similar inflation projections, but sudden changes in money demand (2012) lead to significant differences between the projections. It is shown that money is the most important inflation determinant, explaining up to 97.8% of CPI growth between 2012 and 2015, and that in order to keep inflation under control the Central Bank of Azerbaijan should link money supply to real non-oil GDP growth.
Authored by: Alexander Chubrik, Przemyslaw Wozniak, Gulnar Hajiyeva
Published in 2012
The document provides an introduction to statistics, discussing the meaning, history, and applications of statistics. It defines key statistical concepts such as population and sample, descriptive and inferential statistics. It also discusses the different types of variables and levels of measurement. The document traces the history of statistics from ancient times to the present day, highlighting important contributors to the field. It provides examples of how statistics is used in different domains like education, business, research, and government.
Demand Forecasting by Time Series Analysis (Sunny Gandhi)
Demand is a buyer's willingness and ability to pay for a product or service. Demand forecasting estimates the quantity of a product that consumers will purchase. It is important for resource distribution, production planning, pricing decisions, and reducing business risk. Demand forecasting can be done at the micro, industry, or macro level. Common forecasting methods include time series analysis of historical sales data, market testing, and qualitative techniques like educated guesses. Accurate, plausible, simple, and durable demand forecasts are ideal.
This paper provides a review of the empirical macroeconomic model (EMMA) built for forecasting purposes at the Finnish Labour Institute for Economic Research. The model is quite small, consisting of 71 endogenous and 70 exogenous variables. The number of behavioural equations is 15. The basis of the model is Keynesian, although the model has some novel properties. They are the treatment of the supply side and prices that follow the routes of the neoclassical synthesis. The parameters of the model are estimated from quarterly data that cover the years 1990–2005. The model also contains a Kalman-filtered variable to control the deep recession in Finland at the beginning of the ’90s. This special feature brings the model closer to the new calibrated models.
An Examination into the Predictive Content of the Composite Index of Leading ... (Sean Delehunt)
This paper examines the predictive power of components of the Composite Index of Leading Indicators (CLI) with respect to key macroeconomic variables like GDP. It reviews previous literature on leading economic indicators and their importance in predicting business cycles. The paper will analyze data on the CLI components and macroeconomic variables from 1959 to 2004 to determine which components provide overall and marginal predictive power for GDP. It aims to explore where the CLI derives its ability to predict economic activity.
This document discusses time series analysis and its key components. It begins by defining a time series as a sequence of data points measured over successive time periods. The four main components of a time series are identified as: 1) Trend - the long-term pattern of increase or decrease, 2) Seasonal variations - repeating patterns over 12 months, 3) Cyclical variations - fluctuations lasting more than a year, and 4) Irregular variations - unpredictable fluctuations. Two common methods for measuring trends are introduced as the moving average method and least squares method. Formulas and examples are provided for calculating trend values using these techniques.
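The two trend-measurement methods named above can be sketched in a few lines. This is a minimal illustration only: the series values are invented for the example, and `window` is an assumed smoothing span.

```python
import numpy as np

# Hypothetical quarterly observations (numbers invented for illustration).
series = np.array([112.0, 118.0, 132.0, 129.0, 121.0, 135.0,
                   148.0, 148.0, 136.0, 119.0, 104.0, 118.0])

# Moving average method: each trend value is the mean of `window`
# consecutive observations, so a few values are lost at the ends.
window = 4
weights = np.ones(window) / window
ma_trend = np.convolve(series, weights, mode="valid")

# Least squares method: fit the linear trend y_t = a + b*t by OLS.
t = np.arange(len(series))
b, a = np.polyfit(t, series, deg=1)
ls_trend = a + b * t

print(len(ma_trend))  # 12 observations give 12 - 4 + 1 = 9 trend values
```

The moving average smooths out short-run fluctuations at the cost of losing observations at both ends, while the least squares line imposes a single long-run direction on the whole series.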
Statistics is the science of collecting, analyzing, and interpreting numerical data. It has evolved from early uses by governments to understand populations for taxation and military purposes. Modern statistics developed in the 18th-19th centuries and saw rapid growth in the 20th century with advances in computing. Statistics has two main branches - descriptive statistics which involves data presentation and inference statistics which uses data analysis to make estimates and test hypotheses. Statistics is widely used across many fields including business, economics, mathematics, and banking to facilitate decision making.
The document outlines the basic framework of the theory of economic policy, including the Tinbergen-Theil approach. It discusses Tinbergen's fixed targets approach and Theil's flexible target approach. It also mentions rational expectations and the Lucas critique, as well as the policy game approach. The document provides definitions and classifications that are important to the theory of economic policy, such as exogenous and endogenous variables, target variables, and irrelevant variables. It displays the basic framework using a scheme and discusses the model of the economic system and preferences of the policy-maker.
Statistics as a subject (field of study):
Statistics is defined as the science of collecting, organizing, presenting, analyzing and interpreting numerical data in order to make decisions on the basis of such analysis. (Singular sense)
Statistics as numerical data:
Statistics is defined as aggregates of numerically expressed facts (figures) collected in a systematic manner for a predetermined purpose. (Plural sense) In this course, we shall be mainly concerned with statistics as a subject, that is, as a field of study.
The Effects of European Regional Policy - An Empirical Evaluation of Objectiv... (Christoph Schulze)
The European Union provides funds to disadvantaged regions to promote economic growth and convergence (in terms of per capita income) among regions within Europe.
In this study, I apply Propensity Score Matching on NUTS 3 data for the operational period of 2007-2013 to evaluate European structural policy. I find that results for Objective 1 policy are not robust to changes within the control group, leading to both positive and negative estimated effects of structural policy. Findings from the evaluation of Objective 2 policy suggest success in terms of fighting unemployment and long-term unemployment. Programs aimed at reducing youth unemployment, in turn, did not succeed; in fact, treated regions showed significantly higher rates of youth unemployment.
This document discusses limitations and applications of statistics. It begins by covering limitations of statistics, such as it only dealing with quantitative data and groups/aggregates, and possible errors in statistical analysis. It then covers many fields that statistics can be applied to, such as actuarial science, biostatistics, econometrics, environmental statistics, epidemiology, and others. It concludes with sample multiple choice questions related to limitations and applications of statistics.
Statistics is the study of collecting, organizing, analyzing, and presenting data. It has a long history dating back to 1749. Statistical activities often use probability models and require probability theory. Key concepts in statistics like experimental design and statistical inference have impacted many fields. Statistics is used in many areas including business, education, psychology, health, engineering, and more. Descriptive statistics describes data while inferential statistics makes conclusions about populations from samples.
The document provides an overview of statistics as an academic subject. It discusses the origin and evolution of the term "statistics", from its initial use in the political sciences to refer to information about states, to its modern definition as a field of academic study involving the collection, organization, analysis, and interpretation of data. It also lists the learning objectives and outcomes of the session, which are to provide a broad overview of statistics and discuss techniques for collecting data, distinguishing between primary and secondary sources. Finally, it begins discussing the first topic of the session on the origin of statistics and the meaning and definition of the term.
The aim of the article is to analyse key labour productivity indicators of manufacturing, or working efficiency, in the European Union (EU), their theoretical bases and the regularities of these changes. We use regression analysis. Knowledge of the regularities of labour productivity changes allows predicting future changes and making optimal business decisions. The basis is gross domestic product (GDP) analysis. We analyse labour productivity by turnover and by gross value added per person employed, for manufacturing as a whole and partly by country, as well as GDP per capita. Taking this publication and the previous works of the authors as a basis, conclusions and suggestions are drawn.
This chapter introduces basic concepts for handling economic data. It discusses four key areas: (1) common types of data like time series, cross-sectional, and panel data; (2) sources of economic data; (3) graphs for presenting data; and (4) descriptive statistics for summarizing data. It also covers important topics like qualitative vs. quantitative data, data transformations between levels and growth rates, and the use of price indices.
The paper proposes two econometric models of inflation for Azerbaijan: one eclectic, based on monthly data; the other based on quarterly data and taking into account disequilibrium in the money market. The inflation regression based on monthly data showed that consumer price dynamics are explained by money growth (the more money, the higher the inflation), exchange rate behaviour (appreciation drives disinflation), commodity price dynamics ("imported" inflation) and administrative changes in regulated prices. For the quarterly model, a nominal money demand equation (with inflation, real non-oil GDP and the nominal interest rate on foreign currency deposits as predictors) and a money supply equation were estimated, and the error-correction mechanism from the money demand equation was included in the inflation equation. It is shown that disequilibrium in the money market (supply higher than demand) drives inflation together with money supply growth, nominal exchange rate depreciation and administrative changes in prices. No cost-push variables appeared to be significant in this equation specification. Both models give similar inflation projections, but sudden changes in money demand (2012) lead to significant differences between the projections. It is shown that money is the most important inflation determinant, explaining up to 97.8% of CPI growth between 2012 and 2015, and that in order to keep inflation under control the Central Bank of Azerbaijan should link money supply to real non-oil GDP growth.
Authored by: Alexander Chubrik, Przemyslaw Wozniak, Gulnar Hajiyeva
Published in 2012
The document provides an introduction to statistics, discussing the meaning, history, and applications of statistics. It defines key statistical concepts such as population and sample, descriptive and inferential statistics. It also discusses the different types of variables and levels of measurement. The document traces the history of statistics from ancient times to the present day, highlighting important contributors to the field. It provides examples of how statistics is used in different domains like education, business, research, and government.
Demand forecasting by time series analysis (Sunny Gandhi)
Demand is a buyer's willingness and ability to pay for a product or service. Demand forecasting estimates the quantity of a product that consumers will purchase. It is important for resource distribution, production planning, pricing decisions, and reducing business risk. Demand forecasting can be done at the micro, industry, or macro level. Common forecasting methods include time series analysis of historical sales data, market testing, and qualitative techniques like educated guesses. Accurate, plausible, simple, and durable demand forecasts are ideal.
This paper provides a review of the empirical macroeconomic model (EMMA) built for forecasting purposes at the Finnish Labour Institute for Economic Research. The model is quite small, consisting of 71 endogenous and 70 exogenous variables. The number of behavioural equations is 15. The basis of the model is Keynesian, although the model has some novel properties. They are the treatment of the supply side and prices that follow the routes of the neoclassical synthesis. The parameters of the model are estimated from quarterly data that cover the years 1990–2005. The model also contains a Kalman-filtered variable to control the deep recession in Finland at the beginning of the ’90s. This special feature brings the model closer to the new calibrated models.
An Examination into the Predictive Content of the Composite Index of Leading ... (Sean Delehunt)
This paper examines the predictive power of components of the Composite Index of Leading Indicators (CLI) with respect to key macroeconomic variables like GDP. It reviews previous literature on leading economic indicators and their importance in predicting business cycles. The paper will analyze data on the CLI components and macroeconomic variables from 1959 to 2004 to determine which components provide overall and marginal predictive power for GDP. It aims to explore where the CLI derives its ability to predict economic activity.
Modeling market and nonmarket Intangible investments in a macro-econometric framework (SPINTAN). Society for Economic Measurement Annual Conference, Thessaloniki, July 2016.
This document summarizes a bachelor's thesis that evaluates the quality of macroeconomic predictions from different models used in the Czech Republic. Specifically, it:
1) Evaluates predictions from the Czech National Bank's DSGE model (G3), the Ministry of Finance's HUBERT model, and two commercial banks using econometric tests.
2) Finds that when the Czech National Bank adopted its G3 DSGE model in 2008, it experienced a significant improvement in prediction quality compared to the other institutions.
3) Suggests well-specified DSGE models may enhance prediction quality of key economic indicators compared to non-structural models and expert judgment.
This document discusses forecasting household consumption in the Czech Republic using data from Google Trends. It first reviews literature on using sentiment indicators and Google Trends data to predict consumption. It then describes the consumption and sentiment data from the Czech Statistical Office, as well as search data from Google Trends. Finally, it introduces the model that will be used to forecast consumption using these different data sources.
This document provides an introduction to econometrics. It defines econometrics as the integration of economic theory, statistics, and mathematics to empirically analyze economic phenomena. The chapter discusses the need, objectives, and goals of econometrics, including describing economic reality, testing hypotheses, and forecasting. It also compares economic models to econometric models, and outlines the methodology and desirable properties of econometric models. Finally, it discusses different types of data used in econometric analysis, including time series, cross-sectional, and pooled data.
Advanced Econometrics by Sajid Ali Khan (Rawalakot)
This document appears to be the introduction or table of contents to a textbook on advanced econometrics. It includes 10 chapters that cover topics such as simple linear regression, multiple linear regression, dummy variables, autocorrelation, and simultaneous equation systems. The introduction defines econometrics and discusses its goals of policy making, forecasting, and analyzing economic theories using quantitative methods. It also outlines the methodology of econometrics, which involves stating an economic theory, specifying mathematical and statistical models, collecting data, estimating parameters, testing hypotheses, forecasting, and using models for control or policy purposes.
Foundations of Financial Sector Mechanisms and Economic Growth in Emerging Ec... (iosrjce)
In this paper, we try to uncover the economic foundations of financial sector development and its impacts on accelerating economic growth in the given context of emerging economies. We theorize and empirically test a causally-motivated relationship among economic growth and related key financial sector variables pertinent to this problem. We accomplish this by analyzing a 20-year panel dataset constructed for 30 countries falling within the categorization of an 'emerging economy'. We estimate the appropriate statistical models along with related diagnostic tests. Finally, we comment on the strengths and weaknesses of our approach and we try to explicate the economic rationale and justification for our formulation and the evidence that follows.
Regress and Progress! An econometric characterization of the short-run relati... (Matheus Albergaria)
1. The paper uses structural vector autoregression (SVAR) models to examine the empirical validity of real business cycle (RBC) models based on technology shocks using Brazilian data.
2. The results cast doubt on some predictions of RBC models. Specifically, the estimated conditional correlations between labor input and productivity measures are negative for technology shocks and positive for non-technology shocks, whereas RBC models predict the opposite.
3. The labor input also displays a negative response to technology shocks over business cycles in the estimates, which challenges implications of RBC models. However, the authors note that the results do not definitively reject RBC models, but could stimulate new theoretical and empirical work.
This document discusses 5 major challenges facing financial services modelling functions in Europe: 1) The modelling scope is expanding with more models required, 2) Fully harmonized methodologies across institutions and business units are imperative for transparency and cost reduction, 3) Modelling structures need to become more efficient to reduce costs, 4) Modelling governance needs to be broadened, and 5) Emerging data and techniques allow for model innovations. It provides implications for banks, outlining a 5-point plan for banks to develop a comprehensive model review, harmonize methodologies, redesign validation processes, rethink governance, and build new expertise in data science to address these challenges. The plan aims to reduce total model count by 15% and associated
The time consistency of economic policy and the driving forces behind busines... (accounting2010)
1. Kydland and Prescott uncovered an inherent problem with discretionary economic policymaking known as the time consistency problem. Without the ability to commit to future policies, governments are unable to implement optimal policies due to rational expectations.
2. They showed that discretionary monetary and fiscal policy results in lower welfare than if governments could commit to future policies. This shifted analysis to designing institutions to mitigate the time consistency problem, like reforms to central bank independence.
3. Kydland and Prescott also demonstrated that technology-driven supply shocks can generate realistic business cycles without market failures. This established a new paradigm in macroeconomics based on microeconomic foundations and rational expectations.
This document provides an overview of a Principles of Macroeconomics course. It outlines the course content which covers topics such as aggregate expenditure, fiscal policy, money and monetary policy, aggregate demand and supply, unemployment, inflation, economic growth, and open economy macroeconomics. The objectives are to provide students with a basic understanding of macroeconomic concepts and models and enable them to apply their knowledge to current economic issues and policy analysis. The course will be implemented through lectures, discussions, exams and class participation. Suggested textbooks are also listed.
This document discusses a study that uses a mixed logit model to predict firm financial distress. Mixed logit is an advanced discrete choice modeling technique that relaxes assumptions of standard logit models. It allows for observed and unobserved heterogeneity across firms. The study aims to demonstrate the empirical usefulness of mixed logit in financial distress prediction by comparing its performance to standard logit models. Results and out-of-sample forecasts show mixed logit outperforms standard logit models by significant margins in predicting firm financial distress.
Report "Assessing the Employment and Social Impact of Energy Efficiency", published in December 2015, which analyses the state of the labour market related to energy efficiency.
Fiscal Policy And Trade Openness On Unemployment Essay (Rachel Phillips)
Here are the key points about forecasting using vector autoregression (VAR) models:
- VAR models treat every variable in the system as endogenous and explain its behavior based on its own lags and lags of other variables. This allows all variables to influence each other.
- VAR models make forecasts by projecting the dynamics of all variables in the system based on estimated relationships between the variables and their lags.
- To generate forecasts, the VAR model is used to simulate future values of the variables by recursively using their estimated relationships. The forecasted values are produced by iterating the VAR model forward.
- Forecasts from VAR models can be evaluated using common metrics like mean squared forecast error to assess their accuracy relative to other models.
The document discusses input-output analysis and econometrics. It defines input-output analysis as a technique used to analyze inter-industry relationships and understand how changes in one industry impact others. The document outlines the key assumptions and uses of input-output models, including for planning, forecasting, and evaluating resource requirements. It also provides an overview of econometrics, defining it as the application of statistics and mathematics to economic data and theory. The methodology of econometrics involves establishing an economic theory, specifying a mathematical model, collecting data, estimating models, testing hypotheses, forecasting, and using models for policy purposes.
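The inter-industry logic just described reduces to the Leontief system: gross output x must cover both inter-industry use Ax and final demand f, so x = (I - A)^(-1) f. A toy sketch with hypothetical coefficients (a two-sector economy invented for the example):

```python
import numpy as np

# Toy two-sector economy (hypothetical coefficients). A[i, j] is the
# input from sector i required per unit of sector j's output.
A = np.array([[0.2, 0.3],
              [0.1, 0.4]])
final_demand = np.array([100.0, 50.0])

# Solve x = A @ x + f, i.e. x = (I - A)^(-1) f, without forming the
# inverse explicitly.
x = np.linalg.solve(np.eye(2) - A, final_demand)
print(np.round(x, 1))
```

A planner can then vary `final_demand` to see how a change in one industry's demand propagates into required output across all industries.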
This document provides an overview and introduction to the scope and method of economics. It discusses the following key points in 3 sentences:
Economics is the study of how individuals and societies make choices with scarce resources. The document outlines why economics is studied, including to learn a way of thinking, understand society and global affairs, and be an informed voter. It also describes the scope of economics in terms of microeconomics, macroeconomics, and diverse fields, as well as the method which involves theories, models, and empirical testing of economic concepts.
This document provides an overview and introduction to economics. It discusses the scope of economics, including microeconomics and macroeconomics. It also covers the reasons to study economics, such as to learn a way of thinking and to understand society and global affairs. Additionally, it summarizes the method of economics, including theories, models, and empirical testing. Economic policy goals like efficiency, equity, growth and stability are also briefly outlined.
Multivariate analysis of the impact of the commercial banks on the economic g...Alexander Decker
The document analyzes the impact of commercial banks on economic growth in Nigeria from 1970-2009 using multivariate analysis and the ordinary least squares method. It finds that commercial bank credits, deposit liabilities, and lending rates had a positive relationship with GDP, indicating they help achieve economic growth. However, the number of banks had a negative but insignificant relationship with GDP. The study concludes that policies aimed at increasing commercial bank capital bases should be pursued to increase loanable funds and sustainable economic growth and development.
Econometrics is the application of statistical and mathematical methods to economic data in order to test economic theories and estimate relationships between economic variables. The methodology of econometrics involves stating an economic theory or hypothesis, specifying the theory mathematically and as an econometric model, obtaining data, estimating the model, testing hypotheses, making forecasts, and using the model for policy purposes. Regression analysis is a key tool in econometrics that relates a dependent variable to one or more independent variables, with an error term included to account for the inexact nature of economic relationships.
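A one-equation sketch of this methodology makes the error term concrete. The consumption function below is hypothetical: the coefficients 5.0 and 0.8 and the simulated data are invented for the example, and estimation is by ordinary least squares.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical consumption function C = b0 + b1 * income + error.
# The error term captures the inexact nature of the relationship.
income = rng.uniform(20, 100, size=50)
consumption = 5.0 + 0.8 * income + rng.normal(scale=2.0, size=50)

# Ordinary least squares: regress the dependent variable (consumption)
# on a constant and the independent variable (income).
X = np.column_stack([np.ones_like(income), income])
beta, *_ = np.linalg.lstsq(X, consumption, rcond=None)
b0_hat, b1_hat = beta
print(round(b0_hat, 2), round(b1_hat, 2))
```

With the model estimated, the hypothesis-testing and forecasting stages of the methodology operate on the fitted coefficients and residuals.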
Student projects and dissertations
Faculty: Bristol Business School
Student’s name: Boris Kisska
Award: Economics
An investigation into causal and non-causal econometric
models and their performance in forecasting the UK’s
Gross Domestic Product.
Boris Kisska
Academic year of presentation: 2013/2014
Bristol Business School
CONTENTS
List of figures
List of tables
Acknowledgements
Introduction
Chapter 1 Literature review
1.1 Macroeconomic theories
1.2 Keynesian Revolution
1.3 Expectations Revolution
1.4 The new Keynesians
1.5 Forecasting accuracy
1.6 Non-structural models
Chapter 2 Structural econometric model
2.1 Structural model building
2.1.1 Rationale for simultaneous equations
2.1.2 Rationale for Keynesian model
2.2 Building blocks
2.2.1 Consumption Function
2.2.2 Investment Function
2.2.3 Interest rate Function
2.2.4 Inflation Function
Chapter 3 Structural modelling
3.1 Modelling Methodology
3.1.1 Order condition identification
3.1.2 Hausman test
3.1.3 Structural model estimation
3.2 Analysis of the structural model results
3.3 Ex-post forecasting
Chapter 4 Non-causal model
4.1 Introduction
4.2 Notation of ARMA model
4.3 Non-stationarity in time series
4.4 ARIMA methodology
4.4.1 Identification
4.4.2 Estimation
4.4.3 Diagnostic checking
4.4.4 Forecasting
Chapter 5 Conclusion
Appendix
List of Figures
1. Decision making at the Bank of England
2. Graphical representation of the forecasting performance of the eight models
3. Graphical representation of the forecasting performance of the four different models
4. Influence diagram for simultaneous equation model
5. Transmission mechanism of monetary policy
6. Block diagram of five equation model
7. Historical simulation, GDP 1980q1-2007q1
8. Structural model, GDP q/q, forecast 2007q1-2009q1
9. Autocorrelation function INC
10. The UK's GDP q/q values and the first difference
11. ACF and PACF of the GROWTH
12. UK's GDP forecast ARIMA(1,1,1), 2007q1-2009q1
List of Tables
1. Comparison of forecasting performance of the eight different models
2. Forecasting performance of the four different models
3. One year ahead UK forecast error - Mean Absolute Error (MAE)
4. Summary table of the variables used in the model
5. The order condition of identification
6. SC and HT tests of individual equations - OLS estimation
7. Summary statistics of OLS and 2SLS estimation procedures
8. Ex-post forecast based on 2SLS regression
9. Autocorrelation function and partial autocorrelation
10. Akaike's and Schwarz Bayesian information criteria for the GROWTH model
11. Ex-post forecast based on ML regression
Acknowledgements
I would like to take this opportunity to thank Tony Flegg for his valuable comments throughout the write-up. I would also like to thank my family and friends for supporting me during the challenging final year.
Abstract
Accurately forecasting the direction and magnitude of exogenous shocks to aggregate demand has been the subject of extensive research over the past several decades. In the aftermath of the events of 2007 there has been heated debate about the validity of present-day econometric models and their failure to predict the recent recession. This calls into question the whole validity of causal macroeconometric models based on economic theory. Atheoretical models, which do not assume an underlying theory, may therefore serve as a viable alternative when assessing the dynamics of a shock to the economy. This dissertation therefore investigates the implications and forecasting validity of different econometric methods that identify exogenous shocks to the UK's GDP, with particular interest in the recent recession of 2007-08.
The relevant question to ask about the ‘assumptions’ of a theory is not whether they
are descriptively ‘realistic’ for they never are, but whether they are sufficiently good
approximations for the purpose at hand. And this question can be answered by only
seeing whether they work, which means whether it yields sufficiently accurate
predictions.
Milton Friedman (1954, p. 8)
INTRODUCTION
This dissertation aims to provide a comprehensive analysis and evaluation of two significantly different macroeconometric models and their ability to forecast the UK's Gross Domestic Product (GDP). Particular focus is placed on whether structural models perform better than their atheoretical counterparts in forecasting the turning points associated with the occurrence of unusually large shocks to the economy. The crucial argument lies in the view that cycles and trends in time series are systematic. However, as Eugen Slutsky and Ragnar Frisch suggested, cycles are not necessarily systematic in nature but may be merely artefacts of random shocks working their way through the economy (Nelson and Plosser, 1972, p. 909).
Gross domestic product (GDP) is arguably the most important aggregate indicator of economic activity in the UK (Lee, 2011). GDP is the value of goods and services produced in an economy in a given year, measured at market prices and therefore sensitive to changes in the average price level in the economy. There are three approaches to measuring GDP: the expenditure approach, the income approach and the production approach. The primary focus in this dissertation is on the expenditure measure, which is defined as: GDP (E) = household final consumption expenditure + final consumption expenditure of non-profit institutions serving households + general government final consumption expenditure + gross capital formation + exports - imports (Lee, 2012).
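The expenditure identity is simple enough to check with back-of-the-envelope arithmetic. The component values below are purely illustrative placeholders, not actual UK figures:

```python
# Hypothetical component values (illustrative only, in billions) for
# the expenditure-approach identity GDP(E) = C + NPISH + G + I + X - M.
components = {
    "household_consumption": 1200.0,
    "npish_consumption": 60.0,
    "government_consumption": 400.0,
    "gross_capital_formation": 340.0,
    "exports": 560.0,
    "imports": 600.0,
}

gdp_e = (components["household_consumption"]
         + components["npish_consumption"]
         + components["government_consumption"]
         + components["gross_capital_formation"]
         + components["exports"]
         - components["imports"])
print(gdp_e)  # 1960.0
```

Note that imports enter with a negative sign because they are spending on output produced abroad, not domestically.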
Accurate GDP analysis and forecasts are of great theoretical and practical value for policy
decisions and for assessments of the future state of the economy. Holden et al. (1990) state that
forecasts are required for two basic reasons: the future is uncertain; and the full impact of many
decisions taken now might not be felt until later. Consequently, accurate predictions of the future
would improve the efficiency of the decision-making process.
The use of economy-wide macro-econometric models for forecasting and simulation
analyses of the likely economic policy outcomes has expanded to the majority of countries.
Models have become an important instrument of world-wide analyses and forecasts conducted by
international organizations and renowned research institutions, as well as by central banks of
many countries (Welfe, 2013, p. 395).
This is because they not only provide an analytical framework to link the demand and supply sides
and the resource allocation process in an economy but also may help in reducing fluctuations and
enhancing economic growth, which are two major aspects of any economy (Bahattari, 2005, p. 2).
As Figure 1 summarizes, macroeconomic models, alongside others, play a major role in informing
and disciplining monetary policy decisions at the Bank of England.
Figure 1: Decision making at the Bank of England (Source: BoE)
The dissertation is organised as follows: Chapter 1 provides a literature review. The review is by no
means exhaustive but provides a comprehensive evaluation of past and present trends in
macroeconometric modelling and forecasting. Chapter 2 presents the rationale behind building the
structural model. Chapter 3 introduces an econometric analysis aimed at developing a
satisfactory forecasting model. Chapter 4 concerns identification, estimation, diagnostic checking
and forecasting of the non-causal autoregressive integrated moving average (ARIMA) model.
Chapter 5 contains the conclusion. The dissertation also includes an appendix containing the
detailed calculations and statistical printouts of all the models considered.
CHAPTER 1 Literature review
The aim of this review is to provide an evaluation of the past and existing research on the use and
forecasting performance of different econometric models, with a particular focus on their ability
to forecast the UK's Gross Domestic Product (GDP).
Forecasting models can be broadly split into two categories, based on 'the trade-off between
their conceptual coherence with economic theory and their empirical coherence with economic
data' (Pagan, 2003, p. 1).
Causal or structural models are a set of behavioural equations, together with institutional and
definitional relationships, representing the main behaviour of economic agents and the operations
of an economy (Valadkhani, 2004). The goal of quantitative analysis of an economy via the
estimation of an interrelated system of equations 'is to achieve three purposes; descriptive,
prescriptive and predictive uses of econometrics, that is structural analysis, policy evaluation and
forecasting' (Intriligator et al., 1978, p. 430).
Atheoretical or non-causal models, on the other hand, rely more on statistical patterns in the
data. These models attempt to exploit the reduced-form correlations in observed macroeconomic
time series, with fewer assumptions about the underlying structure of the economy (Diebold
1998, p. 2). Because of their restricted nature they are used almost exclusively for forecasting
purposes or as an accuracy benchmark for structural models.
1.1 Macroeconomic theories
The first attempts to formalize a theoretical framework for the national economy as a whole took
place during the early 20th century. Three trends in the literature could be distinguished at that time.
The first stemmed from the general equilibrium theory formulated by Leon Walras and later developed
by Vilfredo Pareto; the second rested on the foundations of business cycle theory laid by Ragnar Frisch,
Joseph Schumpeter and Arthur Cecil Pigou; and the third referred to J. M. Keynes's fundamental
writings regarding unemployment and demand deficiency (Welfe, 2013, p. 8).
1.2 Keynesian Revolution
A complete specification of a macroeconomic model shows how economic behaviour and
institutions shape the relationships between a set of conditions x and outcomes y (Reiss and Wolak,
2007, p. 4284). Economic models, however, rest on deterministic assumptions and as such do not
perfectly fit observed data. Structural econometric modellers must therefore add a stochastic
statistical structure in order to rationalize why economic theory does not perfectly explain the data.
The theoretical framework developed by J. M. Keynes (1936), and especially his General Theory,
became a cornerstone of the concepts that led to the construction of a class of macroeconometric
models based on the Cowles Commission methodology, associated with the work of Klein, Goldberger
and Modigliani, which predominated in the USA and Europe for over 30 years (Welfe, 2013,
p. 4). Lucas and Sargent (1981, p. 296) stress that the success of the Keynesian revolution took the form
of a revolution of methods that rested on several important features: 'the evolution of
macroeconomics into a quantitative, scientific discipline, the development of explicit statistical
descriptions of economic behavior, the increasing reliance of government officials on technical
economic expertise, and the introduction of the use of mathematical control theory to manage
an economy'.
The general profile of the models based on the Cowles Commission's methodology was
macroeconomic: they contained final demand (consumption, investment) and demand for labour, as
well as prices, wages and financial flows (Klein, 1991).
Variables whose introduction was theoretically unjustified were eliminated by imposing
zero restrictions on the appropriate parameters. The IS-LM/PC1 model became the workhorse tool
for constructing and evaluating macro models. Common features of the Klein-Goldberger models
explicated the major feedbacks, which included a consumer multiplier, whereby
consumption depended on national income and was at the same time one of the national income
components. Moreover, they also defined the fundamental macro-identity, i.e. national income as being
equal to the sum of consumption, government expenditure, investment and net exports. The Klein-
Goldberger models paved the way for the builders of many other medium-term models of the US
and UK economies (Welfe, 2013, p. 4).
1 Vroey and Malgrange (2011, p. 3) point out that the origin of the IS-LM model can be traced to Modigliani (1944). The IS/LM model
comprises two distinct sub-models, the Keynesian and the classical system. Hence, strictly speaking, it should not be considered
Keynesian. But at the time of its dominance, most economists were convinced that the Keynesian variant corresponded to reality
while the classical system was viewed as a foil. Regarding the Phillips Curve (PC): the Klein-Goldberger model was the first to
explain wage rates, assuming that their growth depended on the rate of unemployment.
Several competing models were established, such as the Wharton model, the MPS model
developed for the Fed, the H.M. Treasury model and many others.2 Klein (1973) compared eight
models and concluded that the RMSE3 of major U.S. econometric models showed that, despite some
exceptions, errors were within reasonable bounds.4
Table 1: Comparison of forecasting performance of the eight different models
Figure 2: Graphical representation of the forecasting performance of the eight models
2 The most significant include: Bureau of Economic Analysis Model (BEA), A. Hirsch, M. Liebenberg, and G. Narasimhan; Brookings Model, G.
Fromm, L. Klein, and G. Schink; DHL III Model, University of Michigan, S. Hymans and H. Shapiro; Data Resources, Inc., Model (DRI-
71), O. Eckstein, E. Green and associates; Fair Model, Princeton University, R. Fair; Federal Reserve Bank of St. Louis Model (FRB St.
Louis), L. Andersen and K. Carlson; MPS Model, University of Pennsylvania, A. Ando, F. Modigliani, and R. Rasche; Wharton Mark III
Model, University of Pennsylvania, F. G. Adams, V. J. Duggal, G. Green, L. Klein, and M. McCarthy (Klein, 1973).
3 RMSE is a measure of the difference between values predicted by a model and the values actually observed from the
environment being modelled. The aggregation of these residuals serves as a measure of predictive power.
4 Comparing these RMSEs with later studies reveals that the results are not satisfactory. Possible reasons include small-sample bias and
inaccurate data. Moreover, the celebrated Wharton III model underperformed even the naïve ARIMA model.
RMSE of Real GNP ex-ante forecasts

Model            Simulation interval   Number of quarters ahead
                                       1      2      3      4      5      6      7      8
Brookings        1966.1 - 1970.4       6.74   11.36  16.08  20.94  25.69  29.54  33.18  39.77
ARIMA            1970.3 - 1972.1       8.70   13.00  17.00  23.00  29.00  36.00
BEA              1969.1 - 1971.2       6.01   11.01  18.42  23.26  28.08  30.50
Fair             1965.1 - 1969.4       2.91   4.35   4.52   6.77   9.89
FRB - St. Louis  1970.1 - 1971.4       10.29  14.88  13.86  11.69  11.15  16.11
DRI              1971.3 - 1972.3       8.90   14.89  23.10  28.88
Wharton III      1970.2 - 1971.4       8.04   18.96  26.00  28.52  33.74  39.74  41.77  44.68
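The RMSE figures reported in Table 1 aggregate squared forecast errors (see footnote 3). A minimal sketch of the computation, with invented forecast and outturn series:

```python
import math

def rmse(actual, predicted):
    """Root mean squared error: aggregate forecast residuals into one accuracy score."""
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))

# Hypothetical GNP outturns vs one-quarter-ahead forecasts (illustrative numbers only).
actual   = [100.0, 102.0, 105.0, 103.0]
forecast = [ 99.0, 104.0, 104.0, 106.0]
print(round(rmse(actual, forecast), 3))
```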
The initial momentum for building large-scale macroeconometric models (MEM) was abruptly
interrupted in the 1970s, a 'decade of greater inflation, unemployment and turbulence' (Pescatori
and Zaman, 2011, p. 2). Mincer and Zarnowitz (1969) compared a number of different models and
concluded that forecasting errors built up much faster than in earlier years and that turning points were
seriously missed at the onset of the recessions of 1970 and 1974, although they noted there was no decline
in accuracy as measured by comparisons with simple extrapolations. Burns (1986,
cited in Wallis, 1989, p. 57) notes, 'there was not only disillusion with demand management; there
was also growing frustration with the forecasts as the increased level of noise in the economic
system led to increased margins of error'. Greenberger (1976) points out that the use of modelling in
government has fallen short of expectations and that the gap between expectations and actual results
is widest in policy applications. Kenway (1978) argues that MEM lost their hold because model
builders ceased to believe in the structure that a macroeconomic model, as a structural model,
represents, that is, the way in which the economy was believed to work.
1.3 Expectations Revolution
According to Pesaran (1995), the major criticisms of the traditional models based on the Cowles
Commission approach can be summarised in terms of the following issues. First, Liu (1963) argues that
the zero restrictions imposed to achieve identification are arbitrary, excluding variables from
equations in which they should arguably appear. Secondly, there is the problem of
unit roots in many macroeconomic variables and the neglect of their time-series properties (Nelson
and Plosser, 1982). Thirdly, there is an insufficient connection between real and monetary variables. At
the structural level, Friedman (1968) argued that the original Phillips curve depended on incorrect inflation
forecasts owing to the existence of money illusion; therefore the trade-off between inflation and
unemployment would not hold in the long run, when classical principles apply, i.e. money should be
neutral.
Friedman thus proposed an expectations-augmented Phillips curve, assuming that current
expectations of inflation were based on a weighted average5 of past inflation rates, as follows:

πet = γ[πt + (1 − γ)πt−1 + (1 − γ)²πt−2 + …] = γ Σk=0→∞ (1 − γ)ᵏ πt−k    (1)
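Equation (1) can be computed directly. A minimal sketch, with an illustrative γ and an invented inflation history:

```python
def adaptive_expectation(past_inflation, gamma):
    """Expected inflation as a geometrically declining weighted average of past rates.

    past_inflation[0] is the most recent observation (pi_t), matching equation (1).
    """
    return sum(gamma * (1 - gamma) ** k * pi for k, pi in enumerate(past_inflation))

# With gamma = 0.5, the weights are 0.5, 0.25, 0.125, ... on ever-older inflation.
history = [4.0, 3.0, 2.0, 2.0]   # illustrative inflation rates, newest first
print(adaptive_expectation(history, 0.5))  # 3.125
```

In practice the infinite sum is truncated at the available sample; the weights on older observations decline geometrically, so the truncation error is small.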
Lucas (1976, p. 41) extended Friedman's argument and asserted that the econometric models of
the time, all derivatives of the Klein-Goldberger model, based on decision rules and estimated by
empirical relations, were a fundamentally defective paradigm for producing conditional forecasts,
because the parameters of decision rules will generally change when policy changes or
expectations about policy change. Therefore, the key policy implication of the Lucas critique was
that it is impossible to surprise rational people systematically, so systematic monetary policy
aimed at stabilizing the economy is doomed to failure (Sargent and Wallace, 1975).
According to Lucas, only deeper 'structural models', i.e. those derived from the
fundamentals of business cycle theory emphasizing agents' preferences and technological
constraints, based on imperfect information, rational expectations6 (2) and market clearing, were
able to provide a more accurate grounding for the evaluation of alternative policies and for forecasting.
Taylor (1979) points out that the introduction of rational expectations assumptions is significant enough to
be called a paradigm shift. In essence, the rational expectations hypothesis states that the
difference between the realized value and the expected value should be uncorrelated with the
variables in the information set at the time the expectations are formed (Muth, 1961). Muth
observed that the various expectations schemes used in the analysis of dynamic economic models
had little resemblance to the way the economy works. If the economic system changes, the way
expectations are formed should change, but the traditional models of expectations do not permit
any such change.
Yt = E(Yt | It−1)    (2)
5 The adjustment parameter 0 < γ < 1 implies that economic agents will adapt their expectations in the light of past experience and, in
particular, that they will learn from their mistakes (Gujarati, 2004). Adaptive expectations may be formed where people expect
prices to rise in the current year at the same rate as in the previous year, such that πet = πt−1. The expected level of inflation is
therefore a weighted average of current inflation and the previously expected level.
6 The formula states that the left-hand side should be interpreted as the subjective expectation of Y at time t and the right-hand
side as the objective expectation conditional on the information I available at time (t − 1) (Maddala, 1992, p. 444). Moreover,
expectations are uncorrelated with the error term; otherwise the forecaster has not used all available information.
Fisher (1983, p. 271), on the other hand, stresses that the Lucas critique has not been backed by
any detailed empirical support but is rather asserted. Bodkin and Marwah (1988) point out that
rational expectations is an irrational assumption with respect to the typical economic agent's
supposedly complete access to the raw data and the true model of the economy. Klein (1989, p. 290)
acknowledges the importance of the Lucas critique, but adds: "I believe that there is more
persistence than change in the structure of economic relationships. The world and the economy
change without interruption, but that does not mean that parametric structure is changing;
random errors and exogenous variables may be the main sources of changes". Maddala (1992)
offers a solution to the Lucas critique: making the coefficients of the MEM depend on exogenous
policy variables. Heckman and Leamer (2007, p. 226) suggest redefining exogeneity, i.e. a
variable x is exogenous if the Lucas critique does not apply to it.
1.4 The new Keynesians
Significant effort has been devoted to translating Lucas's ideas into empirical models. These efforts
include Kydland and Prescott (1990), Nelson and Plosser (1982), and Sargent and Wallace (1975),
who provided the main reference framework for the analysis of economic fluctuations and
became, to a large extent, the core of macroeconomic theory based on rational expectations and
Real Business Cycle (RBC) theory, where the emphasis switched to the role of random shocks
to technology and the intertemporal substitution in consumption and leisure that these shocks
induced. Mankiw (2003) points out that RBC models omit any role of monetary policy,
unanticipated or otherwise, in explaining economic fluctuations. Goodhart (1982) tested the policy
irrelevance hypothesis and found evidence that unanticipated monetary shocks do have real
effects on variables such as output and employment. Howells and Bain (2009) add to the shortcomings,
stating that the RBC models' assumption of perfect and instantaneous market clearing fails in the
real world where, in fact, prices are 'sticky', as proposed by the new Keynesians.
The New Keynesian approach to macroeconomics evolved in response to the monetarist
controversy and to fundamental questions raised by Lucas's critique, and in order to provide an
alternative to the competitive flexible-price framework of RBC analysis (Goodfriend and King
1997). Therefore, the main characteristics of the New Keynesian models are their emphasis on
monopolistic competition, nominal rigidities and short-run non-neutrality of monetary policy.
Important work along those lines was undertaken by Taylor (1993) and Fair (1994) who developed
methods for incorporating rational expectations into econometric models, as well as methods for
rigorous assessment of model fit and forecasting performance.
Models in the Fair-Taylor fashion are now in use at a number of leading policy organizations,
including the Fed and the International Monetary Fund (Brayton et al., 1997). Shown below is a highly
aggregated econometric model, specified in a neo-Keynesian7 framework, that incorporates rational
expectations and sticky prices:
Yt = β0 + β1Yt−1 + β2Yt−2 + β3(mt − pt) + β4(mt−1 − pt−1) + β5π1 + β6t + ut    (3)
π1 = γ0 + γ1πt−1 + γ2Yt + vt    (4)
ut = ηt − θ1εt−1    (5)
vt = εt − θ2εt−1    (6)
7 Equation (3) is the aggregate demand equation derived from IS-LM relationships; aggregate demand Y consists of consumption,
investment, government and net foreign demand. Equation (4) is the price determination equation, where the rate of inflation π1 is
defined as pt+1 − pt; the rationale is that prices and wages are set in advance of the periods to which they apply. Moreover, the
equation is perfectly accelerationist, that is, output cannot be raised permanently above its potential without raising inflation
(Taylor, 1979, p. 1270). Equations (5) and (6) describe the stochastic structure of the random shocks ut and vt on the assumption of a
first-order moving average form.
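To make the mechanics of equations (3)-(6) concrete, the sketch below iterates a stripped-down version of the system: the shocks, the second lags and the trend term are dropped, and the within-period simultaneity between output and inflation is solved by substitution. All coefficient values are invented for illustration, not estimates:

```python
def simulate(periods, y0=1.0, pi0=2.0, real_balances=1.0,
             b1=0.5, b3=0.2, b5=-0.1, g1=0.7, g2=0.3):
    """Iterate a cut-down demand/price system with zero shocks.

    Demand:  Y_t  = b1*Y_{t-1} + b3*(m_t - p_t) + b5*pi_t
    Prices:  pi_t = g1*pi_{t-1} + g2*Y_t
    Substituting the price equation into the demand equation resolves the
    within-period simultaneity: Y_t = (b1*Y_{t-1} + b3*rb + b5*g1*pi_{t-1}) / (1 - b5*g2).
    """
    y, pi = y0, pi0
    path = []
    for _ in range(periods):
        y = (b1 * y + b3 * real_balances + b5 * g1 * pi) / (1.0 - b5 * g2)
        pi = g1 * pi + g2 * y
        path.append((y, pi))
    return path

path = simulate(4)
print(path[0])
```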
1.5 Forecasting accuracy
Fair (1979) compared four models, each based on a different opinion as to how the economy
operates (see Table 2),8 and concluded that Sargent's and Sims's models are no more accurate than the
naïve model, making his own model superior to the others.
Table 2: Forecasting performance of the four different models
Figure 3: Graphical representation of the forecasting performance of the four different models
8 (1) Sargent's classical macroeconometric model, (2) Sims's six-equation unconstrained vector autoregression model, (3) a 'naive'
eighth-order autoregressive model, and (4) Fair's new-Keynesian model. The basic forecast period was 1978.2-1981.4, and for
the misspecification calculations the first of the 35 sample periods ended in 1968.4 and the last ended in 1977.1.
RMSE of Real GNP ex-ante forecasts

Model     Number of quarters ahead
          1     2     3     4     5     6     7     8
Naïve     1.11  1.96  2.76  3.51  4.09  4.42  4.70  4.91
Sargent   1.31  2.26  3.40  3.77  4.27  4.59  4.89  5.00
Sims      1.42  2.54  3.54  4.79  6.34  7.79  9.36  10.98
Fair      0.79  1.26  1.63  2.12  2.59  2.97  3.24  3.52
A study by Stekler and Fildes (2000) compared various structural models used in the UK (see Table
3)9 and concluded that there was limited evidence of correct prediction of cyclical turning
points. In general, those models performed better on average10 (MAE < 1) than a naïve ARIMA
model.
Table 3: One-year-ahead UK forecast errors (Source: UK Treasury)
Another study, conducted by Heilemann and Stekler (2012), found that substantial improvements in
data, theories and methods did not appear to offer a comparable improvement in forecasts. While
the accuracy of GDP forecasts improved somewhat in the 1980s and 1990s, it deteriorated in the
past decade, returning to the levels of the 1970s.
The structural models considered so far are based on theoretical assumptions about causality
(Wold, 1954, p. 164) and empirical relationships between the variables in question. 'Structural
models thus allow outputs in a given forecast to be traced back through the model structure as
the result of the interaction of a number of economic mechanisms and judgements' (OBR, 2010, p.
6).
9 UK Treasury compilation of forecasts for the 1990-98 calculations, and Treasury and Civil Service Committee. GDP is based on
preliminary figures (average estimates of GDP). Based on year-ahead forecasts, Table 3 shows that the MAE of the Treasury's
forecasts of real GDP growth was 0.8% and 1.00% in 1986-90 and 1990-98, respectively. The MAE was about 25% of the mean
absolute change in the earlier period. The non-Treasury errors were slightly larger in the first period but smaller in the second one.
10 Mean absolute error (MAE) is a quantity used to measure how close forecasts are to the actual outcomes.
One-year-ahead UK forecast error - Mean Absolute Error (MAE)

Forecasting group       GDP 1986-90   GDP 1990-98
Independent average     1.20          0.95
Selected independents   1.00          0.87
Independent consensus   N/A           0.89
City average            1.00          0.85
City consensus          N/A           0.82
Treasury                0.80          1.00
Average outcomes        3.05          1.59
Naïve forecast          1.35          1.60
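The MAE criterion used in Table 3 (see footnote 10) is computed as follows; the outturn and forecast series here are invented for illustration:

```python
def mae(actual, forecast):
    """Mean absolute error: the average size of forecast misses, sign ignored."""
    return sum(abs(a - f) for a, f in zip(actual, forecast)) / len(actual)

# Hypothetical year-ahead GDP growth outturns vs forecasts (illustrative numbers).
outturns  = [2.0, 0.5, -1.0, 2.5]
forecasts = [1.5, 1.5,  0.0, 2.0]
print(mae(outturns, forecasts))  # 0.75
```

Unlike RMSE, the MAE does not square the errors, so it penalises occasional large misses less heavily.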
1.6 Non-structural models
Pollock (2013) stressed that the main shortcoming of the equations of macroeconometric models is
that they pay insufficient attention even to the simple laws of linear dynamic systems. Non-
structural time-series models, on the other hand, may therefore offer a more pragmatic approach,
assuming that the data series itself may well contain all the information necessary for adequate
forecasts (Pokorny, 1987, p. 342). They are, in a sense, agnostic or empirical models (Klein, 1991, p.
14). Significant contributions regarding the theory can be traced to the work of Yule (1927) and
Slutzky (1937), who launched the notion of stochasticity in time series by postulating that every
time series can be regarded as the realization of a stochastic process. The process can be explained by
autoregressive (AR) or moving average (MA) models.
Thus Slutzky (1937) showed that cycles resembling business fluctuations can be generated by
a combination of a variable's own past values and a series of random causes (Kydland and Prescott,
1990, p. 6). The combined autoregressive integrated moving average (ARIMA) model was
widely popularized by Box and Jenkins (1970), who developed a coherent four-stage iterative cycle
for time-series identification, estimation, diagnostic checking and forecasting (cf. Gooijer, 2006, p.
7).
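The four-stage Box-Jenkins cycle can be sketched for the simplest case, an AR(1) model: simulate a series, estimate the coefficient by least squares, check the residuals, then forecast one step ahead. This is a toy illustration on simulated data, not the full ARIMA machinery:

```python
import random

random.seed(42)

# Identification step assumed done: we posit y_t = phi*y_{t-1} + e_t (an AR(1)).
TRUE_PHI = 0.7
y = [0.0]
for _ in range(500):
    y.append(TRUE_PHI * y[-1] + random.gauss(0.0, 1.0))

# Estimation: least-squares AR(1) coefficient, phi_hat = sum(y_t*y_{t-1}) / sum(y_{t-1}^2).
num = sum(y[t] * y[t - 1] for t in range(1, len(y)))
den = sum(y[t - 1] ** 2 for t in range(1, len(y)))
phi_hat = num / den

# Diagnostic check: residuals should resemble white noise (mean near zero).
resid = [y[t] - phi_hat * y[t - 1] for t in range(1, len(y))]
resid_mean = sum(resid) / len(resid)

# Forecasting: the one-step-ahead point forecast is phi_hat times the last value.
forecast = phi_hat * y[-1]
print(round(phi_hat, 2), round(resid_mean, 3))
```

A full application would also difference the series if needed (the "I" in ARIMA) and compare candidate orders by information criteria before settling on a specification.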
Many macroeconomic variables, including GDP, exhibit properties that violate the classical
Gauss-Markov assumptions of constant mean, variance and/or covariance over time.
This non-stationarity was observed by Nelson and Plosser (1982), who investigated a number of
macroeconomic variables, including GDP, and concluded that a stochastic trend (random
walk) was present; hence they argued that GDP should be modelled as a difference-stationary (DS) process
(Newbold, 1999, p. 86). This was further confirmed by Stock and Watson (1988, p. 160), who
concluded that macroeconomic time series appear to contain variable trends. Moreover,
modelling these variable trends as random walks with drift seems to provide a good
approximation to the long-run behaviour of many aggregate economic variables.
In addition, Granger and Newbold (1973, p. 117) demonstrated with an ARIMA process that if
random walks, or near random walks, are present and one includes in regression equations
variables that should in fact not be included, then it will be the rule rather than the exception to
find spurious relationships.
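The spurious regression result can be reproduced in a few lines: regressing one independent random walk on another gives a far better fit, on average, than regressing their (stationary) first differences on each other. An illustrative simulation:

```python
import random

def r_squared(x, y):
    """Squared sample correlation between x and y (the R^2 of a univariate regression)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return (sxy * sxy) / (sxx * syy)

random.seed(1)
levels_r2, diffs_r2 = [], []
for _ in range(200):
    # Two completely independent random walks of length 100.
    e1 = [random.gauss(0, 1) for _ in range(100)]
    e2 = [random.gauss(0, 1) for _ in range(100)]
    w1, w2 = [], []
    s1 = s2 = 0.0
    for a, b in zip(e1, e2):
        s1 += a; s2 += b
        w1.append(s1); w2.append(s2)
    levels_r2.append(r_squared(w1, w2))   # spurious fit in levels
    diffs_r2.append(r_squared(e1, e2))    # honest fit in first differences

avg_levels = sum(levels_r2) / len(levels_r2)
avg_diffs = sum(diffs_r2) / len(diffs_r2)
print(round(avg_levels, 2), round(avg_diffs, 3))
```

The average R² in levels is large even though the two walks share no common cause, while the R² in differences is close to zero, which is the practical argument for differencing non-stationary series before regression.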
Forecasts based exclusively on the statistical time-series properties of the variable in question
have often been used to provide inexpensive, yet powerful, alternatives to structural models.
Wallis (1989) finds that published model forecasts generally outperform their time-series
competitors, the margin being greater four quarters ahead than one quarter ahead. Pokorny
(1987, p. 342) similarly argues that the time-series approach is not well suited to generating
medium- to long-term forecasts and is of only limited use for policy evaluation.
Makridakis (1982), cited in Hendry and Clements (2003, p. 304), produced results across
many models and concluded: 'Although which model does best in a forecasting competition
depends on how the forecasts are evaluated and what horizons and samples are selected, "simple"
extrapolative methods tend to outperform econometric systems, and pooling forecasts often pays.'
In conclusion, the current literature shows that macroeconomic modelling and forecasting
went through dramatic changes over time. Firstly, there was a paradigm shift in doctrines, away
from Keynesianism towards monetarism.
Secondly, there was a dramatic evolution of statistical techniques, paving the way to more
rigorous modelling based on advanced econometric models. Alternative models were also
developed based on AR processes, which in many cases compete on equal terms with the structural
ones. There is no doubt that econometrics is subject to important limitations, which stem largely
from the incompleteness of economic theory and the ever-changing nature of economic data.
CHAPTER 2 Structural econometric model
2.1 Structural model building
2.1.1. Rationale for simultaneous equations
Univariate regression models consist of a dependent variable that is expressed as a linear function
of one or a set of explanatory variables. In such models the implicit assumption is that the cause-and-
effect relationship between the dependent and explanatory variables is unidirectional: the
explanatory variables are the cause and the dependent variable is the effect. However, many
conceptual frameworks for understanding economic processes and institutions recognize that there
are feedback mechanisms operating between many of the economic variables; that is, one
economic variable affects another economic variable and is, in turn, affected by it (Gujarati, 2004,
p. 718). The existing economic system may then be described as a system of simultaneous
relations among random economic variables, and these relations involve current, future and past
values of some of the variables.
As shown in Figure 4, simultaneous equations11 models allow us to account for the
interrelationships within a set of variables.
Figure 4: Influence diagram for simultaneous equation model.
There are not many instances in which we can look at parts of the economy in isolation; therefore
models that recognise the simultaneous nature of economic variable determination, each a simplified
version of the data generation process, represent real-world situations more accurately (Judge, 1982, p. 600).
11 In simultaneous equations models there is recognition that the variables p and q are jointly determined. The random errors εd and εs
affect both p and q. Y is a fixed exogenous variable that affects the endogenous variables p and q.
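The joint determination of p and q described in footnote 11 can be illustrated with a deterministic demand-supply pair solved simultaneously; all coefficients are invented for the example:

```python
def solve_market(a, b, d, y, c, e):
    """Solve demand q = a - b*p + d*y and supply q = c + e*p simultaneously.

    Setting demand equal to supply: a - b*p + d*y = c + e*p
    => p = (a + d*y - c) / (b + e), then q follows from the supply schedule.
    Both p and q depend on the exogenous variable y; neither can be
    determined from its own equation alone.
    """
    p = (a + d * y - c) / (b + e)
    q = c + e * p
    return p, q

p, q = solve_market(a=10.0, b=2.0, d=0.5, y=10.0, c=1.0, e=1.0)
print(round(p, 4), round(q, 4))  # 4.6667 5.6667
```

A shift in the exogenous variable y moves both p and q at once, which is exactly the feedback that a single-equation regression of q on p would misattribute.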
Since the analysis of the economy becomes more difficult when there are numerous equations in the
model, small-scale models can explain the economy in a better way because 'it is much easier to see
the forest when the trees are fewer' (Bodkin and Marwah, 1988, p. 301).
Friedman (1953, p. 14) points out that 'simple models are easier to understand,
communicate and test empirically with the data'. However, Maddala (1992, p. 2) stresses that the
choice of a simple model to explain complex real-world phenomena may lead to oversimplification
and unrealistic assumptions. The particular role of the model should therefore be the distillation
of the most important elements and their inter-relationships in a precise and quantified manner, to
reveal the inner workings or design of a more complicated mechanism (Klein, 1983, p. 1).
2.1.2. Rationale for Keynesian model
The case for employing structural macroeconomic models to help with policy analysis and
forecasting rests on arguments for the abstraction and simplification of how the economy works by
using empirical equations, which are themselves based on a diversity of economic thinking (Kenway,
1994, p. 6).
As outlined earlier, there are two dominant strands that attempt to explain how the
economy operates. In the classical theory, monetary policy12 has no effect on the level of real
economic variables, including output, since all prices and nominal wages are assumed perfectly flexible
in both the short run and the long run owing to the neutrality of money. Therefore an increase in the money
stock will increase the price level proportionally.
In the Keynesian theory, it is assumed that the economy is not operating at full
employment (equilibrium), since machines are not fully utilized and some workers are
unemployed; therefore the supply of output can be increased without increasing inflation.
Moreover, Keynesians claim that prices do not adjust instantly owing to wage rigidity, menu costs and
sticky prices. Since adjustments take time, an increase in aggregate demand (generated by an
increase in the money supply or government spending) will not affect the price level in the short run.
Instead, it will lead to an increase in the level of output.
12 Monetarists, like the classicals, reject fiscal policy: government spending, financed by taxes or borrowing from the public,
results in a crowding-out of private expenditures with little, if any, net increase in total spending. However, monetarists claim that
a change in the money stock exerts a strong influence on total spending. Monetarists therefore conclude that actions of the monetary
authorities which 'result in the change of the money stock should be the main tool of economic stabilization' (Mankiw, 2011, p. 42).
The methodology applied in this dissertation is based on the Keynesian framework for the following
reasons: its longstanding popularity among policy makers, fairly simple calculations compared with
other approaches, straightforward inferences and plausible forecasting results, and lastly the
widespread consensus that prices fail to clear markets, at least in the short run.
The basic elements utilized in the Keynesian framework for the determination of national
income, measured by gross domestic product (GDP), and its components can then be defined in
terms of a prototype macro model (Intriligator et al., 1996, p. 430).
2.2 Building blocks
The prototype model disaggregates national income into only three components, two of which,
consumption C and investment I, are determined endogenously:
C = β0 + γ1Yt + ε1    (consumption function)
I = β1 + γ2Yt + ε2    (investment function)
Y = C + I + G    (national income equilibrium condition)
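Ignoring the error terms, the three prototype equations solve to the reduced form Y = (β0 + β1 + G) / (1 − γ1 − γ2), so the expenditure multiplier is 1 / (1 − γ1 − γ2). A sketch with invented coefficient values:

```python
def solve_prototype(beta0, gamma1, beta1, gamma2, g):
    """Reduced form of C = beta0 + gamma1*Y, I = beta1 + gamma2*Y, Y = C + I + G."""
    multiplier = 1.0 / (1.0 - gamma1 - gamma2)
    y = (beta0 + beta1 + g) * multiplier
    c = beta0 + gamma1 * y
    i = beta1 + gamma2 * y
    return y, c, i, multiplier

y, c, i, m = solve_prototype(beta0=10.0, gamma1=0.6, beta1=5.0, gamma2=0.2, g=20.0)
print(y, c, i, m)  # Y = C + I + G holds (up to float rounding)
```

With these illustrative values the multiplier is 5: a one-unit rise in G raises Y by five units, because induced consumption and investment amplify the initial injection.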
Models in the Keynesian fashion, however, disaggregate these two components further, and they also
include more equations and variables to account for certain factors not treated explicitly in the
prototype model.
Since the objective of this dissertation is to estimate a fairly small model, it is important to select
theoretically appropriate and statistically sound variables, so that there is a balance between
disaggregation and simplicity.
The underlying idea behind an analysis of aggregate demand in the Keynesian theoretical
framework (as determined by the IS/LM model) is that prices (and nominal wages) do not clear
markets in the short run owing to inertia in the setting of prices, especially when the economy
is operating below full capacity/full employment. A temporary increase in government spending
or the money supply affects the economy mainly through the government purchases multiplier. This,
in turn, increases investment at the initial level of the interest rate. Increasing aggregate demand
beyond the potential or full-employment level will lead to inflation. Keeping this basic
assumption in mind, we are able to construct our small model.
2.2.1. Consumption Function [CONS]
The two initial equations of the model, CONS and INV, describe the IS part. An adequate explanation of
consumers' behaviour is a key behavioural equation in the model, as consumption represents about two-
thirds of the UK's GDP.
The basic tenet, as outlined by Keynes (1936), is the positive relationship between
consumption and income: as income increases, so too does consumption, so the sign of the
coefficient on the variable INC should be positive. According to Keynes's absolute income hypothesis,
current consumption is a stable function of current income, and the marginal propensity to consume lies
in the range 0 < mpc < 1 and decreases as income increases.
Friedman (1957, p. 23), however, points out that people do not change their consumption
habits immediately following a change in their income because of the force of habit (inertia).
Moreover, people may not know whether a change is permanent or transitory. Therefore,
Friedman suggests that the permanent income hypothesis may be approximated by an adaptive
expectations process, whereby permanent income is a weighted sum of current and past
values of observed income13:

YPt = λYt + λ(1 − λ)Yt−1 + λ(1 − λ)²Yt−2 + … + λ(1 − λ)ᵏYt−k + …    (7)
Equation 7, based on geometric convergence, can, however, be replaced for simplicity by the
lagged dependent variable CONS1 (consumption lagged by one quarter), making the consumption
function dynamic. This avoids two problems with ad hoc distributed-lag equations: the degrees of
freedom increase and the multicollinearity problem disappears (Studenmund, 2011, p. 409). A
positive sign is expected, as the last period's consumption should have a positive effect on
current consumption.
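As a quick numerical check of this specification, the sketch below simulates a consumption series with a lagged dependent variable and recovers the coefficients by OLS. It is a minimal illustration with invented parameter values (alpha, beta, gamma), not estimates for the UK data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate quarterly income and a consumption series with a lagged
# dependent variable (Koyck form); parameter values are invented.
n = 400
inc = 100 + np.cumsum(rng.normal(0.5, 1.0, n))   # trending income series
alpha, beta, gamma = 5.0, 0.4, 0.5               # "true" parameters
cons = np.empty(n)
cons[0] = (alpha + beta * inc[0]) / (1 - gamma)  # start near steady state
for t in range(1, n):
    cons[t] = alpha + beta * inc[t] + gamma * cons[t - 1] + rng.normal(0, 0.5)

# OLS of CONS on a constant, current income, and lagged consumption
X = np.column_stack([np.ones(n - 1), inc[1:], cons[:-1]])
b, *_ = np.linalg.lstsq(X, cons[1:], rcond=None)
a_hat, b_hat, g_hat = b

# Short-run mpc is b_hat; the long-run mpc is b_hat / (1 - g_hat)
print(b_hat, b_hat / (1 - g_hat))
```

The long-run mpc, b_hat / (1 − g_hat), exceeds the short-run coefficient, which is exactly the habit-persistence story the lagged dependent variable encodes.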
13 Y_t^P, permanent income, is an inherently nonmeasurable variable, whereas transitory income is an observed income (Venieris and Sebold, 1977, p. 381).
2.2.2 Investment Function [INV]
Investment INV is a smaller component of income than consumption, but it is more volatile and so is
important in the analysis as a source of short-term fluctuations in GDP. Investment can be
described as the accumulation over time by firms of real capital goods (Levacic and Rebmann,
1982, p. 229).
The basic motive for investment carried out by firms is to make a profit. Decisions about
undertaking investment depend on the state of the economy and on the opportunity cost of
accumulating capital, which is present consumption foregone.
The required rate of return in the Keynesian framework is the marginal efficiency of capital
(MEC); this `is the discount rate applied to the stream of returns on capital, equating the present
value of those returns to the supply price of capital` (Venieris and Sebold, 1977, p. 406). According
to the Keynesian approach, the MEC can then be compared to the market rate of interest so that firms
can decide whether to purchase capital goods or defer the purchase. If the MEC exceeds the
market rate of interest, the firm should buy the capital stock; if the MEC is less than the market
rate, the firm should forgo the purchase.
To account for this assumption, a variable INT8 was included in the investment equation, with
the expectation of a negative sign. This variable is lagged by eight quarters, because it takes time
to plan and start up a project.
`Since investment is an injection into the circular flow of income, these changes will cause
multiplied changes in income` (Sloman and Wride, 2009, p. 496). Because a relatively modest
change in income can cause a much larger change in investment, the accelerator14 variable CINC1
was included. Moreover, the multiplier variable INC was added under the assumption that investment also
depends on the current level of GDP. The rationale behind combining the
accelerator and the multiplier is that, for example, a rise in government expenditure will lead to a
multiplied rise in income. This rise in GDP will cause an accelerator effect: firms will respond to
the rise in consumer demand by investing more, and this will further increase income. If this rise in
income is larger than the first, there will again be a rise in investment, which in turn will
increase income (the multiplier). Both CINC1 and INC should have positive coefficients, as
increases in GDP have a stimulating effect on investment.
14 Clark (1917) specifies the accelerator principle in terms of potential aggregate production Y_p as a function of existing
capital (K) and labour (N). Assuming K_t = βY_t and I_t = K_t − K_{t−1}, then I_t = K_t − K_{t−1} = β(Y_t − Y_{t−1}); that is, a change in output has an
impact on the level of investment.
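The accelerator principle in the footnote can be illustrated in a few lines; the capital-output ratio beta and the output path below are invented numbers, not data from the model.

```python
import numpy as np

# Clark's accelerator: desired capital K_t = beta * Y_t, so net
# investment I_t = K_t - K_{t-1} = beta * (Y_t - Y_{t-1}).
beta = 2.5                                         # assumed capital-output ratio
Y = np.array([100.0, 104.0, 110.0, 111.0, 109.0])  # illustrative output path
K = beta * Y
I = np.diff(K)                                     # net investment each period

# Investment responds to the CHANGE in output, amplified by beta
print(I)   # equals beta * diff(Y)
```

Note how a small dip in output (111 to 109) flips investment negative, which is why investment is the more volatile component of GDP.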
2.2.3. Interest Rate Function [INT]
The [INT] equation represents the monetary sector of the model, hence the LM part. The short-term
interest rate [INT] is modelled in the standard money demand tradition, that is: at any given level of
GDP there will be a particular transactions and precautionary demand for money. If we assume that
the Bank of England does have some power over controlling the money supply, its actions will have
an effect on the level of short-term interest rates and inflation. This was explicitly attempted in
the UK in the 1980s under the banner of the `Medium Term Financial Strategy`. Therefore, the variable
CMS is included under the assumption that a decrease in the money supply will increase the interest
rate. This is because the `demand for money decreases when the real short-term interest rate rises, as the
opportunity cost of holding money increases` (Pindyck and Rubinfeld, 1999, p. 447).
There is also empirical evidence for the gradual adjustments of interest rate by central banks.
Coibion and Gorodnichenko (2011, p. 26) provide evidence supporting the notion that `inertia in
monetary policy action has indeed been a fundamental and deliberate component of the decision-
making process by monetary policymakers`; more specifically, their evidence `strongly favours
interest rate smoothing over serially correlated policy shocks as an explanation of highly persistent
policy rates.` To account for this observation, the variable CINT1 was included to capture the
changes between the lagged interest rates.
Moreover, an increase in GDP will lead to a greater demand for money and hence to higher
interest rates if equilibrium is to be maintained, so the variable INC is included in the equation. In
addition, CINC was included, so the emphasis is not only on the level of GDP but also on whether
this level is changing. The responsiveness of the demand for money to changes in national
income will depend on the size of the mpc, which is derived from the consumption function and
hence allows for a feedback effect.
Since the mid-1990s there has been a widely accepted assumption that the BofE changed its
reaction function from controlling the money supply to controlling the interest rate in order to maintain low
and stable inflation.
Howells and Bain (2009, p. 14) stress that the transmission mechanism of monetary policy (see
Figure 5) sees the short-term interest rate as the policy instrument, rather than explicit control of the
money supply, for achieving the desired outcome.
Figure 5: Transmission mechanism of monetary policy (Source: BofE)
Changing the policy, however, poses a problem for the model because of the Lucas critique.
Lucas (1979) criticised the Cowles Commission approach on the grounds that, when the Bank of
England introduced the inflation-targeting policy in the early 1990s, that change replaced the
reaction function (10) with a new one which treats the money supply as an endogenous variable.
Therefore, with the new reaction function, the parameters of all other equations reflect choices
that were made prior to the policy change. Under the new policy rule the parameters could be
significantly different in each equation, causing inaccurate forecasts (Webb, 1999, p. 27). Lucas
builds his hypothesis on the assumption that rational (forward-looking) agents will change their
decisions when faced with a policy change, or the anticipation of one.
One way to address this problem, at least in part, is to determine the direction of causality between MS
and INT in our sample period. To identify the direction of causality, which can then
help to decide whether the money supply is an endogenous or exogenous variable – that is, whether a
change in the money supply causes a change in interest rates or vice versa – Granger's causality test (see
Appendix A: A.1) was conducted. According to this test, the p-value for the money supply (0.69)
exceeds the 0.05 significance level, suggesting that the money supply does not cause the interest rate.
The p-value for the interest rate (0.31) also exceeds the 0.05 significance level, suggesting that the
interest rate does not cause the money supply. From the Granger causality test it appears that both
variables are jointly determined, with slightly stronger evidence for the interest rate being
the cause, as its value is closer to the rejection region. For consistency with the IS/LM approach, we
will thus model the money supply as an exogenous variable.
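A Granger causality test of the kind reported above can be sketched as a restricted-versus-unrestricted OLS comparison. The data below are simulated so that x Granger-causes y by construction; the lag length p = 4 and all numbers are illustrative, not the dissertation's MS and INT series.

```python
import numpy as np

def granger_f(y, x, p=4):
    """F-statistic for the joint restriction that all coefficients on
    lagged x are zero in a regression of y on its own lags."""
    n = len(y)
    rows = n - p
    Y = y[p:]
    ylags = np.column_stack([y[p - i:n - i] for i in range(1, p + 1)])
    xlags = np.column_stack([x[p - i:n - i] for i in range(1, p + 1)])
    const = np.ones((rows, 1))
    Xr = np.hstack([const, ylags])           # restricted: own lags only
    Xu = np.hstack([const, ylags, xlags])    # unrestricted: add x lags

    def rss(X):
        b, *_ = np.linalg.lstsq(X, Y, rcond=None)
        r = Y - X @ b
        return r @ r

    rss_r, rss_u = rss(Xr), rss(Xu)
    return ((rss_r - rss_u) / p) / (rss_u / (rows - Xu.shape[1]))

rng = np.random.default_rng(1)
x = rng.normal(size=300)                 # x causes y; y does not cause x
y = np.zeros(300)
for t in range(1, 300):
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + rng.normal()

print(granger_f(y, x), granger_f(x, y))  # first F large, second near 1
```

A large F in one direction only would support one-way causality; in the dissertation's data neither direction is significant, hence the joint-determination reading.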
2.2.4. Inflation Function [INF]
The last equation in our small macroeconomic model describes inflation [INF] as driven by the deviation of
output from its long-run equilibrium. This assumption is based on the accelerationist view whereby
output cannot be raised permanently beyond its potential without creating
inflationary pressure. This is expressed as a relationship between the rate of inflation and, rather
than unemployment (as postulated by Phillips), the output gap15 – the gap between existing
output [INC] and potential [POTY], or full-employment, output (Howells and Bain, 2009, p. 155). To
improve the inflation function, an adaptive-expectations variable [INF1] was
incorporated, which takes into account workers' estimates of the rate of inflation. The
resulting expectations-augmented Phillips curve, as postulated by Friedman (1959), in
output/inflation space, assumes backward-looking expectations, since past errors are built into
future forecasts.
The size of the coefficient depends on the degree of money illusion: β = 1 means that workers
base their expectations on the true real wage rate, while 0 < β < 1 indicates that workers
make incorrect assumptions about the true rate of inflation in the wage-bargaining process.
Proponents of the rational expectations hypothesis argue, however, that economic agents efficiently
apply all relevant knowledge to the best available model in order to predict future values of
economic variables, rather than just past information (Howells and Bain, 2009, p. 242). Howells
and Bain also point out that rational inflation expectations do not allow workers to adjust their
labour contracts immediately, because these contracts run for a fixed period, causing wage stickiness.
Chow (2011) compared adaptive and rational expectations and concluded that there is
insufficient empirical evidence supporting rational expectations. Chow argues that adaptive
expectations provide a better proxy for psychological expectations as required in the study of
economic behaviour. Millet (2007, p. 12) tested rational expectations directly, using survey data,
and indirectly, by implication, and concluded that the Lucas critique has limited relevance in
key empirical applications, although it is appropriate when dealing with breaks in series.
15 This is based on Okun's law, which states that growth is negatively related to the change in the rate of
unemployment. It is formally expressed as: deviations of income from its potential level, Y − Y*, are proportional to the
difference between actual and full employment, β(u* − u).
Millet acknowledges the importance of the adaptive behaviour on the part of agents `emphasizing
the insight for monetary policy that imply… an eventual sensitivity to regime changes but no
drastic or immediate response as a rule – not even to important innovations to the monetary
policymaking process, such as the introduction of inflation targeting` (Millet, 2007, p. 22).
Eckstein (1983, p. 50) examined the past record of the DRI Model and its predictions of changes in
policy regimes and concludes: `so far, the evidence suggests that changes of expectations
formation are not among the principal causes of simulation error; that forecast error is largely
created by other exogenous factors and the stochastic character of the economy`.
Accounting identity
Finally, the model is completed with the addition of a real national expenditure (gross domestic
product) accounting identity. It defines real GDP16 [INC] as the sum of consumer spending [CONS],
investment spending [INV], and government spending [GOV].
16 ONS-published real GDP is already calculated net of imports and exports.
CHAPTER 3 Structural Modelling
Complete model:
CONS_t = α1 + β2INC_t + β3CONS_{t−1} + ε_{1t}    (8)
INV_t = α4 + β5(INC_{t−1} − INC_{t−2}) + β6INC_t − β7INT_{t−8} + ε_{2t}    (9)
INT_t = α8 + β9INC_t + β10(INC_t − INC_{t−1}) − β11(MS_t − MS_{t−1}) + β12(INT_{t−1} − INT_{t−2}) + ε_{3t}    (10)
INF_t = α13 + INF_{t−1} + β14(INC_t − POTY_t) + ε_{4t}    (11)
INC_t ≡ CONS_t + INV_t + GOV_t    (12)
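To see how the system (8)-(12) behaves as a dynamic unit, the sketch below iterates the five equations forward with purely illustrative coefficients; none of these numbers are the estimates reported later, and lagged income is used in the CONS and INV equations to sidestep the simultaneity for this illustration.

```python
import numpy as np

# Dynamic simulation of the five-equation system with INVENTED
# coefficients and exogenous paths -- a sketch, not the estimated model.
T = 20
GOV = np.full(T, 80.0)           # exogenous government spending
MS = np.full(T, 50.0)            # exogenous money supply
POTY = np.linspace(300, 320, T)  # assumed potential output path

CONS = np.zeros(T); INV = np.zeros(T); INT = np.zeros(T)
INF = np.zeros(T); INC = np.zeros(T)
CONS[:2] = 200; INV[:2] = 40; INT[:2] = 5; INF[:2] = 2; INC[:2] = 320

for t in range(2, T):
    # (8)-(9) with lagged income to avoid solving simultaneously here
    CONS[t] = 10 + 0.3 * INC[t - 1] + 0.55 * CONS[t - 1]
    INV[t] = (5 + 0.2 * (INC[t - 1] - INC[t - 2]) + 0.1 * INC[t - 1]
              - 0.5 * INT[max(t - 8, 0)])
    INC[t] = CONS[t] + INV[t] + GOV[t]                    # identity (12)
    INT[t] = (-10 + 0.05 * INC[t] + 0.1 * (INC[t] - INC[t - 1])
              - 0.05 * (MS[t] - MS[t - 1])
              + 0.5 * (INT[t - 1] - INT[t - 2]))          # (10)
    INF[t] = INF[t - 1] + 0.02 * (INC[t] - POTY[t])       # (11), unit INF1

print(INC[-1], INF[-1])
```

Even this toy version reproduces the qualitative story: income converges toward a multiplier-determined level, and a persistent positive output gap feeds steadily into inflation through (11).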
3.1. Modelling Methodology
The complete model therefore consists of four behavioural equations (8–11) and one identity
equation (12), which specifies additional variables in the system and their accounting relations with
the variables in the behavioural equations. As Table 4 summarises, there are five endogenous
variables (CONS, INV, INT, INF and INC) and eight predetermined17 (exogenous) variables (CONS1,
CINC1, INT8, CINC, CINT1, CMS, OTG, and GOV).
17 We can define endogenous variables as those that are jointly determined in the system in the current period.
Predetermined variables are independent (exogenous) variables plus any lagged endogenous variables that appear in the
model. Strictly speaking, the only exogenous variables in the model are MS, GOV and OTG, because they are not simultaneously
determined within the model.
Table 4: Summary of the variables used in the model

Name | Definition | Type
CONS | Real aggregate personal consumption | endogenous
CONS1 | Consumption lagged by one quarter | exogenous
INV | Real investment, expressed as gross capital formation | endogenous
INC | Real total income q/q (GDP) | endogenous
CINC1 | GDP lagged by one quarter minus GDP lagged by two quarters | exogenous
CINC | Current GDP minus last quarter's GDP | exogenous
INT | Interest rate on 3-month Treasury bills | endogenous
CINT1 | INT lagged by one quarter minus INT lagged by two quarters | exogenous
INT8 | Interest rate on 3-month Treasury bills lagged by eight quarters | exogenous
INF1 | Inflation lagged by one quarter | exogenous
INF | Inflation, expressed as the growth rate of the retail price index | endogenous
CMS | Real money stock, narrowly defined (M0), minus last quarter's M0 | exogenous
OTG | GDP minus current potential GDP | exogenous
POTY | Potential output (full-employment GDP) | exogenous
GOV | Real government expenditure | exogenous
Figure 6 describes the causal flows between variables. There is a circular causal flow between
GDP and consumption and investment: consumption and investment are in part determined by
GDP, but they are also components of GDP. The interest rate and inflation are simultaneously
determined with GDP, that is, when we follow the change of one of these variables through the
system, the change eventually returns to the original causal variable; for inflation, however, there is no circular feedback loop.
Figure 6: Block diagram of the five-equation model (endogenous blocks: Consumption [CONS], Investment [INV], GDP [INC], Interest Rate [INT], Inflation [INF]; exogenous inputs: GOV and MS; lagged linkages marked on the arrows)
3.1.1 Order condition of identification
Prior to the estimation of the model, identification needs to be carried out. A structural
equation is identified only when enough of the system's predetermined variables are excluded
from it to `allow us to use the observed equilibrium points to distinguish the shape of
the equation in question` (Studenmund, 2011, pp. 478-481). The general method for
determining whether equations are identified is the order condition of identification, which states that
the number of predetermined variables excluded from the equation must be greater than or equal
to the number of included endogenous variables minus one (Pindyck and Rubinfeld, 1999, p. 345).
Table 5: The order condition of identification

Equation | CONS | CONS1 | INV | INC | CINC1 | CINC | CINT1 | INT | INT8 | INF | INF1 | CMS | OTG | GOV | Result
1 | 1 | 1 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | overidentified
2 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | overidentified
3 | 0 | 0 | 0 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | overidentified
4 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 1 | 0 | overidentified
5 | 1 | 0 | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | –
Table 5 shows that all four behavioural equations meet the criterion for further estimation (identity (12) does not need
to be checked); each is overidentified, because more than one value would be obtainable for some parameters.
Applying OLS directly to structural models may lead to simultaneity bias if one or more of the
explanatory variables are endogenous and therefore correlated with the error term. This
may be the case for the variable INC, because it appears as endogenous in (12) and as an explanatory variable in
(8)-(11). As a result, OLS-estimated structural coefficients may be inconsistent and inefficient.
An alternative to OLS, therefore, is Two-Stage Least Squares (2SLS)
estimation, especially when structural parameters are overidentified.
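The order-condition bookkeeping in Table 5 can be automated. The sketch below counts excluded predetermined variables against included endogenous variables for equations (8)-(11), using the variable sets as transcribed from the model specification above.

```python
# Order condition: an equation is identified when the number of
# predetermined variables excluded from it is at least the number of
# included endogenous variables minus one.
ENDOG = {"CONS", "INV", "INT", "INF", "INC"}
PREDET = {"CONS1", "CINC1", "CINC", "CINT1", "INT8", "INF1", "CMS", "OTG", "GOV"}

# Variables appearing in each behavioural equation (dependent + RHS),
# transcribed from equations (8)-(11)
equations = {
    "CONS": {"CONS", "INC", "CONS1"},
    "INV":  {"INV", "INC", "CINC1", "INT8"},
    "INT":  {"INT", "INC", "CINC", "CMS", "CINT1"},
    "INF":  {"INF", "INF1", "OTG"},
}

results = {}
for name, included in equations.items():
    excluded = len(PREDET - included)    # predetermined variables left out
    endog_in = len(included & ENDOG)     # endogenous variables included
    if excluded > endog_in - 1:
        results[name] = "overidentified"
    elif excluded == endog_in - 1:
        results[name] = "exactly identified"
    else:
        results[name] = "underidentified"

print(results)   # every behavioural equation is overidentified
```

This reproduces the verdict in Table 5: every behavioural equation excludes far more predetermined variables than it includes endogenous ones, so all four are overidentified.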
3.1.2 Hausman test
To test whether a simultaneity problem exists, the Hausman test is a widely used method.
The hypotheses are:
H0: the efficient estimator (OLS) is consistent (prefer OLS)
Ha: the efficient estimator is not consistent (prefer 2SLS)
The rationale for the Hausman test is whether or not the difference between the two estimators is
statistically significant. According to the test (see Appendix A: A3), the p-value is 0.0406 < 0.05, so
we reject H0 at the 5% significance level. Hausman's test confirms that simultaneity is present; i.e.
OLS is inconsistent, while 2SLS, which uses instrumental variables, will be both consistent and
efficient.
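The Hausman idea can be illustrated in its auxiliary-regression (control-function) form on simulated data: regress the suspect regressor on the instruments, then test whether its residual is significant in the structural equation. All names and numbers below are illustrative, not the dissertation's variables.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated endogeneity: x is built to be correlated with the
# structural error u, and z is a valid instrument.
n = 500
z = rng.normal(size=n)                        # instrument
u = rng.normal(size=n)                        # structural error
x = 0.8 * z + 0.6 * u + rng.normal(size=n)    # endogenous regressor
y = 1.0 + 2.0 * x + u

def ols(X, y):
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    return b, y - X @ b

# Stage 1: regress x on the instruments; keep the residual v_hat
Z = np.column_stack([np.ones(n), z])
_, v_hat = ols(Z, x)

# Auxiliary regression: y on x AND v_hat; a significant coefficient
# on v_hat signals endogeneity (prefer 2SLS over OLS)
X = np.column_stack([np.ones(n), x, v_hat])
b, resid = ols(X, y)
s2 = resid @ resid / (n - X.shape[1])
se = np.sqrt(s2 * np.linalg.inv(X.T @ X).diagonal())
t_vhat = b[2] / se[2]
print(t_vhat)   # large |t| -> reject H0, simultaneity present
```

A large |t| on the residual plays the role of the small p-value (0.0406) reported above; as a by-product, the coefficient on x in this augmented regression is already purged of the bias.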
3.1.3 Structural model estimation
To conduct the 2SLS estimation, the STATA statistical package was used. The 2SLS process18 can be broken down
into two stages, described as follows:
In the first stage, OLS is applied to the reduced-form equation for each endogenous
variable in the system. This is accomplished by regressing the endogenous variables on all the
predetermined variables in the system. There is, however, no need to estimate the reduced form of the
inflation equation, since the variable INF is not used as an explanatory variable elsewhere in the system.
Strictly speaking, there is no need to estimate the reduced-form interest rate equation either,
because the interest rate appears in the investment function only as INT8, so there is no need to worry about
inconsistency in the OLS estimates (Pindyck and Rubinfeld, 1999, p. 396).
First stage – reduced form:
CONŜ = π̂0 + π̂1GOV + π̂2CONS1 + π̂3CINC1 + π̂4INT8 + π̂5CINC + π̂6CMS + π̂7CINT1 + π̂8OTG + π̂9INF1 + π̂10MS
INV̂ = π̂11 + π̂12GOV + π̂13CONS1 + π̂14CINC1 + π̂15INT8 + π̂16CINC + π̂17CMS + π̂18CINT1 + π̂19OTG + π̂20INF1 + π̂21MS
INĈ = π̂22 + π̂23GOV + π̂24CONS1 + π̂25CINC1 + π̂26INT8 + π̂27CINC + π̂28CMS + π̂29CINT1 + π̂30OTG + π̂31INF1 + π̂32MS
Thus, by using the first stage, we constructed variables which are linearly related to the
predetermined model variables and at the same time uncorrelated with the reduced-form error
term. The only important information from this stage is the coefficient of determination (R2). As
Appendix A, Table 9 shows, the fairly high R2 in all individual equations suggests a high correlation of the
instruments with the endogenous variables.
18
The term `process` means time series – it emphasizes the dependence of the present value of the series on its
values in prior periods.
In the second stage, the endogenous variables which appear on the right-hand side (only) of the
structural equations are replaced with the first-stage fitted (instrumental) variables. Then
unrestricted estimation is conducted by applying OLS to each structural equation with the fitted values
substituted in. By constructing 2SLS in this way, we obtain consistent estimators of the coefficients on the endogenous
and predetermined variables.
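The two stages just described can be reproduced by hand for a single equation on simulated data; the instruments z1, z2 and all coefficients are invented for illustration, not drawn from the model.

```python
import numpy as np

rng = np.random.default_rng(3)

# Two-stage least squares by hand on a single simulated equation:
# x is endogenous (correlated with u); z1, z2 are instruments.
n = 1000
z1, z2 = rng.normal(size=n), rng.normal(size=n)
u = rng.normal(size=n)
x = 0.7 * z1 + 0.5 * z2 + 0.5 * u + rng.normal(size=n)
y = 1.0 + 2.0 * x + u                     # true slope is 2.0

Z = np.column_stack([np.ones(n), z1, z2])

# Stage 1: fitted values of x from the reduced form
pi, *_ = np.linalg.lstsq(Z, x, rcond=None)
x_hat = Z @ pi

# Stage 2: OLS of y on the fitted values
X2 = np.column_stack([np.ones(n), x_hat])
beta_2sls, *_ = np.linalg.lstsq(X2, y, rcond=None)

# Naive OLS for comparison -- biased by the endogeneity
X = np.column_stack([np.ones(n), x])
beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta_ols[1], beta_2sls[1])   # OLS biased upward, 2SLS near 2.0
```

Note that the standard errors from a literal second-stage OLS are wrong (they treat x_hat as data); packaged routines such as STATA's compute them correctly, which is one reason the dissertation uses the package rather than the manual two-step.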
3.2 Analysis of the structural model results
Owing to the lack of extended historical data on potential output, the 2SLS estimation was
restricted to the period 1980q1-2007q1. The choice of a quarter as the base time unit19
emphasizes the short-run movements of the economic system. Moreover, a quarterly model may be
analytically more useful than an annual model, and the results are more robust because of the
increased number of observations. The evaluation criteria for simultaneous models are more
challenging than those for single-equation estimates, because the model as a whole has a much richer
dynamic structure than any individual equation. Although there are no formal statistical tests for
2SLS-estimated equations, single-equation test statistics may serve as a good indication of
potential problems. It is evident from Table 6 that there is a serious problem with serial
correlation (SC) in equations INV and INT, and with heteroskedasticity (HT) in equations INV and INF.
Table 6: SC and HT tests of individual equations – OLS estimation

Equation | Breusch-Godfrey (autocorrelation): χ2, p-value, pass/fail | Breusch-Pagan (heteroskedasticity): χ2, p-value, pass/fail | Ramsey's RESET (functional form): F, p-value, pass/fail
CONS | 5.058, 0.025, fail | 3.591, 0.166, pass | 6.19, 0.000, fail
INV | 63.76, 0.000, fail | 12.03, 0.007, fail | 3.66, 0.007, fail
INT | 56.02, 0.000, fail | 7.930, 0.094, pass | 1.56, 0.116, pass
INF | 3.722, 0.050, pass | 174.56, 0.000, fail | 5.43, 0.000, fail

19 Quarterly data also have some drawbacks: seasonal effects, a greater degree of serial correlation than in annual series, and the determination of the appropriate lag structure.
Failure of the SC test points to the fact that the standard errors of the coefficients are biased.
Moreover, the dynamic nature of the model will also bias the coefficients. This bias should be
reduced when 2SLS is employed, producing coefficients closer to their true values, although serial
correlation will persist even after 2SLS estimation.
Heteroskedasticity will not be corrected by 2SLS either, so t-scores and hypothesis
tests may be unreliable because the standard errors of the coefficients are biased. Ramsey's test for
correct functional form is rejected in all equations but INT, suggesting that the relationship
between some of the variables is nonlinear20; this is the case at least with the variable MS, which
appears to resemble an exponential trend (Baum, 2006, p. 124). Mariscal (2012a) stresses that most
economic variables are non-stationary and that modelling such variables may produce spurious
results. As Appendix A: A7 shows, this is the case for all the variables but INT. It can also be seen in the
fairly high R2 across the equations and the failed tests for homoscedasticity. The investment function is
therefore likely to be affected, which may partly explain the small and wrongly signed21 coefficients on the
variables INC and CINC. The consumption function is reasonably cointegrated, thus offsetting some of the negative
effects of non-stationarity. The inflation function contributes the least to the whole model, although the t-ratios,
size and sign of OTG and INF1 are correct. The correlation matrix reveals (see Appendix A:
A6.2) that there is serious collinearity between some of the variables, but owing to the simultaneous
nature of the model this is not necessarily relevant.
Comparing the size of the coefficients in Table 7 between OLS and 2SLS, the only notable
changes are in the variables INC, CINC and CINC1. This was expected, as GDP is the leading variable, and
so its reduced form has a bigger impact on the other variables in the model than the other reduced-form
endogenous variables when 2SLS is employed. The significance of the variables improved after 2SLS,
although INT8 in the investment function remains insignificant. All the remaining variables in
the model are significant at the 95% confidence level. R2 slightly decreased; however, Pokorny (1989, p.
309) stresses that it is meaningless to judge the success of 2SLS on the basis of R2, because `this method
makes no reference to it; in fact it is in conflict with the criterion of consistency`.
20 Improved results may be achieved by using a log/log or semi-log functional form. Moreover, changing to annualized data may
improve some of the statistics.
21 All the remaining coefficients in the other equations have correct signs and reasonable sizes.
3.3 Ex-post forecasting
To assess the robustness of the structural model's forecasts, turning points at times of large
exogenous shocks to the economy may serve as a good benchmark. An ideal example of such an
event is the recent 2007-08 recession. The magnitude and speed with which GDP collapsed are
unprecedented, so the model will be exposed to a great challenge.
To get a better perspective on the likely validity of the endogenous
variables determined by the simulation23 solution compared with the actual values, a historical simulation was
conducted. As Figure 7 shows, there has been a fairly close relationship between fitted and actual
values, which broke down temporarily in the 1990s. Since the 2000s there has been increasing under-
prediction, which suggests a structural break in some of the variables in relation to GDP.
Figure 7: Historical simulation, GDP 1980q1-2007q1
To capture whole-business-cycle turning points of GDP, while keeping in mind the short-
term character of the model, an ex-post forecast was employed for 2007q1 – 2009q1.
This means performing the forecast at the end of the estimation period and then comparing it with the
available data, which enables us to test the forecasting accuracy of the model. Summary statistics of
the ex-post one-step forecast are shown in Table 8.
23
Refers to the mathematical solution of a simultaneous set of difference equations, that is, the current value of one variable relates to
past values of other variables.
Table 8: Ex-post forecast based on 2sls regression
Observation Actual Prediction Error Error (%) S.D. of Error t- ratio24
2007q1 383980 393237.2 -9257.2 -2.41 0.0012 -1.929
2007q2 389661 395476.4 -5815.4 -1.49 0.0012 -1.193
2007q3 394031 398551.4 -4520.4 -1.15 0.0012 -0.921
2007q4 402523 401183.4 1339.6 0.33 0.0012 0.264
2008q1 406124 406566.6 -442.6 -0.11 0.0012 -0.088
2008q2 396921 403762.1 -6841.1 -1.72 0.0012 -1.377
2008q3 391272 396367.1 -5095.1 -1.30 0.0012 -1.041
2008q4 377355 395764.1 -18409.1 -4.88 0.0012 -3.907
2009q1 370764 386480.7 -15716.7 -4.24 0.0012 -3.394
Based on 9 observations from 2007q1 to 2009q1
Mean Prediction Errors (MPE, %) -1.591
Mean Absolute Prediction Error (MAPE, %) 1.959
Root Mean Sum Squares Predictive Errors (RMSPE, %) 2.492
The MPE shows that the model over-predicted GDP on average by 1.591%, with only one under-
prediction, in 2007q4. Persistent over-prediction (negative bias) suggests the presence of serial
correlation; a clear pattern can also be seen in the plot of residuals (see Appendix A, Figure 1). A
shortcoming of the MPE is that positive and negative errors can offset each other, leading to
unwarranted conclusions (Flegg, 2012a). To overcome this problem the RMSPE was also calculated (2.492%);
it measures the deviation of the simulated variable from its actual time path (Pindyck and Rubinfeld,
1998, p. 210). Pindyck and Rubinfeld argue that the magnitude of the errors can be evaluated
only by comparing them with the average size of the variable; on that basis, the calculated errors are
fairly small compared to the actual values.
24 The t-ratio was calculated by dividing the prediction error by the S.D. of the error.
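The summary measures in Table 8 follow standard definitions; a small sketch, checked on a toy two-observation example rather than the table's data:

```python
import numpy as np

def forecast_metrics(actual, predicted):
    """Percentage-error summaries used to judge an ex-post forecast."""
    actual = np.asarray(actual, float)
    predicted = np.asarray(predicted, float)
    pct = 100 * (actual - predicted) / actual   # negative = over-prediction
    return {
        "MPE": pct.mean(),                      # signed average (bias)
        "MAPE": np.abs(pct).mean(),             # absolute accuracy
        "RMSPE": np.sqrt((pct ** 2).mean()),    # penalises large errors
    }

m = forecast_metrics([100, 200], [110, 190])
print(m)  # MPE = -2.5, MAPE = 7.5, RMSPE ~ 7.906
```

The toy numbers make the MPE's weakness concrete: the two errors largely cancel in the MPE while the MAPE and RMSPE both expose their true size, and the RMSPE exceeds the MAPE exactly because it squares the larger error.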
The calculated t-ratios of the errors (taking |t| > 1.96 as indicating a significant error) show
significant errors in 2008q4 and 2009q1, while the error in 2007q1 is on the borderline of significance.
The MAPE, which measures the absolute accuracy of the fitted model, is slightly better (1.959%) than the RMSPE,
because it does not give extra weight to the relatively larger errors in 2008q4 and 2009q1. Figure 8 shows
that the model predicts turning points fairly well, particularly in the exact turning-point quarters 2008q1-2008q2.
This might be because the errors, exhibiting negative bias, did not fully reflect the sudden change in direction.
However, as the time horizon increases, forecasting accuracy somewhat decreases, again showing
over-prediction. Overall, the magnitude of the forecasting errors reflects the small-scale properties of the
model, owing to its limited scope and simplified equation specifications.
Figure 8: Ex-post forecast based on 2sls regression
CHAPTER 4 Non-causal model
4.1 Introduction
The structural econometric model discussed thus far relies on causality25 and economic theory to
capture the underlying structure of the economy. A causal model described through interactions
between several interrelated markets is a step closer to the real world than a single equation,
which assumes only weak exogeneity (one-way causality). Therefore, the strong exogeneity used in the model,
which accounts for feedback through the lagged endogenous variables appearing on the right-hand
side, could be used to generate more accurate one-step forecasts (Brown, 1991, p. 338). Brown,
however, argues that even strong exogeneity may not be a sufficient assumption in the light of
changes in expectations and the resulting consequences for a model, as outlined earlier mainly by
Lucas (1979). Moreover, `because the structure of the model is assumed a priori and only on a
subset of … causal factors …, causality will depend on the specific model` (Brown, 1991, p.
337). In addition, zero restrictions on variables that do not comply with the underlying
assumptions can cause the model to omit potentially important variables, as pointed out by Liu (1963).
Autoregressive linear stochastic dynamic models do not offer a structural explanation of a variable's
behaviour in terms of other variables; instead they model its past behaviour, and thus provide a viable
alternative. These time series models, which assume a random process generated the data, are
therefore explained not by cause-and-effect relationships but rather `in terms of how
randomness is embodied in the process` (Pindyck and Rubinfeld, 1999, p. 489).
There are a number of techniques now used by modellers that utilize time series models. This
dissertation, though, will concentrate on the autoregressive moving average (ARMA) model.
This choice is motivated by the fact that the ARMA model offers a powerful and
efficient means of generating short-term forecasts and is a widely accepted alternative
(benchmark) to structural models (Pokorny, 1987, p. 341).
25
Brown (1991, p. 338) stresses that statistics cannot prove causality; rather, causality must be assumed in regression analysis.
4.2 Notation of ARMA model
The ARMA model is a combination of an autoregressive (AR) model and a moving average (MA) model.
Let y_t represent26 GDP at time t:

AR(1): y_t = δ + φ_1 y_{t−1} + ε_t

where δ is a constant and ε_t is an uncorrelated random error, ε_t ~ N(0, σ²); y_t thus follows a first-order autoregressive AR(1) stochastic process.

For a stationary AR(1) process, the mean μ is invariant with respect to time:

μ = δ / (1 − φ_1)

The variance γ_0 is constant for |φ_1| < 1 (taking δ = 0):

γ_0 = E[(φ_1 y_{t−1} + ε_t)²] = σ_ε² / (1 − φ_1²)

and the covariance γ_1 has the same constancy property:

γ_1 = E[y_{t−1}(φ_1 y_{t−1} + ε_t)] = φ_1 γ_0 = φ_1 σ_ε² / (1 − φ_1²)
The pth-order autoregressive process AR(p) can then be expressed as

AR(p): y_t = δ + φ_1 y_{t−1} + φ_2 y_{t−2} + ⋯ + φ_p y_{t−p} + ε_t,  ε_t ~ WN(0, σ²)

In a stationary autoregressive process of order p, the current observation y_t is generated by a weighted average of past observations going back p periods.

If we assume that an AR process is not the only one which can generate y, we can write:

MA(1): y_t = μ + θ_1 ε_{t−1} + ε_t

where μ is the mean of the process and ε_t, as before, is the stochastic error ~ iid(0, σ²). It follows that y_t at time t equals a constant plus a moving average of the current and past error terms.

26
The lower-case y denotes the variable in its deviation-from-mean form, (Y_t − δ).
Therefore y_t follows a first-order moving average MA(1) process. For a process generated by white noise, the variance is

γ_0 = σ_ε²(1 + θ_1²)

and the covariance at one lag of displacement is

γ_1 = E[(ε_t + θ_1 ε_{t−1})(ε_{t−1} + θ_1 ε_{t−2})] = θ_1 σ_ε²

The qth-order moving average process MA(q) can then be expressed as:

MA(q): y_t = μ + θ_1 ε_{t−1} + θ_2 ε_{t−2} + ⋯ + θ_q ε_{t−q} + ε_t,  ε_t ~ WN(0, σ²)

A moving average process of order q states that each observation y_t is generated by a moving average of the stochastic errors going back q periods. The mean μ of the moving average model is independent of time, since E(y_t) = μ.

When a univariate series has the characteristics of both AR and MA, the combined ARMA(p, q) process is written as

ARMA(p, q): y_t = δ + φ_1 y_{t−1} + ⋯ + φ_p y_{t−p} + θ_1 ε_{t−1} + ⋯ + θ_q ε_{t−q} + ε_t,  ε_t ~ WN(0, σ²)
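These moment formulas are easy to verify by simulation: for a stationary ARMA(1,1) the sample mean should approach δ/(1 − φ_1). The parameter values below are arbitrary illustrations.

```python
import numpy as np

rng = np.random.default_rng(4)

def simulate_arma11(delta, phi, theta, sigma, n, burn=500):
    """Simulate y_t = delta + phi*y_{t-1} + theta*e_{t-1} + e_t."""
    e = rng.normal(0, sigma, n + burn)
    y = np.zeros(n + burn)
    for t in range(1, n + burn):
        y[t] = delta + phi * y[t - 1] + theta * e[t - 1] + e[t]
    return y[burn:]   # discard burn-in so start-up effects die out

y = simulate_arma11(delta=2.0, phi=0.6, theta=0.3, sigma=1.0, n=20000)
# For a stationary process, the unconditional mean is delta / (1 - phi)
print(y.mean())   # should be near 2.0 / 0.4 = 5.0
```

The burn-in period matters: without it, the series starts at zero rather than at its stationary distribution, and short samples would be biased toward the start-up value.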
4.3 Non-stationarity in time series
Time series models, including the ARMA process, assume stationarity, that is, a constant mean, variance
and covariances (autocovariances, for weak stationarity). From the Dickey-Fuller test (see
appendix A: A7) it is apparent that many economic time series, including GDP, are non-stationary;
INC is integrated of order one, I(1). This can also be seen in Figure 9, where the
autocorrelation function's first entry in the correlogram represents the correlation between
y_t and y_{t−1}, the second entry the correlation between y_t and y_{t−2}, and so on, showing a very
slow, near-geometric decline. The process thus has an `infinite memory`: the current value of the process depends on all
past values, at a declining rate.
Figure 9: Autocorrelation function, INC
In order to estimate the model we need to difference the process d times to make it stationary, so ARMA(p, q) becomes ARIMA(p, d, q), that is, an autoregressive integrated moving average model. This is because, if the model is to be used for forecasting, we must assume that its features are time-invariant over the future time periods. `Thus the simple reason for requiring stationary data is that any model which is inferred from these data can itself be interpreted as stationary or stable, therefore providing valid basis for forecasting` (Gujarati, 2004, p. 840).
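The differencing operator itself is trivial to state in code; a sketch (the helper name `difference` is illustrative) shows that applying it to a deterministic linear trend leaves a constant series, which is exactly why differencing removes trend non-stationarity:

```python
def difference(series, d=1):
    """Apply the first-difference operator d times: dy_t = y_t - y_{t-1}."""
    for _ in range(d):
        series = [b - a for a, b in zip(series, series[1:])]
    return series

trend = [100 + 2 * t for t in range(8)]   # deterministic linear trend
print(difference(trend))                   # → [2, 2, 2, 2, 2, 2, 2]
```

Each application of the operator shortens the series by one observation, which is why heavily differenced models lose usable sample points.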
4.4 ARIMA methodology
The methodology behind the ARIMA is closely associated with George E.P. Box and Gwilym
Jenkins; Box-Jenkins approach, (BJ) who proposed an iterative approach to time series modelling
comprising of four steps: identification, estimation, diagnostic checking and forecasting.
4.4.1 Identification
As noted earlier GDP time series has a unit root so the characteristics of the stochastic process
change over time. This can be observed from Figure 10 where there is a clear trend in the variable.
To remedy non-stationarity, we need to decompose the original series by removing the trend in
order to isolate the other components of the data. Thus by taking the first order difference
∆𝑌 = 𝑌𝑡 − 𝑌𝑡−1 , we eliminated trend from the time series Figure 10. To confirm that the first
difference was enough to make the series stationary, an augmented Dickey Fuller test was used
(see appendix A: Table 17).
Figure 10: The UK's GDP q/q values and the first difference
Plotting the autocorrelation function (ACF) and partial autocorrelation function (PACF) of GROWTH27 (Table 9) shows a clear collapse to insignificance, indicating that the growth rate of real GDP is now stationary.
Table 9: Autocorrelation function and partial autocorrelation

LAG      AC        PAC        Q       Prob>Q
 1      0.2275    0.2277    6.7308   0.0095
 2      0.3171    0.2815   19.911    0.0000
 3      0.0565   -0.0758   20.333    0.0001
 4      0.0021   -0.1206   20.333    0.0004
 5     -0.0202   -0.0170   20.388    0.0011
 6      0.0308    0.0817   20.516    0.0022
 7     -0.0093   -0.0412   20.528    0.0045
 8     -0.0587   -0.1205   21.002    0.0071
 9      0.0182    0.0871   21.048    0.0124
10      0.0732    0.1662   21.798    0.0162
11      0.0169   -0.0512   21.838    0.0257
12     -0.0364   -0.1616   22.027    0.0372
13     -0.0888   -0.1118   23.16     0.0398
14     -0.1444   -0.1164   26.183    0.0245
15      0.0016    0.1159   26.184    0.0361

27 GROWTH was constructed in Stata by gen GROWTH = lnINC - l.lnINC, where lnINC = log(INC)
Moreover, the ACF and PACF may be used to get an indication of the order of lags. In order to identify the order of an MA(q), we need to find for how many periods the correlation between terms lasts in the ACF. It can be seen from Figure 11 that the first two autocorrelations lie outside the 95% confidence band, indicating that they are statistically significantly different from zero; thus the correlogram indicates MA(2).
In order to identify an AR(p) we need to look at the PACF, which is the plot of correlations between y_t and y_{t−k} with the correlations at the intervening lags omitted. Here, the first two lags are statistically significant (y_t and y_{t−1}: 0.227; y_t and y_{t−2}: 0.281), suggesting an AR(2) process.
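The sample ACF and PACF underlying this identification step can be computed directly; the sketch below (hypothetical helper names, not the Stata commands used in the text) estimates the ACF from sample autocovariances and the PACF via the standard Durbin-Levinson recursion, in which the PACF at lag k is the last coefficient of a fitted AR(k).

```python
def acf(y, max_lag):
    """Sample autocorrelations rho_1 .. rho_{max_lag}."""
    n = len(y)
    mean = sum(y) / n
    c0 = sum((v - mean) ** 2 for v in y) / n
    out = []
    for k in range(1, max_lag + 1):
        ck = sum((y[t] - mean) * (y[t - k] - mean) for t in range(k, n)) / n
        out.append(ck / c0)
    return out

def pacf(y, max_lag):
    """Partial autocorrelations via the Durbin-Levinson recursion."""
    r = acf(y, max_lag)
    phi_prev, out = [], []
    for k in range(1, max_lag + 1):
        num = r[k - 1] - sum(phi_prev[j] * r[k - 2 - j] for j in range(k - 1))
        den = 1.0 - sum(phi_prev[j] * r[j] for j in range(k - 1))
        phi_kk = num / den
        phi_prev = [phi_prev[j] - phi_kk * phi_prev[k - 2 - j]
                    for j in range(k - 1)] + [phi_kk]
        out.append(phi_kk)
    return out
```

Applied to the GROWTH series, these estimators should closely reproduce the AC and PAC columns of Table 9; note that by construction the lag-1 PACF always equals the lag-1 ACF.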
Figure 11: ACF and PACF of GROWTH
4.4.2 Estimation
Estimation of the ARIMA28 model was conducted in STATA for the period 1978q1-2009q4. Error terms in an MA process tend to be non-normally distributed, which means that the estimated coefficient θ̂ may not represent the true value of θ. In these instances a maximum likelihood (ML) procedure needs to be employed instead of OLS. ML estimates the parameter θ as the value θ̂ which would maximise the probability of obtaining the sample actually observed (Pindyck and Rubinfeld, 1999, p. 53). STATA fits the model by maximizing the log of the likelihood function through an optimization method that progresses iteration by iteration (Becketi, 2013, p. 245).
Our tentatively identified ARIMA(2, 0, 2) model, however, does not necessarily guarantee the best forecasting results. In addition to the rule-of-thumb lag selection, Akaike's and Schwarz's Bayesian information criteria were computed to get a better perspective on the validity of alternative lag orders (Table 10). According to Akaike's information criterion, which prefers the model that minimises information loss, the best-fitting model is ARIMA(1,0,1). According to the Schwarz criterion, the best fit is ARIMA(4,0,4). Box and Jenkins (1970) argue that there is only a very limited difference in forecasts between complex high-order systems and low-order systems; therefore only low-order systems will be considered, namely ARIMA(1,0,1) and ARIMA(2,0,2). Comparing the overall fit of the models in terms of log-likelihood reveals only a marginal difference in magnitude (Appendix B, Tables 1, 2). All coefficients implied by the ARIMA(1,0,1) are significant at the 5% level. In this specification, the ψ coefficients implied by the model29 show that 72% of an economic shock persists into the succeeding quarter, followed by 40% of the original shock. The standard error (SE) of the white noise (ε), 0.01, is greater than the mean of the process, 0.006, indicating that the variability of the error is large relative to the mean (Becketi, 2013, p. 249). For both models, the Wald test rejects the null hypothesis that all the coefficients are jointly insignificant. The ARIMA(2,0,2) ψ estimates of 32.8% and -62.1% imply that shocks to GDP persist only marginally and reverse in the succeeding quarters.
28 Although ARIMA(p,0,q) notation is used throughout the text, we could estimate the log of real GDP directly in STATA, in which case the notation would change to ARIMA(p,1,q). The results would be identical.
29 For details see Appendix B: `Dynamic response of GDP growth to economic shocks`
The calculated t-ratios of φ_1 and θ_1 in the ARIMA(2,0,2) point to insignificance at the 5% level; moreover, coefficient φ_1 is only one third the size of the same coefficient in the former model. The SE(ε) of 0.009 is greater than 0.006, again indicating that the variability of the error is large relative to the mean of the process, though a slight improvement on the former model. Results from the ARIMA(2,0,2) therefore suggest that the coefficients are not very precise, i.e. they do not provide accurate estimates of the dynamic response of GDP growth to economic shocks.
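The ψ (dynamic-response) coefficients discussed above follow the standard MA(∞) recursion for an ARMA model, ψ_j = θ_j + Σ_i φ_i ψ_{j−i} with ψ_0 = 1. A small sketch illustrates the recursion; the helper name and the coefficient values in the usage line are illustrative, not the fitted estimates from the chapter.

```python
def psi_weights(phi, theta, horizon):
    """Impulse-response (psi) weights of an ARMA(p,q) model:
    psi_j = theta_j + sum_i phi_i * psi_{j-i}, with psi_0 = 1."""
    psi = [1.0]
    for j in range(1, horizon + 1):
        th = theta[j - 1] if j <= len(theta) else 0.0
        ar = sum(phi[i] * psi[j - 1 - i] for i in range(min(j, len(phi))))
        psi.append(th + ar)
    return psi

# Illustrative ARMA(1,1): phi = 0.5, theta = 0.2
w = psi_weights([0.5], [0.2], 3)
```

For an ARMA(1,1), the recursion gives ψ_1 = φ_1 + θ_1 and ψ_j = φ_1 ψ_{j−1} thereafter, so the response of the series to a one-off shock decays geometrically at rate φ_1.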
Table 10: Akaike's and Schwarz Bayesian information criteria for model GROWTH

Model            Akaike's IC   Bayesian IC
ARIMA (1,0,1)      -800.01       -788.63
ARIMA (1,0,2)      -806.96       -792.74
ARIMA (1,0,3)      -805.67       -788.65
ARIMA (1,0,4)      -806.52       -786.61
ARIMA (2,0,0)      -805.63       -794.27
ARIMA (2,0,1)      -804.01       -789.81
ARIMA (2,0,2)      -807.45       -790.33
ARIMA (2,0,3)      -805.83       -785.92
ARIMA (2,0,4)      -801.85       -799.08
ARIMA (3,0,0)      -804.31       -790.09
ARIMA (3,0,1)      -806.05       -788.98
ARIMA (3,0,2)      -806.21       -786.31
ARIMA (3,0,3)      -805.09       -782.34
ARIMA (3,0,4)      -801.98       -776.39
ARIMA (4,0,0)      -803.98       -789.91
ARIMA (4,0,1)      -802.63       -789.56
ARIMA (4,0,2)      -805.98       -783.24
ARIMA (4,0,3)      -803.56       -777.97
ARIMA (4,0,4)      -801.58       -733.14
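The criteria in Table 10 are simple functions of the maximised log-likelihood: AIC penalises each estimated parameter by 2, while BIC penalises by ln n, so BIC favours more parsimonious specifications in samples larger than about e² observations. A sketch of the definitions (the log-likelihood and parameter-count values in the test are hypothetical, for illustration only):

```python
import math

def aic(loglik, k):
    """Akaike's information criterion: AIC = -2 ln L + 2k,
    where k counts the estimated parameters."""
    return -2.0 * loglik + 2.0 * k

def bic(loglik, k, n):
    """Schwarz Bayesian information criterion: BIC = -2 ln L + k ln n,
    where n is the number of observations."""
    return -2.0 * loglik + k * math.log(n)
```

Both criteria are selected by minimisation; comparing models with different differencing orders by these numbers is invalid, since the likelihoods are then computed on different data.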
4.4.3 Diagnostic checking
The next step in the BJ approach is model diagnostic checking, that is, checking the adequacy of the candidate ARIMA models. For a well-specified and accurately fitted model, the residuals of the estimated errors should be white noise (Becketi, 2013, p. 254). A widely used test for iid residuals is the Ljung-Box portmanteau test, which considers all autocorrelations simultaneously for significance. According to Appendix B: B3, the test finds no evidence that the residuals deviate from white noise in either model. On the basis of the considered tests, both models performed similarly, as the BJ methodology suggests. Following BJ's principle of parsimony and the fact that the ARIMA(1,0,1) outperformed the ARIMA(2,0,2) in some of the tests, we therefore concluded that the ARIMA(1,0,1) would fit GDP most accurately. Thus:
ARIMA(1,0,1): y_t = 0.006 + 0.670 y_{t−1} − 0.446 ε_{t−1} + ε_t
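The Ljung-Box statistic used in the diagnostic step can be computed directly from the residual autocorrelations; a sketch (the helper name is illustrative). Under the null of white-noise residuals the statistic is chi-squared distributed with h − p − q degrees of freedom, where h is the number of autocorrelations tested:

```python
def ljung_box_q(residuals, h):
    """Ljung-Box portmanteau statistic:
    Q = n(n+2) * sum_{k=1}^{h} rho_k^2 / (n - k)."""
    n = len(residuals)
    mean = sum(residuals) / n
    c0 = sum((r - mean) ** 2 for r in residuals)
    q = 0.0
    for k in range(1, h + 1):
        ck = sum((residuals[t] - mean) * (residuals[t - k] - mean)
                 for t in range(k, n))
        rho = ck / c0                 # residual autocorrelation at lag k
        q += rho * rho / (n - k)
    return n * (n + 2) * q
```

A large Q (small p-value) signals remaining serial correlation in the residuals, i.e. that the candidate model has not captured all the dynamics.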
4.4.4 Forecasting
The last part of the time-series modelling concerns forecasting. This was performed over the same period as before, 2007q1-2009q1, using STATA.
Table 11: Ex-post forecast based on ML regression30
Observation Actual Prediction Error Error (%) S.D. of Error t- ratio
2007q1 383981.6 381509.11 2472.53 0.644 0.0012 0.515
2007q2 389660.1 386980.66 2679.40 0.688 0.0012 0.550
2007q3 394029.1 392864.48 1164.60 0.296 0.0012 0.237
2007q4 402524.0 396951.74 5572.26 1.384 0.0012 1.108
2008q1 406122.5 406439.35 -316.90 -0.078 0.0012 -0.062
2008q2 396920.0 408926.21 -12006.22 -3.025 0.0012 -2.421
2008q3 391272.7 396796.96 -5524.28 -1.412 0.0012 -1.130
2008q4 377354.4 391910.98 -14556.61 -3.858 0.0012 -3.088
2009q1 370763.6 376103.63 -5340.01 -1.440 0.0012 -1.153
Based on 9 observations from 2007q1 to 2009q1
Mean Prediction Errors (MPE, %) -0.722
Mean Absolute Prediction Error (MAPE, %) 1.424
Root Mean Squared Prediction Error (RMSPE, %) 1.855
30 For simplicity, actual and predicted values were converted back to levels by taking the antilog; see Appendix B: B4 for details.
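The summary statistics at the foot of Table 11 follow standard definitions; the sketch below (hypothetical helper name) computes them as percentage errors relative to the actual series. Small rounding differences from the table are possible, since the table's errors were computed from antilogged values.

```python
import math

def forecast_errors(actual, predicted):
    """Percentage forecast-error summaries of the kind reported in Table 11."""
    pct = [100.0 * (a - p) / a for a, p in zip(actual, predicted)]
    mpe = sum(pct) / len(pct)                              # mean prediction error
    mape = sum(abs(e) for e in pct) / len(pct)             # mean absolute prediction error
    rmspe = math.sqrt(sum(e * e for e in pct) / len(pct))  # root mean squared prediction error
    return mpe, mape, rmspe
```

MPE near zero with a much larger MAPE, as in Table 11, indicates that over- and under-predictions roughly cancel; RMSPE weights large misses (such as 2008q4) more heavily than MAPE does.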
All the calculated forecasting performance statistics, namely MPE, MAPE and RMSPE, point to a more accurate forecast than the one produced by the structural model. The ARIMA model under-predicted GDP between 2007q1-2007q4 and over-predicted between 2008q1-2009q1. Despite the persistent over- and under-prediction, the residuals do not show any longer-term pattern (see Appendix B: B5). Statistically significant t-ratios occur in 2008q2 and 2008q4. As Figure 12 shows, the exact timing of the turning point was not picked up well by the model, but as the time horizon increased, accuracy improved.
Figure 12: UK’s GDP forecast ARIMA (1,1,1), 2007q1-2009q1
CHAPTER 5 Conclusion
In this dissertation, we investigated two models from opposite ends of the spectrum of underlying assumptions. The small structural model, which was based on the IS/LM/PC framework, quite clearly responded to exogenous shocks by exhibiting a cyclical response mechanism. Although this model responded to the exogenous shock to GDP more accurately than the non-structural model did, it failed to keep up in the following quarters and thus its forecasting accuracy gradually decreased. Structural econometric models are no more than a reflection of the economy's interactive nature, and so they cannot contain any more information than was put into them during their construction. An indication of the limitations could be observed from the persistent over-prediction since 2000 when the historical simulation was conducted. Moreover, most of the variables used exhibited properties that violate the classical Gauss-Markov assumptions, possibly leading to radically different forecasting results from those of a model that is well specified and stationary.
Atheoretical ARIMA models, which rely solely on past observations, provided superior results to the structural model. Although the exact turning points were not predicted as precisely as by the former model, the forecasting results were more consistent through the forecasting period; this aspect is well captured by the better RMSPE results. This points to two caveats. In order to build a structural model that can be compared with the atheoretical model, further disaggregation is essential. Prospective model builders therefore need to assess the costs and benefits of building a more complex model carefully. This means assessing whether the added benefits (measured in terms of improved forecasts) of the simultaneous-equation model can be expected to outweigh the added costs involved in building it. Moreover, Mizon and Hendry (2011, p. 5) point out that even being the `best forecasting model does not justify its policy use; and forecast failure is insufficient to reject a policy model`. They argue that models that `win` forecasting competitions rarely have any useful implications for economic policy analysis, as they lack both target variables and policy instruments. This is clearly the case for the ARIMA model, which can only be used for forecasting. Interestingly, the best result would be achieved when the two models' forecasts are combined, given the structural model's over-prediction and the non-structural model's under-prediction.
APPENDIX
APPENDIX A:
A1: glossary of variables
CONS - Final consumption expenditure, households & NPISH, constant prices, seasonally adjusted, 2010 chained prices, quarterly
INC - Gross Domestic Product chained volume, seasonally adjusted 2010 prices, quarterly values
CINC - first difference of INC
CINC1 - GDP lagged one quarter minus GDP lagged by two quarters
MS - M0 notes and coins outside the central bank, seasonally adjusted current prices, monthly values
CMS - Narrowly defined (M0) minus last quarter (M0)
GOV - General Government final consumption expenditure CVM, seasonally adjusted 2010, chained prices,
quarterly
INT - UK 3 month treasury bills, yield (annualized)31, in %
CINT - INT lagged by one quarter - INT lagged by two quarters
INF - UK CPI Index: all items (annualized), monthly in %
INF1- Inflation lagged by one quarter
INT8 - Interest rate on 3 month treasury bills lagged by eight quarters
POTY - UK’s potential output current prices, quarterly
OTG - GDP minus current potential GDP = (∆𝑙𝑜𝑔𝐼𝑁𝐶 − ∆𝑙𝑜𝑔𝑃𝑂𝑇𝑌)
INV - Gross capital formation CVM, seasonally adjusted 2010, chained prices, quarterly data
31 INT and INF were converted to quarterly values by dividing the variable by 4
A2: Granger causality test:
In order to conduct the test, all variables need to be stationary. As Appendix A: A7 suggests, both variables appear to be non-stationary. According to the Dickey-Fuller test, a first difference of the variable INT (Table 1) was enough to make the variable stationary. Curiously, in the case of MS a second difference was needed for the variable to pass the DF test (see Tables 2, 3). The actual test consists of running a Vector Autoregression (VAR) on both variables and their lags. The criterion for the lag length is Akaike's information criterion. When choosing lags that minimise the AIC, lags 4, 5, 6, 7 and 8 were tried. As the AIC kept improving with increased lag length, four lags were chosen to simplify the process (Table 4). Then the Granger causality Wald test was conducted (Table 5); this means running two Granger tests, one for each direction. As the F-test shows, the null hypothesis of non-causality was rejected for both variables.
A2.1 ADF for first differenced [INT] is stationary
Table 1

. dfuller CINT, lags(4) reg

Augmented Dickey-Fuller test for unit root          Number of obs = 121

                    Interpolated Dickey-Fuller
           Test      1% Critical   5% Critical   10% Critical
         Statistic      Value         Value          Value
 Z(t)     -5.811       -3.503        -2.889         -2.579

MacKinnon approximate p-value for Z(t) = 0.0000

 D.CINT1       Coef.    Std. Err.      t    P>|t|   [95% Conf. Interval]
 CINT1
   L1.     -.9772638   .1681747   -5.81   0.000   -1.310386   -.6441421
   LD.      .1628473   .1519139    1.07   0.286   -.1380648    .4637594
   L2D.     .1289997   .1352925    0.95   0.342   -.1389887    .3969882
   L3D.     .22083     .1188896    1.86   0.066   -.0146674    .4563274
   L4D.     .0689543   .0875011    0.79   0.432   -.1043685    .2422772
   _cons   -.0936771   .0822875   -1.14   0.257   -.2566728    .0693186
A2.2 ADF for first differenced [MS] is non-stationary
Table 2

. dfuller CMS, lags(4) reg drift

Augmented Dickey-Fuller test for unit root          Number of obs = 122
Z(t) has t-distribution

           Test      1% Critical   5% Critical   10% Critical
         Statistic      Value         Value          Value
 Z(t)     -0.149       -2.359        -1.658         -1.289

p-value for Z(t) = 0.4409

 D.CMS         Coef.    Std. Err.      t    P>|t|   [95% Conf. Interval]
 CMS
   L1.     -.0143925   .0965072   -0.15   0.882   -.2055373    .1767522
   LD.     -.8665757   .1161508   -7.46   0.000   -1.096627    -.6365245
   L2D.    -.7359772   .1214308   -6.06   0.000   -.9764863    -.4954681
   L3D.    -.707205    .1205876   -5.86   0.000   -.9460439    -.4683662
   L4D.    -.529233    .0945871   -5.60   0.000   -.7165746    -.3418914
   _cons    26.49469   39.97869    0.66   0.509   -52.68815    105.6775

A2.3 ADF for second differenced [MS] is stationary
Table 3

. dfuller dms, lags(4) reg drift

Augmented Dickey-Fuller test for unit root          Number of obs = 121
Z(t) has t-distribution

           Test      1% Critical   5% Critical   10% Critical
         Statistic      Value         Value          Value
 Z(t)     -9.453       -2.359        -1.658         -1.289

p-value for Z(t) = 0.0000

 D.dms         Coef.    Std. Err.      t    P>|t|   [95% Conf. Interval]
 dms
   L1.     -4.434479   .469095    -9.45   0.000   -5.363666   -3.505292
   LD.      2.485373   .4029534    6.17   0.000    1.6872      3.283546
   L2D.     1.648414   .3119625    5.28   0.000    1.030477    2.266352
   L3D.     .8291929   .2101657    3.95   0.000    .4128952    1.245491
   L4D.     .1555712   .1012939    1.54   0.127   -.0450726    .3562149
   _cons    25.62748   20.48186    1.25   0.213   -14.94315    66.1981
A3. Hausman specification test:
The rationale behind the test is to check for the presence of simultaneity, that is, whether the endogenous variable is correlated with the error term. If there is no simultaneity, OLS should generate efficient and consistent parameter estimates; the instrumental-variables estimator (generated by 2SLS), on the other hand, will be consistent but inefficient. If, however, simultaneity is present, OLS will be inconsistent, while 2SLS will be both consistent and efficient.
The test comprises: regressing the consumption function by OLS (Table 6) and obtaining the residuals; then regressing the consumption function using instrumental variables and obtaining the residuals (Table 7). Finally, we compare the quadratic difference between the coefficient vectors, scaled by the precision matrix, which gives a χ² test statistic (Table 8).
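Because the rank of the differenced variance matrix in this application is 1 (only the INC coefficient contributes, as the Stata note in Table 8 warns), the χ² statistic reduces to a scalar that can be verified by hand from Table 8's `Difference` and `S.E.` columns. A sketch (the helper name is illustrative):

```python
def hausman_scalar(b_iv, b_ols, se_diff):
    """Scalar Hausman statistic: (b - B)^2 / (V_b - V_B),
    where se_diff = sqrt(V_b - V_B); chi-squared with 1 df under H0."""
    return ((b_iv - b_ols) / se_diff) ** 2

# INC coefficients and sqrt(diag(V_b - V_B)) from Tables 6-8
stat = hausman_scalar(0.1432685, 0.1041349, 0.0191077)
print(round(stat, 2))   # → 4.19, matching the chi2(1) value in Table 8
```

Rejection at the 5% level (Prob > chi2 = 0.0406 in Table 8) indicates that simultaneity is present, so the 2SLS estimates are preferred over OLS.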
A3.1 single equation OLS estimation
Table 6

. reg CONS CONS1 INC if tin(1980q1, 2007q1)

 Source        SS          df       MS            Number of obs = 109
 Model      2.3359e+11      2    1.1679e+11       F(2, 106)     = 95335.52
 Residual   129857840     106    1225073.96       Prob > F      = 0.0000
 Total      2.3372e+11    108    2.1640e+09       R-squared     = 0.9994
                                                  Adj R-squared = 0.9994
                                                  Root MSE      = 1106.8

 CONS          Coef.    Std. Err.      t    P>|t|   [95% Conf. Interval]
 CONS1      .8603323   .0379196   22.69   0.000    .785153     .9355116
 INC        .1041349   .0268763    3.87   0.000    .05085      .1574198
 _cons     -3411.072   1019.123   -3.35   0.001   -5431.582   -1390.562
A3.2 single equation 2SLS estimation
Table 7

. ivregress 2sls CONS CONS1 (INC = MS INT8 CINT1 INF1 CINC1 GOV CINC OTG) if tin(1980q1, 2007q1)

Instrumental variables (2SLS) regression          Number of obs = 109
                                                  Wald chi2(2)  = 1.9e+05
                                                  Prob > chi2   = 0.0000
                                                  R-squared     = 0.9994
                                                  Root MSE      = 1102.4

 CONS          Coef.    Std. Err.      z    P>|z|   [95% Conf. Interval]
 INC        .1432685   .0328878    4.36   0.000    .0788096    .2077275
 CONS1      .8052211   .0463722   17.36   0.000    .7143333    .896109
 _cons     -4787.321   1217.284   -3.93   0.000   -7173.153   -2401.488

Instrumented: INC
Instruments:  CONS1 MS INT8 CINT1 INF1 CINC1 GOV CINC OTG

A3.3 Hausman test
Table 8

. hausman tsls ols, sigmaless

               Coefficients
               (b)        (B)        (b-B)     sqrt(diag(V_b-V_B))
              tsls        ols      Difference        S.E.
 INC        .1432685   .1041349    .0391336       .0191077
 CONS1      .8052211   .8603323   -.0551111       .0269089

b = consistent under Ho and Ha; obtained from ivregress
B = inconsistent under Ha, efficient under Ho; obtained from regress

Test: Ho: difference in coefficients not systematic
      chi2(1) = (b-B)'[(V_b-V_B)^(-1)](b-B)
              = 4.19
      Prob > chi2 = 0.0406

Note: the rank of the differenced variance matrix (1) does not equal the number of
coefficients being tested (2); be sure this is what you expect, or there may be
problems computing the test. Examine the output of your estimators for anything
unexpected and possibly consider scaling your variables so that the coefficients
are on a similar scale.