Student projects and dissertations
Faculty: Bristol Business School
Student’s name: Boris Kisska
Award: Economics
An investigation into causal and non-causal econometric
models and their performance in forecasting the UK’s
Gross Domestic Product.
Boris Kisska
Academic year of presentation: 2013/2014
Bristol Business School
CONTENTS
List of figures I
List of tables II
Acknowledgements III
Introduction IV
Chapter 1 Literature review
1.1 Macroeconomic theories 12
1.2 Keynesian Revolution 13
1.3 Expectations Revolution 15
1.4 The new Keynesians 17
1.5 Forecasting accuracy 18
1.6 Non-structural models 20
Chapter 2 Structural econometric model
2.1 Structural model building 23
2.1.1 Rationale for simultaneous equations 24
2.1.2 Rationale for Keynesian model
2.2 Building blocks 25
2.2.1 Consumption Function 25
2.2.2 Investment Function 26
2.2.3 Interest rate Function 27
2.2.4 Inflation Function 29
Chapter 3 Structural modelling
3.1 Modelling Methodology 31
3.1.1 Order condition identification 33
3.1.2 Hausman test 34
3.1.3 Structural model estimation 35
3.2 Analysis of the structural model results 36
3.3 Ex-post forecasting 38
Chapter 4 Non-causal model
4.1 Introduction 42
4.2 Notation of ARMA model 43
4.3 Non-stationarity in time series 44
4.4 ARIMA methodology 45
4.4.1 Identification 45
4.4.2 Estimation 46
4.4.3 Diagnostic checking 48
4.4.4 Forecasting 49
Chapter 5 Conclusion 51
Appendix 53
List of Figures
1. Decision making at the Bank of England 11
2. Graphical representation of the forecasting performance of the eight models 14
3. Graphical representation of the forecasting performance of the four different models 19
4. Influence diagram for simultaneous equation model 23
5. Transmission mechanism of monetary policy 28
6. Block diagram of five equation model 32
7. Historical simulation, GDP 1980q1-2007q1 38
8. Structural model, GDP q/q, forecast 2007q1-2009q1 40
9. Autocorrelation function INC 44
10. The UK's GDP q/q values and the first difference 45
11. ACF and PACF of the GROWTH 46
12. UK’s GDP forecast ARIMA(1,1,1), 2007q1-2009q1 50
List of Tables
1. Comparison of forecasting performance of the eight different models 14
2. Forecasting performance of the four different models 19
3. One year ahead UK forecast error - Mean Absolute Error (MAE) 20
4. Summary table of the used variables in model 32
5. The order condition of identification 33
6. SC and HT tests of individual equations-OLS estimation 35
7. Summary statistics of OLS and 2SLS estimation procedures 37
8. Ex-post forecast based on 2SLS regression 39
9. Autocorrelation function and partial autocorrelation 45
10. Akaike’s and Schwarz Bayesian information criterion for model GROWTH 48
11. Ex-post forecast based on ML regression 49
Acknowledgements
I would like to take this opportunity to thank Tony Flegg for his valuable comments throughout
the write-up. I would also like to thank my family and friends for supporting me during the
challenging final year.
Abstract
Accurate forecasting of the direction and magnitude of exogenous shocks to aggregate demand
has been the subject of extensive research during the past several decades. In the aftermath of
the events of 2007 there has been heated debate about the validity of present-day econometric
models and their failure to predict the recent recession. This calls into question the validity of
causal macroeconometric models based on economic theory. Atheoretical models, which do not
assume an underlying theory, may therefore serve as a viable alternative when assessing the
dynamics of shocks to the economy. This dissertation therefore investigates the implications and
forecasting validity of different econometric methods that identify exogenous shocks to the UK's
GDP, with particular interest in the recent recession of 2007-08.
The relevant question to ask about the 'assumptions' of a theory is not whether they
are descriptively 'realistic', for they never are, but whether they are sufficiently good
approximations for the purpose at hand. And this question can be answered only by
seeing whether the theory works, which means whether it yields sufficiently accurate
predictions.
Milton Friedman (1953, p. 8)
INTRODUCTION
This dissertation aims to provide a comprehensive analysis and evaluation of two significantly
different macroeconometric models and their ability to forecast the UK's Gross Domestic Product
(GDP). Particular focus is on whether structurally based models perform better than their
atheoretical counterparts in forecasting turning points that are associated with the occurrence of
unusually large shocks to the economy. The crucial argument lies in the view that cycles and
trends in time series are systematic. However, as Eugen Slutsky and Ragnar Frisch suggest, the
cycles are not necessarily systematic in nature but may rather be mere artefacts of random
shocks working their way through the economy (Nelson and Plosser, 1972, p. 909).
Gross domestic product (GDP) is arguably the most important aggregate indicator of
economic activity in the UK (Lee, 2011). GDP is the value of goods and services produced in an
economy in a given year, measured at market prices and therefore sensitive to changes in the
average price level occurring in the economy. There are three different approaches that can be
used to measure GDP: the expenditure approach, the income approach and the production
approach. The primary focus in this dissertation is on the expenditure measure, which is defined
as: GDP(E) = household final consumption expenditure + final consumption expenditure of
non-profit institutions serving households + general government final consumption expenditure
+ gross capital formation + exports − imports (Lee, 2012).
Accurate GDP analysis and forecasts are of great theoretical and practical value for policy
decisions and for assessments of the future state of the economy. Holden et al. (1990) state that
forecasts are required for two basic reasons: the future is uncertain; and the full impact of many
decisions taken now might not be felt until later. Consequently, accurate predictions of the future
would improve the efficiency of the decision-making process.
The use of economy-wide macro-econometric models for forecasting and simulation
analyses of the likely economic policy outcomes has expanded to the majority of countries.
Models have become an important instrument of world-wide analyses and forecasts conducted by
international organizations and renowned research institutions, as well as by central banks of
many countries (Welfe, 2013, p. 395).
This is because they not only provide an analytical framework to link the demand and supply sides
and the resource allocation process in an economy but also may help in reducing fluctuations and
enhancing economic growth, which are two major aspects of any economy (Bahattari, 2005, p. 2).
As Figure 1 summarizes, macroeconomic models, alongside others, play a major role in informing
and disciplining monetary policy decisions at the Bank of England.
Figure 1: Decision making at the Bank of England (Source: Bank of England)
The dissertation is organised as follows: Chapter 1 provides a literature review. The review is by no
means exhaustive but provides a comprehensive evaluation of past and present trends in
macroeconometric modelling and forecasting. Chapter 2 presents the rationale behind building
the structural model. Chapter 3 introduces an econometric analysis aimed at developing a
satisfactory forecasting model. Chapter 4 concerns the identification, estimation, diagnostic
checking and forecasting of the non-causal autoregressive integrated moving average (ARIMA)
model. Chapter 5 contains the conclusion. The dissertation also includes an appendix containing
the detailed calculations and statistical printouts of all the models considered.
CHAPTER 1 Literature review
The aim of this review is to provide an evaluation of past and existing research on the use and
forecasting performance of different econometric models, with a particular focus on their ability
to forecast the UK's Gross Domestic Product (GDP).
Forecasting models can be broadly split into two categories, based on 'the trade-off between
their conceptual coherence with economic theory and their empirical coherence with economic
data' (Pagan, 2003, p. 1).
Causal or structural models are a set of behavioural equations, as well as institutional and
definitional relationships, representing the main behaviour of economic agents and the operations
of an economy (Valadhkani, 2004). The goal of quantitative analysis of an economy via the
estimation of an interrelated system of equations 'is to achieve three purposes; descriptive,
prescriptive and predictive uses of econometrics, that is structural analysis, policy evaluation and
forecasting' (Intriligator et al., 1978, p. 430).
Atheoretical or non-causal models, on the other hand, rely more on statistical patterns in the
data. These models attempt to exploit the reduced-form correlations in observed macroeconomic
time series, with fewer assumptions about the underlying structure of the economy (Diebold
1998, p. 2). Because of their restricted nature they are used almost exclusively for forecasting
purposes or as an accuracy benchmark for structural models.
1.1 Macroeconomic theories
The first attempts to formalize a theoretical framework for the national economy as a whole took
place during the early 20th century. Three trends in the literature could be distinguished at the
time. The first stemmed from the general equilibrium theory formulated by Leon Walras and later
developed by Vilfredo Pareto; the second rested on the foundations of business cycle theory laid
by Ragnar Frisch, Joseph Schumpeter and Arthur Cecil Pigou; and the third referred to J. M.
Keynes's fundamental writings regarding unemployment and demand deficiency (Welfe, 2013, p. 8).
1.2 Keynesian Revolution
A complete specification of a macroeconomic model shows how economic behaviour and
institutions affect the relationships between a set of conditions x and outcomes y (Reiss and
Wolak, 2007, p. 4284). Economic models, however, rest on deterministic assumptions and as such
do not perfectly fit observed data. Structural econometric modellers must therefore add a
stochastic statistical structure in order to rationalize why economic theory does not perfectly
explain the data.
The theoretical framework developed by J. M. Keynes (1936), and especially his General Theory,
became a cornerstone of the concepts that led to the construction of a class of macroeconometric
models based on the 'Cowles Commission' methodology, associated with Klein, Goldberger and
Modigliani, whose works predominated in the USA and Europe for over 30 years (Welfe, 2013,
p. 4). Lucas and Sargent (1981, p. 296) stress that the success of the Keynesian revolution was in
the form of a revolution of methods that rested on several important features: 'the evolution of
macroeconomics into a quantitative, scientific discipline, the development of explicit statistical
descriptions of economic behavior, the increasing reliance of government officials on technical
economic expertise, and the introduction of the use of mathematical control theory to manage
an economy'.
The general profile of the models based on the Cowles Commission's methodology was
macroeconomic: they contained final demand (consumption, investment), demand for labour, as
well as prices, wages and financial flows (Klein, 1991).
Variables whose introduction was theoretically unjustified were eliminated by imposing
zero restrictions on the appropriate parameters. The IS-LM/PC model1 became the workhorse
tool for constructing and evaluating macro models. Common features linked with the Klein-
Goldberger models explicated the major feedbacks, which included a consumer multiplier, where
consumption depended on national income and was one of the national income components.
Moreover, they also defined the fundamental macro-identity, i.e. national income as being equal
to the sum of consumption, government expenditure, investment and net exports. The Klein-
Goldberger models paved the way for the builders of many other medium-term models of the US
and UK economies (Welfe, 2013, p. 4).
1 Vroey and Malgrange (2011, p. 3) point out that the origin of the IS-LM model can be traced to Modigliani (1944). The IS/LM model
comprises two distinct sub-models, the Keynesian and the classical system. Hence, strictly speaking, it should not be considered
Keynesian. But at the time of its dominance, most economists were convinced that the Keynesian variant corresponded to reality,
while the classical system was viewed as a foil. Regarding the Phillips Curve (PC): the Klein-Goldberger model was the first to
explain wage rates assuming that their growth depended on the rate of unemployment.
[Chart omitted: ex-ante forecast RMSE (vertical axis) against quarters ahead 1-8 for the Brookings, ARIMA, BEA, Fair, DRI, FRB-St. Louis and Wharton III models.]
Several competing models were established, such as the Wharton model, the MPS model
developed for the Fed, the H.M. Treasury Model and many others.2 Klein (1973) compared eight
models and concluded that the RMSEs3 of major U.S. econometric models showed that, despite
some exceptions, errors were within reasonable bounds.4
Table 1: Comparison of forecasting performance of the eight different models
Figure 2: Graphical representation of the forecasting performance of the eight models
2 The most significant include: Economic Analysis Model (BEA), A. Hirsch, M. Liebenberg, and G. Narasimhan; Brookings Model, G.
Fromm, L. Klein, and G. Schink; DHL III Model, University of Michigan, S. Hymans and H. Shapiro; Data Resources, Inc., Model (DRI-71),
O. Eckstein, E. Green and associates; Fair Model, Princeton University, R. Fair; Federal Reserve Bank of St. Louis Model (FRB St.
Louis), L. Andersen and K. Carlson; MPS Model, University of Pennsylvania, A. Ando, F. Modigliani, and R. Rasche; Wharton Mark III
Model, University of Pennsylvania, F. G. Adams, V. J. Duggal, G. Green, L. Klein, and M. McCarthy (Klein, 1973).
3 RMSE is a measure of the difference between the values predicted by a model and the values actually observed from the
environment being modelled. Aggregation of these residuals serves as a measure of predictive power.
4 Comparing these RMSEs with later studies reveals that the results are not satisfactory. Possible reasons include small sample bias
and inaccurate data. Moreover, the celebrated Wharton III model underperformed even a naïve ARIMA model.
RMSE of Real GNP ex-ante forecast (number of quarters ahead)
Model            Simulation interval    1      2      3      4      5      6      7      8
Brookings        1966.1-1970.4          6.74   11.36  16.08  20.94  25.69  29.54  33.18  39.77
ARIMA            1970.3-1972.1          8.70   13.00  17.00  23.00  29.00  36.00
BEA              1969.1-1971.2          6.01   11.01  18.42  23.26  28.08  30.50
Fair             1965.1-1969.4          2.91   4.35   4.52   6.77   9.89
FRB-St. Louis    1970.1-1971.4          10.29  14.88  13.86  11.69  11.15  16.11
DRI              1971.3-1972.3          8.90   14.89  23.10  28.88
Wharton III      1970.2-1971.4          8.04   18.96  26.00  28.52  33.74  39.74  41.77  44.68
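The RMSE defined in footnote 3 (and the MAE used later in Table 3) can be made concrete in a few lines. A minimal Python sketch assuming only numpy; the growth figures are invented for illustration and are not taken from any of the tables:

```python
import numpy as np

def rmse(actual, forecast):
    """Root mean squared error: the square root of the mean squared residual."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return np.sqrt(np.mean((actual - forecast) ** 2))

def mae(actual, forecast):
    """Mean absolute error: the average absolute forecast residual."""
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return np.mean(np.abs(actual - forecast))

# Hypothetical quarterly GDP growth outturns and forecasts (illustrative only).
actual = [0.6, 0.4, -0.1, -1.6]
forecast = [0.5, 0.6, 0.3, -0.2]

print(rmse(actual, forecast))
print(mae(actual, forecast))
```

Because RMSE squares the residuals before averaging, it penalizes the single large miss (the last quarter) more heavily than MAE does, which is why the two criteria can rank models differently.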
The initial momentum for building large-scale macroeconometric models (MEMs) was abruptly
interrupted in the 1970s, a 'decade of greater inflation, unemployment and turbulence' (Pescatori
and Zaman, 2011, p. 2). Mincer and Zarnowitz (1969) compared a number of different models and
concluded that forecasting errors built up much faster than in earlier years and that turning points
were seriously missed at the onset of the recessions in 1970 and 1974, but they noted there was
no decline in accuracy as measured by the criterion of comparison with simple extrapolations.
Burns (1986, cited in Wallis, 1989, p. 57) notes, 'there was not only disillusion with demand
management; there was also growing frustration with the forecasts as the increased level of noise
in the economic system led to increased margins of error'. Greenberger (1976) points out that the
use of modelling in government has fallen short of expectations and that the gap between
expectations and actual results is widest in policy applications. Kenway (1978) argues that MEMs
lost their hold because model builders ceased to believe in the structure that a macroeconomic
model, as a structural model, represents: the way in which the economy was believed to work.
1.3 Expectations Revolution
According to Pesaran (1995), the major criticisms of the traditional models based on the Cowles
Commission approach can be summarised in terms of the following issues. First, Liu (1963) argues
that the zero restrictions imposed to achieve identification, which exclude variables that should
arguably appear in an equation, are arbitrary. Secondly, there is the problem of unit roots in many
macroeconomic variables and the neglect of the time series properties of the data (Plosser and
Nelson, 1982). Thirdly, there is an insufficient connection between real and monetary variables.
At the structural level, Friedman (1968) argued that the original Phillips curve depended on
incorrect inflation forecasts owing to the existence of money illusion; therefore the trade-off
between inflation and unemployment would not hold in the long run, when classical principles
apply, i.e. money should be neutral.
Friedman thus proposed an expectations-augmented Phillips curve, assuming that current
expectations of inflation are based on a weighted average5 of past inflation rates, as in
equation (1):

πe_t = γ[π_t + (1−γ)π_{t−1} + (1−γ)²π_{t−2} + …] = γ Σ_{k=0}^{∞} (1−γ)^k π_{t−k}   (1)
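Equation (1) has an equivalent recursive form, πe_t = γπ_t + (1−γ)πe_{t−1}, which is how it is usually computed in practice. A short sketch assuming only numpy; the inflation series and the value of γ are illustrative, not estimated:

```python
import numpy as np

def adaptive_expectations(inflation, gamma, initial=0.0):
    """Exponentially weighted expectation: pi_e[t] = gamma*pi[t] + (1-gamma)*pi_e[t-1],
    equivalent to the geometric sum gamma * sum_k (1-gamma)**k * pi[t-k] in equation (1)."""
    expected, prev = [], initial
    for pi_t in inflation:
        prev = gamma * pi_t + (1 - gamma) * prev
        expected.append(prev)
    return np.array(expected)

# Hypothetical annual inflation rates.
pi = np.array([2.0, 2.5, 3.5, 5.0, 4.0])
gamma = 0.4
pi_e = adaptive_expectations(pi, gamma)

# Cross-check the last expectation against the explicit weighted sum of equation (1)
# (the sum truncates at k = t because the series starts at zero).
weights = gamma * (1 - gamma) ** np.arange(len(pi))
direct_last = np.dot(weights, pi[::-1])
print(pi_e[-1], direct_last)
```

The recursion makes Friedman's point transparent: expectations are revised each period by a fraction γ of the latest forecast error, so agents 'learn from their mistakes' only gradually.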
Lucas (1976, p. 41) extended Friedman's argument and asserted that the econometric models of
the time, all derivatives of the Klein-Goldberger model, based on decision rules estimated as
empirical relations, were a fundamentally defective paradigm for producing conditional forecasts,
because the parameters of decision rules will generally change when policy changes or
expectations about policy change. The key policy implication of the Lucas critique was therefore
that it is impossible to surprise rational people systematically, so systematic monetary policy
aimed at stabilizing the economy is doomed to failure (Sargent and Wallace, 1975).
According to Lucas, only deeper, 'structural models', i.e. those derived from the fundamentals of
business cycle theory emphasizing agents' preferences and technological constraints, based on
imperfect information, rational expectations6 (2) and market clearing, were able to provide a
more accurate grounding for the evaluation of alternative policies and for forecasting.
Taylor (1979) points out that the introduction of the rational expectations assumption is
significant enough to be called a paradigm shift. In essence, the rational expectations hypothesis
states that the difference between the realized value and the expected value should be
uncorrelated with the variables in the information set at the time the expectations are formed
(Muth, 1961). Muth observed that the various expectations mechanisms used in the analysis of
dynamic economic models bore little resemblance to the way the economy works. If the
economic system changes, the way expectations are formed should change, but the traditional
models of expectations do not permit any such change.
Yt = E(Yt | It−1)   (2)
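The testable content of equation (2) is that the forecast error Yt − E(Yt | It−1) should be uncorrelated with anything in the period t−1 information set. A small simulation sketch, using an AR(1) process whose rational forecast is simply its conditional mean (all parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
T, phi = 5000, 0.8

# Simulate an AR(1): Y_t = phi*Y_{t-1} + e_t. Under rational expectations the
# forecast given I_{t-1} is the conditional mean phi*Y_{t-1}, so the forecast
# error is exactly the innovation e_t.
e = rng.normal(size=T)
y = np.zeros(T)
for t in range(1, T):
    y[t] = phi * y[t - 1] + e[t]

forecast = phi * y[:-1]        # E(Y_t | I_{t-1})
errors = y[1:] - forecast      # realized minus expected

# Orthogonality check: the errors should be (nearly) uncorrelated with lagged Y,
# which is in the information set when the expectation is formed.
corr = np.corrcoef(errors, y[:-1])[0, 1]
print(round(corr, 3))
```

Any systematic correlation here would mean the forecaster had left usable information on the table, contradicting Muth's hypothesis.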
5 The adjustment parameter 0 < γ < 1 says that economic agents will adapt their expectations in the light of past experience and
that, in particular, they will learn from their mistakes (Gujarati, 2004). Adaptive expectations may be formed where people expect
prices to rise in the current year at the same rate as in the previous year, such that πe_t = π_{t−1}. The expected level of inflation is
therefore a weighted average of the present level of inflation and the previous expected level.
6 The formula states that the left-hand side should be interpreted as the subjective expectation of Y at time t and the right-hand
side as the objective expectation conditional on the information I available at time t−1 (Maddala, 1992, p. 444). Moreover,
expectations are uncorrelated with the error term; otherwise the forecaster has not used all available information.
Fisher (1983, p. 271), on the other hand, stresses that the Lucas critique has not been backed by
any detailed empirical support but is rather asserted. Bodkin and Marwah (1988) point out that
rational expectations is an irrational assumption with respect to the typical economic agent's
complete access to the raw data and the true model of the economy. Klein (1989, p. 290)
acknowledges the importance of the Lucas critique, but adds: "I believe that there is more
persistence than change in the structure of economic relationships. The world and the economy
change without interruption, but that does not mean that parametric structure is changing;
random errors and exogenous variables may be the main sources of changes". Maddala (1992)
offers a solution to the Lucas critique: making the coefficients of the MEM depend on exogenous
policy variables. Heckman and Leamer (2007, p. 226) suggest redefining exogeneity, i.e. the
variable x is exogenous if the Lucas critique does not apply to it.
1.4 The new Keynesians
Significant effort has been devoted to translating Lucas's ideas into empirical models. These
efforts include Kydland and Prescott (1990), Nelson and Plosser (1982) and Sargent and Wallace
(1975), who provided the main reference framework for the analysis of economic fluctuations and
became, to a large extent, the core of macroeconomic theory based on rational expectations and
Real Business Cycle (RBC) theory, where the emphasis switched to the role of random shocks
to technology and the intertemporal substitution in consumption and leisure that these shocks
induced. Mankiw (2003) points out that RBC models omit any role of monetary policy,
unanticipated or otherwise, in explaining economic fluctuations. Goodhart (1982) tested the policy
irrelevance hypothesis and found evidence that unanticipated monetary shocks do have real
effects on variables like output and employment. Howells and Bain (2009) add to these
shortcomings, stating that the RBC models' assumption of perfect and instantaneous market
clearing fails in the real world where, in fact, prices are 'sticky', as proposed by the new Keynesians.
The New Keynesian approach to macroeconomics evolved in response to the monetarist
controversy and to fundamental questions raised by Lucas's critique, and in order to provide an
alternative to the competitive flexible-price framework of RBC analysis (Goodfriend and King
1997). Therefore, the main characteristics of the New Keynesian models are their emphasis on
monopolistic competition, nominal rigidities and short-run non-neutrality of monetary policy.
Important work along those lines was undertaken by Taylor (1993) and Fair (1994) who developed
methods for incorporating rational expectations into econometric models, as well as methods for
rigorous assessment of model fit and forecasting performance.
Models in the Fair-Taylor fashion are now in use at a number of leading policy organizations,
including the Fed and the International Monetary Fund (Brayton et al., 1997). Shown below is a
highly aggregated econometric model, described in a neo-Keynesian7 framework, that
incorporates rational expectations and sticky prices:
Yt = β0 + β1Yt−1 + β2Yt−2 + β3(mt − pt) + β4(mt−1 − pt−1) + β5πt + β6t + ut   (3)
πt = γ0 + γ1πt−1 + γ2Yt + vt   (4)
ut = ηt − θ1εt−1   (5)
vt = εt − θ2εt−1   (6)
7 Equation (3) is the aggregate demand equation derived from IS-LM relationships; aggregate demand Y consists of consumption,
investment, government and net foreign demand. Equation (4) is the price determination equation, where the rate of inflation πt is
defined as pt+1 − pt. The rationale is that prices and wages are set in advance of the periods to which they apply. Moreover, the
equation is perfectly accelerationist, that is, output cannot be raised permanently above its potential without raising inflation
(Taylor, 1979, p. 1270). Equations (5) and (6) describe the stochastic structure of the random shocks ut and vt on the assumption
of a first-order moving average form.
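The dynamics of this small system can be simulated directly. The sketch below uses arbitrarily assumed coefficient values (they are not estimates from any study), treats real balances mt − pt as an exogenous series, drops the time trend, and lets output respond to lagged rather than current inflation to avoid the simultaneity between the output and price equations in a simple recursion:

```python
import numpy as np

rng = np.random.default_rng(1)
T = 200

# Coefficients assumed purely for illustration.
b0, b1, b2, b3, b4, b5 = 0.0, 1.2, -0.3, 0.1, -0.05, -0.2
g0, g1, g2 = 0.0, 0.9, 0.1
theta1, theta2 = 0.5, 0.5

eta = rng.normal(size=T)
eps = rng.normal(size=T)
m_minus_p = rng.normal(scale=0.5, size=T)   # real balances, treated as exogenous

Y = np.zeros(T)    # output
pi = np.zeros(T)   # inflation

for t in range(2, T):
    u = eta[t] - theta1 * eps[t - 1]        # MA(1) demand shock
    v = eps[t] - theta2 * eps[t - 1]        # MA(1) price shock
    # Aggregate demand, with lagged inflation and no time trend.
    Y[t] = (b0 + b1 * Y[t - 1] + b2 * Y[t - 2]
            + b3 * m_minus_p[t] + b4 * m_minus_p[t - 1]
            + b5 * pi[t - 1] + u)
    # Price determination.
    pi[t] = g0 + g1 * pi[t - 1] + g2 * Y[t] + v

print(round(float(Y.std()), 2), round(float(pi.std()), 2))
```

With these (stable) assumed parameters the simulated output and inflation paths display the persistent, hump-shaped responses to shocks that motivate this class of models.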
1.5 Forecasting accuracy
Fair (1979) compared four models, each based on a different opinion as to how the economy
operates (see Table 2),8 and concluded that Sargent's and Sims's models are no more accurate
than the naïve model, making his own model superior to the others.
Table 2: Forecasting performance of the four different models
Figure 3: Graphical representation of the forecasting performance of the four different models
8 (1) Sargent's classical macroeconometric model, (2) Sims's six-equation unconstrained vector autoregression model, (3) a "naive"
eighth-order autoregressive model, and (4) Fair's new-Keynesian model. The basic forecast period was 1978.2-1981.4, and for
the misspecification calculations the first of the 35 sample periods ended in 1968.4 and the last ended in 1977.1.
[Chart omitted: ex-ante forecast RMSE (vertical axis) against quarters ahead 1-8 for the Naïve, Sargent, Sims and Fair models.]
RMSE of Real GNP ex-ante forecast (number of quarters ahead)
Model      1     2     3     4     5     6     7     8
Naïve      1.11  1.96  2.76  3.51  4.09  4.42  4.70  4.91
Sargent    1.31  2.26  3.40  3.77  4.27  4.59  4.89  5.00
Sims       1.42  2.54  3.54  4.79  6.34  7.79  9.36  10.98
Fair       0.79  1.26  1.63  2.12  2.59  2.97  3.24  3.52
A study by Stekler and Fildes (2000) compared various structural models used in the UK (see Table
3)9 and concluded that there was limited evidence of correct prediction of cyclical turning points.
In general, those models performed better on average (MAE < 1) than a naïve ARIMA model.
Table 3: One year ahead UK forecast error (Source: UK Treasury)
Another study, conducted by Heilemann and Stekler (2012), found that substantial improvements
in data, theories and methods did not appear to have produced a comparable improvement in
forecasts. While the accuracy of GDP forecasts improved somewhat in the 1980s and 1990s, it
deteriorated in the past decade, returning to the levels of the 1970s.
The structural models considered so far are based on theoretical assumptions about causality
(Wold, 1954, p. 164) and empirical relationships between the variables in question. 'Structural
models thus allow outputs in a given forecast to be traced back through the model structure as
the result of the interaction of a number of economic mechanisms and judgements' (OBR, 2010,
p. 6).
9 UK Treasury compilation of forecasts for the 1990-98 calculations, and the Treasury and Civil Service Committee. GDP is based on
preliminary figures (average estimates of GDP). Based on year-ahead forecasts, Table 3 shows that the MAE of the Treasury's
forecasts of real GDP growth was 0.8% and 1.00% in 1986-90 and 1990-98, respectively. The MAE was about 25% of the mean
absolute change in the earlier period. The non-Treasury errors were slightly larger in the first period but smaller in the second one.
10 The mean absolute error (MAE) is a quantity used to measure how close forecasts are to the actual outcomes.
One year ahead UK forecast error - Mean Absolute Error (MAE) of GDP growth
Forecasting group        1986-90   1990-98
Independent average      1.20      0.95
Selected independents    1.00      0.87
Independent consensus    N/A       0.89
City average             1.00      0.85
City consensus           N/A       0.82
Treasury                 0.80      1.00
Average outcomes         3.05      1.59
Naïve forecast           1.35      1.60
1.6 Non-structural models
Pollock (2013) stressed that the main shortcoming of the equations of macroeconometric models
is that they pay insufficient attention even to the simple laws of linear dynamic systems. Non-
structural time-series models, on the other hand, may therefore offer a more pragmatic approach,
assuming that the data series itself may well contain all the information necessary for adequate
forecasts (Pokorny, 1987, p. 342). They are, in a sense, agnostic or empirical models (Klein, 1991,
p. 14). Significant contributions regarding the theory can be traced to the work of Yule (1927) and
Slutzky (1937), who launched the notion of stochasticity in time series by postulating that every
time series can be regarded as the realization of a stochastic process. The process can be
explained by autoregressive (AR) or moving average (MA) models.
Thus, Slutzky (1937) showed that cycles resembling business fluctuations can be generated by a
combination of a variable's own past values and a series of random causes (Kydland and Prescott,
1990, p. 6). The combined autoregressive integrated moving average (ARIMA) model was widely
popularized by Box and Jenkins (1970), who developed a coherent four-stage iterative cycle for
time series identification, estimation, diagnostic checking and forecasting (cf. Gooijer, 2006,
p. 7).
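Slutzky's result, that moving averages of pure noise generate cycle-like swings, is easy to reproduce. A minimal sketch assuming only numpy:

```python
import numpy as np

rng = np.random.default_rng(42)
T, window = 400, 10

# Purely random, serially uncorrelated shocks.
shocks = rng.normal(size=T)

# An order-10 moving average of the shocks: formally an MA(9) process.
smoothed = np.convolve(shocks, np.ones(window) / window, mode="valid")

def acf1(x):
    """First-order sample autocorrelation."""
    x = x - x.mean()
    return np.dot(x[:-1], x[1:]) / np.dot(x, x)

# The raw noise has (near-)zero autocorrelation, but the moving average is
# strongly autocorrelated, producing apparently systematic, cycle-like swings.
print(round(acf1(shocks), 2), round(acf1(smoothed), 2))
```

This is precisely the danger the non-structural literature exploits and warns about: smooth, persistent-looking 'cycles' need not reflect any systematic mechanism at all.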
Many macroeconomic variables, including GDP, exhibit properties that violate the classical
Gauss-Markov assumptions of constant mean, variance and/or covariance through time. This
non-stationarity was observed by Plosser and Nelson (1982), who investigated a number of
macroeconomic variables including GDP and concluded that they contain a stochastic trend
(random walk); hence they argued that GDP should be modelled as a first-difference stationary
(DS) process (Newbold, 1999, p. 86). This was further confirmed by Stock and Watson (1988,
p. 160), who concluded that macroeconomic time series appear to contain variable trends and,
moreover, that modelling these variable trends as random walks with drift seems to provide a
good approximation to the long-run behaviour of many aggregate economic variables.
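The difference-stationary view can be illustrated directly: a random walk with drift wanders without bound, while its first difference is stationary around the drift. A small sketch assuming only numpy:

```python
import numpy as np

rng = np.random.default_rng(7)
T, drift = 2000, 0.5

# Random walk with drift: y_t = drift + y_{t-1} + e_t (a DS process).
e = rng.normal(size=T)
y = np.cumsum(drift + e)

# First difference: dy_t = drift + e_t, stationary around the drift.
dy = np.diff(y)

# The level's dispersion keeps growing with the sample length,
# while the differenced series is centred on the drift with stable spread.
print(y[: T // 2].var() < y.var())
print(round(float(dy.mean()), 2), round(float(dy.std()), 2))
```

This is exactly the logic behind the 'I' in ARIMA: differencing once turns the DS level series into something an ARMA model can legitimately describe.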
In addition, Granger and Newbold (1973, p. 117) demonstrated, using an ARIMA process, that if
random walks, or near random walks, are present and one includes in regression equations
variables that should in fact not be included, then it will be the rule rather than the exception to
find spurious relationships.
Forecasts based exclusively on the statistical time-series properties of the variable in question
have often been used to provide inexpensive, yet powerful, alternatives to structural models.
Wallis (1989) finds that published model forecasts generally outperform their time-series
competitors, the margin being greater four quarters ahead than one quarter ahead. This is also
confirmed by Pokorny (1987, p. 342), who argues that the time-series approach is not well suited
to generating medium- to long-term forecasts and is of only limited use for the policy evaluation
process. Makridakis (1982, cited in Hendry and Clements, 2003, p. 304) produced results across
many models and concluded: "Although which model does best in a forecasting competition
depends on how the forecasts are evaluated and what horizons and samples are selected, 'simple'
extrapolative methods tend to outperform econometric systems, and pooling forecasts often pays."
In conclusion, the current literature shows that macroeconomic modelling and forecasting
have gone through dramatic changes over time. Firstly, there was a paradigm shift in doctrines,
away from Keynesianism towards monetarism.
Secondly, there was a dramatic evolution of statistical techniques, paving a way to more
rigorous modelling based on advanced econometric models. Alternative models were also
developed based on an AR process, which in many cases can equally compete with the structural
ones. There is no doubt that econometrics is subject to important limitations, which stem largely
from the incompleteness of the economic theory and the ever-changing nature of economic data.
CHAPTER 2 Structural econometric model
2.1 Structural model building
2.1.1. Rationale for simultaneous equations
Univariate regression models consist of a dependent variable that is expressed as a linear function
of one or a set of explanatory variables. The implicit assumption in such models is that the cause-
and-effect relationship between the dependent and explanatory variables is unidirectional: the
explanatory variables are the cause and the dependent variable is the effect. However, many
conceptual frameworks for understanding economic processes and institutions recognize that
there are feedback mechanisms operating between many of the economic variables; that is, one
economic variable affects another economic variable and is, in turn, affected by it (Gujarati, 2004,
p. 718). Economic data produced by the existing economic system may then be described as a
system of simultaneous relations among random economic variables, and these relations involve
current, future and past values of some of the variables.
As shown in Figure 4, simultaneous equations11 models allow us to account for the
interrelationships within a set of variables.
Figure 4: Influence diagram for simultaneous equation model.
Economic variables are rarely determined in isolation; a model that captures the simultaneous
nature of their determination, even as a simplified version of the data generation process,
therefore represents real-world situations more accurately (Judge, 1982, p. 600).
11 In simultaneous equations models there is recognition that the variables p and q are jointly determined. The random errors εd and εs
affect both p and q; Y is a fixed exogenous variable that affects the endogenous variables p and q.
Since analysis of the economy becomes more difficult when there are numerous equations in the
model, small-scale models can explain the economy in a better way, because `it is much easier to see
the forest when the trees are fewer` (Bodkin and Marwah, 1988, p. 301).
Friedman (1953, p. 14) points out that `simple models are easier to understand,
communicate and test empirically with the data`. However, Maddala (1992, p. 2) stresses that the
choice of a simple model to explain complex real-world phenomena may lead to oversimplification
and unrealistic assumptions. The particular role of the model should therefore be the distillation
of the most important elements and their inter-relationships, in a precise and quantified manner,
revealing the inner workings or design of a more complicated mechanism (Klein, 1983, p. 1).
2.1.2. Rationale for Keynesian model
The case for employing structural macroeconomic models to support policy analysis and
forecasting rests on arguments for abstraction and simplification of how the economy works by
using empirical equations, which are themselves based on a diversity of economic thinking (Kenway,
1994, p. 6).
As outlined earlier, there are two dominant strands that attempt to explain how the
economy operates. In the Classical theory, monetary policy12 has no effect on the level of real
economic variables, including output, since all prices and nominal wages are assumed to be
perfectly flexible in both the short run and the long run, owing to the neutrality of money.
Therefore an increase in the money stock will increase the price level proportionally.
In the Keynesian theory, it is assumed that the economy is not operating at full
employment (equilibrium), since machines are not fully utilized and some workers are
unemployed; the supply of output can therefore be increased without increasing inflation.
Moreover, Keynesians claim that prices do not adjust instantly, owing to wage rigidity, menu costs
and sticky prices. Since adjustments take time, an increase in aggregate demand (generated by an
increase in the money supply or government spending) will not affect the price level in the short run.
Instead, it will lead to an increase in the level of output.
12 Monetarists, like the classicals, reject fiscal policy: government spending, financed by taxes or borrowing from the public,
results in a crowding-out of private expenditures with little, if any, net increase in total spending. However, monetarists claim that
a change in the money stock exerts a strong influence on total spending. Monetarists therefore conclude that actions of the monetary
authorities which `result in the change of the money stock should be the main tool of economic stabilization` (Mankiw, 2011, p. 42).
The methodology applied in this dissertation is based on the Keynesian framework for the following
reasons: its longstanding popularity among policy makers, fairly simple calculations compared with
other approaches, straightforward inferences and plausible forecasting results, and lastly the
widespread consensus that prices fail to clear markets, at least in the short run.
The basic elements utilized in the Keynesian framework for the determination of national
income, measured as gross domestic product (GDP), and its components can then be defined in
terms of a prototype macro model (Intriligator et al., 1996, p. 430).
2.2 Building blocks
The prototype model disaggregates national income into only three components, two of which,
C and I, are determined endogenously:
C = β0 + ϒ1Yt + ε1 (consumption function)
I = β1 + ϒ2Yt + ε2 (investment function)
Y = C + I + G (national income equilibrium condition)
Keynesian-style models, however, disaggregate these two components further and also
include more equations and variables to account for factors not treated explicitly in the
prototype model.
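The prototype model above can be solved by hand for its reduced form. A minimal sketch, with parameter values (b0, g1, b1, g2, G) that are purely hypothetical illustrations, not estimates from this dissertation:

```python
# Hypothetical parameter values, chosen only for illustration; the
# prototype model in the text does not specify numbers.
b0, g1 = 10.0, 0.6   # consumption: C = b0 + g1*Y
b1, g2 = 5.0, 0.2    # investment:  I = b1 + g2*Y
G = 20.0             # exogenous government spending

# Substituting C and I into Y = C + I + G and solving for Y gives the
# reduced form Y = (b0 + b1 + G) / (1 - g1 - g2).
Y = (b0 + b1 + G) / (1.0 - g1 - g2)
C = b0 + g1 * Y
I = b1 + g2 * Y

assert abs(Y - (C + I + G)) < 1e-9  # the accounting identity holds
print(round(Y, 2), round(C, 2), round(I, 2))  # -> 175.0 115.0 40.0
```

The division by (1 − g1 − g2) is the familiar Keynesian multiplier: with these illustrative values, each unit of G raises Y by 5 units.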
Since the objective of this dissertation is to estimate a fairly small model, it is important to select
theoretically appropriate and statistically sound variables, striking a balance between
disaggregation and simplicity.
The underlying idea behind the analysis of aggregate demand in the Keynesian theoretical
framework (the IS/LM model) is that prices (and nominal wages) do not clear markets in the
short run, owing to inertia in the setting of prices, especially when the economy is operating
below full capacity/full employment. A temporary increase in government spending or the money
supply affects the economy mainly through the government purchases multiplier, which in turn
increases investment at the initial level of the interest rate. Increasing aggregate demand
beyond the potential or full-employment level will lead to inflation. Keeping this basic
assumption in mind, we are able to construct our small model.
2.2.1. Consumption Function [CONS]
The first two equations of the model, CONS and INV, describe the IS part. An adequate explanation
of consumers’ behaviour is a key behavioural equation in the model, as consumption represents
two-thirds of the UK`s GDP.
The basic tenet, as outlined by Keynes (1936), is the positive relationship between
consumption and income: as income increases, so too does consumption, so the sign of the
variable INC’s coefficient should be positive. According to Keynes` absolute income hypothesis,
current consumption is a stable function of current income, and the marginal propensity to
consume lies between zero and one (0 < mpc < 1) and decreases as income increases.
Friedman (1957, p. 23), however, points out that people do not change their consumption
habits immediately following a change in their income, because of the force of habit (inertia).
Moreover, people may not know whether a change is permanent or transitory. Therefore,
Friedman suggests that the permanent income hypothesis may be approximated by an adaptive
expectations process (7), whereby permanent income is a weighted sum of current and past
values of observed income13:
YtP = λYt + λ(1 – λ)Yt–1 + λ(1 – λ)²Yt–2 + … + λ(1 – λ)kYt–k    (7)
Equation 7, based on geometric convergence, can for simplicity be replaced by the
lagged dependent variable CONS1 (consumption lagged by one quarter), making the consumption
function dynamic. This avoids two problems of ad hoc distributed lag equations: the degrees of
freedom increase and the multicollinearity problem disappears (Studenmund, 2011, p. 409). A
positive sign is expected, as last period`s consumption should have a positive effect on current
consumption.
13 YtP, permanent income, is an inherently nonmeasurable variable, whereas transitory income is observed income (Venieris and Sebold,
1977, p. 381).
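The substitution of equation 7 by the lagged dependent variable is the standard Koyck transformation; the algebra, not spelled out in the text, can be sketched as follows (with consumption written as a linear function of permanent income):

```latex
\begin{aligned}
C_t &= \alpha + \beta Y^P_t + \varepsilon_t,
  \qquad Y^P_t = \lambda \sum_{i=0}^{\infty} (1-\lambda)^i\, Y_{t-i} \\
(1-\lambda)\,C_{t-1} &= (1-\lambda)\alpha
  + \beta\lambda \sum_{i=1}^{\infty} (1-\lambda)^{i}\, Y_{t-i}
  + (1-\lambda)\varepsilon_{t-1} \\
C_t - (1-\lambda)\,C_{t-1} &= \lambda\alpha + \beta\lambda Y_t
  + \varepsilon_t - (1-\lambda)\varepsilon_{t-1} \\
\Rightarrow\quad C_t &= \lambda\alpha + \beta\lambda Y_t + (1-\lambda)\,C_{t-1} + u_t
\end{aligned}
```

Subtracting the lagged, (1 − λ)-weighted equation removes the infinite distributed lag, leaving only current income and one lag of consumption, which is exactly why CONS1 can stand in for equation 7.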
2.2.2 Investment Function [INV]
Investment INV is a smaller component of income than consumption, but it is more volatile and
so is important in the analysis as a source of short-term fluctuations in GDP. Investment can be
described as the accumulation over time by firms of real capital goods (Levacic and Rebmann,
1982, p. 229).
The basic motive for investment carried out by firms is to make a profit. Decisions about
undertaking investment depend on the state of the economy and on the opportunity cost of
accumulating capital, which is present consumption foregone.
The required rate of return in the Keynesian framework is the marginal efficiency of capital
(MEC): `the discount rate applied to the stream of returns on capital that equates the present
value of those returns to the supply price of capital` (Venieris and Sebold, 1977, p. 406). According
to the Keynesian approach, the MEC can then be compared with the market rate of interest so firms
can decide whether to purchase capital goods or defer the purchase. If the MEC exceeds the
market rate of interest, the firm should buy the capital stock; if the MEC is less than the market
rate, the firm should forgo the purchase.
To account for this assumption, the variable INT8 was included in the investment equation, with
the expectation of a negative sign. This variable is lagged by eight quarters, because it takes time
to plan and start up a project.
`Since investment is an injection into the circular flow of income, these changes will cause
multiplied changes in the income` (Sloman and Wride, 2009, p. 496). Because a relatively modest
change in income can cause a much larger change in investment, the accelerator14 variable CINC1
was included. Moreover, the multiplier variable INC was added under the assumption that investment
also depends on the current level of GDP. The rationale behind combining the accelerator and the
multiplier is that, for example, a rise in government expenditure will lead to a multiplied rise in
income. This rise in GDP will cause an accelerator effect: firms will respond to the rise in consumer
demand by investing more, and this will further increase income. If this rise in income is larger
than the first, there will again be a rise in investment, which in turn will increase income (the
multiplier). Both CINC1 and INC should have positive coefficients, as increases in GDP have a
stimulating effect on investment.
14 Clark (1917) specifies the accelerator principle in terms of potential aggregate production Yp as a function of existing
capital (K) and labour (N). Assuming Kt = βYt and It = Kt – Kt-1, then It = β(Yt – Yt-1); that is, a change in output has an
impact on the level of investment.
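The multiplier-accelerator interaction described above can be made concrete with a minimal simulation; the parameter values c, beta and G are hypothetical, chosen only to illustrate the damped oscillation the mechanism can generate, and do not come from the dissertation's estimates:

```python
# Samuelson-style multiplier-accelerator sketch (illustrative parameters).
c, beta, G = 0.6, 0.5, 20.0           # mpc, accelerator coefficient, spending
Y = [100.0, 100.0]                    # two initial periods of income

for t in range(2, 30):
    C_t = c * Y[t - 1]                  # multiplier: consume out of last income
    I_t = beta * (Y[t - 1] - Y[t - 2])  # accelerator: invest on income CHANGE
    Y.append(C_t + I_t + G)

# With these values the path oscillates but converges to the fixed point
# Y* = G / (1 - c) = 50, since the characteristic roots are complex with
# modulus sqrt(beta) < 1.
```

Raising beta above 1 would make the oscillations explosive, which is the usual illustration of how accelerator effects amplify business-cycle swings.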
2.2.3. Interest Rate Function [INT]
The [INT] equation represents the monetary sector of the model, hence the LM part. The short-term
interest rate [INT] is modelled in the standard money demand tradition: at any given level of
GDP there will be a particular transactions and precautionary demand for money. If we assume that
the Bank of England does have some power to control the money supply, its actions will have
an effect on the level of short-term interest rates and inflation. This was explicitly attempted in
the UK in the 1980`s under the phrase `Medium Term Financial Strategy`. Therefore the variable
CMS is included, under the assumption that a decrease in the money supply will increase the interest
rate. This is because `the demand for money decreases when the real short-term interest rate rises,
as the opportunity cost of holding money increases` (Pindyck and Rubinfeld, 1999, p. 447).
There is also empirical evidence of gradual adjustment of interest rates by central banks.
Coibion and Gorodnichenko (2011, p. 26) provide evidence supporting the notion that `inertia in
monetary policy action has indeed been a fundamental and deliberate component of the decision-
making process by monetary policymakers`; more specifically, their evidence `strongly favours
interest rate smoothing over serially correlated policy shocks as an explanation of highly persistent
policy rates.` To account for this observation, the variable CINT1 was included to capture the
change between the lagged interest rates.
Moreover, an increase in GDP will lead to a greater demand for money and hence to higher
interest rates if equilibrium is to be maintained, so the variable INC is included in the equation. In
addition, CINC was included, so the emphasis is not only on the level of GDP but also on whether
this level is changing. The responsiveness of the demand for money to changes in national
income will depend on the size of the mpc, which is derived from the consumption function and
hence allows for a feedback effect.
Since the mid-1990`s there has been a widely accepted assumption that the BofE changed its
reaction function from controlling the money supply to controlling the interest rate, in order to
maintain low and stable inflation.
Howells and Bain (2009, p. 14) stress that the transmission mechanism of monetary policy (see
Figure 5) treats the short-term interest rate, not explicit control of the money supply, as the
policy instrument for achieving the desired outcome.
Figure 5: Transmission mechanism of monetary policy (Source: BofE)
Changing the policy, however, poses a problem for the model because of the Lucas critique.
Lucas (1979) criticised the `Cowles Commission` approach on the grounds that, when the Bank of
England introduced the inflation-targeting policy in the early 1990`s, that change of behaviour
replaced the reaction function (10) with a new one which treats the money supply as an endogenous
variable. Under the new reaction function, the parameters of all the other equations reflect choices
that were made prior to the policy change; under the new policy rule the parameters could be
significantly different in each equation, causing inaccurate forecasts (Webb, 1999, p. 27). Lucas
builds his hypothesis on the assumption that rational (forward-looking) agents will change their
decisions when faced with a policy change or the anticipation of one.
One way to address this problem, at least in part, is to determine the flow of causality between the
MS and INT in our sample period. In an attempt to identify the direction of causality, which can then
help to decide whether the money supply is an endogenous or exogenous variable, that is, whether a
change in the money supply causes a change in interest rates or vice versa, Granger’s causality test
(see Appendix A: A.1) was conducted. According to this test, the calculated value for the money
supply (p = 0.69 > 0.05) suggests that the money supply does not Granger-cause the interest rate.
The calculated value for the interest rate (p = 0.31 > 0.05) suggests that the interest rate does
not Granger-cause the money supply. From the Granger causality test, it therefore appears that both
variables are jointly determined, with slightly stronger evidence for the interest rate being the
cause, as its value is closer to the rejection region. For consistency with the IS/LM approach, we
will thus model the money supply as an exogenous variable.
2.2.4. Inflation Function [INF]
The last equation in our small macroeconomic model describes inflation [INF] as a function of the
deviation of output from its long-run equilibrium. This assumption is based on the accelerationist
hypothesis, whereby output cannot be raised permanently beyond its potential without creating
inflationary pressure. It is expressed as a relationship between the rate of inflation and the
output gap15, the gap between existing output [INC] and potential or full-employment output [POTY],
rather than unemployment as postulated by Phillips (Howells and Bain, 2009, p. 155). To
improve the inflation function, an adaptive expectations formation variable [INF1] was
incorporated, which takes into account workers’ estimation of the rate of inflation. The
resulting expectations-augmented Phillips curve, as postulated by Friedman (1959), in
output/inflation space, assumes backward-looking expectations, since past errors are built into
future forecasts.
The size of the coefficient depends on the degree of money illusion: β = 1 means that workers
base their expectations decisions on the true real wage rate, while 0 < β < 1 indicates that workers
are making incorrect assumptions about the true rate of inflation in the wage-bargaining process.
Proponents of the rational expectations hypothesis argue, however, that economic agents efficiently
apply all relevant knowledge to the best available model in order to predict future values of
economic variables, not just past information (Howells and Bain, 2009, p. 242). Howells
and Bain also point out that even rational inflation expectations do not let workers adjust their
labour contracts immediately, because these contracts run for a fixed period, causing wage stickiness.
Chow (2011) compared adaptive and rational expectations and concluded that there is
insufficient empirical evidence supporting rational expectations; Chow argues that adaptive
expectations provide a better proxy for psychological expectations as required in the study of
economic behaviour. Millet (2007, p. 12) tested rational expectations directly, using survey data,
and indirectly, by implication, and concluded that the Lucas critique has limited relevance in
key empirical applications, although it is appropriate when dealing with breaks in series.
15 This is based on Okun`s law, which states that growth is negatively related to the change in the rate of
unemployment. It is formally expressed as: deviations of income from its potential level (Y - Y*) are proportional to the
difference between actual and full-employment unemployment, β(u* - u).
Millet acknowledges the importance of adaptive behaviour on the part of agents, `emphasizing
the insight for monetary policy that imply… an eventual sensitivity to regime changes but no
drastic or immediate response as a rule – not even to important innovations to the monetary
policymaking process, such as the introduction of inflation targeting` (Millet, 2007, p. 22).
Eckstein (1983, p. 50) examined the past record of the DRI Model and its predictions of changes in
policy regimes and concluded: `so far, the evidence suggests that changes of expectations
formation are not among the principal causes of simulation error, that forecast error is largely
created by other exogenous factors and the stochastic character of the economy`.
Accounting identity
Finally, the model is completed with the addition of a real national expenditure (gross domestic
product) accounting identity. It defines real GDP16 [INC] as the sum of consumer spending [CONS],
investment spending [INV], and government spending [GOV].
16 ONS-published real GDP is already calculated net of imports and exports.
CHAPTER 3 Structural Modelling
Complete model:
CONS = α1 + β2INCt + β3CONSt-1 + ε1t 8
INV = α4 + β5(INCt-1 - INCt-2) + β6INCt - β7INTt-8 + ε2t 9
INT = α8 + β9INCt + β10(INCt - INCt-1) - β11(MSt - MSt-1)
+ β12(INTt-1 - INTt-2) + ε3t 10
INF = α13 + INFt-1 + β14(INCt - POTYt) + ε4t 11
INC ≡ CONS + INV + GOV 12
3.1. Modelling Methodology
The complete model therefore consists of four behavioural equations (8–11) and one identity
equation (12) that specifies additional variables in the system and their accounting relations with
the variables in the behavioural equations. As Table 4 summarises, there are five endogenous
variables (CONS, INV, INT, INF and INC) and nine predetermined17 variables (CONS1,
CINC1, CINC, CINT1, INT8, INF1, CMS, OTG and GOV).
17 We can define endogenous variables as those that are jointly determined in the system in the current period.
Predetermined variables are independent (exogenous) variables plus any lagged endogenous variables that appear in the
model. Strictly speaking, the only exogenous variables in the model are MS, GOV and OTG, because they are not
simultaneously determined within the model.
Table 4: Summary of the variables used in the model
Name Variable Definition Type
CONS Real Aggregate personal consumption endogenous
CONS1 Consumption lagged by one quarter exogenous
INV Real Investments, expressed as gross capital formation endogenous
INC Real total income q/q (GDP) endogenous
CINC1 GDP lagged one quarter minus GDP lagged by two quarters exogenous
CINC Current GDP minus last quarter GDP exogenous
INT Interest rate on 3 month treasury bills endogenous
CINT INT lagged by one quarter - INT lagged by two quarters exogenous
INT8 Interest rate on 3 month treasury bills lagged by eight quarters exogenous
INF1 Inflation lagged by one quarter exogenous
INF Inflation expressed as a growth rate of retail price index endogenous
CMS Real money stock narrowly defined (M0) minus last quarter (M0) exogenous
OTG GDP minus current potential GDP exogenous
POTY Potential output (full employment) GDP exogenous
GOV Real Government expenditure exogenous
Figure 6 describes all the causal flows between the variables. There is a circular causal flow
between GDP and consumption and investment: consumption and investment are in part determined by
GDP, but they are also components of GDP. The interest rate and inflation are simultaneously
determined with GDP; that is, when we follow a change in one of the variables through the
system, the change works back to the original causal variable, although there is no simple
circular feedback loop.
Figure 6: Block diagram of five equation model
[Diagram: lagged and contemporaneous causal links among consumption [CONS], investment [INV],
GDP [INC], the interest rate [INT] and inflation [INF], with exogenous GOV and MS.]
3.1.1 Order condition identification
Prior to the estimation of the model, identification needs to be carried out. A structural
equation is identified only when enough of the system`s predetermined variables are excluded
from it to `allow us to use the observed equilibrium points to distinguish the shape of
the equation in question` (Studenmund, 2011, pp. 478-481). The general method for
determining whether equations are identified is the order condition of identification, which states
that the number of predetermined variables excluded from the equation must be greater than or
equal to the number of included endogenous variables minus one (Pindyck and Rubinfeld, 1999, p. 345).
Table 5: The order condition of identification
Equation CONS CONS1 INV INC CINC1 CINC CINT INT INT8 INF INF1 CMS OTG GOV
1 1 1 0 1 0 0 0 0 0 0 0 0 0 0 overidentified
2 0 0 1 1 1 0 0 0 1 0 0 0 0 0 overidentified
3 0 0 0 1 0 1 1 0 0 0 0 1 0 0 overidentified
4 0 0 0 0 0 0 0 0 0 0 1 0 1 0 overidentified
5 1 0 1 1 0 0 0 0 0 0 0 0 0 0 -
Table 5 shows that all four behavioural equations (identity 12 does not need to be checked) meet
the criteria for further estimation; each is overidentified, so more than one value would be
obtainable for some parameters.
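The order-condition count can be sketched as a small helper; the function and the counts below are illustrative, taking the nine predetermined variables listed in Table 5 as given:

```python
def order_condition(K, k, m):
    """Classify an equation by the order condition.

    K: predetermined variables in the whole system
    k: predetermined variables included in this equation
    m: endogenous variables included in this equation
    """
    excluded = K - k
    if excluded > m - 1:
        return "overidentified"
    if excluded == m - 1:
        return "exactly identified"
    return "underidentified"

# Consumption equation (8): endogenous CONS and INC (m = 2); it includes
# one of the system's nine predetermined variables, CONS1 (k = 1).
print(order_condition(K=9, k=1, m=2))  # -> overidentified
```

Running the same count for equations 9-11 reproduces the "overidentified" verdicts in Table 5, which is what motivates the use of 2sls below.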
Applying OLS directly to structural models may lead to simultaneity bias if one or more of the
explanatory variables are endogenous and therefore correlated with the error term. This may be
the case for the variable INC, because it appears as endogenous in (12) and exogenous in (8-11).
As a result, the OLS-estimated structural coefficients may be inconsistent and inefficient. An
alternative to OLS is therefore Two-Stage Least Squares (2sls) estimation, especially when the
structural parameters are overidentified.
3.1.2 Hausman test
To test whether a simultaneity problem exists, the Hausman test is a widely used method.
The hypotheses are:
H0: the efficient estimator (OLS) is consistent (prefer OLS).
Ha: the efficient estimator is not consistent (prefer 2sls).
The rationale for the Hausman test is whether or not the difference between the two estimators is
statistically significant. According to the test (see Appendix A: A3), the p-value is 0.0406 < 0.05,
so we reject H0 at the 5% significance level. Hausman’s test confirms that simultaneity is present;
i.e. OLS is inconsistent, while 2sls, which uses instrumental variables, will be both consistent and
efficient.
3.1.3 Structural model estimation
To conduct the 2sls estimation, the STATA statistical package was used. The 2sls process18 can be
broken down into two stages, described as follows.
In the first stage, OLS is applied to the reduced-form equation for each endogenous
variable in the system; that is, each endogenous variable is regressed on all the
predetermined variables in the system. There is, however, no need to estimate the reduced form of
the inflation equation, since the variable INF is not used as an exogenous variable elsewhere in the
system. Strictly speaking, there is no need to estimate the reduced-form interest rate equation
either, because the interest rate only appears in the investment function as INT8, so there is no
need to worry about inconsistency in the OLS estimates (Pindyck and Rubinfeld, 1999, p. 396).
First stage – reduced form:
CONŜ = π0̂ + π1̂GOV + π2̂CONS1 + π3̂CINC1 + π4̂INT8 + π5̂CINC + π6̂CMS + π7̂CINT1 + π8̂OTG + π9̂INF1
+ π10̂ MS
INV̂ = π11̂ + π12̂ GOV + π13̂ CONS1 + π14̂ CINC1 + π15̂ INT8 + π16̂ CINC + π17̂ CMS + π18̂ CINT1
+ π19̂ OTG + π20̂ INF1 + π21̂ MS
INĈ = π22̂ + π23̂ GOV + π24̂ CONS1 + π25̂ CINC1 + π26̂ INT8 + π27̂ CINC + π28̂ CMS + π29̂ CINT1
+ π30̂ OTG + π31̂ INF1 + π32̂ MS
Thus, in the first stage we constructed variables which are linearly related to the
predetermined model variables and at the same time uncorrelated with the reduced-form error
term. The only important information from this stage is the coefficient of determination (R2). As
(Appendix A: Table 9) shows, the fairly high R2 in all individual equations suggests high correlation
of the instruments with the endogenous variables.
18 The term `process` means a time series; it emphasizes the dependence of the present value of the series on its
values in prior periods.
In the second stage, the endogenous variables which appear on the right-hand side (only) of the
structural equations are replaced with the first-stage fitted (instrumental) variables.
Unrestricted estimation is then conducted by applying OLS to each equation in the implicit version
of the reduced form. Therefore, by constructing 2sls we obtain consistent estimators of the
endogenous and predetermined variables.
3.2 Analysis of the structural model results
Owing to the lack of extended historical data on potential output, estimation using
2sls was restricted to the period 1980q1-2007q1. The choice of a quarter as the base time19
unit emphasizes the short-run movements of the economic system. Moreover, a quarterly model may be
analytically more useful than an annual one, and the results are more robust because of the
increased number of observations. The evaluation criteria for simultaneous models are more
challenging than for single-equation models because the model as a whole has a much richer
dynamic structure than any individual equation. Although there are no formal statistical tests for
2sls-estimated equations, single-equation test statistics may serve as a good indication of
potential problems. It is evident from Table 6 that there is a serious problem with serial
correlation (SC) in equations INV and INT and with heteroskedasticity (HT) in equations INV and INF.
Table 6: SC and HT tests of individual equations - OLS estimation

Equation | Breusch-Godfrey (autocorrelation): χ², p-value, pass/fail | Breusch-Pagan (heteroskedasticity): χ², p-value, pass/fail | Ramsey RESET (functional form): F, p-value, pass/fail
CONS | 5.058, 0.025, fail | 3.591, 0.166, pass | 6.19, 0.000, fail
INV | 63.76, 0.000, fail | 12.03, 0.007, fail | 3.66, 0.007, fail
INT | 56.02, 0.000, fail | 7.930, 0.094, pass | 1.56, 0.116, pass
INF | 3.722, 0.050, pass | 174.56, 0.000, fail | 5.43, 0.000, fail

19 Quarterly data also have some drawbacks: seasonal effects, the greater degree of serial correlation present in quarterly as
compared with annual series, and the determination of the appropriate lag structure.
Failure of the SC test indicates that the standard errors of the coefficients are biased.
Moreover, the dynamic nature of the model will also bias the coefficients. This bias should be
eliminated when 2sls is employed, producing coefficients closer to their true values, although
serial correlation will persist even after 2sls estimation.
Heteroskedasticity will not be corrected by 2sls either, so t-scores and hypothesis
tests may be unreliable because the standard errors of the coefficients are biased. Ramsey’s test
for correct functional form is rejected in all equations but INT, suggesting that the relationships
between some of the variables are nonlinear20; this is the case at least with the variable MS, which
appears to resemble an exponential trend (Baum, 2006, p. 124). Mariscal (2012a) stresses that most
economic variables are non-stationary and that modelling such variables may cause spurious
results. As (Appendix A: A7) shows, this is the case for all the variables but INT. It can also be
seen in the fairly high R2 across the equations and the failed tests for homoscedasticity. The
investment function is therefore likely to be affected, which may partly explain the small and
wrongly signed21 coefficients on INC and CINC. The consumption function is reasonably
cointegrated, thus offsetting some of the negative effects of non-stationarity. The inflation
function contributes the least to the whole model, and the t-ratios, size and sign of OTG and INF1
are correct. The correlation matrix (see Appendix A: A6.2) reveals serious collinearity between
some of the variables, but owing to the simultaneous nature of the model this is not necessarily
relevant.
Comparing the size of the coefficients between OLS and 2sls in Table 7, the only notable
changes are in the variables INC, CINC and CINC1. This was expected, as GDP is the leading variable,
so its reduced form has a bigger impact on the other variables in the model than the other reduced-
form endogenous variables when 2sls is employed. The significance of the variables improved after
2sls, although INT8 in the investment function remains insignificant. All the remaining variables in
the model are significant at the 95% confidence level. R2 decreased slightly; however, Pokorny
(1989, p. 309) stresses that it is meaningless to judge the success of 2sls on the basis of R2
because `this method makes no reference to it; in fact it is in conflict with the criterion of
consistency`.
20 Improved results may be achieved by using a log/log or semi-log functional form. Moreover, changing to annualized data may
improve some of the statistics.
21 All the remaining coefficient signs in the other equations are correct and of reasonable size.
37
Table 7: Summary statistics of OLS and 2sls estimation procedures22

Equation | Variable | OLS coef. (s.e.) [t] | 2sls coef. (s.e.) [t]
CONS | CONS1 | 0.860 (0.037) [22.69] | 0.789 (0.060) [12.96]
CONS | INC | 0.104 (0.026) [3.87] | 0.147 (0.042) [3.46]
CONS | R2 | 0.999 | 0.999
INV | CINC1 | -0.042 (0.092) [0.460] | 0.267 (0.071) [3.73]
INV | INC | 0.196 (0.005) [39.14] | 0.185 (0.005) [34.57]
INV | INT8 | -46.69 (92.38) [0.510] | -122.93 (110.92) [1.110]
INV | R2 | 0.978 | 0.972
INT | INC | -0.00003 (0.00008) [9.160] | -0.00003 (0.00006) [9.560]
INT | CINC | -0.0003 (0.00008) [3.610] | -0.0015 (0.00008) [2.390]
INT | CMS | -0.02 (0.0008) [2.590] | -0.02 (0.0008) [2.850]
INT | CINT1 | 0.829 (0.200) [4.130] | 0.838 (0.204) [4.100]
INT | R2 | 0.719 | 0.712
INF | OTG | 0.137 (0.039) [3.490] | 0.128 (0.034) [3.750]
INF | INF1 | 0.956 (0.023) [40.57] | 0.974 (0.022) [41.57]
INF | R2 | 0.944 | 0.938

22 Standard errors are shown in parentheses and t-values in brackets beside the coefficients.
3.3 Ex- post forecasting
To assess the robustness of the structural model's forecasts, turning points at times of large
exogenous shocks to the economy may serve as a good benchmark. An ideal example of such an
event is the recent recession of 2007-08. The magnitude and speed with which GDP collapsed are
unprecedented, so the model will be exposed to a great challenge.
To get a better perspective on the likely validity of the model, by comparing the endogenous
variable determined by the simulation23 solution with its actual values, a historical simulation was
conducted. As Figure 7 shows, there is a fairly close relationship between fitted and actual
values, which broke up temporarily in the 1990`s. Since the 2000`s there has been increasing
under-prediction, which suggests a structural break in some of the variables in relation to GDP.
Figure 7: Historical simulation, GDP 1980q1-2007q1
To capture the business cycle turning points of GDP, while keeping in mind the short-term
character of the model, an ex-post forecast was produced for 2007q1 - 2009q1. This means making the
forecast beyond the end of the estimation period and then comparing it with the available data,
which enables us to test the forecasting accuracy of the model. Summary statistics of the
ex-post one-step forecast are shown in Table 8.
23
Refers to the mathematical solution of a simultaneous set of difference equations, in which the current value of one variable is
related to past values of other variables.
Table 8: Ex-post forecast based on 2sls regression
Observation Actual Prediction Error Error (%) S.D. of Error t- ratio24
2007q1 383980 393237.2 -9257.2 -2.41 0.0012 -1.929
2007q2 389661 395476.4 -5815.4 -1.49 0.0012 -1.193
2007q3 394031 398551.4 -4520.4 -1.15 0.0012 -0.921
2007q4 402523 401183.4 1339.6 0.33 0.0012 0.264
2008q1 406124 406566.6 -442.6 -0.11 0.0012 -0.088
2008q2 396921 403762.1 -6841.1 -1.72 0.0012 -1.377
2008q3 391272 396367.1 -5095.1 -1.30 0.0012 -1.041
2008q4 377355 395764.1 -18409.1 -4.88 0.0012 -3.907
2009q1 370764 386480.7 -15716.7 -4.24 0.0012 -3.394
Based on 9 observations from 2007q1 to 2009q1
Mean Prediction Errors (MPE, %) -1.591
Mean Absolute Prediction Error (MAPE, %) 1.959
Root Mean Square Prediction Error (RMSPE, %) 2.492
The MPE shows that the model over-predicted GDP on average by 1.591%, with only one under-
prediction, in 2007q4. Persistent over-prediction (negative bias) suggests the presence of serial
correlation; a clear pattern can also be seen in the plot of residuals (see Appendix A, Figure 1). A
shortcoming of the MPE is that positive and negative errors can offset each other, leading to
unwarranted results (Flegg, 2012a). To overcome this problem the RMSPE was also calculated, at
2.492%; it measures the deviation of the simulated variable from its actual path over time (Pindyck
and Rubinfeld, 1998, p. 210). Pindyck and Rubinfeld argue that the magnitude of the errors can be
evaluated only by comparing it with the average size of the variable; on this basis the calculated
errors are fairly small compared with the actual values.
24
t-ratios were calculated by dividing each prediction error by the standard deviation of the error.
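The summary statistics in Table 8 can be reproduced directly from the actual and predicted values. A minimal sketch in Python (the dissertation's calculations were done in Stata; here percentage errors are taken relative to actual GDP, the convention that reproduces the MAPE and RMSPE figures in the table):

```python
# Forecast-error summary statistics from the actual and predicted GDP values in Table 8.
actual = [383980, 389661, 394031, 402523, 406124,
          396921, 391272, 377355, 370764]
pred   = [393237.2, 395476.4, 398551.4, 401183.4, 406566.6,
          403762.1, 396367.1, 395764.1, 386480.7]

# Percentage error for each quarter, relative to the actual value.
pct_err = [100 * (a - p) / a for a, p in zip(actual, pred)]

mpe   = sum(pct_err) / len(pct_err)                          # mean prediction error
mape  = sum(abs(e) for e in pct_err) / len(pct_err)          # mean absolute prediction error
rmspe = (sum(e * e for e in pct_err) / len(pct_err)) ** 0.5  # root mean square prediction error

print(f"MPE {mpe:.3f}%  MAPE {mape:.3f}%  RMSPE {rmspe:.3f}%")
```

The negative MPE confirms over-prediction on average; the RMSPE is never smaller than the MAPE because squaring gives extra weight to the large 2008q4 and 2009q1 errors.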
The calculated t-ratios of the errors (errors being judged significant when |t| exceeds 1.96) point to
significant errors in 2008q4 and 2009q1, with 2007q1 marginal. The MAPE, which measures the
absolute accuracy of the fitted model, is slightly lower (1.959%) than the RMSPE because it gives
less weight to the relatively large errors in 2008q4 and 2009q1. Figure 8 shows that the model
predicts the turning point fairly well, particularly in the exact turning-point quarters 2008q1-2008q2;
this might be because the errors' negative bias meant the sudden change in direction was not fully
reflected. However, as the time horizon lengthens, forecasting accuracy somewhat decreases,
again showing over-prediction. Overall, the magnitude of the forecasting errors reflects the
small-scale properties of the model, owing to its limited scope and simplified equation specifications.
Figure 8: Ex-post forecast based on 2sls regression
CHAPTER 4 Non-causal model
4.1 Introduction
The structural econometric model discussed thus far relies on causality25 and economic theory to
capture the underlying structure of the economy. A causal model described through interactions
between several interrelated markets is a step closer to the real world than a single equation, which
assumes only weak exogeneity (one-way causality). Therefore the strong exogeneity used in the
model, which accounts for feedback through the lagged endogenous variables appearing on the
right-hand side, could be used to generate more accurate one-step forecasts (Brown, 1991, p. 338).
Brown, however, argues that even strong exogeneity may not be a sufficient assumption in the light
of changes in expectations and the resulting consequences for a model, as outlined earlier mainly by
Lucas (1979). Moreover, `because the structure of the model is assumed a priori and [rests] only on
a subset of the causal factors ... , causality will depend on the specific model` (Brown, 1991, p. 337).
In addition, zero restrictions are placed on variables that do not comply with the underlying
assumptions, causing the model to omit potentially important variables, as pointed out by Liu (1963).
Autoregressive linear stochastic dynamic models do not offer a structural explanation of a variable's
behaviour in terms of other variables, but model it in terms of its own past behaviour, and thus
provide a viable alternative. These time series models, which assume that a random process
generates the data, explain a series not through cause-and-effect relationships but rather `in terms
of how randomness is embodied in the process` (Pindyck and Rubinfeld, 1999, p. 489).
There are a number of techniques now used by modellers that utilise time series models. This
dissertation, though, will concentrate on the autoregressive moving average (ARMA) model. This
choice is motivated by the fact that the ARMA model offers a powerful and efficient means of
generating short-term forecasts and is a widely accepted alternative (benchmark) to structural
models (Pokorny, 1987, p. 341).
25
Brown (1991, p. 338) stresses that statistics cannot prove causality; rather, causality must be assumed in regression analysis.
4.2 Notation of ARMA model
The ARMA model is a combination of an autoregressive (AR) model and a moving average (MA) model.
Let y_t represent26 GDP at time t:

AR(1): y_t = δ + φ_1 y_{t-1} + ε_t

where δ is a constant and ε_t is an uncorrelated random error ~ N(0, σ²), so y_t follows a first-order
autoregressive AR(1) stochastic process.
For a stationary autoregressive process AR(1), μ, the mean of the process, is invariant with respect
to time:

μ = δ / (1 − φ_1)

γ_0, the variance of the process, is constant for |φ_1| < 1 and δ = 0:

γ_0 = E[(φ_1 y_{t-1} + ε_t)²] = σ_ε² / (1 − φ_1²)

and γ_1, the covariance, has the same constancy property:

γ_1 = E[y_{t-1}(φ_1 y_{t-1} + ε_t)] = φ_1 γ_0 = φ_1 σ_ε² / (1 − φ_1²)
The pth-order autoregressive process AR(p) can then be expressed as

AR(p): y_t = δ + φ_1 y_{t-1} + φ_2 y_{t-2} + ... + φ_p y_{t-p} + ε_t,   ε_t ~ WN(0, σ²)

In a stationary autoregressive process of order p, the current observation y_t is generated by a
weighted average of past observations going back p periods.
If we assume that an AR process is not the only one which can generate y, we can write:

MA(1): y_t = μ + θ_1 ε_{t-1} + ε_t

where μ is the mean of the process and ε_t, as before, is the stochastic error ~ iid(0, σ²). It follows
that y_t at time t is equal to a constant plus a moving average of the current and past error terms.
26
The small letter y denotes the variable in deviation-from-mean form, (Y_t − δ)
Therefore y_t follows a first-order moving average MA(1) process. For a process generated by white
noise, the variance is:

γ_0 = σ_ε² (1 + θ_1²)

and the covariance at one lag displacement is:

γ_1 = E[(ε_t + θ_1 ε_{t-1})(ε_{t-1} + θ_1 ε_{t-2})] = θ_1 σ_ε²

The qth-order moving average process MA(q) can then be expressed as:

MA(q): y_t = μ + θ_1 ε_{t-1} + θ_2 ε_{t-2} + ... + θ_q ε_{t-q} + ε_t,   ε_t ~ WN(0, σ²)

A moving average process of order q states that each observation y_t is generated by a weighted
average of the stochastic errors going back q periods. The mean μ of the moving average model is
independent of time, since E(y_t) = μ.
When a univariate series takes on characteristics of both AR and MA processes, the combined
ARMA(p,q) process is written as

ARMA(p,q): y_t = δ + φ_1 y_{t-1} + ... + φ_p y_{t-p} + θ_1 ε_{t-1} + ... + θ_q ε_{t-q} + ε_t,   ε_t ~ WN(0, σ²)
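The stationary-moment formulas above can be checked by simulation. A minimal sketch with illustrative parameters φ_1 = 0.7 and σ_ε = 1 (these are not estimates from the dissertation's data):

```python
import random

# Simulate a stationary AR(1): y_t = phi*y_{t-1} + eps_t with |phi| < 1 and delta = 0,
# then check the analytic moments gamma_0 = sigma^2/(1 - phi^2) and rho_1 = phi.
random.seed(42)
phi, n = 0.7, 200_000
y = [0.0]
for _ in range(n):
    y.append(phi * y[-1] + random.gauss(0.0, 1.0))
y = y[1000:]                      # drop a burn-in so the start value does not matter

mean = sum(y) / len(y)
gamma0 = sum((v - mean) ** 2 for v in y) / len(y)            # sample variance
gamma1 = sum((y[t] - mean) * (y[t - 1] - mean)
             for t in range(1, len(y))) / len(y)             # sample lag-1 autocovariance
rho1 = gamma1 / gamma0

print(gamma0, 1 / (1 - phi ** 2))  # sample vs theoretical variance
print(rho1)                        # lag-1 autocorrelation, close to phi
```

With a long simulated sample, the sample variance settles near σ_ε²/(1 − φ_1²) ≈ 1.96 and the lag-1 autocorrelation near φ_1, as the derivation above implies.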
4.3 Non-stationarity in time series
Time series models, including the ARMA process, assume stationarity, that is, a constant mean,
variance and covariance (autocorrelation, for weak stationarity). From the Dickey-Fuller test (see
Appendix A: A7) it is apparent that many economic time series, including GDP, are non-stationary;
INC is integrated of order one, I(1). This can also be seen from Figure 9, where the first entry in the
correlogram of the autocorrelation function represents the correlation between y_t and y_{t-1}, the
second entry the correlation between y_t and y_{t-2}, and so on, pointing to a geometric decline.
Thus the process has an `infinite memory`: the current value of the process depends on all past
values, at a declining rate.
Figure 9: Autocorrelation function, INC
In order to estimate the model we need to difference the process d times to make it stationary, so
ARMA(p,q) becomes ARIMA(p,d,q), that is, an autoregressive integrated moving average model.
This is because, if the model is to be used for forecasting, we must assume that its features are
time-invariant over future time periods. `Thus the simple reason for requiring stationary data is that
any model which is inferred from these data can itself be interpreted as stationary or stable,
therefore providing valid basis for forecasting` (Gujarati, 2004, p. 840).
4.4 ARIMA methodology
The methodology behind ARIMA modelling is closely associated with George E. P. Box and Gwilym
Jenkins, whose Box-Jenkins (BJ) approach proposed an iterative approach to time series modelling
comprising four steps: identification, estimation, diagnostic checking and forecasting.
4.4.1 Identification
As noted earlier, the GDP time series has a unit root, so the characteristics of the stochastic process
change over time. This can be observed in Figure 10, where there is a clear trend in the variable.
To remedy non-stationarity, we need to decompose the original series by removing the trend in
order to isolate the other components of the data. Thus, by taking the first-order difference
∆Y = Y_t − Y_{t-1}, we eliminated the trend from the time series (Figure 10). To confirm that the first
difference was enough to make the series stationary, an augmented Dickey-Fuller test was used
(see Appendix A: Table 17).
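The mechanics of the Dickey-Fuller regression behind this step can be sketched as follows. This is a simplified version with no augmentation lags, run on a simulated random walk rather than the GDP series used in the dissertation (the dissertation's tests use Stata's dfuller):

```python
import random

def df_tstat(y):
    """Dickey-Fuller regression dy_t = a + b*y_{t-1} + e_t; return the t-statistic on b.

    A t-statistic below the 5% critical value (about -2.89 with a constant)
    rejects the unit-root null, i.e. the series looks stationary.
    """
    n = len(y) - 1
    x = y[:-1]
    dy = [y[t + 1] - y[t] for t in range(n)]
    xbar, dbar = sum(x) / n, sum(dy) / n
    sxx = sum((v - xbar) ** 2 for v in x)
    b = sum((x[t] - xbar) * (dy[t] - dbar) for t in range(n)) / sxx
    a = dbar - b * xbar
    resid = [dy[t] - a - b * x[t] for t in range(n)]
    s2 = sum(r * r for r in resid) / (n - 2)
    return b / (s2 / sxx) ** 0.5

random.seed(1)
eps = [random.gauss(0.0, 1.0) for _ in range(400)]
walk = [0.0]
for e in eps:
    walk.append(walk[-1] + e)         # I(1) random walk, like the GDP level
diff = [walk[t + 1] - walk[t] for t in range(len(walk) - 1)]  # first difference

print(df_tstat(walk))   # typically well above -2.89: cannot reject the unit root
print(df_tstat(diff))   # far below -2.89: the first difference is stationary
```

One round of differencing turns the I(1) level into a stationary series, which is exactly the d = 1 step in ARIMA(p,1,q).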
Figure 10: The UK`s GDP q/q values and the first difference
Plotting the autocorrelation function (ACF) and partial autocorrelation function (PACF) of GROWTH27
(Table 9) shows a clear collapse to insignificance, indicating that the growth rate of real GDP is now
stationary.
Table 9: Autocorrelation and partial autocorrelation functions of GROWTH

LAG      AC        PAC        Q       Prob>Q
  1    0.2275    0.2277    6.7308    0.0095
  2    0.3171    0.2815    19.911    0.0000
  3    0.0565   -0.0758    20.333    0.0001
  4    0.0021   -0.1206    20.333    0.0004
  5   -0.0202   -0.0170    20.388    0.0011
  6    0.0308    0.0817    20.516    0.0022
  7   -0.0093   -0.0412    20.528    0.0045
  8   -0.0587   -0.1205    21.002    0.0071
  9    0.0182    0.0871    21.048    0.0124
 10    0.0732    0.1662    21.798    0.0162
 11    0.0169   -0.0512    21.838    0.0257
 12   -0.0364   -0.1616    22.027    0.0372
 13   -0.0888   -0.1118    23.16     0.0398
 14   -0.1444   -0.1164    26.183    0.0245
 15    0.0016    0.1159    26.184    0.0361

27
GROWTH was constructed in Stata by gen GROWTH = lnINC - l.lnINC, where lnINC = log(INC)
Moreover, the ACF and PACF may be used to get an indication of the order of lags. To identify the
MA(q) order, we need to find for how many periods the correlation persists in the ACF. It can be
seen from Figure 11 that the first two autocorrelations lie outside the 95% confidence band,
indicating they are statistically significantly different from zero; the correlogram thus indicates MA(2).
To identify the AR(p) order we need to look at the PACF, which plots the correlations between y_t
and y_{t-k} with the correlations at the intervening lags netted out. Here the first two lags are
statistically significant (y_t and y_{t-1}: 0.227; y_t and y_{t-2}: 0.281), suggesting an
AR(2) process.
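The cut-off logic can be sketched as follows, using a simulated MA(2) series in place of GROWTH; the ±1.96/√N band is the 95% confidence band referred to above, and all parameter values are illustrative:

```python
import random

def acf(x, k):
    """Sample autocorrelation of x at lag k."""
    m = sum(x) / len(x)
    c0 = sum((v - m) ** 2 for v in x)
    ck = sum((x[t] - m) * (x[t - k] - m) for t in range(k, len(x)))
    return ck / c0

# Simulated MA(2): y_t = eps_t + 0.5*eps_{t-1} + 0.3*eps_{t-2}
random.seed(7)
n = 5000
eps = [random.gauss(0.0, 1.0) for _ in range(n + 2)]
y = [eps[t] + 0.5 * eps[t - 1] + 0.3 * eps[t - 2] for t in range(2, n + 2)]

band = 1.96 / n ** 0.5   # 95% confidence band for a white-noise ACF
for k in range(1, 6):
    r = acf(y, k)
    print(k, round(r, 3), "significant" if abs(r) > band else "insignificant")
# For an MA(2) process only the first two autocorrelations should be
# significant -- the same cut-off pattern used above to suggest MA(2).
```

The PACF side is analogous: regress y_t on its first k lags and read off the coefficient on the kth lag, then apply the same band.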
Figure 11 ACF and PACF of the GROWTH
4.4.2 Estimation
Estimation of the ARIMA28 model was conducted by the STATA for the period 1978q1-2009q4.
Owing to the fact that error terms in MA process tend to be non-normally distributed, which
means that estimated coefficient 𝜃̂𝑡 doesn’t represent the true value of 𝜃𝑡. In these instances
maximum likelihood (ML) procedure needs to be employed instead of OLS. ML therefore
estimates the parameter 𝜃𝑡 as the value of 𝜃̂𝑡 which would maximise the probability of obtaining
the sample actually observed Pindyck and Rubinfeld (1999, p. 53). STATA fits the model by
maximizing the log of the likelihood function through optimization method and progress iteration
by iteration (Becketi, 2013, p.245).
Our tentatively identified ARIMA(2,0,2) model, however, does not necessarily deliver the
best forecasting results. In addition to the rule-of-thumb lag selection, Akaike's and the Schwarz
Bayesian information criteria were computed to get a better perspective on the validity of
alternative lag orders (Table 10). According to Akaike's information criterion, which prefers the
model that minimises information loss, the best-fitting model is ARIMA(1,0,1). According to the
Schwarz criterion, the best fit is ARIMA(4,0,4). Box and Jenkins (1970) argue that there is only a
very limited difference in forecasts between complex high-order systems and low-order systems;
therefore only low-order systems will be considered, namely ARIMA(1,0,1) and ARIMA(2,0,2).
Comparing the overall fit of the models in terms of log-likelihood reveals only a marginal difference
in magnitude (Appendix B, Tables 1 and 2). All coefficients implied by the ARIMA(1,0,1) are
significant at the 5% level. In this specification the ψ coefficients29 show that 72% of an
economic shock persists into the succeeding quarter, followed by 40% of the original shock. The
standard error (SE) of the white-noise term (ε), 0.01, is greater than the mean of the process, 0.006,
indicating that the variability of the error is large relative to the mean (Becketi, 2013, p. 249). In
both models the Wald statistic rejects the null hypothesis that the coefficients are jointly
insignificant. The ψ estimates of the ARIMA(2,0,2), 32.8% and -62.1%, imply that shocks to GDP
persist only marginally and reverse in the succeeding quarters.
28
Although reference is made to ARIMA (p,0,q) throughout the text, we could estimate the log of real GDP directly in Stata
and the notation would change to ARIMA (p,1,q); the results would be identical.
29 For details see Appendix B: `Dynamic response of GDP growth to economic shocks`
The calculated t-ratios of φ_1 and θ_1 in the ARIMA(2,0,2) point to insignificance at the 5% level;
moreover, the coefficient φ_1 is only one third of the size of the same coefficient in the former
model. SE(ε), at 0.009, is still greater than the mean of 0.006, indicating that the variability of the
error is large relative to the mean of the process, though a slight improvement on the former
model. The ARIMA(2,0,2) results therefore suggest that the coefficients are not very precise, i.e.
they do not provide accurate estimates of the dynamic response of GDP growth to economic shocks.
Table 10: Akaike's and Schwarz Bayesian information criteria for model GROWTH

Model            AIC        BIC
ARIMA (1,0,1) -800.01 -788.63
ARIMA (1,0,2) -806.96 -792.74
ARIMA (1,0,3) -805.67 -788.65
ARIMA (1,0,4) -806.52 -786.61
ARIMA (2,0,0) -805.63 -794.27
ARIMA (2,0,1) -804.01 -789.81
ARIMA (2,0,2) -807.45 -790.33
ARIMA (2,0,3) -805.83 -785.92
ARIMA (2,0,4) -801.85 -799.08
ARIMA (3,0,0) -804.31 -790.09
ARIMA (3,0,1) -806.05 -788.98
ARIMA (3,0,2) -806.21 -786.31
ARIMA (3,0,3) -805.09 -782.34
ARIMA (3,0,4) -801.98 -776.39
ARIMA (4,0,0) -803.98 -789.91
ARIMA (4,0,1) -802.63 -789.56
ARIMA (4,0,2) -805.98 -783.24
ARIMA (4,0,3) -803.56 -777.97
ARIMA (4,0,4) -801.58 -733.14
4.4.3 Diagnostic checking
The next step in the BJ approach is model diagnostic checking, that is to check adequacy of
candidate ARIMA models. It follows that well specified and accurately fitted model is evidence
that residuals of its estimated error, is a white noise (Becketi, 2013, p. 254). Widely used test for
iid residuals is the Ljung–Box Portmanteau test, which considers all ACF simultaneously for
significance. According to (Appendix B: B3) the test strongly confirms no evidence that residuals
deviate from white noise in models. On the basis of the considered tests, both models performed
similarly as BJ methodology suggest. Following the BJ`s principle of parsimony and the fact that
the ARIMA (1,0,1) outperformed the ARIMA(2,0,2) in some of the tests we therefore concluded
that the ARIMA(1,0,1) would fit the GDP most accurately. Thus:
𝐴𝑅𝐼𝑀𝐴(1,0,1): 𝑦𝑡 = 0.006 + 0.670𝑦𝑡−1 − 0.446𝜀𝑡−1 + 𝜀
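The Ljung-Box Q statistic referred to above is straightforward to compute. A minimal sketch on simulated residual series (the 18.31 cut-off is the 5% chi-squared critical value for 10 lags; the dissertation's actual tests were run in Stata):

```python
import random

def ljung_box_q(resid, h):
    """Ljung-Box portmanteau statistic over lags 1..h.

    Q = n(n+2) * sum_k rho_k^2 / (n-k); under the white-noise null, Q is
    approximately chi-squared with h degrees of freedom.
    """
    n = len(resid)
    m = sum(resid) / n
    c0 = sum((r - m) ** 2 for r in resid)
    q = 0.0
    for k in range(1, h + 1):
        rho = sum((resid[t] - m) * (resid[t - k] - m) for t in range(k, n)) / c0
        q += rho * rho / (n - k)
    return n * (n + 2) * q

random.seed(3)
white = [random.gauss(0.0, 1.0) for _ in range(500)]   # well-behaved residuals
ar = [0.0]
for e in white:
    ar.append(0.9 * ar[-1] + e)    # strongly autocorrelated: clearly not white noise

CHI2_95_10DF = 18.31               # 5% critical value, 10 degrees of freedom
print(ljung_box_q(white, 10), "vs", CHI2_95_10DF)   # typically below: do not reject
print(ljung_box_q(ar[1:], 10), "vs", CHI2_95_10DF)  # far above: reject white noise
```

A Q below the critical value is the "no evidence against white noise" verdict that both candidate models receive in Appendix B: B3.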
4.4.4 Forecasting
The last part of the time-series modelling concerns forecasting. This was carried out over the same
time period as before, 2007q1-2009q1, using Stata.
Table 11: Ex-post forecast based on ML regression30
Observation Actual Prediction Error Error (%) S.D. of Error t- ratio
2007q1 383981.6 381509.11 2472.53 0.644 0.0012 0.515
2007q2 389660.1 386980.66 2679.40 0.688 0.0012 0.550
2007q3 394029.1 392864.48 1164.60 0.296 0.0012 0.237
2007q4 402524.0 396951.74 5572.26 1.384 0.0012 1.108
2008q1 406122.5 406439.35 -316.90 -0.078 0.0012 -0.062
2008q2 396920.0 408926.21 -12006.22 -3.025 0.0012 -2.421
2008q3 391272.7 396796.96 -5524.28 -1.412 0.0012 -1.130
2008q4 377354.4 391910.98 -14556.61 -3.858 0.0012 -3.088
2009q1 370763.6 376103.63 -5340.01 -1.440 0.0012 -1.153
Based on 9 observations from 2007q1 to 2009q1
Mean Prediction Errors (MPE, %) -0.722
Mean Absolute Prediction Error (MAPE, %) 1.424
Root Mean Square Prediction Error (RMSPE, %) 1.855
30
For simplicity actual and predicted values were converted back to values by the use of antilog, see Appendix B: B4 for details.
All the calculated forecasting-performance statistics, namely MPE, MAPE and RMSPE, point to a
more accurate forecast than the one produced by the structural model. The ARIMA model under-
predicted GDP between 2007q1-2007q4 and over-predicted between 2008q1-2009q1. Despite
persistent over- and under-prediction, the residuals do not show any longer-term pattern (see
Appendix B: B5). Significant t-ratios occur only in 2008q2 and 2008q4. As Figure 12 shows, the exact
timing of the turning point was not picked up well by the model, but as the time horizon increased,
accuracy improved.
Figure 12: UK’s GDP forecast ARIMA (1,1,1), 2007q1-2009q1
CHAPTER 5 Conclusion
In this dissertation, we investigated two models from the opposite spectrum of underlying
assumptions. The small structural model, which was based in the IS/LM/PC framework, quite
clearly responded to exogenous shocks by exhibiting a cyclical response mechanism. Although the
model responded to the exogenous shock to the GDP more accurately than the non-structural
model did, it failed to keep up in following quarters and thus forecasting accuracy gradually
decreased. Structural econometric models are no more than a reflection of the economy’s
interactive nature and so they cannot contain any more information than was put into them
during their construction. An indication of the limitations could be observed from the persistent
over-prediction since 2000 when the historical simulation was conducted. Moreover, most of the
variables used, exhibited properties that violate the classical Gauss-Markov assumptions, possibly
causing radically different implications for the forecasting results from a model that is well
specified and stationary.
Atheoretical ARIMA models, which rely solely on past observations, provided superior results
to the structural model. Although the exact turning points were not predicted as precisely as by
the former model, the forecasting results were more consistent through the forecasting period;
this aspect is well captured by the better results on the RMSPE criterion. This points to two
caveats. In order to build a structural model that can compete with the atheoretical model,
further disaggregation is essential. Prospective model builders therefore need to assess the
costs and benefits of building a more complex model carefully: that is, to assess whether
the added benefits (measured in terms of improved forecasts) of the simultaneous-equation
model can be expected to outweigh the added costs involved in building it. Moreover, Mizon and
Hendry (2011, p. 5) point out that even being the `best forecasting model does not justify its
policy use; and forecast failure is insufficient to reject a policy model`. They argue that models that
`win` forecasting competitions rarely have any useful implications for economic policy analysis,
as they lack both target variables and policy instruments. This is clearly the case for the ARIMA
model, which can only be used for forecasting. Interestingly, the best result would be achieved
by combining the two models' forecasts, given the structural model's over-prediction and the
non-structural model's under-prediction.
APPENDIX
APPENDIX A:
A1: Glossary of variables
CONS - Final consumption expenditure, households & NPISH, constant 2010 chained prices,
seasonally adjusted, quarterly
INC - Gross Domestic Product chained volume, seasonally adjusted 2010 prices, quarterly values
CINC - first difference of INC
CINC1 - GDP lagged one quarter minus GDP lagged by two quarters
MS - M0 notes and coins outside the central bank, seasonally adjusted current prices, monthly values
CMS - Narrowly defined (M0) minus last quarter (M0)
GOV - General Government final consumption expenditure CVM, seasonally adjusted 2010, chained prices,
quarterly
INT - UK 3 month treasury bills, Yield (annualized)31
, in %
CINT - INT lagged by one quarter - INT lagged by two quarters
INF - UK CPI Index: all items (annualized), monthly in %
INF1- Inflation lagged by one quarter
INT8 - Interest rate on 3 month treasury bills lagged by eight quarters
POTY - UK’s potential output current prices, quarterly
OTG - GDP minus current potential GDP = (∆𝑙𝑜𝑔𝐼𝑁𝐶 − ∆𝑙𝑜𝑔𝑃𝑂𝑇𝑌)
INV - Gross capital formation CVM, seasonally adjusted 2010, chained prices, quarterly data
31
INT and INF were converted to quarterly values by dividing the variable by 4
A2: Granger causality test:
In order to conduct the test, all variables need to be stationary. As Appendix A, A7 suggests, both
variables appear to be non-stationary. According to the Dickey-Fuller test, a first difference of the
variable INT (Table 1) was enough to make the variable stationary. Curiously, in the case of MS a
second difference was needed for the variable to pass the DF test (see Tables 2 and 3). The actual
test consists of running a Vector Autoregression (VAR) on both variables and their lags, with the
lag length chosen by Akaike's information criterion. Lags 4, 5, 6, 7 and 8 were tried; as the AIC
kept improving with increased lag length, four lags were chosen to simplify the process (Table 4).
Then the Granger causality Wald test was conducted (Table 5), which means running a Granger
test in each direction. As the F-tests show, the null hypothesis of non-causality could not be
rejected in either direction.
A2.1 ADF for first differenced [INT] is stationary
Table 1
. dfuller CINT,lags(4) reg

Augmented Dickey-Fuller test for unit root          Number of obs = 121

                Interpolated Dickey-Fuller
          Test      1% Critical   5% Critical   10% Critical
       Statistic       Value         Value          Value
Z(t)     -5.811       -3.503        -2.889         -2.579

MacKinnon approximate p-value for Z(t) = 0.0000

D.CINT1       Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
CINT1
  L1.    -.9772638    .1681747   -5.81    0.000    -1.310386   -.6441421
  LD.     .1628473    .1519139    1.07    0.286    -.1380648    .4637594
  L2D.    .1289997    .1352925    0.95    0.342    -.1389887    .3969882
  L3D.    .22083      .1188896    1.86    0.066    -.0146674    .4563274
  L4D.    .0689543    .0875011    0.79    0.432    -.1043685    .2422772
  _cons  -.0936771    .0822875   -1.14    0.257    -.2566728    .0693186
A2.2 ADF for first differenced [MS] is non-stationary
Table 2

. dfuller CMS,lags(4) reg drift

Augmented Dickey-Fuller test for unit root          Number of obs = 122
Z(t) has t-distribution
          Test      1% Critical   5% Critical   10% Critical
       Statistic       Value         Value          Value
Z(t)     -0.149       -2.359        -1.658         -1.289

p-value for Z(t) = 0.4409

D.CMS         Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
CMS
  L1.    -.0143925    .0965072   -0.15    0.882    -.2055373    .1767522
  LD.    -.8665757    .1161508   -7.46    0.000    -1.096627   -.6365245
  L2D.   -.7359772    .1214308   -6.06    0.000    -.9764863   -.4954681
  L3D.   -.707205     .1205876   -5.86    0.000    -.9460439   -.4683662
  L4D.   -.529233     .0945871   -5.60    0.000    -.7165746   -.3418914
  _cons   26.49469    39.97869    0.66    0.509    -52.68815    105.6775

A2.3 ADF for second differenced [MS] is stationary
Table 3

. dfuller dms,lags(4) reg drift

Augmented Dickey-Fuller test for unit root          Number of obs = 121
Z(t) has t-distribution
          Test      1% Critical   5% Critical   10% Critical
       Statistic       Value         Value          Value
Z(t)     -9.453       -2.359        -1.658         -1.289

p-value for Z(t) = 0.0000

D.dms         Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
dms
  L1.    -4.434479    .469095    -9.45    0.000    -5.363666   -3.505292
  LD.     2.485373    .4029534    6.17    0.000     1.6872      3.283546
  L2D.    1.648414    .3119625    5.28    0.000     1.030477    2.266352
  L3D.    .8291929    .2101657    3.95    0.000     .4128952    1.245491
  L4D.    .1555712    .1012939    1.54    0.127    -.0450726    .3562149
  _cons   25.62748    20.48186    1.25    0.213    -14.94315    66.1981
A2.4 VAR estimation of [MS] and [INT]
Table 4

. var CINT dms, lags(1/4) small

Vector autoregression
Sample: 1979q3 - 2009q4                 No. of obs = 122
Log likelihood = -986.1984              AIC  = 16.46227
FPE            = 48390.16               HQIC = 16.6303
Det(Sigma_ml)  = 36005.71               SBIC = 16.87598

Equation   Parms     RMSE      R-sq         F        P > F
CINT1          9   .926489   0.0595   .9642081     0.4676
dms            9    222.96   0.5405   17.93564     0.0000

              Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
CINT1
  CINT1
    L1.    .1661345    .0907508    1.83    0.070    -.0136593    .3459283
    L2.    .0217379    .0894186    0.24    0.808    -.1554164    .1988923
    L3.    .0668469    .0884642    0.76    0.451    -.1084167    .2421105
    L4.   -.0772208    .0872229   -0.89    0.378    -.2500251    .0955835
  dms
    L1.   -.0004127    .0003272   -1.26    0.210    -.0010608    .0002355
    L2.   -.0001827    .0004099   -0.45    0.657    -.0009949    .0006295
    L3.    .000039     .0004384    0.09    0.929    -.0008296    .0009076
    L4.    .000107     .0003611    0.30    0.768    -.0006084    .0008224
  _cons   -.0705722    .0815818   -0.87    0.389    -.2322005    .091056
dms
  CINT1
    L1.   -23.36729    21.83926   -1.07    0.287    -66.6348     19.90022
    L2.   -31.70797    21.51865   -1.47    0.143    -74.3403     10.92436
    L3.   -8.937376    21.28898   -0.42    0.675    -51.11469    33.23993
    L4.   -10.43342    20.99026   -0.50    0.620    -52.01891    31.15206
  dms
    L1.   -.9032189    .0787349  -11.47    0.000    -1.059207   -.7472309
    L2.   -.7964502    .0986536   -8.07    0.000    -.9919006   -.6009997
    L3.   -.7778699    .1055064   -7.37    0.000    -.9868972   -.5688426
    L4.   -.5743644    .0868972   -6.61    0.000    -.7465234   -.4022054
  _cons    17.49074    19.63273    0.89    0.375    -21.40523    56.38671

A2.5 Granger causality test
Table 5

. vargranger

Granger causality Wald tests
Equation   Excluded        F     df   df_r   Prob > F
CINT1      dms        .55336      4    113     0.6970
CINT1      ALL        .55336      4    113     0.6970
dms        CINT1      1.2091      4    113     0.3109
dms        ALL        1.2091      4    113     0.3109
A3. Hausman specification test:
The rationale behind the test is to test for the presence of simultaneity, that is, whether the
endogenous variable is correlated with the error term. If there is no simultaneity, OLS should
generate efficient and consistent parameter estimates, while the instrumental-variable estimator
(generated by 2sls) will be consistent but inefficient. If, however, simultaneity is present, OLS will
be inconsistent, while 2sls will be both consistent and efficient.
The test comprises: regressing the consumption function by OLS (Table 6) and obtaining residuals;
then regressing the consumption function using the instrumental variables (Table 7). Finally, we
compare the quadratic difference between the coefficient vectors, scaled by the precision matrix,
which gives a χ2 test statistic (Table 8).
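The underlying logic can be illustrated with a simulated comparison of OLS against a simple instrumental-variables estimator when the regressor is endogenous (variable names and parameter values are illustrative, not taken from the dissertation):

```python
import numpy as np

# When a regressor is correlated with the error, OLS is inconsistent while an
# instrumental-variables (2SLS-style) estimator is not. True slope is 0.5.
rng = np.random.default_rng(11)
n = 5000
z = rng.normal(0, 1, n)          # instrument: correlated with x, not with u
u = rng.normal(0, 1, n)          # structural error
x = z + 0.8 * u                  # endogenous regressor: cov(x, u) = 0.8
y = 1.0 + 0.5 * x + u

# OLS slope: biased upward here because cov(x, u) > 0.
b_ols = np.cov(x, y)[0, 1] / np.var(x, ddof=1)
# IV slope: cov(z, y) / cov(z, x), the simple-IV form of 2SLS.
b_iv = np.cov(z, y)[0, 1] / np.cov(z, x)[0, 1]

print("OLS slope:", round(b_ols, 3), "  IV slope:", round(b_iv, 3))
# A Hausman-type comparison asks whether b_ols and b_iv differ systematically;
# a large gap, as here, indicates simultaneity and favours 2SLS.
```

The systematic gap between the two slope estimates is precisely the quantity the Hausman χ² statistic in Table 8 evaluates.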
A3.1 single equation OLS estimation
Table 6

. reg CONS CONS1 INC if tin(1980q1, 2007q1)

Source         SS        df        MS            Number of obs = 109
                                                 F( 2, 106)    = 95335.52
Model     2.3359e+11      2   1.1679e+11         Prob > F      = 0.0000
Residual   129857840    106   1225073.96         R-squared     = 0.9994
                                                 Adj R-squared = 0.9994
Total     2.3372e+11    108   2.1640e+09         Root MSE      = 1106.8

CONS          Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
CONS1      .8603323    .0379196   22.69    0.000     .785153     .9355116
INC        .1041349    .0268763    3.87    0.000     .05085      .1574198
_cons     -3411.072    1019.123   -3.35    0.001    -5431.582   -1390.562
A3.2 single equation 2sls estimation
Table 7

. ivregress 2sls CONS CONS1 (INC= MS INT8 CINT1 INF1 CINC1 GOV CINC OTG ) if tin(1980q1, 2007q1)

Instrumental variables (2SLS) regression           Number of obs = 109
                                                   Wald chi2(2)  = 1.9e+05
                                                   Prob > chi2   = 0.0000
                                                   R-squared     = 0.9994
                                                   Root MSE      = 1102.4

CONS          Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
INC        .1432685    .0328878    4.36    0.000     .0788096    .2077275
CONS1      .8052211    .0463722   17.36    0.000     .7143333    .896109
_cons     -4787.321    1217.284   -3.93    0.000    -7173.153   -2401.488

Instrumented: INC
Instruments: CONS1 MS INT8 CINT1 INF1 CINC1 GOV CINC OTG

A3.3 Hausman test
Table 8

. hausman tsls ols,sigmaless

Note: the rank of the differenced variance matrix (1) does not equal the number of
coefficients being tested (2); be sure this is what you expect, or there may be
problems computing the test. Examine the output of your estimators for anything
unexpected and possibly consider scaling your variables so that the coefficients
are on a similar scale.

                  Coefficients
              (b)          (B)         (b-B)    sqrt(diag(V_b-V_B))
             tsls          ols       Difference        S.E.
INC       .1432685     .1041349      .0391336       .0191077
CONS1     .8052211     .8603323     -.0551111       .0269089

b = consistent under Ho and Ha; obtained from ivregress
B = inconsistent under Ha, efficient under Ho; obtained from regress

Test: Ho: difference in coefficients not systematic
          chi2(1) = (b-B)'[(V_b-V_B)^(-1)](b-B)
                  = 4.19
        Prob>chi2 = 0.0406
A4. Structural model results
A4.1 2sls estimation:
Table 9: First stage
. reg3 (CONS = INC CONS1) (INV = CINC1 INC INT8) (INT = INC CINC CMS CINT1) (INF = OTG INF1), endo(INC) exog(GOV OTG MS) 2sls first

First-stage regressions
-----------------------

Source         SS        df        MS            Number of obs = 120
                                                 F( 10, 109)   = 25655.64
Model     3.0401e+11     10   3.0401e+10         Prob > F      = 0.0000
Residual   129162840    109   1184980.18         R-squared     = 0.9996
                                                 Adj R-squared = 0.9995
Total     3.0414e+11    119   2.5558e+09         Root MSE      = 1088.6

CONS          Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
CONS1      .995946     .0122958   81.00    0.000     .9715762    1.020316
CINC1      .0814872    .0411971    1.98    0.050    -.0001641    .1631385
INT8      -104.5998    55.47193   -1.89    0.062    -214.5434    5.343762
CINC       .2130746    .039604     5.38    0.000     .1345809    .2915684
CMS        .5504231    .4745044    1.16    0.249    -.3900293    1.490875
CINT1     -113.2905    122.3191   -0.93    0.356    -355.7229    129.142
OTG        70.4786     61.45123    1.15    0.254    -51.31575    192.2729
INF1      -80.7754     36.91972   -2.19    0.031    -153.9491   -7.601722
GOV       -.0565133    .0854569   -0.66    0.510    -.225886     .1128594
MS         .0032858    .0883523    0.04    0.970    -.1718256    .1783971
_cons      5922.56     3710.857    1.60    0.113    -1432.238    13277.36
Source         SS        df        MS            Number of obs = 120
                                                 F( 10, 109)   = 536.90
Model     2.3029e+10     10   2.3029e+09         Prob > F      = 0.0000
Residual   467519793    109   4289172.41         R-squared     = 0.9801
                                                 Adj R-squared = 0.9783
Total     2.3496e+10    119   197446602          Root MSE      = 2071

INV           Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
CONS1      .3085774    .0233931   13.19    0.000     .2622131    .3549416
CINC1      .142514     .0783786    1.82    0.072    -.01283      .2978579
INT8       9.380083    105.5369    0.09    0.929    -199.7907    218.5509
CINC       .0373861    .0753477    0.50    0.621    -.1119505    .1867228
CMS       -.5415002    .9027583   -0.60    0.550    -2.330738    1.247737
CINT1     -178.3855    232.7156   -0.77    0.445    -639.6202    282.8492
OTG        571.1541    116.9127    4.89    0.000     339.4369    802.8714
INF1       209.5245    70.24082    2.98    0.004     70.30948    348.7395
GOV       -.3563384    .1625841   -2.19    0.031    -.6785749   -.034102
MS         .1306359    .1680927    0.78    0.439    -.2025185    .4637902
_cons      9348.136    7060.012    1.32    0.188    -4644.579    23340.85
Source         SS        df        MS            Number of obs = 120
                                                 F( 10, 109)   = 108.81
Model     1453.43537     10   145.343537         Prob > F      = 0.0000
Residual  145.593593    109   1.33572103         R-squared     = 0.9089
                                                 Adj R-squared = 0.9006
Total     1599.02896    119   13.4372181         Root MSE      = 1.1557

INT           Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
CONS1     -.0000246    .0000131   -1.88    0.063    -.0000504    1.32e-06
CINC1     -.0000796    .0000437   -1.82    0.071    -.0001663    7.05e-06
INT8       .2573257    .0588946    4.37    0.000     .1405985    .3740529
CINC      -.0000262    .000042    -0.62    0.535    -.0001095    .0000572
CMS       -.0012305    .0005038   -2.44    0.016    -.002229    -.000232
CINT1      .5645513    .1298663    4.35    0.000     .3071605    .8219422
OTG        .4562767    .0652429    6.99    0.000     .3269675    .5855859
INF1       .4678207    .0391977   11.93    0.000     .3901321    .5455093
GOV       -.000135     .0000907   -1.49    0.140    -.0003148    .0000448
MS         .0000938    .0000938    1.00    0.320    -.0000921    .0002797
_cons      14.29523    3.939822    3.63    0.000     6.486631    22.10383
Source         SS        df        MS            Number of obs = 120
                                                 F( 10, 109)   = 259.72
Model     1543.02602     10   154.302602         Prob > F      = 0.0000
Residual  64.7576504    109   .594106884         R-squared     = 0.9597
                                                 Adj R-squared = 0.9560
Total     1607.78367    119   13.5107871         Root MSE      = .77078

INF           Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
CONS1     -6.96e-06    8.71e-06   -0.80    0.426    -.0000242    .0000103
CINC1      .0000381    .0000292    1.30    0.195    -.0000198    .0000959
INT8       .0382188    .0392781    0.97    0.333    -.0396291    .1160666
CINC      -.0000149    .000028    -0.53    0.597    -.0000704    .0000407
CMS       -.0006023    .000336    -1.79    0.076    -.0012682    .0000636
CINT1      .5302765    .0866106    6.12    0.000     .3586171    .7019358
OTG        .0885327    .0435118    2.03    0.044     .0022937    .1747717
INF1       .9262634    .0261418   35.43    0.000     .8744512    .9780755
GOV       -.0000807    .0000605   -1.33    0.185    -.0002006    .0000393
MS         .0000992    .0000626    1.59    0.116    -.0000248    .0002232
_cons      3.85524     2.62755     1.47    0.145    -1.352479    9.062958
60
Second stage:
_cons 70865.7 11376.71 6.23 0.000 48317.43 93413.98
MS 1.253788 .2708696 4.63 0.000 .7169331 1.790643
GOV -.2161911 .2619929 -0.83 0.411 -.7354525 .3030703
INF1 32.23124 113.1881 0.28 0.776 -192.104 256.5665
OTG 480.3305 188.3966 2.55 0.012 106.9345 853.7265
CINT1 -334.7356 375.0048 -0.89 0.374 -1077.983 408.5117
CMS -1.816995 1.454732 -1.25 0.214 -4.700225 1.066236
CINC .3570231 .1214175 2.94 0.004 .1163776 .5976686
INT8 -701.2326 170.0653 -4.12 0.000 -1038.297 -364.1686
CINC1 .5130313 .1263017 4.06 0.000 .2627055 .7633571
CONS1 1.112142 .0376963 29.50 0.000 1.037429 1.186854
INC Coef. Std. Err. t P>|t| [95% Conf. Interval]
Total 6.2038e+11 119 5.2133e+09 Root MSE = 3337.3
Adj R-squared = 0.9979
Residual 1.2140e+09 109 11137718.2 R-squared = 0.9980
Model 6.1916e+11 10 6.1916e+10 Prob > F = 0.0000
F( 10, 109) = 5559.16
Source SS df MS Number of obs = 120
Exogenous variables: CONS1 CINC1 INT8 CINC CMS CINT1 OTG INF1 GOV MS
Endogenous variables: CONS INV INT INF INC
_cons .1356703 .1351685 1.00 0.316 -.1299465 .4012871
INF1 .947737 .0227979 41.57 0.000 .9029374 .9925366
OTG .1283367 .0342462 3.75 0.000 .0610402 .1956331
INF
_cons 17.91581 .7799207 22.97 0.000 16.3832 19.44841
CINT1 .8381642 .2046285 4.10 0.000 .4360531 1.240275
CMS -.0023777 .0008356 -2.85 0.005 -.0040196 -.0007358
CINC -.0001509 .0000631 -2.39 0.017 -.0002749 -.0000269
INC -.0000336 3.52e-06 -9.56 0.000 -.0000405 -.0000267
INT
_cons -6972.469 2281.213 -3.06 0.002 -11455.23 -2489.706
INT8 -122.93 110.9292 -1.11 0.268 -340.9145 95.05453
INC .1853121 .005361 34.57 0.000 .1747774 .1958469
CINC1 .2678231 .0717983 3.73 0.000 .1267339 .4089124
INV
_cons -3628.45 1480.371 -2.45 0.015 -6537.495 -719.4044
CONS1 .7891827 .0609076 12.96 0.000 .6694945 .9088708
INC .1477249 .0426785 3.46 0.001 .0638584 .2315915
CONS
Coef. Std. Err. t P>|t| [95% Conf. Interval]
INF 120 2 .9185514 0.9386 894.28 0.0000
INT 120 4 1.999967 0.7123 71.03 0.0000
INV 120 3 2383.319 0.9720 1332.06 0.0000
CONS 120 2 1523.078 0.9991 65503.10 0.0000
Equation Obs Parms RMSE "R-sq" F-Stat P
Two-stage least-squares regression
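For readers who want to see the mechanics of the two-stage procedure reported above, the following is a minimal sketch of 2SLS on simulated data. The variable names, coefficients and instrument are invented purely for illustration; this is not the dissertation's dataset or its reg3 specification.

```python
import numpy as np

def ols(X, y):
    """OLS coefficient vector via least squares."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

def two_sls(y, x_endog, X_exog, Z):
    """Two-stage least squares for a single endogenous regressor:
    stage 1 regresses it on instruments plus exogenous variables,
    stage 2 replaces it with the stage-1 fitted values."""
    n = len(y)
    const = np.ones((n, 1))
    W = np.hstack([const, Z, X_exog])      # instrument set
    x_hat = W @ ols(W, x_endog)            # first-stage fitted values
    X2 = np.hstack([const, x_hat[:, None], X_exog])
    return ols(X2, y)                      # [intercept, endogenous, exogenous]

rng = np.random.default_rng(42)
n = 5000
z = rng.normal(size=(n, 1))                # instrument
u = rng.normal(size=n)                     # common shock causing endogeneity
x_exog = rng.normal(size=(n, 1))
x_endog = 1.0 + 2.0 * z[:, 0] + u + rng.normal(size=n)
y = 0.5 + 1.5 * x_endog + 0.8 * x_exog[:, 0] + u

beta = two_sls(y, x_endog, x_exog, z)
print(beta)   # close to the true values [0.5, 1.5, 0.8]
```

Because the shock u enters both the regressor and the outcome, plain OLS would be biased here; instrumenting with z restores consistency, which is the same logic reg3 applies equation by equation.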

List of Figures (continued)
11. ACF and PACF of GROWTH 46
12. UK's GDP forecast, ARIMA(1,1,1), 2007q1-2009q1 50
List of Tables
1. Comparison of forecasting performance of the eight different models 14
2. Forecasting performance of the four different models 19
3. One-year-ahead UK forecast error - Mean Absolute Error (MAE) 20
4. Summary table of the variables used in the model 32
5. The order condition of identification 33
6. SC and HT tests of individual equations - OLS estimation 35
7. Summary statistics of OLS and 2SLS estimation procedures 37
8. Ex-post forecast based on 2SLS regression 39
9. Autocorrelation function and partial autocorrelation 45
10. Akaike's and Schwarz Bayesian information criteria for model GROWTH 48
11. Ex-post forecast based on ML regression 49
Acknowledgements
I would like to take this opportunity to thank Tony Flegg for his valuable comments throughout the write-up. I would also like to thank my family and friends for supporting me during a challenging final year.
Abstract
Accurately forecasting the direction and magnitude of exogenous shocks to aggregate demand has been the subject of extensive research over the past several decades. In the aftermath of the events of 2007, there has been a heated debate about the validity of present-day econometric models and their failure to predict the recent recession. This calls into question the validity of causal macroeconometric models grounded in economic theory. Atheoretical models, which do not assume an underlying theory, may therefore serve as a viable alternative when assessing the dynamics of shocks to the economy. This dissertation accordingly investigates the implications and forecasting validity of different econometric methods that identify exogenous shocks to the UK's GDP, with particular interest in the 2007-08 recession.
The relevant question to ask about the 'assumptions' of a theory is not whether they are descriptively 'realistic', for they never are, but whether they are sufficiently good approximations for the purpose in hand. And this question can be answered only by seeing whether the theory works, which means whether it yields sufficiently accurate predictions.
Milton Friedman (1953, p. 15)
INTRODUCTION
This dissertation aims to provide a comprehensive analysis and evaluation of two significantly different macroeconometric models and of their ability to forecast the UK's Gross Domestic Product (GDP). The particular focus is on whether structural models perform better than their atheoretical counterparts in forecasting the turning points associated with unusually large shocks to the economy. The crucial argument lies in the view that cycles and trends in time series are systematic. However, as Eugen Slutsky and Ragnar Frisch suggest, cycles are not necessarily systematic in nature but may be merely artefacts of random shocks working their way through the economy (Nelson and Plosser, 1972, p. 909).

Gross Domestic Product is arguably the most important aggregate indicator of economic activity in the UK (Lee, 2011). GDP is the value of the goods and services produced in an economy in a given year; these are valued at market prices and are therefore sensitive to changes in the average price level. Three approaches can be used to measure GDP: the expenditure approach, the income approach and the production approach. The primary focus in this dissertation is on the expenditure measure:

GDP(E) = household final consumption expenditure + final consumption expenditure of non-profit institutions serving households + general government final consumption expenditure + gross capital formation + exports - imports (Lee, 2012).

Accurate GDP analysis and forecasts are of great theoretical and practical value for policy decisions and for assessments of the future state of the economy. Holden et al. (1990) state that forecasts are required for two basic reasons: the future is uncertain, and the full impact of many decisions taken now might not be felt until later.
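The expenditure identity above is simple arithmetic and can be sketched directly; the component values below are entirely hypothetical, not ONS figures.

```python
# Hypothetical quarterly expenditure components, GBP billion (illustrative only).
household_consumption = 330.0
npish_consumption = 15.0          # non-profit institutions serving households
government_consumption = 100.0
gross_capital_formation = 85.0
exports = 150.0
imports = 160.0

# GDP(E) = C_households + C_NPISH + G + gross capital formation + X - M
gdp_e = (household_consumption + npish_consumption + government_consumption
         + gross_capital_formation + exports - imports)
print(gdp_e)  # 520.0
```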
Consequently, accurate predictions of the future would improve the efficiency of the decision-making process. The use of economy-wide macroeconometric models for forecasting and for simulation analyses of likely economic policy outcomes has spread to the majority of countries. Models have become an important instrument in the world-wide analyses and forecasts conducted by international organizations and renowned research institutions, as well as by the central banks of many countries (Welfe, 2013, p. 395).
This is because they not only provide an analytical framework linking the demand and supply sides with the resource allocation process in an economy, but may also help in reducing fluctuations and enhancing economic growth, two major concerns of any economy (Bhattarai, 2005, p. 2). As Figure 1 summarizes, macroeconomic models, alongside other inputs, play a major role in informing and disciplining monetary policy decisions at the Bank of England.

Figure 1: Decision making at the Bank of England (Source: BoE)

The dissertation is organised as follows. Chapter 1 provides a literature review; the review is by no means exhaustive but offers a comprehensive evaluation of past and present trends in macroeconometric modelling and forecasting. Chapter 2 presents the rationale behind building the structural model. Chapter 3 introduces an econometric analysis aimed at developing a satisfactory forecasting model. Chapter 4 concerns the identification, estimation, diagnostic checking and forecasting of the non-causal autoregressive integrated moving average (ARIMA) model. Chapter 5 contains the conclusion. The dissertation also includes an appendix containing the detailed calculations and statistical printouts of all the models considered.
CHAPTER 1 Literature review
The aim of this review is to provide an evaluation of past and existing research on the use and forecasting performance of different econometric models, with a particular focus on their ability to forecast the UK's Gross Domestic Product (GDP). Forecasting models can be broadly split into two categories, based on 'the trade-off between their conceptual coherence with economic theory and their empirical coherence with economic data' (Pagan, 2003, p. 1). Causal or structural models are a set of behavioural equations, together with institutional and definitional relationships, representing the main behaviour of economic agents and the operations of an economy (Valadkhani, 2004). The goal of quantitative analysis of an economy via the estimation of an interrelated system of equations 'is to achieve three purposes; descriptive, prescriptive and predictive uses of econometrics, that is structural analysis, policy evaluation and forecasting' (Intriligator et al., 1978, p. 430). Atheoretical or non-causal models, on the other hand, rely more on statistical patterns in the data. These models attempt to exploit the reduced-form correlations in observed macroeconomic time series, with fewer assumptions about the underlying structure of the economy (Diebold, 1998, p. 2). Because of their restricted nature, they are used almost exclusively for forecasting purposes or as an accuracy benchmark for structural models.

1.1 Macroeconomic theories
The first attempts to formalize a theoretical framework for the national economy as a whole took place during the early 20th century. Three trends in the literature can be distinguished.
The first stemmed from the general equilibrium theory formulated by Leon Walras and later developed by Vilfredo Pareto; the second rested on the foundations of business cycle theory laid by Ragnar Frisch, Joseph Schumpeter and Arthur Cecil Pigou; and the third referred to J. M. Keynes's fundamental writings on unemployment and demand deficiency (Welfe, 2013, p. 8).
1.2 Keynesian Revolution
A complete specification of a macroeconomic model shows how economic behaviour and institutions affect the relationships between a set of conditions x and outcomes y (Reiss and Wolak, 2007, p. 4284). Economic models, however, rest on deterministic assumptions and as such do not perfectly fit observed data. Structural econometric modellers must therefore add a stochastic statistical structure in order to rationalize why economic theory does not perfectly explain the data. The theoretical framework developed by J. M. Keynes (1936), and especially his General Theory, became a cornerstone of the concepts that led to the construction of a class of macroeconometric models based on the 'Cowles Commission' methodology, associated with Klein, Goldberger and Modigliani, whose work predominated in the USA and Europe for over 30 years (Welfe, 2013, p. 4). Lucas and Sargent (1981, p. 296) stress that the success of the Keynesian revolution took the form of a revolution of methods that rested on several important features: 'the evolution of macroeconomics into a quantitative, scientific discipline, the development of explicit statistical descriptions of economic behavior, the increasing reliance of government officials on technical economic expertise, and the introduction of the use of mathematical control theory to manage an economy'. The general profile of the models based on the Cowles Commission's methodology was macroeconomic: they contained final demand (consumption, investment) and the demand for labour, as well as prices, wages and financial flows (Klein, 1991). Variables whose introduction was theoretically unjustified were eliminated by imposing zero restrictions on the appropriate parameters. The IS-LM/PC model [1] became the workhorse tool for constructing and evaluating macro models.
Common features linked with the Klein-Goldberger models explicated the major feedbacks, including a consumption multiplier, whereby consumption depended on national income and was itself one of the national income components. Moreover, they also defined the fundamental macro-identity, i.e. national income as being equal to the sum of consumption, government expenditure, investment and net exports. The Klein-Goldberger models paved the way for the builders of many other medium-term models of the US and UK economies (Welfe, 2013, p. 4).

[1] Vroey and Malgrange (2011, p. 3) point out that the origin of the IS-LM model can be traced to Modigliani (1944). The IS-LM model comprises two distinct sub-models, the Keynesian and the classical system; hence, strictly speaking, it should not be considered Keynesian. But at the time of its dominance, most economists were convinced that the Keynesian variant corresponded to reality, while the classical system was viewed as a foil. Regarding the Phillips Curve (PC): the Klein-Goldberger model was the first to explain wage rates by assuming that their growth depended on the rate of unemployment.
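The consumption-multiplier feedback described above can be illustrated with the textbook two-equation Keynesian system; the parameter values below are hypothetical, chosen only to show the mechanics.

```python
# Simultaneous system:
#   C = a + b*Y        (consumption function; b = marginal propensity to consume)
#   Y = C + I + G      (income identity; net exports omitted for simplicity)
# Substituting the first equation into the second gives the reduced form
#   Y = (a + I + G) / (1 - b),
# so a one-unit increase in G raises Y by the multiplier 1 / (1 - b).

a, b = 50.0, 0.75      # hypothetical autonomous consumption and MPC
I, G = 100.0, 150.0    # hypothetical investment and government spending

Y = (a + I + G) / (1 - b)
C = a + b * Y
multiplier = 1.0 / (1 - b)

print(Y, C, multiplier)  # 1200.0 950.0 4.0
```

The circularity (C depends on Y, and Y contains C) is exactly why such systems are estimated by simultaneous-equation methods rather than equation-by-equation OLS.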
Several competing models were established, such as the Wharton model, the MPS model developed for the Fed, the H.M. Treasury model and many others.2 Klein (1973) compared eight models and concluded that the RMSE3 of the major U.S. econometric models showed that, despite some exceptions, errors were within reasonable bounds.4

Table 1: Comparison of the forecasting performance of the eight different models
Figure 2: Graphical representation of the forecasting performance of the eight models (RMSE against quarters ahead, ex-ante forecasts for the selected models)

2 The most significant include: Economic Analysis Model (BEA), A. Hirsch, M. Liebenberg, and G. Narasimhan; Brookings Model, G. Fromm, L. Klein, and G. Schink; DHL III Model, University of Michigan, S. Hymans and H. Shapiro; Data Resources, Inc., Model (DRI-71), O. Eckstein, E. Green and associates; Fair Model, Princeton University, R. Fair; Federal Reserve Bank of St. Louis Model (FRB St. Louis), L. Andersen and K. Carlson; MPS Model, University of Pennsylvania, A. Ando, F. Modigliani, and R. Rasche; Wharton Mark III Model, University of Pennsylvania, F. G. Adams, V. J. Duggal, G. Green, L. Klein, and M. McCarthy (Klein, 1973). 3 RMSE is a measure of the difference between the values predicted by a model and the values actually observed from the environment being modelled. The aggregation of these residuals serves as a measure of predictive power. 4 Comparing these RMSEs with later studies reveals that the results are not satisfactory. Possible reasons include small-sample bias and inaccurate data. Moreover, the celebrated Wharton III model underperformed even a naïve ARIMA model.
RMSE of real GNP ex-ante forecasts, by number of quarters ahead

Model           Simulation interval      1      2      3      4      5      6      7      8
Brookings       1966.1 - 1970.4       6.74  11.36  16.08  20.94  25.69  29.54  33.18  39.77
ARIMA           1970.3 - 1972.1       8.70  13.00  17.00  23.00  29.00  36.00
BEA             1969.1 - 1971.2       6.01  11.01  18.42  23.26  28.08  30.50
Fair            1965.1 - 1969.4       2.91   4.35   4.52   6.77   9.89
DRI             1971.3 - 1972.3       8.90  14.89  23.10  28.88
FRB - St. Louis 1970.1 - 1971.4      10.29  14.88  13.86  11.69  11.15  16.11
Wharton III     1970.2 - 1971.4       8.04  18.96  26.00  28.52  33.74  39.74  41.77  44.68
The initial momentum for building large-scale macroeconometric models (MEMs) was abruptly interrupted in the 1970s, a `decade of greater inflation, unemployment and turbulence' (Pescatori and Zaman, 2011, p. 2). Mincer and Zarnowitz (1969) compared a number of different models and concluded that forecasting errors built up much faster than in earlier years and that turning points were seriously missed at the onset of the recessions of 1970 and 1974, although they noted there was no decline in accuracy as measured by the criterion of comparison with simple extrapolations. Burns (1986, cited in Wallis 1989, p. 57) notes, 'there was not only disillusion with demand management; there was also growing frustration with the forecasts as the increased level of noise in the economic system led to increased margins of error'. Greenberger (1976) points out that the use of modelling in government has fallen short of expectations and that the gap between expectations and actual results is widest in policy applications. Kenway (1978) argues that MEMs lost their hold because model builders ceased to believe in the structure, and the way in which the economy was believed to work, that a macroeconomic model, as a structural model, represents.

1.3 Expectations Revolution

According to Pesaran (1995) the major criticisms of the traditional models based on the Cowles Commission approach can be summarised in terms of the following issues. First, Liu (1963) argues that there is an arbitrary assumption of zero restrictions on the variables that are excluded from an equation in order to achieve identification. Secondly, there is the problem of unit roots in many macroeconomic variables and the neglect of their time-series properties (Nelson and Plosser, 1982). Thirdly, there is an insufficient connection between real and monetary variables.
At the structural level, Friedman (1968) argued that the original Phillips curve depended on incorrect inflation forecasts owing to the existence of money illusion; therefore the trade-off between inflation and unemployment would not hold in the long run, when classical principles apply, i.e. money should be neutral.
Friedman thus proposed an expectations-augmented Phillips curve, assuming that current expectations of inflation are based on a weighted average5 (1) of past inflation rates, as follows:

π^e_t = γ[π_t + (1 − γ)π_{t−1} + (1 − γ)^2 π_{t−2} + …] = γ ∑_{k=0}^{∞} (1 − γ)^k π_{t−k}   (1)

Lucas (1976, p. 41) extended Friedman's argument and asserted that the econometric models of the time, all derivatives of the Klein-Goldberger model, based on decision rules and estimated by empirical relations, were a fundamentally defective paradigm for producing conditional forecasts, because the parameters of decision rules will generally change when policy, or expectations about policy, change. Therefore, the key policy implication of the Lucas critique was that it is impossible to surprise rational people systematically, so systematic monetary policy aimed at stabilizing the economy is doomed to failure (Sargent and Wallace, 1975). According to Lucas, only deeper, 'structural' models, i.e. those derived from the fundamentals of business cycle theory emphasizing agents' preferences and technological constraints, based on imperfect information, rational expectations6 (2) and market clearing, were able to provide a more accurate grounding for the evaluation of alternative policies and for forecasting. Taylor (1979) points out that the introduction of rational expectations assumptions is significant enough to be called a paradigm shift. In essence, the rational expectations hypothesis states that the difference between the realized and expected values should be uncorrelated with the variables in the information set at the time the expectations are formed (Muth, 1961). Muth observed that the various expectations schemes used in the analysis of dynamic economic models had little resemblance to the way the economy works. If the economic system changes, the way expectations are formed should change, but the traditional models of expectations do not permit any such change.
Yt = E(Yt | It−1)   (2)

5 The adjustment parameter 0 < γ < 1 says that economic agents will adapt their expectations in the light of past experience and, in particular, that they will learn from their mistakes (Gujarati, 2004). Adaptive expectations may be formed where people expect prices to rise in the current year at the same rate as in the previous year, such that πe = πt−1. The expected level of inflation is therefore a weighted average of the present level and the previously expected level. 6 The formula states that the left-hand side should be interpreted as the subjective expectation of Y at time t and the right-hand side as the objective expectation conditional on the information I available at time (t − 1) (Maddala, 1992, p. 444). Moreover, expectations are uncorrelated with the error term; otherwise the forecaster has not used all the available information.
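The adaptive expectations scheme in equation (1) can be checked numerically; the geometrically weighted average is exactly equivalent to the recursive error-learning rule π^e_t = π^e_{t−1} + γ(π_t − π^e_{t−1}). The inflation figures and the value of γ in this sketch are hypothetical, chosen only for illustration:

```python
import numpy as np

gamma = 0.4                                 # assumed adjustment parameter, 0 < gamma < 1
pi = np.array([2.0, 3.0, 5.0, 4.0, 3.5])    # hypothetical inflation history, oldest first

# Direct geometrically weighted average (equation 1), most recent rate first:
weights = gamma * (1 - gamma) ** np.arange(len(pi))
pi_e_direct = np.sum(weights * pi[::-1])

# Equivalent recursive (error-learning) form: agents revise by a fraction
# gamma of their last forecast error.
pi_e = 0.0
for p in pi:
    pi_e += gamma * (p - pi_e)

print(round(pi_e_direct, 4), round(pi_e, 4))   # the two forms agree
```

The recursion is what makes the scheme operational in a regression: only the previous expectation needs to be carried forward, not the entire inflation history.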
Fisher (1983, p. 271), on the other hand, stresses that the Lucas critique has not been backed by any detailed empirical support but is rather asserted. Bodkin and Marwah (1988) point out that rational expectations is an irrational assumption with respect to the typical economic agent's complete access to the raw data and the true model of the economy. Klein (1989, p. 290) acknowledges the importance of the Lucas critique, but adds: "I believe that there is more persistence than change in the structure of economic relationships. The world and the economy change without interruption, but that does not mean that parametric structure is changing; random errors and exogenous variables may be the main sources of changes". Maddala (1992) offers a solution to the Lucas critique: making the coefficients of the MEM depend on exogenous policy variables. Heckman and Leamer (2007, p. 226) suggest redefining exogeneity, i.e. a variable x is exogenous if the Lucas critique does not apply to it.

1.4 The new Keynesians

Significant effort has been devoted to translating Lucas's ideas into empirical models. These efforts include Kydland and Prescott (1990), Nelson and Plosser (1982), and Sargent and Wallace (1975), who provided the main reference framework for the analysis of economic fluctuations, which became, to a large extent, the core of macroeconomic theory based on rational expectations and Real Business Cycle (RBC) theory, where the emphasis switched to the role of random shocks to technology and the intertemporal substitution in consumption and leisure that these shocks induce. Mankiw (2003) points out that RBC models omit any role of monetary policy, unanticipated or otherwise, in explaining economic fluctuations. Goodhart (1982) tested the policy irrelevance hypothesis and found evidence that unanticipated monetary shocks do have real effects on variables like output and employment.
Howells and Bain (2009) add to these shortcomings, stating that the RBC models' assumption of perfect and instantaneous market clearing fails in the real world where, in fact, prices are `sticky', as proposed by the new Keynesians. The New Keynesian approach to macroeconomics evolved in response to the monetarist controversy and to the fundamental questions raised by Lucas's critique, and in order to provide an alternative to the competitive flexible-price framework of RBC analysis (Goodfriend and King, 1997). The main characteristics of New Keynesian models are therefore their emphasis on monopolistic competition, nominal rigidities and the short-run non-neutrality of monetary policy.
Important work along those lines was undertaken by Taylor (1993) and Fair (1994), who developed methods for incorporating rational expectations into econometric models, as well as methods for the rigorous assessment of model fit and forecasting performance. Models in the Fair-Taylor fashion are now in use at a number of leading policy organizations, including the Fed and the International Monetary Fund (Brayton et al., 1997). Shown below is a highly aggregated econometric model, described in a neo-Keynesian7 framework, that incorporates rational expectations and sticky prices:

Yt = β0 + β1Yt−1 + β2Yt−2 + β3(mt − pt) + β4(mt−1 − pt−1) + β5πt + β6t + ut   (3)
πt = γ0 + γ1πt−1 + γ2Yt + vt   (4)
ut = ηt − θ1εt−1   (5)
vt = εt − θ2εt−1   (6)

7 Equation (3) is the aggregate demand equation derived from the IS-LM relationships. Aggregate demand Y consists of consumption, investment, government and net foreign demand. Equation (4) is the price determination equation, where the rate of inflation πt is defined as pt+1 − pt. The rationale is that prices and wages are set in advance of the periods to which they apply. Moreover, the equation is perfectly accelerationist, that is, output cannot be raised permanently above its potential without raising inflation (Taylor, 1979, p. 1270). Equations (5) and (6) describe the stochastic structure of the random shocks ut and vt on the assumption of a first-order moving average form.
1.5 Forecasting accuracy

Fair (1979) compared four models, each based on a different view of how the economy operates (see Table 2),8 and concluded that Sargent's and Sims's models are no more accurate than the naïve model, making his own model superior to the others.

Table 2: Forecasting performance of the four different models

RMSE of real GNP ex-ante forecasts, by number of quarters ahead

Model       1      2      3      4      5      6      7      8
Naïve     1.11   1.96   2.76   3.51   4.09   4.42   4.70   4.91
Sargent   1.31   2.26   3.40   3.77   4.27   4.59   4.89   5.00
Sims      1.42   2.54   3.54   4.79   6.34   7.79   9.36  10.98
Fair      0.79   1.26   1.63   2.12   2.59   2.97   3.24   3.52

Figure 3: Graphical representation of the forecasting performance of the four different models (RMSE against quarters ahead, ex-ante forecasts for the selected models)

8 (1) Sargent's classical macroeconometric model, (2) Sims's six-equation unconstrained vector autoregression model, (3) a "naïve" eighth-order autoregressive model, and (4) Fair's new-Keynesian model. The basic forecast period was 1978.2-1981.4, and for the misspecification calculations the first of the 35 sample periods ended in 1968.4 and the last ended in 1977.1.
A study by Stekler and Fildes (2000) compared various structural models used in the UK (see Table 3)9 and concluded that there was limited evidence of correct prediction of cyclical turning points. In general, those models performed on average10 (MAE < 1) better than a naïve ARIMA model.

Table 3: One-year-ahead UK forecast errors of GDP, mean absolute error (MAE). Source: UK Treasury

Forecasting group        1986-90   1990-98
Independent average        1.20      0.95
Selected independents      1.00      0.87
Independent consensus      N/A       0.89
City average               1.00      0.85
City consensus             N/A       0.82
Treasury                   0.80      1.00
Average outcomes           3.05      1.59
Naïve forecast             1.35      1.60

Another study, conducted by Heilemann and Stekler (2012), found that substantial improvements in data, theories and methods had not appeared to offer substantial improvement in forecasts. While the accuracy of GDP forecasts improved somewhat in the 1980s and 1990s, it deteriorated in the past decade, returning to the levels of the 1970s. The structural models considered so far are based on theoretical assumptions about causality (Wold, 1954, p. 164) and empirical relationships between the variables in question. `Structural models thus allow outputs in a given forecast to be traced back through the model structure as the result of the interaction of a number of economic mechanisms and judgements' (OBR, 2010, p. 6).

9 UK Treasury compilation of forecasts for the 1990-98 calculations and the Treasury and Civil Service Committee. GDP is based on preliminary figures, average estimates of GDP, based on year-ahead forecasts. Table 3 shows that the MAE of the Treasury's forecasts of real GDP growth was 0.8% and 1.00% in 1986-90 and 1990-98, respectively. The MAE was about 25% of the mean absolute change in the earlier period. The non-Treasury errors were slightly larger in the first period but smaller in the second one. 10 The mean absolute error (MAE) is a quantity used to measure how close forecasts are to the actual outcomes.
1.6 Non-structural models

Pollock (2013) stressed that the main shortcoming of the equations of macroeconometric models is that they pay insufficient attention even to the simple laws of linear dynamic systems. Non-structural time-series models, on the other hand, may therefore offer a more pragmatic approach, assuming that the data series itself may well contain all the necessary information for adequate forecasts (Pokorny, 1987, p. 342). They are, in a sense, agnostic or empirical models (Klein, 1991, p. 14). Significant contributions regarding the theory can be traced to the work of Yule (1927) and Slutzky (1937), who launched the notion of stochasticity in time series by postulating that every time series can be regarded as the realization of a stochastic process. The process can be explained by autoregressive (AR) or moving average (MA) models. Thus Slutzky (1937) shows that cycles resembling business fluctuations can be generated by a combination of a variable's own past values and a series of random causes (Kydland and Prescott, 1990, p. 6). The combined autoregressive integrated moving average (ARIMA) model was widely popularized by Box and Jenkins (1970), who developed a coherent four-stage iterative cycle of time-series identification, estimation, diagnostic checking and forecasting (cf. Gooijer, 2006, p. 7). Many macroeconomic variables, including GDP, exhibit properties that violate the classical Gauss-Markov assumptions of constant mean, variance and/or covariance through time. This non-stationarity was observed by Nelson and Plosser (1982), who investigated a number of macroeconomic variables, including GDP, and concluded that a stochastic trend (random walk) was present; hence they argued that GDP should be modelled as a first-difference-stationary (DS) process (Newbold, 1999, p. 86). This was further confirmed by Stock and Watson (1988, p. 160), who concluded that macroeconomic time series appear to contain variable trends.
Moreover, modelling these variable trends as random walks with drift seems to provide a good approximation to the long-run behaviour of many aggregate economic variables. In addition, Granger and Newbold (1973, p. 117) demonstrated with an ARIMA process that if random walks, or near random walks, are present, and one includes in regression equations variables that should in fact not be included, then it will be the rule rather than the exception to find spurious relationships.
Forecasts based exclusively on the statistical time-series properties of the variable in question have often been used to provide inexpensive, yet powerful, alternatives to structural models. Wallis (1989) finds that published model forecasts generally outperform their time-series competitors, the margin being greater four quarters ahead than one quarter ahead. This is also confirmed by Pokorny (1987, p. 342), who argues that the time-series approach is not well suited to generating medium- to long-term forecasts and is of only limited use in the policy evaluation process. Makridakis (1982), cited in Hendry and Clements (2003, p. 304), produced results across many models and concluded: "Although which model does best in a forecasting competition depends on how the forecasts are evaluated and what horizons and samples are selected, 'simple' extrapolative methods tend to outperform econometric systems, and pooling forecasts often pays." In conclusion, the current literature shows that macroeconomic modelling and forecasting have gone through dramatic changes over time. Firstly, there was a paradigm shift in doctrines, away from Keynesianism towards monetarism. Secondly, there was a dramatic evolution of statistical techniques, paving the way to more rigorous modelling based on advanced econometric models. Alternative models were also developed based on AR processes, which in many cases can compete equally with the structural ones. There is no doubt that econometrics is subject to important limitations, which stem largely from the incompleteness of economic theory and the ever-changing nature of economic data.
CHAPTER 2 Structural econometric model

2.1 Structural model building

2.1.1. Rationale for simultaneous equations

Univariate regression models consist of a dependent variable that is expressed as a linear function of one or a set of explanatory variables. In such models the implicit assumption is that the cause-and-effect relationship between the dependent and explanatory variables is unidirectional: the explanatory variables are the cause and the dependent variable is the effect. However, many conceptual frameworks for understanding economic processes and institutions recognize that there are feedback mechanisms operating between many of the economic variables; that is, one economic variable affects another economic variable and is, in turn, affected by it (Gujarati, 2004, p. 718). The realisation that economic data are a product of the existing economic system means that this system may be described as a system of simultaneous relations among the random economic variables, and that these relations involve current, future and past values of some of the variables. As shown in Figure 4, simultaneous equations models11 allow us to account for the interrelationships within a set of variables.

Figure 4: Influence diagram for a simultaneous equation model.

There are not many instances in which we can look at parts of the economy in isolation; therefore the simultaneous determination of economic variables, each equation being a simplified version of the data generation process, represents real-world situations more accurately (Judge, 1982, p. 600).

11 In simultaneous equations models there is recognition that the variables p and q are jointly determined. The random errors εd and εs affect both p and q. Y is a fixed exogenous variable that affects the endogenous variables p and q.
Since the analysis of the economy becomes more difficult as the number of equations in the model grows, small-scale models can explain the economy better because `it is much easier to see the forest when the trees are fewer' (Bodkin and Marwah, 1988, p. 301). Friedman (1953, p. 14) points out that `simple models are easier to understand, communicate and test empirically with the data'. However, Maddala (1992, p. 2) stresses that the choice of a simple model to explain complex real-world phenomena may lead to oversimplification and unrealistic assumptions. The particular role of the model should therefore be the distillation of the most important elements and their inter-relationships, in a precise and quantified manner, to reveal the inner workings or design of a more complicated mechanism (Klein, 1983, p. 1).

2.1.2. Rationale for the Keynesian model

The case for employing structural macroeconomic models to help with policy analysis and forecasting rests on arguments for the abstraction and simplification of how the economy works by using empirical equations, which are themselves based on a diversity of economic thinking (Kenway, 1994, p. 6). As outlined earlier, there are two dominant strands that attempt to explain how the economy operates. In classical theory, monetary policy12 has no effect on the level of real economic variables, including output, assuming all prices and nominal wages are perfectly flexible in both the short run and the long run, owing to the neutrality of money. Therefore an increase in the money stock will increase the price level proportionally. In Keynesian theory, it is assumed that the economy is not operating at full employment (equilibrium), since machines are not fully utilized and some workers are unemployed; therefore the supply of output can be increased without increasing inflation. Moreover, Keynesians claim that prices do not adjust instantly, owing to wage rigidity, menu costs and sticky prices.
Since adjustments take time, an increase in aggregate demand (generated by an increase in the money supply or government spending) will not affect the price level in the short run. Instead, it will lead to an increase in the level of output.

12 Monetarists, like the classicals, reject fiscal policy: government spending, financed by taxes or borrowing from the public, results in a crowding-out of private expenditures with little, if any, net increase in total spending. However, monetarists claim that a change in the money stock exerts a strong influence on total spending. Monetarists therefore conclude that actions of the monetary authorities which `result in the change of the money stock should be the main tool of economic stabilization' (Mankiw, 2011, p. 42).
The methodology applied in this dissertation is based on the Keynesian framework for the following reasons: its longstanding popularity among policy makers; fairly simple calculations compared to other approaches; straightforward inferences and plausible forecasting results; and, lastly, the widespread consensus that prices fail to clear markets, at least in the short run. The basic elements utilized in the Keynesian framework for the determination of national income, measured as gross domestic product (GDP), and its components can then be defined in terms of a prototype macro model (Intriligator et al., 1996, p. 430).

2.2 Building blocks

The prototype model disaggregates national income into only three components, two of which, C and I, are determined endogenously:

C = β0 + γ1Yt + ε1 (consumption function)
I = β1 + γ2Yt + ε2 (investment function)
Y = C + I + G (national income equilibrium condition)

Models in the Keynesian fashion, however, disaggregate these two components further, and they also include more equations and variables to account for certain factors not treated explicitly in the prototype model. Since the objective of this dissertation is to estimate a fairly small model, it is important to select theoretically appropriate and statistically sound variables, so that there is a balance between disaggregation and simplicity. The underlying idea behind an analysis of aggregate demand in the Keynesian theoretical framework, as determined by the IS/LM model, is that prices (and nominal wages) do not clear markets in the short run owing to inertia in the setting of prices, especially when the economy is operating below full capacity/full employment. A temporary increase in government spending or the money supply affects the economy mainly through the government purchases multiplier. This, in turn, increases investment at the initial level of the interest rate. Increasing aggregate demand beyond the potential or full-employment level will lead to inflation.
Keeping this basic assumption in mind, we are able to construct our small model.
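As a quick sanity check, the prototype model can be solved for its reduced form: substituting the behavioural equations into the equilibrium condition gives Y = (β0 + β1 + G)/(1 − γ1 − γ2), with 1/(1 − γ1 − γ2) as the multiplier. The parameter values in this sketch are assumptions chosen for illustration, not estimates from the dissertation:

```python
# Reduced form of the prototype Keynesian model (illustrative parameters).
b0, g1 = 50.0, 0.6   # consumption: C = b0 + g1*Y
b1, g2 = 20.0, 0.2   # investment:  I = b1 + g2*Y
G = 30.0             # exogenous government spending

# Equilibrium condition Y = C + I + G solved for Y:
multiplier = 1.0 / (1.0 - g1 - g2)
Y = multiplier * (b0 + b1 + G)
C = b0 + g1 * Y
I = b1 + g2 * Y

print(f"multiplier = {multiplier:.2f}, Y = {Y:.1f}, C+I+G = {C + I + G:.1f}")
# prints: multiplier = 5.00, Y = 500.0, C+I+G = 500.0
```

The identity Y = C + I + G holds exactly at the solution, and the size of the multiplier follows directly from the two marginal propensities γ1 and γ2.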
2.2.1. Consumption Function [CONS]

The two initial equations of the model, CONS and INV, describe the IS part. An adequate explanation of consumers' behaviour is a key behavioural equation in the model, as consumption represents two-thirds of the UK's GDP. The basic tenet, as outlined by Keynes (1936), is the positive relationship between consumption and income; as income increases, so too does consumption, so the sign of the variable INC's coefficient should be positive. According to Keynes' absolute income hypothesis, current consumption is a stable function of current income, and the marginal propensity to consume lies between 0 < mpc < 1 and decreases as income increases. Friedman (1957, p. 23), however, points out that people do not change their consumption habits immediately following a change in their income, because of the force of habit (inertia). Moreover, people may not know whether a change is permanent or transitory. Therefore, Friedman suggests that the permanent income hypothesis may be approximated by an adaptive expectations process (7), whereby permanent income is a weighted sum of current and past values of observed income13:

YtP = λYt + λ(1 − λ)Yt−1 + λ(1 − λ)2Yt−2 + … + λ(1 − λ)kYt−k   (7)

Equation (7), based on geometric convergence, can, however, be replaced for simplicity by the lagged dependent variable CONS1 (consumption lagged by one quarter), making the consumption function dynamic. This avoids two problems with ad hoc distributed lag equations: the degrees of freedom increase and the multicollinearity problem disappears (Studenmund, 2011, p. 409). A positive sign is expected, as a change from the last period's consumption should have a positive effect on current consumption.

13 YtP, permanent income, is an inherently nonmeasurable variable, whereas transitory income is observed income (Venieris and Sebold, 1977, p. 381).
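The substitution of the geometric distributed lag by a lagged dependent variable is the Koyck transformation; a sketch of the algebra, for a generic consumption function C_t = α + βY_t^P + ε_t with permanent income as in equation (7):

```latex
C_t = \alpha + \beta\lambda \sum_{k=0}^{\infty} (1-\lambda)^k Y_{t-k} + \varepsilon_t
% lag the equation once and multiply through by (1 - \lambda):
(1-\lambda) C_{t-1} = (1-\lambda)\alpha
  + \beta\lambda \sum_{k=0}^{\infty} (1-\lambda)^{k+1} Y_{t-1-k}
  + (1-\lambda)\varepsilon_{t-1}
% subtracting eliminates the infinite lag structure:
C_t = \lambda\alpha + \beta\lambda Y_t + (1-\lambda) C_{t-1}
  + \varepsilon_t - (1-\lambda)\varepsilon_{t-1}
```

The lagged term thus stands in for the entire distributed lag, at the cost of introducing a moving-average error term, which is one reason the dynamic specification needs careful diagnostic checking.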
2.2.2 Investment Function [INV]

Investment, INV, is a smaller component of income than consumption; it is more volatile, and so is important in the analysis as a source of the short-term fluctuations in GDP. Investment can be described as the accumulation over time by firms of real capital goods (Levacic and Rebmann, 1982, p. 229). The basic motive for investment carried out by firms is to make a profit. Decisions about undertaking investment depend on the state of the economy and the opportunity cost of accumulating capital, which is present consumption foregone. The required rate of return in the Keynesian framework is the marginal efficiency of capital (MEC); this `is the discount rate applied to the stream of returns on capital [that] equates the present value of those returns to the supply price of capital' (Venieris and Sebold, 1977, p. 406). According to the Keynesian approach, the MEC can then be compared to the market rate of interest, so that firms can decide whether to purchase capital goods or to defer the purchase. Therefore, if the MEC exceeds the market rate of interest, the firm should buy the capital stock. If the MEC is less than the market rate, the firm should forgo the purchase. To account for this assumption, a variable INT8 was included in the investment equation, with the expectation of a negative sign. This variable is lagged by eight quarters, because it takes time to plan and start up a project. `Since investment is an injection into the circular flow of income, these changes will cause multiplied changes in income' (Sloman and Wride, 2009, p. 496). Because a relatively modest change in income can cause a much larger change in investment, the accelerator14 variable CINC1 was included. Moreover, the multiplier variable INC was added under the assumption that investment also depends on the current level of GDP.
The rationale behind the use of the combination of the accelerator and the multiplier is that, for example, a rise in government expenditure will lead to a multiplied rise in income. This rise in GDP will cause an accelerator effect; firms will respond to the rise in consumer demand by investing more, and this will further increase income. If this rise in income is larger than the first one, there will again be a rise in investment, which in turn will increase income (the multiplier). Both CINC1 and INC should have positive coefficients, as increases in GDP have a stimulating effect on investment.

14 Clark (1917) specifies the accelerator principle in terms of potential aggregate production Yp as a function of existing capital (K) and labour (N). Assuming Kt = βYt and It = Kt − Kt−1, then Kt − Kt−1 = β(Yt − Yt−1); that is, a change in output has an impact on the level of investment.
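The MEC decision rule described above can be illustrated numerically: find the discount rate that equates the present value of a stream of returns to the supply price of capital, then compare it with the market rate. All figures in this sketch are hypothetical:

```python
# MEC as an internal rate of return: the discount rate that equates the
# present value of the returns to the supply price of capital.
def present_value(rate, returns):
    return sum(r / (1 + rate) ** (t + 1) for t, r in enumerate(returns))

supply_price = 100.0
returns = [30.0, 40.0, 50.0]   # assumed stream of returns over three periods

# Bisection for the MEC on [0, 1]; PV is decreasing in the rate.
lo, hi = 0.0, 1.0
for _ in range(100):
    mid = (lo + hi) / 2
    if present_value(mid, returns) > supply_price:
        lo = mid
    else:
        hi = mid
mec = (lo + hi) / 2

market_rate = 0.05   # assumed market rate of interest
print(f"MEC = {mec:.3f}; invest: {mec > market_rate}")
```

With these figures the MEC comes out just under 9%, above the assumed 5% market rate, so the Keynesian rule says the firm should undertake the purchase; raising the market rate above the MEC reverses the decision, which is the channel the lagged INT8 variable is meant to capture.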
2.2.3. Interest Rate Function [INT]

The [INT] equation represents the monetary sector of the model, hence the LM part. The short-term interest rate [INT] is modelled in the standard money demand tradition: at any given level of GDP there will be a particular transactions and precautionary demand for money. If we assume that the Bank of England does have some power over controlling the money supply, its actions will have an effect on the level of short-term interest rates and inflation. This was explicitly attempted in the UK in the 1980s under the phrase `Medium Term Financial Strategy'. Therefore the variable CMS is included, under the assumption that a decrease in the money supply will increase the interest rate. This is because `demand for money decreases when the real short-term interest rate rises, as the opportunity cost of holding money increases' (Pindyck and Rubinfeld, 1999, p. 447). There is also empirical evidence for the gradual adjustment of interest rates by central banks. Coibion and Gorodnichenko (2011, p. 26) provide evidence that supports the notion that `inertia in monetary policy action has indeed been a fundamental and deliberate component of the decision-making process by monetary policymakers'; more specifically, their evidence `strongly favours interest rate smoothing over serially correlated policy shocks as an explanation of highly persistent policy rates'. To account for this observation, the variable CINT1 was included to capture the changes between the lagged interest rates. Moreover, an increase in GDP will lead to a greater demand for money and hence to higher interest rates if equilibrium is to be maintained, so the variable INC is included in the equation. In addition, INC1 was included, so the emphasis is not only on the level of GDP but also on whether this level is changing.
The responsiveness of the demand for money to changes in national income will depend on the size of the mpc, which is derived from the consumption function and hence allows for a feedback effect. Since the mid-1990s there has been a widely accepted assumption that the Bank of England changed its reaction function from controlling the money supply to controlling the interest rate in order to maintain low and stable inflation. Howells and Bain (2009, p. 14) stress that the transmission mechanism of monetary policy (see Figure 5) sees the short-term interest rate, not the explicit control of the money supply, as the policy instrument for achieving the desired outcome.
Figure 5: Transmission mechanism of monetary policy (Source: BofE)

Changing the policy, however, poses a problem for the model because of the Lucas critique. Lucas (1979) criticised the Cowles Commission approach on the grounds that, when the Bank of England introduced inflation targeting in the early 1990s, that change of behaviour replaced the reaction function (10) with a new one which treats the money supply as an endogenous variable. Therefore, under the new reaction function, the parameters of all other equations reflect choices that were made prior to the policy change. Under the new policy rule the parameters could be significantly different in each equation, causing inaccurate forecasts (Webb, 1999, p. 27). Lucas builds his hypothesis on the assumption that rational (forward-looking) agents will change their decisions when faced with a policy change or the anticipation of one. One way to address this problem, at least in part, is to determine the direction of causality between MS and INT over our sample period. In an attempt to identify the direction of causality, which can then help to decide whether the money supply is an endogenous or exogenous variable (that is, whether a change in money supply causes a change in interest rates or vice versa), a Granger causality test (see Appendix A: A.1) was conducted. According to this test, the calculated value for money supply, F = 0.69, exceeds the 0.05 critical value, suggesting that money supply does not Granger-cause the interest rate. The calculated value for the interest rate, F = 0.31, also exceeds the 0.05 critical value, suggesting that the interest rate does not Granger-cause money supply. The Granger test thus suggests that both variables are jointly determined, with slightly stronger evidence for the interest rate being the cause, as its F value is closer to the rejection region. For consistency with the IS/LM approach, we will therefore model the money supply as an exogenous variable.
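The Granger test reported above compares a regression of each variable on its own lags with one that adds lags of the other variable. A minimal sketch of the underlying F-test in Python with numpy, run on hypothetical simulated series (the lag length, sample size and coefficients are illustrative, not taken from the dissertation):

```python
import numpy as np

def granger_f(y, x, p=4):
    """F-statistic for H0: lags of x add no explanatory power for y
    beyond y's own p lags (restricted vs unrestricted regression)."""
    n = len(y)
    Y = y[p:]
    own = np.column_stack([y[p - k:n - k] for k in range(1, p + 1)])
    other = np.column_stack([x[p - k:n - k] for k in range(1, p + 1)])
    ones = np.ones((n - p, 1))
    Xr = np.hstack([ones, own])           # restricted: own lags only
    Xu = np.hstack([ones, own, other])    # unrestricted: plus lags of x
    rss = lambda X: np.sum((Y - X @ np.linalg.lstsq(X, Y, rcond=None)[0]) ** 2)
    rss_r, rss_u = rss(Xr), rss(Xu)
    return ((rss_r - rss_u) / p) / (rss_u / (n - p - Xu.shape[1]))

# Hypothetical data: x drives y with a one-period lag, not vice versa.
rng = np.random.default_rng(0)
x = rng.standard_normal(200)
y = np.zeros(200)
for t in range(1, 200):
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.standard_normal()
print(granger_f(y, x), granger_f(x, y))  # first F large, second small
```

With quarterly MS and INT data the same function would reproduce the comparison in Appendix A: A.1, though a packaged implementation with proper p-values would normally be preferred.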
2.2.4. Inflation Function [INF]

The last equation in our small macroeconomic model describes inflation [INF] as a function of the deviation of output from its long-run equilibrium. This assumption is based on the accelerator theory, whereby output cannot be raised permanently beyond its potential without creating inflationary pressure. This is expressed as a relationship between the rate of inflation and an output gap, the gap15 between existing output [INC] and potential or full-employment output [POTY], rather than unemployment as postulated by Phillips (Howells and Bain, 2009, p. 155). To improve the inflation function, an adaptive expectations variable [INF1] was incorporated, which takes into account workers' estimates of the rate of inflation. The resulting expectations-augmented Phillips curve, as postulated by Friedman (1959), here in output/inflation space, assumes backward-looking expectations, since past errors are built into future forecasts. The size of the coefficient depends on the degree of money illusion: β = 1 means that workers base their wage-bargaining decisions on the true real wage rate, while 0 < β < 1 indicates that workers are making incorrect assumptions about the true rate of inflation in the wage-bargaining process. Proponents of the rational expectations hypothesis argue, however, that economic agents efficiently apply all relevant knowledge to the best available model in order to predict future values of economic variables, not just past information (Howells and Bain, 2009, p. 242). Howells and Bain also point out that even with rational inflation expectations workers cannot adjust their labour contracts immediately, because these contracts run for fixed periods, causing wage stickiness. Chow (2011) compared adaptive and rational expectations and concluded that there is insufficient empirical evidence supporting rational expectations.
Chow argues that adaptive expectations provide a better proxy for psychological expectations as required in the study of economic behaviour. Millet (2007, p. 12) tested rational expectations directly, using survey data, and indirectly, by implication, and concluded that the Lucas critique has limited relevance in key empirical applications, though it is appropriate when dealing with breaks in series.

15 This is based on Okun`s law, which states that growth is negatively related to the change in the rate of unemployment. It is formally expressed as: deviations of income from its potential level (Y - Y*) are proportional to the difference between the actual and full-employment rates of unemployment, β(u* - u).
Millet acknowledges the importance of adaptive behaviour on the part of agents, `emphasizing the insight for monetary policy that imply… an eventual sensitivity to regime changes but no drastic or immediate response as a rule – not even to important innovations to the monetary policymaking process, such as the introduction of inflation targeting` (Millet, 2007, p. 22). Eckstein (1983, p. 50) examined the past record of the DRI Model and its predictions of changes in policy regimes and concluded: `so far, the evidence suggests that changes of expectations formation are not among the principal causes of simulation error, that forecast error is largely created by other exogenous factors and the stochastic character of the economy`.

Accounting identity

Finally, the model is completed with the addition of a real national expenditure (gross domestic product) accounting identity. It defines real GDP16 [INC] as the sum of consumer spending [CONS], investment spending [INV], and government spending [GOV].

16 ONS-published real GDP is already calculated net of imports and exports.
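The adaptive-expectations mechanism behind [INF1] revises the expectation by a fraction of last period's forecast error. A minimal Python sketch, where the weight `lam` and starting value `pe0` are hypothetical illustrations rather than estimates from the model:

```python
def adaptive_expectation(inflation, lam=0.5, pe0=0.0):
    """Adaptive-expectations recursion: the expectation is revised by a
    fraction lam of last period's forecast error (hypothetical values)."""
    pe = [pe0]
    for actual in inflation:
        pe.append(pe[-1] + lam * (actual - pe[-1]))
    return pe[1:]  # expectation held after observing each period's inflation

print(adaptive_expectation([2.0, 4.0, 4.0], lam=1.0))   # [2.0, 4.0, 4.0]
print(adaptive_expectation([2.0, 4.0, 4.0], lam=0.5))   # [1.0, 2.5, 3.25]
```

With lam = 1 the expectation collapses to last period's inflation, which is exactly the INF1 regressor entering equation (11); smaller values of lam spread the adjustment over many periods, so past errors fade geometrically.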
CHAPTER 3
Structural Modelling

Complete model:

CONS = α1 + β2INCt + β3CONSt-1 + ε1t (8)
INV = α4 + β5(INCt-1 - INCt-2) + β6INCt - β7INTt-8 + ε2t (9)
INT = α8 + β9INCt + β10(INCt - INCt-1) - β11(MSt - MSt-1) + β12(INTt-1 - INTt-2) + ε3t (10)
INF = α13 + INFt-1 + β14(INCt - POTYt) + ε4t (11)
INC ≡ CONS + INV + GOV (12)

3.1. Modelling Methodology

The complete model therefore consists of four behavioural equations (8-11) and one identity equation (12) that specifies additional variables in the system and their accounting relations with the variables in the behavioural equations. As Table 4 summarises, there are five endogenous variables (CONS, INV, INT, INF and INC) and eight predetermined17 (exogenous) variables (CONS1, CINC1, INT8, CINC, CINT, CMS, OTG, and GOV).

17 We can define endogenous variables to be those that are jointly determined in the system in the current period. Predetermined variables are independent (exogenous) variables plus any lagged endogenous variables in the model. Strictly speaking, the only exogenous variables in the model are MS, GOV and OTG, because they are not simultaneously determined within the model.
Table 4: Summary of the variables used in the model

Name   Definition                                                        Type
CONS   Real aggregate personal consumption                               endogenous
CONS1  Consumption lagged by one quarter                                 exogenous
INV    Real investment, expressed as gross capital formation             endogenous
INC    Real total income q/q (GDP)                                       endogenous
CINC1  GDP lagged one quarter minus GDP lagged two quarters              exogenous
CINC   Current GDP minus last quarter's GDP                              exogenous
INT    Interest rate on 3-month treasury bills                           endogenous
CINT   INT lagged by one quarter minus INT lagged by two quarters        exogenous
INT8   Interest rate on 3-month treasury bills lagged by four quarters   exogenous
INF1   Inflation lagged by one quarter                                   exogenous
INF    Inflation, expressed as the growth rate of the retail price index endogenous
CMS    Real money stock narrowly defined (M0) minus last quarter's M0    exogenous
OTG    GDP minus current potential GDP                                   exogenous
POTY   Potential output (full-employment) GDP                            exogenous
GOV    Real government expenditure                                       exogenous

Figure 6 describes all the causal flows between the variables. There is a circular causal flow between GDP and consumption and investment: consumption and investment are in part determined by GDP, but they are also components of GDP. Interest rate and inflation are simultaneously determined with GDP; that is, when we follow a change in one of these variables through the system, the change feeds back to the original causal variable, but there is no circular feedback loop.

Figure 6: Block diagram of the five-equation model (nodes: CONS, INV, INT, INF, INC, GOV, MS, with lagged links)
3.1.1 Order condition of identification

Prior to estimation of the model, identification needs to be carried out. A structural equation is identified only when enough of the system`s predetermined variables are excluded from it to `allow us to use the observed equilibrium points to distinguish the shape of the equation in question` (Studenmund, 2011, pp. 478-481). The general method for determining whether equations are identified is the order condition of identification, which states that the number of predetermined variables excluded from the equation must be greater than or equal to the number of included endogenous variables minus one (Pindyck and Rubinfeld, 1999, p. 345).

Table 5: The order condition of identification

Equation  CONS CONS1 INV INC CINC1 CINC CINT INT INT8 INF INF1 CMS OTG GOV  Status
1         1    1     0   1   0     0    0    0   0    0   0    0   0   0    overidentified
2         0    0     1   1   1     0    0    0   1    0   0    0   0   0    overidentified
3         0    0     0   1   0     1    1    0   0    0   0    1   0   0    overidentified
4         0    0     0   0   0     0    0    0   0    0   1    0   1   0    overidentified
5         1    0     1   1   0     0    0    0   0    0   0    0   0   0    -

Table 5 shows that all four behavioural equations (the identity (12) does not need to be identified) meet the criteria for further estimation, because more than one value is obtainable for some parameters. Applying OLS directly to structural models may lead to simultaneity bias if one or more of the explanatory variables are endogenous and therefore correlated with the error term. This may be the case for the variable INC, because it appears as endogenous in (12) and as an explanatory variable in (8)-(11). As a result, OLS estimates of the structural coefficients may be inconsistent and inefficient. The alternative to OLS, therefore, is Two-Stage Least Squares (2sls) estimation, especially when the structural parameters are overidentified.

3.1.2 Hausman test

To test whether a simultaneity problem exists, the Hausman test is a widely used method. The hypotheses are:

H0: the efficient estimator (OLS) is consistent (prefer OLS).
Ha: the efficient estimator is not consistent (prefer 2sls).
The rationale for the Hausman test is to establish whether or not the difference between the two estimators is statistically significant. According to the test (see Appendix A: A3), the p-value is 0.0406 < 0.05, so we reject H0 at the 5% significance level. Hausman`s test confirms that simultaneity is present; i.e. OLS is inconsistent, while 2sls, which uses instrumental variables, will be both consistent and efficient.

3.1.3 Structural model estimation

To conduct the 2sls estimation, the STATA statistical package was used. The 2sls process18 can be broken down into two stages, described as follows. In the first stage, OLS is applied to the reduced-form equation for each endogenous variable in the system. This is accomplished by regressing the endogenous variables on all the predetermined variables in the system. There is no need, however, to estimate the reduced form of the inflation equation, since the variable INF is not explicitly used as an explanatory variable elsewhere in the system. Strictly speaking, there is no need to estimate the reduced-form interest rate equation either, because the interest rate only appears in the investment function as INT8, so there is no need to worry about inconsistency in the OLS estimates (Pindyck and Rubinfeld, 1999, p. 396).

First stage – reduced form:

CONŜ = π̂0 + π̂1GOV + π̂2CONS1 + π̂3CINC1 + π̂4INT8 + π̂5CINC + π̂6CMS + π̂7CINT1 + π̂8OTG + π̂9INF1 + π̂10MS
INV̂ = π̂11 + π̂12GOV + π̂13CONS1 + π̂14CINC1 + π̂15INT8 + π̂16CINC + π̂17CMS + π̂18CINT1 + π̂19OTG + π̂20INF1 + π̂21MS
INĈ = π̂22 + π̂23GOV + π̂24CONS1 + π̂25CINC1 + π̂26INT8 + π̂27CINC + π̂28CMS + π̂29CINT1 + π̂30OTG + π̂31INF1 + π̂32MS

Thus in the first stage we construct variables which are linearly related to the predetermined variables of the model and at the same time uncorrelated with the reduced-form error term. The only important information from this stage is the coefficient of determination (R2).
As Appendix A (Table 9) shows, the fairly high R2 in all the individual equations suggests that the instruments are highly correlated with the endogenous variables.

18 The term `process` means time series – it emphasizes the dependence of the present value of the series on its values in prior periods.
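The two-stage procedure can be sketched with numpy on a hypothetical simultaneous system (the instrument z, the coefficients and the sample size below are illustrative, not the dissertation's model; note also that valid 2sls standard errors require the proper 2sls variance formula, not the naive second-stage OLS one):

```python
import numpy as np

def two_stage_least_squares(y, x_endog, z):
    """Minimal 2sls sketch: stage 1 projects the endogenous regressor on
    the instrument set z; stage 2 runs OLS using the fitted values."""
    n = len(y)
    Z1 = np.column_stack([np.ones(n), z])
    fitted = Z1 @ np.linalg.lstsq(Z1, x_endog, rcond=None)[0]   # stage 1
    X2 = np.column_stack([np.ones(n), fitted])
    return np.linalg.lstsq(X2, y, rcond=None)[0]                # stage 2

# Hypothetical system: x is correlated with the error u (simultaneity),
# z is a valid instrument (relevant and uncorrelated with u).
rng = np.random.default_rng(1)
z = rng.standard_normal(5000)
u = rng.standard_normal(5000)
x = 1.0 * z + 0.8 * u + 0.3 * rng.standard_normal(5000)  # endogenous regressor
y = 2.0 + 1.5 * x + u                                    # true slope is 1.5
b_ols = np.linalg.lstsq(np.column_stack([np.ones(5000), x]), y, rcond=None)[0]
b_2sls = two_stage_least_squares(y, x, z)
print(b_ols[1], b_2sls[1])  # OLS slope biased upward; 2sls close to 1.5
```

The upward OLS bias here is the simultaneity bias discussed in section 3.1.1; replacing x by its instrumented fitted values removes the component correlated with u.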
In the second stage, the endogenous variables which appear on the right-hand side (only) of the structural equations are replaced with their first-stage fitted (instrumental) values, and OLS is then applied to each structural equation. By constructing 2sls in this way we obtain consistent estimates of the coefficients on the endogenous and predetermined variables.

3.2 Analysis of the structural model results

Owing to the lack of extended historical data on potential output, 2sls estimation was restricted to the period 1980q1-2007q1. The choice of a quarter as the base time unit19 emphasizes the short-run movements of the economic system. Moreover, a quarterly model may be analytically more useful than an annual one, and the results more robust, because of the increased number of observations. The evaluation criteria for simultaneous models are more challenging than those for single-equation estimates, because the model as a whole has a much richer dynamic structure than any individual equation. Although there are no formal statistical tests for 2sls-estimated equations, single-equation test statistics may be used as a good indication of potential problems. It is evident from Table 6 that there is a serious problem with serial correlation (SC) in equations INV and INT, and with heteroskedasticity (HT) in equations INV and INF.
Table 6: SC and HT tests of the individual equations (OLS estimation)

        Breusch-Godfrey (autocorrelation)   Breusch-Pagan (heteroskedasticity)   Ramsey RESET (functional form)
        χ2       p-value  pass/fail         χ2       p-value  pass/fail          F      p-value  pass/fail
CONS    5.058    0.025    fail              3.591    0.166    pass               6.19   0.000    fail
INV     63.76    0.000    fail              12.03    0.007    fail               3.66   0.007    fail
INT     56.02    0.000    fail              7.930    0.094    pass               1.56   0.116    pass
INF     3.722    0.050    pass              174.56   0.000    fail               5.43   0.000    fail

19 Quarterly data also have some drawbacks: seasonal effects, the greater degree of serial correlation present in quarterly as compared with annual series, and the determination of the appropriate structure of lags.
Failure of the SC test means that the standard errors of the coefficients are biased. Moreover, the dynamic nature of the model will also bias the coefficients themselves. This bias should be reduced when 2sls is employed, producing coefficients closer to their true values, although serial correlation will persist even after 2sls estimation. Heteroskedasticity, likewise, is not corrected by 2sls, so t-scores and hypothesis tests may be unreliable because the standard errors of the coefficients are biased. Ramsey`s test of functional form is rejected in all equations but INT, suggesting that the relationships between some of the variables are nonlinear20; this is the case at least for the variable MS, which appears to resemble an exponential trend (Baum, 2006, p. 124). Mariscal (2012a) stresses that most economic variables are non-stationary and that modelling such variables may produce spurious results. As Appendix A (A7) shows, this is the case for all the variables but INT. It can also be seen in the fairly high R2 across the equations and the failed tests for homoskedasticity. The investment function is therefore likely to be affected, which may partly explain the small and wrongly signed coefficients21 on the variables INC and CINC. The consumption function is reasonably cointegrated, thus offsetting some of the negative effects of non-stationarity. The inflation function contributes the least to the whole model, although the t-ratios, size and sign of OTG and INF1 are correct. The correlation matrix reveals (see Appendix A: A6.2) that there is serious collinearity between some of the variables, but owing to the simultaneous nature of the model this is not necessarily relevant. Comparing the size of the coefficients between OLS and 2sls in Table 7, the only notable changes are in the variables INC, CINC and CINC1.
This was expected, as GDP is the leading variable, so when 2sls was employed its reduced form had a bigger impact on the other variables in the model than the other reduced-form endogenous variables. The significance of the variables improved after 2sls, although INT8 in the investment function remains insignificant. All the remaining variables in the model are significant with 95% confidence. R2 decreased slightly; however, Pokorny (1989, p. 309) stresses that it is meaningless to judge the success of 2sls on the basis of R2 because `this method makes no reference to it, in fact it is in conflict with the criterion of consistency`.

20 Improved results might be achieved by using a log/log or semi-log functional form. Moreover, changing to annualized data may improve some of the statistics.
21 All the remaining coefficients in all other equations have the correct signs and reasonable sizes.
Table 7: Summary statistics of the OLS and 2sls estimation procedures22

Equation  Variable  OLS coefficient [s.e.] (t)    2sls coefficient [s.e.] (t)
CONS      CONS1     0.860 [0.037] (22.69)         0.789 [0.060] (12.96)
          INC       0.104 [0.026] (3.87)          0.147 [0.042] (3.46)
          R2        0.999                         0.999
INV       CINC1     -0.042 [0.092] (0.460)        0.267 [0.071] (3.73)
          INC       0.196 [0.005] (39.14)         0.185 [0.005] (34.57)
          INT8      -46.69 [92.38] (0.510)        -122.93 [110.92] (1.110)
          R2        0.978                         0.972
INT       INC       -0.00003 [0.00008] (9.160)    -0.00003 [0.00006] (9.560)
          CINC      -0.0003 [0.00008] (3.610)     -0.0015 [0.00008] (2.390)
          CMS       -0.02 [0.0008] (2.590)        -0.02 [0.0008] (2.850)
          CINT1     0.829 [0.200] (4.130)         0.838 [0.204] (4.100)
          R2        0.719                         0.712
INF       OTG       0.137 [0.039] (3.490)         0.128 [0.034] (3.750)
          INF1      0.956 [0.023] (40.57)         0.974 [0.022] (41.57)
          R2        0.944                         0.938

22 Standard errors are in square brackets and t-values in parentheses beside the coefficients.
3.3 Ex-post forecasting

To assess the robustness of the structural model`s forecasts, turning points at times of large exogenous shocks to the economy may serve as a good benchmark. An ideal example of such an event is the recent recession of 2007-08. The magnitude and the speed with which GDP collapsed are unprecedented, so the model will be exposed to a great challenge. To get a better perspective on the likely validity of the model, a historical simulation was conducted, comparing the endogenous variables determined by the simulation23 solution with their actual values. As Figure 7 shows, there is a fairly close relationship between the fitted and actual values, which breaks down temporarily in the 1990s. Since the 2000s there has been increasing under-prediction, which suggests a structural break in the relationship of some of the variables to GDP.

Figure 7: Historical simulation, GDP 1980q1-2007q1

To capture the whole set of business-cycle turning points of GDP while keeping in mind the short-term character of the model, an ex-post forecast was produced for 2007q1-2009q1. This means performing the forecast at the end of the estimation period and then comparing it with the available data, which enables us to test the forecasting accuracy of the model. Summary statistics of the ex-post one-step forecasts are shown in Table 8.

23 Refers to the mathematical solution of a simultaneous set of difference equations, in which the current value of one variable relates to past values of other variables.
Table 8: Ex-post forecast based on the 2sls regression

Observation  Actual   Prediction  Error     Error (%)  S.D. of error  t-ratio24
2007q1       383980   393237.2    -9257.2   -2.41      0.0012         -1.929
2007q2       389661   395476.4    -5815.4   -1.49      0.0012         -1.193
2007q3       394031   398551.4    -4520.4   -1.15      0.0012         -0.921
2007q4       402523   401183.4    1339.6    0.33       0.0012         0.264
2008q1       406124   406566.6    -442.6    -0.11      0.0012         -0.088
2008q2       396921   403762.1    -6841.1   -1.72      0.0012         -1.377
2008q3       391272   396367.1    -5095.1   -1.30      0.0012         -1.041
2008q4       377355   395764.1    -18409.1  -4.88      0.0012         -3.907
2009q1       370764   386480.7    -15716.7  -4.24      0.0012         -3.394

Based on 9 observations from 2007q1 to 2009q1
Mean Prediction Error (MPE, %)                         -1.591
Mean Absolute Prediction Error (MAPE, %)               1.959
Root Mean Sum of Squared Prediction Errors (RMSPE, %)  2.492

The MPE shows that the model over-predicted GDP by 1.591% on average, with only one under-prediction, in 2007q4. Persistent over-prediction (negative bias) suggests the presence of serial correlation; a clear pattern can also be seen in the plot of residuals (see Appendix A, Figure 1). A shortcoming of the MPE is that positive and negative errors can offset each other, leading to unwarranted conclusions (Flegg, 2012a). To overcome this problem the RMSPE, 2.492%, was also calculated; it measures the deviation of the simulated variable from its time path (Pindyck and Rubinfeld, 1998, p. 210). Pindyck and Rubinfeld argue that the magnitude of the errors can be evaluated only by comparing them with the average size of the variable; on that basis the calculated errors are fairly small compared to the actual values.

24 The t-ratio was calculated by dividing the prediction error by the S.D. of the error.
The calculated t-ratios of the errors (taking |t| > 1.96 as significant) point to significant errors in 2008q4 and 2009q1, with 2007q1 on the borderline. The MAPE, which measures the absolute accuracy of the fitted model, is slightly better (1.959%) than the RMSPE because it does not weight the relatively large errors of 2008q4 and 2009q1 as heavily. Figure 8 shows that the model predicts the turning points fairly well, particularly at the exact turning-point quarters 2008q1-2008q2. This might be because the negatively biased errors did not fully reflect the sudden change in direction. With an increasing time horizon, however, forecasting accuracy somewhat decreases, again showing over-prediction. Overall, the magnitude of the forecasting errors reflects the small-scale properties of the model, owing to its limited scope and simplified equation specification.

Figure 8: Ex-post forecast based on the 2sls regression
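The three accuracy measures used in Table 8 can be sketched in a few lines of Python; the actual/predicted pairs below are hypothetical numbers chosen for easy arithmetic, not the dissertation's forecasts:

```python
import math

def forecast_error_metrics(actual, predicted):
    """MPE, MAPE and RMSPE in percent, as defined for Table 8
    (a sketch with hypothetical data)."""
    pct = [100 * (a - p) / a for a, p in zip(actual, predicted)]
    n = len(pct)
    mpe = sum(pct) / n                              # signed errors can offset
    mape = sum(abs(e) for e in pct) / n             # absolute accuracy
    rmspe = math.sqrt(sum(e * e for e in pct) / n)  # penalises large errors more
    return mpe, mape, rmspe

mpe, mape, rmspe = forecast_error_metrics([100.0, 110.0], [105.0, 99.0])
print(mpe, mape, rmspe)  # approximately 2.5, 7.5, 7.91
```

The offsetting problem noted by Flegg is visible here: the two percentage errors (-5% and +10%) average to an MPE of only 2.5%, while the MAPE and RMSPE report the larger true magnitudes.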
CHAPTER 4
Non-causal model

4.1 Introduction

The structural econometric model discussed thus far draws on causality25 and economic theory to capture the underlying structure of the economy. A causal model described through interactions between several interrelated markets is a step closer to the real world than a single equation that assumes only weak exogeneity (one-way causality). The strong exogeneity used in the model, which accounts for feedback through the lagged endogenous variables appearing on the right-hand side, can therefore be used to generate more accurate one-step forecasts (Brown, 1991, p. 338). Brown argues, however, that even strong exogeneity may not be a sufficient assumption in the light of changing expectations and their consequences for a model, as outlined earlier, principally by Lucas (1979). Moreover, `because the structure of the model is assumed a priori and only on a subset of the causal factors …, causality will depend on the specific model` (Brown, 1991, p. 337). In addition, zero restrictions are placed on variables that do not comply with the underlying assumptions, causing the model to omit potentially important variables, as pointed out by Liu (1963). Autoregressive linear stochastic dynamic models do not offer a structural explanation of a variable`s behaviour in terms of other variables; instead they model its past behaviour, and thus provide a viable alternative. These time series models, which assume a random process generated the data, explain a series not through cause-and-effect relationships but rather `in terms of how randomness is embodied in the process` (Pindyck and Rubinfeld, 1999, p. 489). There are a number of techniques now used by modellers that utilize time series models; this dissertation will concentrate on the autoregressive moving average (ARMA) model.

25 Brown (1991, p. 338) stresses that statistics cannot prove causality; rather, causality must be assumed in regression analysis.
This particular choice is motivated by the fact that the ARMA model offers a powerful and efficient means of generating short-term forecasts and is a widely accepted alternative (benchmark) to structural models (Pokorny, 1987, p. 341).
4.2 Notation of the ARMA model

The ARMA model is a combination of an autoregressive (AR) model and a moving average (MA) model. Let yt represent26 GDP at time t:

AR(1): yt = δ + φ1yt-1 + εt

where δ determines the mean of y and εt is an uncorrelated random error ~ N(0, σ²), so yt follows a first-order autoregressive AR(1) stochastic process. For a stationary autoregressive process AR(1), μ, the mean of the process, is invariant with respect to time:

μ = δ / (1 - φ1)

γ0, the variance of the process, is constant for |φ1| < 1; with δ = 0,

γ0 = E[(φ1yt-1 + εt)²] = σε² / (1 - φ1²)

and γ1, the covariance, has the same constancy property:

γ1 = E[yt-1(φ1yt-1 + εt)] = φ1γ0 = φ1σε² / (1 - φ1²)

The pth-order autoregressive process AR(p) can then be expressed as

AR(p): yt = δ + φ1yt-1 + φ2yt-2 + … + φpyt-p + εt,  εt ~ WN(0, σ²)

In a stationary autoregressive process of order p, the current observation yt is generated by a weighted average of past observations going back p periods. If we assume that an AR process is not the only one which can generate y, we can write:

MA(1): yt = μ + εt + θ1εt-1

where μ is the mean of the process and εt, as before, is the stochastic error ~ iid(0, σ²). It follows that yt at time t is equal to a constant plus a moving average of the current and past error terms.

26 The lower-case y denotes the variable in deviation-from-mean form, (Yt - δ).
Therefore yt follows a first-order moving average MA(1) process. For a process generated by white noise, the variance is

γ0 = σε²(1 + θ1²)

and the covariance at one lag displacement is

γ1 = E[(εt + θ1εt-1)(εt-1 + θ1εt-2)] = θ1σε²

The qth-order moving average process MA(q) can then be expressed as:

MA(q): yt = μ + εt + θ1εt-1 + θ2εt-2 + … + θqεt-q,  εt ~ WN(0, σ²)

A moving average process of order q states that each observation yt is generated by a weighted moving average of the stochastic errors going back q periods. The mean μ of the moving average model is independent of time, since E(yt) = μ. When the univariate series exhibits the characteristics of both AR and MA processes, the combined ARMA(p, q) process is written as

ARMA(p, q): yt = δ + φ1yt-1 + … + φpyt-p + εt + θ1εt-1 + … + θqεt-q,  εt ~ WN(0, σ²)

4.3 Non-stationarity in time series

Time series models, including the ARMA process, assume stationarity: constant mean, variance and covariance (or autocorrelation, for weak stationarity). From the Dickey-Fuller test (see Appendix A: A7) it is apparent that many economic time series, including GDP, are non-stationary; in the case of INC the series is integrated of order 1, I(1). This can also be seen from Figure 9, where the first entry in the correlogram of the autocorrelation function represents the correlation between yt and yt-1, the second entry the correlation between yt and yt-2, and so on, showing a geometric decline. The process thus has an `infinite memory`: the current value depends on all past values, at a declining rate.
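The MA(1) moment formulas above, γ0 = σε²(1 + θ1²) and γ1 = θ1σε², can be checked by simulation; the parameter values in this sketch are hypothetical:

```python
import numpy as np

# Simulate y_t = mu + eps_t + theta*eps_{t-1} and compare the sample
# variance and lag-1 autocovariance with the theoretical MA(1) moments.
rng = np.random.default_rng(42)
mu, theta, sigma, n = 0.5, 0.6, 1.0, 200_000
eps = rng.normal(0.0, sigma, n + 1)
y = mu + eps[1:] + theta * eps[:-1]

var_theory = (1 + theta ** 2) * sigma ** 2   # gamma_0 = 1.36
cov1_theory = theta * sigma ** 2             # gamma_1 = 0.60
var_sample = y.var()
cov1_sample = np.mean((y[1:] - y.mean()) * (y[:-1] - y.mean()))
print(var_sample, cov1_sample)  # close to 1.36 and 0.60
```

At lags beyond one the sample autocovariance drops to roughly zero, which is the sharp ACF cut-off that distinguishes an MA(q) process from the geometric decay of an AR process.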
Figure 9: Autocorrelation function, INC

In order to estimate the model we need to difference the process d times to make it stationary, so ARMA(p, q) becomes ARIMA(p, d, q), the autoregressive integrated moving average model. This is because, if the model is to be used for forecasting, we must assume that its features are time-invariant over future time periods. `Thus the simple reason for requiring stationary data is that any model which is inferred from these data can itself be interpreted as stationary or stable, therefore providing valid basis for forecasting` (Gujarati, 2004, p. 840).

4.4 ARIMA methodology

The methodology behind ARIMA is closely associated with George E. P. Box and Gwilym Jenkins, whose Box-Jenkins (BJ) approach proposes an iterative approach to time series modelling comprising four steps: identification, estimation, diagnostic checking and forecasting.

4.4.1 Identification

As noted earlier, the GDP time series has a unit root, so the characteristics of the stochastic process change over time. This can be observed in Figure 10, where there is a clear trend in the variable. To remedy the non-stationarity, we decompose the original series by removing the trend in order to isolate the other components of the data. Thus, by taking the first difference ΔY = Yt - Yt-1, we eliminate the trend from the time series (Figure 10). To confirm that the first difference was enough to make the series stationary, an augmented Dickey-Fuller test was used (see Appendix A: Table 17).
Figure 10: The UK`s GDP q/q values and the first difference

Plotting the autocorrelation function (ACF) and partial autocorrelation function (PACF) of GROWTH27 (Table 9) shows a rapid collapse to insignificance, indicating that the growth rate of real GDP is now stationary.

Table 9: Autocorrelation function and partial autocorrelation function

LAG   AC       PAC      Q        Prob>Q
1     0.2275   0.2277   6.7308   0.0095
2     0.3171   0.2815   19.911   0.0000
3     0.0565   -0.0758  20.333   0.0001
4     0.0021   -0.1206  20.333   0.0004
5     -0.0202  -0.0170  20.388   0.0011
6     0.0308   0.0817   20.516   0.0022
7     -0.0093  -0.0412  20.528   0.0045
8     -0.0587  -0.1205  21.002   0.0071
9     0.0182   0.0871   21.048   0.0124
10    0.0732   0.1662   21.798   0.0162
11    0.0169   -0.0512  21.838   0.0257
12    -0.0364  -0.1616  22.027   0.0372
13    -0.0888  -0.1118  23.16    0.0398
14    -0.1444  -0.1164  26.183   0.0245
15    0.0016   0.1159   26.184   0.0361

27 GROWTH was constructed in Stata by gen GROWTH = lnINC-l.lnINC, where lnINC = log(INC).
Moreover, the ACF and PACF may be used to get an indication of the order of lags. To identify the MA(q) order, we need to find for how many periods the correlation persists in the ACF. It can be seen from Figure 11 that the first two autocorrelations lie outside the 95% confidence band, indicating that they are statistically significantly different from zero; the correlogram thus indicates MA(2). To identify the AR(p) order we look at the PACF, which plots the correlations between yt and yt-k with the intervening correlations removed. Here the first two lags are statistically significant (yt and yt-1 = 0.227, yt and yt-2 = 0.281), suggesting an AR(2) process.

Figure 11: ACF and PACF of GROWTH
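The AC column of a correlogram like Table 9 is simply the sample autocorrelation function. A sketch in Python, applied to a hypothetical AR(1) growth series (footnote 27's `gen GROWTH = lnINC - l.lnINC` corresponds to `np.diff(np.log(inc))` on a level series):

```python
import numpy as np

def sample_acf(y, nlags):
    """Sample autocorrelation function, as in the AC column of Table 9."""
    y = np.asarray(y, dtype=float) - np.mean(y)
    denom = np.sum(y * y)
    return [np.sum(y[k:] * y[:-k]) / denom if k else 1.0
            for k in range(nlags + 1)]

# Hypothetical AR(1) growth series with phi = 0.5: the theoretical ACF
# declines geometrically as 0.5**k, the `infinite memory` pattern.
rng = np.random.default_rng(7)
g = np.zeros(2000)
for t in range(1, 2000):
    g[t] = 0.5 * g[t - 1] + rng.standard_normal()
acf = sample_acf(g, 3)
print(acf)  # roughly [1, 0.5, 0.25, 0.125]
```

For a stationary series like GROWTH the sample ACF collapses quickly toward zero, whereas for a unit-root series like INC it declines only very slowly, which is exactly the contrast between Figure 9 and Table 9.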
4.4.2 Estimation

Estimation of the ARIMA28 model was conducted in STATA for the period 1978q1-2009q4. The error terms in an MA process tend to be non-normally distributed, which means that the estimated coefficient θ̂ need not represent the true value of θ; in these instances a maximum likelihood (ML) procedure needs to be employed instead of OLS. ML estimates the parameter θ as the value θ̂ which maximises the probability of obtaining the sample actually observed (Pindyck and Rubinfeld, 1999, p. 53). STATA fits the model by maximising the log of the likelihood function through an optimisation method, progressing iteration by iteration (Becketi, 2013, p. 245). Our tentatively identified ARMA(2, 0, 2) model, however, does not necessarily promise the best forecasting results. In addition to the rule-of-thumb lag selection, Akaike`s and Schwarz`s Bayesian information criteria were computed to get a better perspective on the validity of alternative lag orders (Table 10). According to Akaike`s information criterion, which prefers the model that minimises information loss, the best-fitting model is ARIMA(1,0,1). According to the Schwarz criterion, the best fit is ARIMA(4,0,4). Box and Jenkins (1970) argue that there is only a very limited difference in forecasts between complex high-order systems and low-order systems; therefore only the low-order systems ARIMA(1,0,1) and ARIMA(2,0,2) will be considered. Comparing the overall fit of the models in terms of log-likelihood reveals only a marginal difference in magnitude (Appendix B, Tables 1 and 2). All coefficients of the ARIMA(1,0,1) are significant at the 5% level. In this specification the ψ coefficients29 show that 72% of an economic shock persists into the succeeding quarter, followed by 40% of the original shock.
The standard error (SE) of the white noise (𝜀), 0.01, exceeds the mean of the process, 0.006, indicating that the variability of the error is large relative to the mean (Becketi, 2013, p. 249). Both models reject the null hypothesis of the Wald test that all coefficients are jointly insignificant. The 𝜓 estimates of the ARIMA(2,0,2), 32.8% and -62.1%, imply that shocks to GDP persist only marginally and reverse in the succeeding quarters. 28 Although the text refers to ARIMA(p,0,q) throughout, we could estimate the log of real GDP directly in STATA, in which case the notation would change to ARIMA(p,1,q); the results would be identical. 29 For details see Appendix B: `Dynamic response of GDP growth to economics shocks`
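The 𝜓 (impulse-response) coefficients discussed above follow a standard recursion from the fitted AR and MA coefficients: 𝜓₀ = 1 and 𝜓ⱼ = 𝜃ⱼ + Σᵢ 𝜙ᵢ𝜓ⱼ₋ᵢ. A minimal sketch of that recursion; the 𝜙 and 𝜃 values in the example are placeholders, not the dissertation's estimates.

```python
def psi_weights(phi, theta, horizon):
    """Impulse-response (psi) weights of an ARMA(p,q) process:
    psi_0 = 1, psi_j = theta_j + sum_i phi_i * psi_{j-i}."""
    psi = [1.0]
    for j in range(1, horizon + 1):
        val = theta[j - 1] if j <= len(theta) else 0.0
        for i, ph in enumerate(phi, start=1):
            if j - i >= 0:
                val += ph * psi[j - i]
        psi.append(val)
    return psi

# Example: a pure AR(1) with phi = 0.5 decays geometrically.
print(psi_weights([0.5], [], 3))  # -> [1.0, 0.5, 0.25, 0.125]
```

The weights answer exactly the question posed in the text: what fraction of a unit economic shock survives into each succeeding quarter.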
  • 48. 48 The calculated t-ratios of 𝜙1 and 𝜃1 in the ARIMA(2,0,2) point to insignificance at the 5% level; moreover, the coefficient 𝜙1 is only one third of the size of the same coefficient in the former model. SE(𝜀) = 0.009 is greater than the mean of 0.006, again indicating that the variability of the error is large relative to the mean of the process, although a slight improvement on the former model. The results from the ARIMA(2,0,2) therefore suggest that the coefficients are not very precise, i.e. they do not provide accurate estimates of the dynamic response of GDP growth to economic shocks.

Table 10: Akaike's and Schwarz Bayesian information criteria for model GROWTH

Model GROWTH     Akaike's information criterion   Bayesian information criterion
ARIMA (1,0,1)    -800.01                          -788.63
ARIMA (1,0,2)    -806.96                          -792.74
ARIMA (1,0,3)    -805.67                          -788.65
ARIMA (1,0,4)    -806.52                          -786.61
ARIMA (2,0,0)    -805.63                          -794.27
ARIMA (2,0,1)    -804.01                          -789.81
ARIMA (2,0,2)    -807.45                          -790.33
ARIMA (2,0,3)    -805.83                          -785.92
ARIMA (2,0,4)    -801.85                          -799.08
ARIMA (3,0,0)    -804.31                          -790.09
ARIMA (3,0,1)    -806.05                          -788.98
ARIMA (3,0,2)    -806.21                          -786.31
ARIMA (3,0,3)    -805.09                          -782.34
ARIMA (3,0,4)    -801.98                          -776.39
ARIMA (4,0,0)    -803.98                          -789.91
ARIMA (4,0,1)    -802.63                          -789.56
ARIMA (4,0,2)    -805.98                          -783.24
ARIMA (4,0,3)    -803.56                          -777.97
ARIMA (4,0,4)    -801.58                          -733.14
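The comparison in Table 10 rests on two formulas: AIC = -2ℓ + 2k and BIC = -2ℓ + k·ln(n), where ℓ is the log-likelihood and k the number of estimated parameters. A minimal sketch of the selection step; the log-likelihood values below are illustrative placeholders, not the fitted values behind Table 10.

```python
import math

def aic(loglik, k):
    """Akaike's information criterion: smaller is better."""
    return -2.0 * loglik + 2.0 * k

def bic(loglik, k, n):
    """Schwarz Bayesian criterion: penalises parameters more heavily for large n."""
    return -2.0 * loglik + k * math.log(n)

# For an ARIMA(p,0,q) with a constant, k = p + q + 2 (constant + error variance).
# BIC's log(n) penalty exceeds AIC's 2 once n > e^2 (about 7.4), so BIC tends to
# favour the more parsimonious specification.
candidates = {"ARIMA(1,0,1)": (404.0, 4), "ARIMA(2,0,2)": (406.7, 6)}
n = 128  # quarterly observations, 1978q1-2009q4
best_aic = min(candidates, key=lambda m: aic(*candidates[m]))
best_bic = min(candidates, key=lambda m: bic(candidates[m][0], candidates[m][1], n))
```

With these placeholder likelihoods the richer ARIMA(2,0,2) wins on AIC while the leaner ARIMA(1,0,1) wins on BIC, mirroring how the two criteria can disagree in Table 10.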
  • 49. 49 4.4.3 Diagnostic checking The next step in the BJ approach is model diagnostic checking, that is, checking the adequacy of the candidate ARIMA models. A well-specified and accurately fitted model should leave residuals that are white noise (Becketi, 2013, p. 254). A widely used test for iid residuals is the Ljung-Box portmanteau test, which considers all the autocorrelations simultaneously for significance. According to Appendix B: B3, the test finds no evidence that the residuals of either model deviate from white noise. On the basis of the tests considered, both models performed similarly, as the BJ methodology suggests. Following BJ's principle of parsimony, and the fact that the ARIMA(1,0,1) outperformed the ARIMA(2,0,2) in some of the tests, we concluded that the ARIMA(1,0,1) would fit the GDP most accurately. Thus:

ARIMA(1,0,1): y_t = 0.006 + 0.670 y_{t-1} - 0.446 ε_{t-1} + ε_t

4.4.4 Forecasting The last part of the time-series modelling concerns forecasting. This was carried out over the same period as before, 2007q1-2009q1, using STATA.

Table 11: Ex-post forecast based on the ML regression30

Observation   Actual     Prediction   Error       Error (%)   S.D. of Error   t-ratio
2007q1        383981.6   381509.11      2472.53     0.644     0.0012           0.515
2007q2        389660.1   386980.66      2679.40     0.688     0.0012           0.550
2007q3        394029.1   392864.48      1164.60     0.296     0.0012           0.237
2007q4        402524.0   396951.74      5572.26     1.384     0.0012           1.108
2008q1        406122.5   406439.35      -316.90    -0.078     0.0012          -0.062
2008q2        396920.0   408926.21    -12006.22    -3.025     0.0012          -2.421
2008q3        391272.7   396796.96     -5524.28    -1.412     0.0012          -1.130
2008q4        377354.4   391910.98    -14556.61    -3.858     0.0012          -3.088
2009q1        370763.6   376103.63     -5340.01    -1.440     0.0012          -1.153

Based on 9 observations from 2007q1 to 2009q1
Mean Prediction Error (MPE, %)                          -0.722
Mean Absolute Prediction Error (MAPE, %)                 1.424
Root Mean Sum of Squared Prediction Errors (RMSPE, %)    1.855

30 For simplicity, actual and predicted values were converted back to levels by taking the antilog; see Appendix B: B4 for details.
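The three summary statistics beneath Table 11 are simple functions of the percentage prediction errors. A minimal sketch of their computation (the numbers in the usage comment are synthetic, not Table 11's):

```python
import math

def forecast_accuracy(actual, predicted):
    """MPE, MAPE and RMSPE in percent, computed from percentage prediction errors."""
    pe = [100.0 * (a - p) / a for a, p in zip(actual, predicted)]
    mpe = sum(pe) / len(pe)
    mape = sum(abs(e) for e in pe) / len(pe)
    rmspe = math.sqrt(sum(e * e for e in pe) / len(pe))
    return mpe, mape, rmspe

# e.g. forecast_accuracy([100.0, 200.0], [90.0, 210.0]) -> (2.5, 7.5, ~7.91)
```

MPE keeps the sign, so offsetting over- and under-predictions cancel; MAPE and RMSPE do not, with RMSPE weighting large misses (such as 2008q4 above) most heavily.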
  • 50. 50 All the calculated forecasting performance statistics, namely the MPE, MAPE and RMSPE, point to a more accurate forecast than the one produced by the structural model. The ARIMA model under-predicted GDP between 2007q1 and 2007q4 and over-predicted between 2008q1 and 2009q1. Despite the persistent under- and over-prediction, the residuals do not show any longer-term pattern (see Appendix B: B5). The only significant t-ratios occur in 2008q2 and 2008q4. As Figure 12 shows, the model did not pick up the exact timing of the turning point well, but accuracy improved as the time horizon increased. Figure 12: UK's GDP forecast, ARIMA (1,1,1), 2007q1-2009q1
  • 51. 51 CHAPTER 5 Conclusion In this dissertation we investigated two models from opposite ends of the spectrum of underlying assumptions. The small structural model, which was based on the IS/LM/PC framework, quite clearly responded to exogenous shocks by exhibiting a cyclical response mechanism. Although the model responded to the exogenous shock to GDP more accurately than the non-structural model did, it failed to keep up in the following quarters and its forecasting accuracy gradually decreased. Structural econometric models are no more than a reflection of the economy's interactive nature, so they cannot contain any more information than was put into them during their construction. An indication of their limitations could be observed in the persistent over-prediction from 2000 onwards when the historical simulation was conducted. Moreover, most of the variables used exhibited properties that violate the classical Gauss-Markov assumptions, with potentially radically different forecasting implications from those of a well-specified, stationary model. The atheoretical ARIMA model, which relies solely on past observations, provided superior results to the structural model. Although the exact turning points were not predicted as precisely as by the former model, the forecasting results were more consistent through the forecasting period; this aspect is well captured by the better RMSPE. This points to two caveats. First, in order to build a structural model that can be compared with the atheoretical model, further disaggregation is essential. Prospective model builders therefore need to assess the costs and benefits of building a more complex model carefully: that is, whether the added benefits (measured in terms of improved forecasts) of the simultaneous-equation model can be expected to outweigh the added costs involved in building it. Second, Mizon and Hendry (2011, p. 5) point out that even being the `best forecasting model does not justify its policy use; and forecast failure is insufficient to reject a policy model`. They argue that models that `win' forecasting competitions rarely have any useful implications for economic policy analysis, as they lack both target variables and policy instruments. This is clearly the case for the ARIMA model, which can only be used for forecasting. Interestingly, the best result would be achieved by combining the two models' forecasts, given the structural model's over-prediction and the non-structural model's under-prediction.
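The closing observation about combining the two forecasts can be made concrete with the simplest combination scheme, an equal-weight average. The numbers below are hypothetical, chosen only to illustrate how offsetting biases cancel; they are not the dissertation's forecasts.

```python
import math

def rmspe(actual, forecast):
    """Root mean squared percentage error, in %."""
    pe = [100.0 * (a - f) / a for a, f in zip(actual, forecast)]
    return math.sqrt(sum(e * e for e in pe) / len(pe))

def combine(f1, f2, w=0.5):
    """Weighted combination of two forecast paths (w = 0.5 is a simple average)."""
    return [w * a + (1 - w) * b for a, b in zip(f1, f2)]

# Hypothetical paths: one model over-predicts, the other under-predicts,
# so their biases partially cancel in the equal-weight combination.
actual = [100.0, 102.0, 101.0]
structural = [103.0, 105.0, 104.0]   # persistent over-prediction
arima = [98.0, 100.0, 99.0]          # persistent under-prediction
combined = combine(structural, arima)
assert rmspe(actual, combined) < min(rmspe(actual, structural), rmspe(actual, arima))
```

Optimal (rather than equal) combination weights can be chosen to minimise the combined error variance, but even the naive average already dominates both components whenever their errors have opposite signs.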
  • 52. 52 APPENDIX

APPENDIX A

A1: Glossary of variables
CONS - Final consumption expenditure, households & NPISH, constant 2010 chained prices, seasonally adjusted, quarterly
INC - Gross Domestic Product, chained volume, seasonally adjusted 2010 prices, quarterly values
CINC - first difference of INC
CINC1 - GDP lagged one quarter minus GDP lagged two quarters
MS - M0 notes and coins outside the central bank, seasonally adjusted current prices, monthly values
CMS - narrowly defined money (M0) minus last quarter's M0
GOV - General Government final consumption expenditure, CVM, seasonally adjusted 2010 chained prices, quarterly
INT - UK 3-month Treasury bills, yield (annualised)31, in %
CINT - INT lagged by one quarter minus INT lagged by two quarters
INF - UK CPI index, all items (annualised), monthly, in %
INF1 - inflation lagged by one quarter
INT8 - interest rate on 3-month Treasury bills lagged by eight quarters
POTY - UK potential output, current prices, quarterly
OTG - output gap: GDP minus current potential GDP = (Δlog INC − Δlog POTY)
INV - Gross capital formation, CVM, seasonally adjusted 2010 chained prices, quarterly data

31 INT and INF were converted to quarterly values by dividing the variable by 4
  • 53. 53 A2: Granger causality test

In order to conduct the test, all variables need to be stationary. As Appendix A, A7 suggests, both variables appear to be non-stationary. According to the Dickey-Fuller test, first-differencing INT was enough to make the variable stationary (Table 1). Curiously, in the case of MS a second difference was also needed for the variable to pass the DF test (see Tables 2, 3). The test itself consists of running a vector autoregression (VAR) on both variables and their lags, with the lag length chosen by Akaike's information criterion. Lag lengths of 4, 5, 6, 7 and 8 were tried; as the AIC kept improving with longer lags, four lags were chosen to keep the model simple (Table 4). The Granger causality Wald test was then conducted in both directions (Table 5). As the F-statistics show (p = 0.3109 and p = 0.6970), the null hypothesis of non-causality is not rejected at conventional levels in either direction.

A2.1 ADF: first-differenced [INT] is stationary

Table 1: Augmented Dickey-Fuller test for unit root, N = 121 (. dfuller CINT, lags(4) reg)
Z(t) = -5.811; critical values: 1% -3.503, 5% -2.889, 10% -2.579; MacKinnon approximate p-value = 0.0000

D.CINT1    Coef.       Std. Err.   t       P>|t|
L1.        -.9772638   .1681747    -5.81   0.000
LD.         .1628473   .1519139     1.07   0.286
L2D.        .1289997   .1352925     0.95   0.342
L3D.        .2208300   .1188896     1.86   0.066
L4D.        .0689543   .0875011     0.79   0.432
_cons      -.0936771   .0822875    -1.14   0.257
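The regression behind Tables 1-3 can be sketched as follows. This simplified version omits the four augmentation lags used in the STATA output and runs on a synthetic stationary series, so it illustrates the mechanics rather than replicating the appendix results.

```python
import math

def dickey_fuller(y):
    """Simple (non-augmented) Dickey-Fuller regression with drift:
    dy_t = a + rho * y_{t-1} + u_t.  Returns (rho_hat, t_statistic).
    A large negative t-statistic (beyond the DF critical values) rejects a unit root."""
    x = y[:-1]                                  # y_{t-1}
    dy = [y[t] - y[t - 1] for t in range(1, len(y))]
    n = len(dy)
    mx, my = sum(x) / n, sum(dy) / n
    sxx = sum((v - mx) ** 2 for v in x)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, dy))
    rho = sxy / sxx
    alpha = my - rho * mx
    resid = [b - alpha - rho * a for a, b in zip(x, dy)]
    s2 = sum(e * e for e in resid) / (n - 2)    # residual variance
    return rho, rho / math.sqrt(s2 / sxx)

# Usage on a synthetic stationary AR(1): rho_hat should be near phi - 1 = -0.5
# and the t-statistic strongly negative (unit root rejected).
u = [math.sin(t * 12.9898) * 43758.5453 % 1.0 - 0.5 for t in range(301)]
y = [0.0]
for t in range(1, 301):
    y.append(0.5 * y[-1] + u[t])
```

Remember that the t-statistic must be compared with the Dickey-Fuller (MacKinnon) critical values reported in the tables, not with the usual Student-t values.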
  • 54. 54 A2.2 ADF: first-differenced [MS] is non-stationary

Table 2: Augmented Dickey-Fuller test for unit root, N = 122 (. dfuller CMS, lags(4) reg drift)
Z(t) = -0.149; critical values (t-distribution): 1% -2.359, 5% -1.658, 10% -1.289; p-value = 0.4409

D.CMS      Coef.       Std. Err.   t       P>|t|
L1.        -.0143925   .0965072    -0.15   0.882
LD.        -.8665757   .1161508    -7.46   0.000
L2D.       -.7359772   .1214308    -6.06   0.000
L3D.       -.7072050   .1205876    -5.86   0.000
L4D.       -.5292330   .0945871    -5.60   0.000
_cons      26.49469    39.97869     0.66   0.509

A2.3 ADF: second-differenced [MS] is stationary

Table 3: Augmented Dickey-Fuller test for unit root, N = 121 (. dfuller dms, lags(4) reg drift)
Z(t) = -9.453; critical values (t-distribution): 1% -2.359, 5% -1.658, 10% -1.289; p-value = 0.0000

D.dms      Coef.       Std. Err.   t       P>|t|
L1.        -4.434479   .4690950    -9.45   0.000
LD.         2.485373   .4029534     6.17   0.000
L2D.        1.648414   .3119625     5.28   0.000
L3D.        .8291929   .2101657     3.95   0.000
L4D.        .1555712   .1012939     1.54   0.127
_cons      25.62748    20.48186     1.25   0.213
  • 55. 55 A2.4 VAR estimation of [MS] and [INT]

Table 4: Vector autoregression (. var CINT dms, lags(1/4) small), sample 1979q3-2009q4, N = 122
Log likelihood = -986.1984; AIC = 16.46227; HQIC = 16.6303; SBIC = 16.87598; FPE = 48390.16; Det(Sigma_ml) = 36005.71

Equation   Parms   RMSE      R-sq     F         P > F
CINT1      9       .926489   0.0595    0.9642   0.4676
dms        9       222.96    0.5405   17.9356   0.0000

Equation CINT1:    Coef.       Std. Err.   t        P>|t|
CINT1  L1.         .1661345    .0907508     1.83    0.070
       L2.         .0217379    .0894186     0.24    0.808
       L3.         .0668469    .0884642     0.76    0.451
       L4.        -.0772208    .0872229    -0.89    0.378
dms    L1.        -.0004127    .0003272    -1.26    0.210
       L2.        -.0001827    .0004099    -0.45    0.657
       L3.         .0000390    .0004384     0.09    0.929
       L4.         .0001070    .0003611     0.30    0.768
_cons             -.0705722    .0815818    -0.87    0.389

Equation dms:      Coef.       Std. Err.   t        P>|t|
CINT1  L1.        -23.36729    21.83926    -1.07    0.287
       L2.        -31.70797    21.51865    -1.47    0.143
       L3.        -8.937376    21.28898    -0.42    0.675
       L4.        -10.43342    20.99026    -0.50    0.620
dms    L1.        -.9032189    .0787349   -11.47    0.000
       L2.        -.7964502    .0986536    -8.07    0.000
       L3.        -.7778699    .1055064    -7.37    0.000
       L4.        -.5743644    .0868972    -6.61    0.000
_cons              17.49074    19.63273     0.89    0.375

A2.5 Granger causality test

Table 5: Granger causality Wald tests (. vargranger)

Equation   Excluded   F         df   df_r   Prob > F
CINT1      dms        0.55336   4    113    0.6970
CINT1      ALL        0.55336   4    113    0.6970
dms        CINT1      1.2091    4    113    0.3109
dms        ALL        1.2091    4    113    0.3109
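The Wald tests in Table 5 compare a restricted equation (own lags only) with an unrestricted one (own lags plus the other variable's lags). A minimal sketch of that comparison with a single lag of each variable and synthetic data; the full test above uses four lags within a VAR.

```python
import math

def ols_ssr(X, y):
    """OLS via the normal equations (Gaussian elimination with partial pivoting);
    returns the residual sum of squares."""
    k, n = len(X[0]), len(y)
    A = [[sum(X[i][a] * X[i][b] for i in range(n)) for b in range(k)] for a in range(k)]
    c = [sum(X[i][a] * y[i] for i in range(n)) for a in range(k)]
    for p in range(k):
        piv = max(range(p, k), key=lambda r: abs(A[r][p]))
        A[p], A[piv] = A[piv], A[p]
        c[p], c[piv] = c[piv], c[p]
        for r in range(p + 1, k):
            f = A[r][p] / A[p][p]
            for col in range(p, k):
                A[r][col] -= f * A[p][col]
            c[r] -= f * c[p]
    beta = [0.0] * k
    for p in range(k - 1, -1, -1):
        beta[p] = (c[p] - sum(A[p][j] * beta[j] for j in range(p + 1, k))) / A[p][p]
    return sum((y[i] - sum(X[i][j] * beta[j] for j in range(k))) ** 2 for i in range(n))

def granger_f(x, z):
    """F-statistic for 'z does not Granger-cause x', one lag of each variable:
    F = (SSR_restricted - SSR_unrestricted) / (SSR_unrestricted / (n - 3))."""
    y = x[1:]
    Xu = [[1.0, x[t - 1], z[t - 1]] for t in range(1, len(x))]
    Xr = [[1.0, x[t - 1]] for t in range(1, len(x))]
    ssr_u, ssr_r = ols_ssr(Xu, y), ols_ssr(Xr, y)
    return (ssr_r - ssr_u) / (ssr_u / (len(y) - 3))
```

A large F (judged against the F(m, n-k) distribution, as in Table 5's Prob > F column) rejects the non-causality null; small p-values like 0.3109 and 0.6970 are never obtained when the lagged variable genuinely improves the fit.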
  • 56. 56 A3: Hausman specification test

The rationale behind the test is to check for the presence of simultaneity, that is, whether the endogenous variable is correlated with the error term. If there is no simultaneity, OLS generates efficient and consistent parameter estimates, while the instrumental-variable estimator (generated by 2SLS) is consistent but inefficient. If, however, simultaneity is present, OLS is inconsistent while 2SLS remains consistent. The test comprises: regressing the consumption function by OLS and obtaining the residuals (Table 6); regressing the consumption function using instrumental variables and obtaining the residuals (Table 7); and finally comparing the quadratic difference between the coefficient vectors, scaled by the precision matrix, which gives a χ2 test statistic (Table 8).

A3.1 Single-equation OLS estimation

Table 6: OLS (. reg CONS CONS1 INC if tin(1980q1, 2007q1)), N = 109
F(2, 106) = 95335.52; Prob > F = 0.0000; R-squared = 0.9994; Adj R-squared = 0.9994; Root MSE = 1106.8

CONS       Coef.       Std. Err.   t       P>|t|
CONS1      .8603323    .0379196    22.69   0.000
INC        .1041349    .0268763     3.87   0.000
_cons      -3411.072   1019.123    -3.35   0.001
  • 57. 57 A3.2 Single-equation 2SLS estimation

Table 7: 2SLS (. ivregress 2sls CONS CONS1 (INC = MS INT8 CINT1 INF1 CINC1 GOV CINC OTG) if tin(1980q1, 2007q1)), N = 109
Wald chi2(2) = 1.9e+05; Prob > chi2 = 0.0000; R-squared = 0.9994; Root MSE = 1102.4
Instrumented: INC. Instruments: CONS1 MS INT8 CINT1 INF1 CINC1 GOV CINC OTG

CONS       Coef.       Std. Err.   z       P>|z|
INC        .1432685    .0328878     4.36   0.000
CONS1      .8052211    .0463722    17.36   0.000
_cons      -4787.321   1217.284    -3.93   0.000

A3.3 Hausman test

Table 8: Hausman test (. hausman tsls ols, sigmaless)

           (b) tsls    (B) ols     (b-B) Difference   S.E. sqrt(diag(V_b-V_B))
INC        .1432685    .1041349     .0391336          .0191077
CONS1      .8052211    .8603323    -.0551111          .0269089

b = consistent under Ho and Ha (obtained from ivregress); B = inconsistent under Ha, efficient under Ho (obtained from regress)
Test: Ho: difference in coefficients not systematic
chi2(1) = (b-B)'[(V_b-V_B)^(-1)](b-B) = 4.19; Prob > chi2 = 0.0406
Note: the rank of the differenced variance matrix (1) does not equal the number of coefficients being tested (2).
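The χ2 statistic in Table 8 is the quadratic form printed in the STATA output, H = (b-B)'[V_b-V_B]^{-1}(b-B). A minimal 2x2 sketch with made-up numbers; STATA additionally handles the rank-deficient case flagged in the note above, which this sketch does not.

```python
def hausman(b, B, Vb, VB):
    """Hausman statistic H = (b-B)' [Vb-VB]^(-1) (b-B) for two coefficients.
    b, Vb: consistent estimator (2SLS); B, VB: efficient-under-H0 estimator (OLS).
    Compared against chi2 with df = number of coefficients tested."""
    d = [b[0] - B[0], b[1] - B[1]]
    M = [[Vb[0][0] - VB[0][0], Vb[0][1] - VB[0][1]],
         [Vb[1][0] - VB[1][0], Vb[1][1] - VB[1][1]]]
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    Minv = [[ M[1][1] / det, -M[0][1] / det],
            [-M[1][0] / det,  M[0][0] / det]]
    return sum(d[i] * Minv[i][j] * d[j] for i in range(2) for j in range(2))

# Hypothetical example: only the first coefficient differs, so
# H = 0.1^2 / 0.01 = 1.0, well below the 5% chi2(2) critical value of 5.99.
H = hausman([1.1, 2.0], [1.0, 2.0],
            [[0.02, 0.0], [0.0, 0.05]],
            [[0.01, 0.0], [0.0, 0.01]])
```

A statistic exceeding the χ2 critical value, as the 4.19 against χ2(1) in Table 8 does at the 5% level, rejects the null of no systematic difference, i.e. it signals simultaneity and favours 2SLS over OLS.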
  • 58. 58 A4: Structural model - results

A4.1 2SLS estimation (. reg3 (CONS = INC CONS1) (INV = CINC1 INC INT8) (INT = INC CINC CMS CINT1) (INF = OTG INF1), endo(INC) exog(GOV OTG MS) 2sls first)

Table 9: First-stage regressions

Equation CONS: N = 120; F(10, 109) = 25655.64; Prob > F = 0.0000; R-squared = 0.9996; Root MSE = 1088.6

CONS       Coef.       Std. Err.   t       P>|t|
CONS1      .9959460    .0122958    81.00   0.000
CINC1      .0814872    .0411971     1.98   0.050
INT8       -104.5998   55.47193    -1.89   0.062
CINC       .2130746    .0396040     5.38   0.000
CMS        .5504231    .4745044     1.16   0.249
CINT1      -113.2905   122.3191    -0.93   0.356
OTG        70.47860    61.45123     1.15   0.254
INF1       -80.77540   36.91972    -2.19   0.031
GOV        -.0565133   .0854569    -0.66   0.510
MS         .0032858    .0883523     0.04   0.970
_cons      5922.560    3710.857     1.60   0.113

Equation INV: N = 120; F(10, 109) = 536.90; Prob > F = 0.0000; R-squared = 0.9801; Root MSE = 2071

INV        Coef.       Std. Err.   t       P>|t|
CONS1      .3085774    .0233931    13.19   0.000
CINC1      .1425140    .0783786     1.82   0.072
INT8       9.380083    105.5369     0.09   0.929
CINC       .0373861    .0753477     0.50   0.621
CMS        -.5415002   .9027583    -0.60   0.550
CINT1      -178.3855   232.7156    -0.77   0.445
OTG        571.1541    116.9127     4.89   0.000
INF1       209.5245    70.24082     2.98   0.004
GOV        -.3563384   .1625841    -2.19   0.031
MS         .1306359    .1680927     0.78   0.439
_cons      9348.136    7060.012     1.32   0.188
  • 59. 59 Equation INT: N = 120; F(10, 109) = 108.81; Prob > F = 0.0000; R-squared = 0.9089; Root MSE = 1.1557

INT        Coef.       Std. Err.   t       P>|t|
CONS1      -.0000246   .0000131    -1.88   0.063
CINC1      -.0000796   .0000437    -1.82   0.071
INT8       .2573257    .0588946     4.37   0.000
CINC       -.0000262   .0000420    -0.62   0.535
CMS        -.0012305   .0005038    -2.44   0.016
CINT1      .5645513    .1298663     4.35   0.000
OTG        .4562767    .0652429     6.99   0.000
INF1       .4678207    .0391977    11.93   0.000
GOV        -.0001350   .0000907    -1.49   0.140
MS         .0000938    .0000938     1.00   0.320
_cons      14.29523    3.939822     3.63   0.000

Equation INF: N = 120; F(10, 109) = 259.72; Prob > F = 0.0000; R-squared = 0.9597; Root MSE = .77078

INF        Coef.       Std. Err.   t       P>|t|
CONS1      -6.96e-06   8.71e-06    -0.80   0.426
CINC1      .0000381    .0000292     1.30   0.195
INT8       .0382188    .0392781     0.97   0.333
CINC       -.0000149   .0000280    -0.53   0.597
CMS        -.0006023   .0003360    -1.79   0.076
CINT1      .5302765    .0866106     6.12   0.000
OTG        .0885327    .0435118     2.03   0.044
INF1       .9262634    .0261418    35.43   0.000
GOV        -.0000807   .0000605    -1.33   0.185
MS         .0000992    .0000626     1.59   0.116
_cons      3.855240    2.627550     1.47   0.145
  • 60. 60 Equation INC (reduced form): N = 120; F(10, 109) = 5559.16; Prob > F = 0.0000; R-squared = 0.9980; Root MSE = 3337.3

INC        Coef.       Std. Err.   t       P>|t|
CONS1      1.112142    .0376963    29.50   0.000
CINC1      .5130313    .1263017     4.06   0.000
INT8       -701.2326   170.0653    -4.12   0.000
CINC       .3570231    .1214175     2.94   0.004
CMS        -1.816995   1.454732    -1.25   0.214
CINT1      -334.7356   375.0048    -0.89   0.374
OTG        480.3305    188.3966     2.55   0.012
INF1       32.23124    113.1881     0.28   0.776
GOV        -.2161911   .2619929    -0.83   0.411
MS         1.253788    .2708696     4.63   0.000
_cons      70865.70    11376.71     6.23   0.000

Endogenous variables: CONS INV INT INF INC
Exogenous variables: CONS1 CINC1 INT8 CINC CMS CINT1 OTG INF1 GOV MS

Second stage: two-stage least-squares regression

Equation   Obs   Parms   RMSE       "R-sq"   F-Stat     P
CONS       120   2       1523.078   0.9991   65503.10   0.0000
INV        120   3       2383.319   0.9720    1332.06   0.0000
INT        120   4       1.999967   0.7123      71.03   0.0000
INF        120   2       .9185514   0.9386     894.28   0.0000

Equation CONS:   Coef.       Std. Err.   t       P>|t|
INC              .1477249    .0426785     3.46   0.001
CONS1            .7891827    .0609076    12.96   0.000
_cons            -3628.450   1480.371    -2.45   0.015

Equation INV:    Coef.       Std. Err.   t       P>|t|
CINC1            .2678231    .0717983     3.73   0.000
INC              .1853121    .0053610    34.57   0.000
INT8             -122.9300   110.9292    -1.11   0.268
_cons            -6972.469   2281.213    -3.06   0.002

Equation INT:    Coef.       Std. Err.   t       P>|t|
INC              -.0000336   3.52e-06    -9.56   0.000
CINC             -.0001509   .0000631    -2.39   0.017
CMS              -.0023777   .0008356    -2.85   0.005
CINT1            .8381642    .2046285     4.10   0.000
_cons            17.91581    .7799207    22.97   0.000

Equation INF:    Coef.       Std. Err.   t       P>|t|
INF1             .9477370    .0227979    41.57   0.000
OTG              .1283367    .0342462     3.75   0.000
_cons            .1356703    .1351685     1.00   0.316