Empirical Finance
BE333
Essex Business School
Spring Coursework 2015/16
Jordan Stone
1303437
BE333 Spring 2016 Coursework - Option 2 – Sina Erdal
The coursework option consists of data manipulation and estimation in EViews,
analysis and interpretation. The coursework must be written up individually.
In your answers to the questions below, you should present your EViews equation
estimation output as it would be in published academic papers. (Examine several
such papers, the approaches to presentation are fairly standard.) Raw EViews
regression output should be included only in an Appendix. You should also
include the studies/books you have utilised in the analyses in a “References” section.
The principle of purchasing power parity (PPP) states that the exchange rate
between two countries will, at least in the long-run, fully reflect the changes in the
price levels of the two countries. Even if it does not hold exactly, the PPP model
provides a benchmark to suggest the levels that exchange rates should achieve.
This can be examined using a simple regression model:
Percentage change in the exchange rate = α + β1 × difference in inflation rates + ut
The PPP implies that α = 0 and β1 = 1. That is, the currency of the country with the
higher inflation rate will in the long run depreciate at a rate that is equal to the
difference in inflation rates.
In the file “be333 coursework 2 spring 2016.xlsx” on Moodle you will find monthly
data from 1/1975 to 12/2010 for the following variables:
 USDJPY: the USD – Japanese yen exchange rate in yen per USD
 US CPI: the US consumer price index
 JP CPI: the Japanese consumer price index
Question 1) (25 points) Import the file into EViews as monthly data. Form the
percentage monthly return series (RET) for the exchange rate and monthly inflation
rates (USINF and JPINF) for the two economies and report and comment on their
descriptive statistics.
jpinf = jpcpi/jpcpi(-1) – 1
usinf = uscpi/uscpi(-1) – 1
ret = usdjpy/usdjpy(-1) - 1
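For cross-checking outside EViews, the same series could be built in Python with pandas. This is a minimal sketch, assuming the spreadsheet columns are named usdjpy, uscpi and jpcpi (the actual headings in the file may differ):

import pandas as pd

# Monthly data from the coursework file; column names are assumed, not confirmed.
df = pd.read_excel("be333 coursework 2 spring 2016.xlsx", index_col=0, parse_dates=True)

# x/x(-1) - 1 in EViews corresponds to a simple monthly percentage change.
df["ret"] = df["usdjpy"].pct_change()     # exchange-rate return (yen per USD)
df["usinf"] = df["uscpi"].pct_change()    # US monthly inflation
df["jpinf"] = df["jpcpi"].pct_change()    # Japanese monthly inflation
df = df.dropna()                          # the first observation is lost when differencing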
The descriptive statistics show that USINF has the highest mean (0.33%), followed by JPINF (0.14%). This indicates that US prices have, on average, risen faster than Japanese prices over the sample. RET has a lower mean (−0.25%) which is negatively signed, so the US dollar has on average weakened against the Japanese yen over time.
RET also shows the largest gap between its mean and median, followed by JPINF and then USINF, suggesting the greatest asymmetry and influence of extreme observations: |mean − median| is 24.56×10⁻⁴ for RET, 3.96×10⁻⁴ for JPINF and 3.38×10⁻⁴ for USINF.
The standard deviation measures dispersion around the mean: the higher the standard deviation, the more spread out the observations are. Under normality, roughly 95% of observations would fall within about two standard deviations of the mean. RET (3.33%) has the highest standard deviation, which reflects the extreme observations present in its distribution. JPINF (0.57%) has a lower standard deviation and USINF (0.37%) the lowest. Observations of RET and JPINF are therefore more likely to lie far from their mean values than observations of USINF.
JPINF is positively skewed, so the mass of its distribution is concentrated to the left with a longer right tail. RET and USINF are both negatively skewed, meaning the mass of their distributions is concentrated to the right with a longer left tail.
Kurtosis measures the peakedness and tail heaviness of a distribution. RET, JPINF and USINF all have kurtosis above 3 and are therefore leptokurtic: they have a higher peak and fatter tails than a normal distribution. Consistent with this, the Jarque-Bera statistics reject normality for all three series.
JPINF USINF RET
Mean 0.001374 0.003346 -0.002456
Median 0.000978 0.003008 0.000000
Maximum 0.027157 0.015209 0.121059
Minimum -0.012950 -0.019153 -0.143560
Std.Dev 0.005732 0.003684 0.033272
Skewness 0.991608 -0.331324 -0.176195
Kurtosis 5.017866 6.899647 4.222720
Jarque-Bera 143.7552 280.9823 29.07854
Probability 0.000000 0.000000 0.000000
Observations 431 431 431
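The descriptive statistics above could be reproduced with pandas and scipy; a sketch, assuming the df constructed in the previous snippet:

from scipy import stats

for col in ["jpinf", "usinf", "ret"]:
    x = df[col]
    jb_stat, jb_p = stats.jarque_bera(x)
    print(col, x.mean(), x.median(), x.std(),
          stats.skew(x),                     # skewness
          stats.kurtosis(x, fisher=False),   # kurtosis on the same scale as EViews (normal = 3)
          jb_stat, jb_p)                     # Jarque-Bera test of normality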
Question 2) (25 points) Estimate the following model using OLS in EViews:
RETt = α + β1 × (JPINFt − USINFt) + ut
Comment in detail on your regression output. State/interpret the signs, magnitudes,
and statistical significances of the coefficients and the statistical significance and fit
of the overall regression. Test the joint hypothesis α = 0 and β1 = 1 using the Wald
test. Does PPP appear to hold in this dataset?
The statistics below are taken from the output in Appendix 1. The slope coefficient expresses the relationship between the independent variable and the dependent variable: when the inflation differential (USINF − JPINF) increases by one unit, the exchange-rate return RET increases by about 0.40 units. Since RET is the percentage change in yen per USD, a positive return means the dollar buys more yen, i.e. the yen depreciates and the dollar appreciates against it.
The R-squared of 0.0048 shows that only about 0.5% of the variation in the monthly return series is explained by the regression line; the rest of the variation in the dependent variable is left unexplained. The regression therefore has very low explanatory power and is a poor fit to the data. The fit could be improved by adding further explanatory variables to the right-hand side of the equation.
The sum of squared residuals, 0.473734, measures the variation in RET that is not explained by the regression model, which is consistent with the low R-squared.
The p-value reported in Appendix 1 for (USINF − JPINF) is 0.1515; as this is above the 5% significance level, the slope coefficient is not statistically significant. The intercept has a p-value of 0.0556, which is just above the 5% level, so it is not quite significant at the 5% level, although it would be at the 10% level. This is consistent with the standard errors: the standard error of the inflation differential is large (0.28) relative to its coefficient (0.40), whereas the intercept's standard error is much smaller (0.0017).
The p-value of the F-statistic is 0.1515, which is above the 5% significance level, so the regression as a whole is not statistically significant.
Least Squares Regression Output (dependent variable: RET)
Sample: 1975–2010, 431 observations

Variable        Coefficient   Std. Error   t-statistic   Prob.
C               -0.003250     0.001693     -1.919257     0.0556
USINF-JPINF      0.402409     0.280062      1.436857     0.1515

R-squared             0.004789
Adjusted R-squared    0.002470
Sum squared resid     0.473734
Durbin-Watson stat    1.962761
F-statistic           2.064558
Prob(F-statistic)     0.151488
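For reference, the same regression could be estimated with statsmodels as a cross-check on the EViews output; a minimal sketch, assuming the series built in Question 1:

import statsmodels.api as sm

df["infdiff"] = df["usinf"] - df["jpinf"]    # inflation differential regressor
X = sm.add_constant(df["infdiff"])           # adds the intercept term
ols = sm.OLS(df["ret"], X).fit()
print(ols.summary())                         # coefficients, t-statistics, R-squared, DW statistic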
The Wald test in Appendix 2, with joint null hypothesis α = 0 and β1 = 1, shows that PPP does not hold: the p-values of the F-statistic (0.0024) and the Chi-square statistic (0.0022) are below the 5% significance level, so the joint restriction implied by PPP is rejected.
Wald Test
Null hypothesis: C(1) = 0, C(2) = 1
Sample: 1975–2010, 431 observations

Test statistic   Value       df         Probability
F-statistic      6.102976    (2, 429)   0.0024
Chi-square       12.20595    2          0.0022
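The joint restriction α = 0, β1 = 1 could likewise be tested outside EViews; a sketch using the fitted statsmodels model ols from the previous snippet (variable names as assumed there):

# Wald/F test of the joint PPP restriction: intercept = 0 and slope = 1
ppp_test = ols.f_test("const = 0, infdiff = 1")
print(ppp_test)    # F-statistic, degrees of freedom and p-value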
Question 3) (25 points) In an essay of less than 250 words, define and critically
discuss the problem of autocorrelation in an estimation setting. Be sure to mention
the consequences of autocorrelation on the properties of OLS estimators.
Autocorrelation (serial correlation) is a time-series problem in an estimation setting. It violates the Gauss-Markov condition that the error terms are uncorrelated, cov(ut, us) = 0 for all t ≠ s. When autocorrelation is present this covariance is no longer zero, so the error terms are not independent of one another: the shock in one period, ut-1, carries over into the next period's error, ut. The errors should show no correlation, predictability or pattern over time.
Positive autocorrelation means positive residuals tend to be followed by positive residuals and negative by negative; negative autocorrelation means positive residuals tend to be followed by negative ones and vice versa. Autocorrelation in the errors usually reflects misspecification rather than economic theory. It commonly arises when a relevant variable has been omitted from the regression, so that its effect is absorbed into the error term, or when the model has the wrong functional form, for example a linear-in-variables specification fitted where a log-linear model would have been appropriate.
Under autocorrelation, OLS estimators remain linear and unbiased (although they become biased and inconsistent if a lagged dependent variable appears on the right-hand side of the equation), but they are no longer efficient. The estimated variances and standard errors are biased, typically underestimated, so coefficients may appear statistically significantly different from zero when they are not, and the R-squared, F and t tests are unreliable (Gujarati and Porter, 2010).
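To illustrate the consequence described above, the following purely illustrative simulation (not part of the coursework data) shows how AR(1) errors combined with a persistent regressor lead conventional OLS standard errors to understate the true sampling variability, whereas Newey-West (HAC) standard errors are more robust:

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n, rho = 400, 0.7
x = np.zeros(n)
u = np.zeros(n)
for t in range(1, n):
    x[t] = rho * x[t - 1] + rng.normal()   # persistent regressor
    u[t] = rho * u[t - 1] + rng.normal()   # AR(1) errors: u_t = rho*u_{t-1} + eps_t

y = 1.0 + 0.5 * x + u
res = sm.OLS(y, sm.add_constant(x)).fit()
print(res.bse)                                              # conventional standard errors (too small here)
print(res.get_robustcov_results("HAC", maxlags=12).bse)     # Newey-West HAC standard errors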
Question 4) (25 points) Test the model given in Q2 for autocorrelation using the
Durbin-Watson test and comment on your results.
The Durbin-Watson (DW) test is used to detect first-order autocorrelation in the residuals. The DW statistic is approximately equal to 2 − 2ρ̂, where ρ̂ is the estimated parameter of the AR(1) relationship ut = ρ ut-1 + εt. When ρ̂ = 0 the DW statistic equals 2 and there is no autocorrelation; a DW statistic near 0 indicates severe positive autocorrelation and a value near 4 indicates severe negative autocorrelation.
1.962761 = 2 − 2ρ̂  ⇒  ρ̂ = (2 − 1.962761)/2 ≈ 0.0186
ρ̂ can lie between −1 and 1; as it is positive but very close to 0 here, this implies essentially no autocorrelation.
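The same calculation could be performed directly on the residuals of the Question 2 regression; a sketch, again assuming the fitted statsmodels model ols:

from statsmodels.stats.stattools import durbin_watson

dw = durbin_watson(ols.resid)    # should be close to the EViews value of 1.962761
rho_hat = 1 - dw / 2             # implied AR(1) coefficient, since DW ≈ 2(1 − rho)
print(dw, rho_hat)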
We now test the null hypothesis of no autocorrelation against the alternative that autocorrelation is present. The Durbin-Watson statistic reported in Appendix 1 is 1.962761. The lower and upper bounds of the DW critical values, dL and dU, are 1.83704 and 1.84636 respectively, taken from the 5% Durbin-Watson significance table for 430 observations and 2 parameters (k = 2). Since the Durbin-Watson statistic of 1.962761 lies above the upper bound of 1.84636 and below 2 (dU < DW < 2), we fail to reject the null hypothesis of no autocorrelation in the residuals. If the DW statistic had fallen between dL and dU it would lie in the inconclusive ("grey") region and further testing for autocorrelation would be needed; if it had fallen between 0 and dL, there would be evidence of positive autocorrelation. The same decision rule can be reflected into the region between 2 and 4 when the DW statistic from the regression output exceeds 2; in that region the corresponding bounds, 4 − dU and 4 − dL, are 2.15364 and 2.16296 respectively.
Appendix:
Appendix 1:
Appendix 2:
Bibliography
Gujarati, D. and Porter, D. (2010) Essentials of Econometrics. 4th edn. New York: McGraw-Hill.