Basic Econometrics
Introduction:
What is Econometrics?
Introduction
What is Econometrics?
• Definition 1: Economic measurement
• Definition 2: The application of mathematical statistics to economic data to lend empirical support to mathematical economic models and to obtain numerical results (Gerhard Tintner, 1968)
Introduction
What is Econometrics?
• Definition 3: The quantitative analysis of actual economic phenomena based on the concurrent development of theory and observation, related by appropriate methods of inference (P. A. Samuelson, T. C. Koopmans and J. R. N. Stone, 1954)
Introduction
What is Econometrics?
• Definition 4: The social science which applies economics, mathematics and statistical inference to the analysis of economic phenomena (Arthur S. Goldberger, 1964)
• Definition 5: The empirical determination of economic laws (H. Theil, 1971)
Introduction
What is Econometrics?
• Definition 6: A conjunction of economic theory and actual measurements, using the theory and technique of statistical inference as a bridge pier (T. Haavelmo, 1944)
• ...and others
[Diagram: econometrics at the intersection of economic theory, mathematical economics, economic statistics, and mathematical statistics]
Introduction
Why a separate discipline?
• Economic theory makes statements that are mostly qualitative in nature, while econometrics gives empirical content to most economic theory
• Mathematical economics expresses economic theory in mathematical form without empirical verification of the theory, while econometrics is mainly interested in the latter
Introduction
Why a separate discipline?
• Economic statistics is mainly concerned with collecting, processing and presenting economic data. It is not concerned with using the collected data to test economic theories
• Mathematical statistics provides many of the tools for economic studies, but econometrics supplies the latter with many special methods of quantitative analysis based on economic data
Introduction
Methodology of Econometrics
(1) Statement of theory or hypothesis:
Keynes stated: "Consumption increases as income increases, but not as much as the increase in income." That is, "the marginal propensity to consume (MPC) for a unit change in income is greater than zero but less than one."
Introduction
Methodology of Econometrics
(2) Specification of the mathematical model of the theory
Y = ß1 + ß2X ; 0 < ß2 < 1
Y = consumption expenditure
X = income
ß1 and ß2 are parameters; ß1 is the intercept and ß2 the slope coefficient
Introduction
Methodology of Econometrics
(3) Specification of the econometric model of the theory
Y = ß1 + ß2X + u ; 0 < ß2 < 1
Y = consumption expenditure;
X = income;
ß1 and ß2 are parameters; ß1 is the intercept and ß2 the slope coefficient; u is the disturbance or error term, a random (stochastic) variable
Introduction
Methodology of Econometrics
(4) Obtaining Data
(See Table 1.1, page 6)
Y = personal consumption expenditure
X = Gross Domestic Product
both in billions of US dollars
Introduction
Methodology of Econometrics
(4) Obtaining Data
Table 1.1
Year    Y (PCE)   X (GDP)
1980    2447.1    3776.3
1981    2476.9    3843.1
1982    2503.7    3760.3
1983    2619.4    3906.6
1984    2746.1    4148.5
1985    2865.8    4279.8
1986    2969.1    4404.5
1987    3052.2    4539.9
1988    3162.4    4718.6
1989    3223.3    4838.0
1990    3260.4    4877.5
1991    3240.8    4821.0
Introduction
Methodology of Econometrics
(5) Estimating the Econometric Model
Y^ = -231.8 + 0.7194 X (1.3.3)
The estimated MPC was about 0.72: over the sample period, a one-dollar increase in real income led, on average, to an increase of about 72 cents in real consumption expenditure
Note: a hat (^) over a variable signifies an estimator of the relevant population value; a numerical sketch follows below
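Not part of the original slides: a minimal numpy sketch that should reproduce the estimate in (1.3.3), up to rounding, from the Table 1.1 data above.

```python
import numpy as np

# Table 1.1: Y = personal consumption expenditure, X = GDP ($ billions), 1980-1991
Y = np.array([2447.1, 2476.9, 2503.7, 2619.4, 2746.1, 2865.8,
              2969.1, 3052.2, 3162.4, 3223.3, 3260.4, 3240.8])
X = np.array([3776.3, 3843.1, 3760.3, 3906.6, 4148.5, 4279.8,
              4404.5, 4539.9, 4718.6, 4838.0, 4877.5, 4821.0])

x = X - X.mean()                      # deviations from the mean
y = Y - Y.mean()
b2 = (x * y).sum() / (x * x).sum()    # slope estimate = MPC
b1 = Y.mean() - b2 * X.mean()         # intercept estimate
print(f"Y^ = {b1:.1f} + {b2:.4f} X")  # roughly -231.8 + 0.7194 X
```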
Introduction
Methodology of Econometrics
(6) Hypothesis Testing
Do the estimates accord with the expectations of the theory being tested? Is MPC < 1 statistically? If so, the result may support Keynes' theory.
Confirmation or refutation of economic theories based on sample evidence is the object of statistical inference (hypothesis testing)
Introduction
Methodology of Econometrics
(7) Forecasting or Prediction
• Given future value(s) of X, what is the future value of Y?
• If GDP = $6,000 billion in 1994, what is the forecast consumption expenditure?
• Y^ = -231.8 + 0.7194(6000) = 4084.6
• Income multiplier M = 1/(1 - MPC) = 3.57: a $1 decrease (increase) in investment will eventually lead to a $3.57 decrease (increase) in income
Introduction
Methodology of Econometrics
(8) Using the model for control or policy purposes
Y = 4000 = -231.8 + 0.7194X => X ≈ 5882
With MPC = 0.72, an income of $5882 billion will produce an expenditure of $4000 billion. Through fiscal and monetary policy, the government can manipulate the control variable X to achieve the desired level of the target variable Y
Introduction
Methodology of Econometrics
Figure 1.4: Anatomy of economic
modelling
• 1) Economic Theory
• 2) Mathematical Model of Theory
• 3) Econometric Model of Theory
• 4) Data
• 5) Estimation of Econometric Model
• 6) Hypothesis Testing
• 7) Forecasting or Prediction
• 8) Using the Model for control or policy purposes
[Flowchart: economic theory → mathematical model → econometric model → data collection → estimation → hypothesis testing → forecasting → application in control or policy studies]
Basic Econometrics
Chapter 1:
THE NATURE OF REGRESSION ANALYSIS
1-1. Historical origin of the term “Regression”
• The term REGRESSION was introduced by Francis Galton
• Tendency for tall parents to have tall children and for short
parents to have short children, but the average height of children
born from parents of a given height tended to move (or regress)
toward the average height in the population as a whole (F.
Galton, “Family Likeness in Stature”)
1-1. Historical origin of the term “Regression”
• Galton’s Law was confirmed by Karl Pearson: The average height
of sons of a group of tall fathers < their fathers’ height. And the
average height of sons of a group of short fathers > their fathers’
height. Thus “regressing” tall and short sons alike toward the
average height of all men. (K. Pearson and A. Lee, “On the law of
Inheritance”)
• In Galton's words, this was "regression to mediocrity"
1-2. Modern Interpretation of Regression Analysis
• The modern interpretation of regression: regression analysis is concerned with the study of the dependence of one variable (the dependent variable) on one or more other variable(s) (the explanatory variable(s)), with a view to estimating and/or predicting the (population) mean or average value of the former in terms of the known or fixed (in repeated sampling) values of the latter.
• Examples: (pages 16-19)
Dependent Variable Y; Explanatory Variable Xs
1. Y = Son’s Height; X = Father’s Height
2. Y = Height of boys; X = Age of boys
3. Y = Personal Consumption Expenditure
X = Personal Disposable Income
4. Y = Demand; X = Price
5. Y = Rate of Change of Wages
X = Unemployment Rate
6. Y = Money/Income; X = Inflation Rate
7. Y = % Change in Demand; X = % Change in the
advertising budget
8. Y = Crop yield; Xs = temperature, rainfall, sunshine,
fertilizer
1-3. Statistical vs. Deterministic Relationships
• In regression analysis we are concerned with STATISTICAL DEPENDENCE among variables (not functional or deterministic dependence); we essentially deal with RANDOM or STOCHASTIC variables (variables that have probability distributions)
1-4. Regression vs. Causation:
Regression does not necessarily imply causation. A statistical
relationship cannot logically imply causation. “A statistical
relationship, however strong and however suggestive, can
never establish causal connection: our ideas of causation must
come from outside statistics, ultimately from some theory or
other" (M. G. Kendall and A. Stuart, "The Advanced Theory of Statistics")
1-5. Regression vs. Correlation
•Correlation Analysis: the primary objective is to
measure the strength or degree of linear
association between two variables (both are
assumed to be random)
•Regression Analysis: we try to estimate or
predict the average value of one variable
(dependent, and assumed to be stochastic) on
the basis of the fixed values of other variables
(independent, and non-stochastic)
1-6. Terminology and Notation
Dependent variable ⇔ Explained variable ⇔ Predictand ⇔ Regressand ⇔ Response ⇔ Endogenous
Explanatory variable(s) ⇔ Independent variable(s) ⇔ Predictor(s) ⇔ Regressor(s) ⇔ Stimulus or control variable(s) ⇔ Exogenous
1-7. The Nature and Sources
of Data for Econometric
Analysis
1) Types of Data :
• Time series data;
• Cross-sectional data;
• Pooled data
2) The Sources of Data
3) The Accuracy of Data
1-8. Summary and Conclusions
1) The key idea behind regression analysis is the statistical dependence of one variable on one or more other variable(s)
2) The objective of regression analysis is to estimate and/or predict the mean or average value of the dependent variable on the basis of the known (or fixed) values of the explanatory variable(s)
1-8. Summary and Conclusions
3) The success of regression analysis depends on the availability of appropriate data
4) The researcher should clearly state the sources of the data used in the analysis, their definitions, their methods of collection, any gaps or omissions, and any revisions in the data
Basic Econometrics
Chapter 2:
TWO-VARIABLE REGRESSION
ANALYSIS: Some basic Ideas
2-1. A Hypothetical Example
• Total population: 60 families
• Y=Weekly family consumption expenditure
• X=Weekly disposable family income
• 60 families were divided into 10 groups of approximately the same income
level
(80, 100, 120, 140, 160, 180, 200, 220, 240, 260)
2-1. A Hypothetical Example
• Table 2-1 gives the conditional distribution of Y for the given values of X
• Table 2-2 gives the conditional probabilities of Y: p(Y|X)
• Conditional mean (or expectation): E(Y|X = Xi)
Table 2-1: Weekly family income X ($) and weekly family consumption expenditure Y ($)

X:      80   100   120   140   160   180   200   220   240   260
Y:      55    65    79    80   102   110   120   135   137   150
        60    70    84    93   107   115   136   137   145   152
        65    74    90    95   110   120   140   140   155   175
        70    80    94   103   116   130   144   152   165   178
        75    85    98   108   118   135   145   157   175   180
        --    88    --   113   125   140    --   160   189   185
        --    --    --   115    --    --    --   162    --   191
Total  325   462   445   707   678   750   685  1043   966  1211
Mean    65    77    89   101   113   125   137   149   161   173
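Not in the original slides: a minimal sketch that computes the conditional means E(Y|X) from the table above, reproducing the Total and Mean rows.

```python
import numpy as np

# Table 2-1: weekly consumption Y ($) observed at each weekly income level X ($)
table = {
     80: [55, 60, 65, 70, 75],
    100: [65, 70, 74, 80, 85, 88],
    120: [79, 84, 90, 94, 98],
    140: [80, 93, 95, 103, 108, 113, 115],
    160: [102, 107, 110, 116, 118, 125],
    180: [110, 115, 120, 130, 135, 140],
    200: [120, 136, 140, 144, 145],
    220: [135, 137, 140, 152, 157, 160, 162],
    240: [137, 145, 155, 165, 175, 189],
    260: [150, 152, 175, 178, 180, 185, 191],
}
for X, ys in table.items():
    # income level, column total, conditional mean E(Y|X = Xi)
    print(X, sum(ys), np.mean(ys))
```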
2-1. A Hypothetical Example
• Figure 2-1 shows the population regression line (curve): the regression of Y on X
• The population regression curve is the locus of the conditional means or expectations of the dependent variable for the fixed values of the explanatory variable X (Fig. 2-2)
2-2. The concept of the population regression function (PRF)
• E(Y|X = Xi) = f(Xi) is the population regression function (PRF) or population regression (PR)
• In the case of a linear function we have the linear population regression function (or equation, or model):
E(Y|X = Xi) = f(Xi) = ß1 + ß2Xi
2-2. The concept of the population regression function (PRF)
E(Y|X = Xi) = f(Xi) = ß1 + ß2Xi
• ß1 and ß2 are regression coefficients; ß1 is the intercept and ß2 the slope coefficient
• Linearity in the variables
• Linearity in the parameters
2-4. Stochastic Specification of the PRF
• Ui = Yi - E(Y|X = Xi), or Yi = E(Y|X = Xi) + Ui
• Ui = stochastic disturbance or stochastic error term. It is the nonsystematic component
• The component E(Y|X = Xi) is systematic or deterministic: the mean consumption expenditure of all families with the same level of income
• The assumption that the regression line passes through the conditional means of Y implies that E(Ui|Xi) = 0
2-5. The Significance of the Stochastic Disturbance Term
• Ui = the stochastic disturbance term is a surrogate for all variables that are omitted from the model but that collectively affect Y
• There are many reasons for not including such variables in the model:
2-5. The Significance of the Stochastic Disturbance Term
Why not include as many variables as possible in the model (the reasons for using ui):
+ Vagueness of theory
+ Unavailability of data
+ Core variables vs. peripheral variables
+ Intrinsic randomness in human behavior
+ Poor proxy variables
+ Principle of parsimony
+ Wrong functional form
2-6. The Sample Regression
Function (SRF)
Table 2-4: A random sample
from the population
Y X
------------------
70 80
65 100
90 120
95 140
110 160
115 180
120 200
140 220
155 240
150 260
------------------
Table 2-5: Another random
sample from the population
Y X
-------------------
55 80
88 100
90 120
80 140
118 160
120 180
145 200
135 220
145 240
175 260
--------------------
[Figure 2-3 (sketch): two sample regression lines, SRF1 and SRF2; horizontal axis: weekly income (X), vertical axis: weekly consumption expenditure (Y)]
2-6. The Sample Regression Function (SRF)
• Fig. 2-3: SRF1 and SRF2
• Y^i = ß^1 + ß^2Xi (2.6.1)
• Y^i = estimator of E(Y|Xi)
• ß^1 = estimator of ß1
• ß^2 = estimator of ß2
• Estimate = a particular numerical value obtained by the estimator in an application
• SRF in stochastic form: Yi = ß^1 + ß^2Xi + u^i, or Yi = Y^i + u^i (2.6.3)
2-6. The Sample Regression Function (SRF)
• The primary objective of regression analysis is to estimate the PRF Yi = ß1 + ß2Xi + ui on the basis of the SRF Yi = ß^1 + ß^2Xi + u^i, and to construct the SRF so that ß^1 is as close to ß1, and ß^2 as close to ß2, as possible; a numerical sketch follows below
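Not in the original slides: a sketch fitting the SRF to the two random samples of Tables 2-4 and 2-5. The two samples give different SRFs, both estimating the same PRF; the final lines also check the numerical properties of OLS listed in section 3-1 (residuals sum to zero and are orthogonal to X).

```python
import numpy as np

X  = np.array([80, 100, 120, 140, 160, 180, 200, 220, 240, 260], float)
Y1 = np.array([70, 65, 90, 95, 110, 115, 120, 140, 155, 150], float)  # Table 2-4
Y2 = np.array([55, 88, 90, 80, 118, 120, 145, 135, 145, 175], float)  # Table 2-5

def ols(X, Y):
    x, y = X - X.mean(), Y - Y.mean()
    b2 = (x * y).sum() / (x * x).sum()
    return Y.mean() - b2 * X.mean(), b2

for name, Y in [("SRF1", Y1), ("SRF2", Y2)]:
    b1, b2 = ols(X, Y)
    print(f"{name}: Y^ = {b1:.4f} + {b2:.4f} X")

b1, b2 = ols(X, Y1)
u = Y1 - (b1 + b2 * X)                              # residuals u^i
print(round(u.sum(), 10), round((u * X).sum(), 10))  # both ~ 0
```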
2-6. The Sample Regression
Function (SRF)
• Population Regression Function PRF
• Linearity in the parameters
• Stochastic PRF
• Stochastic Disturbance Term ui plays a critical role in estimating
the PRF
• Sample of observations from population
• Stochastic Sample Regression Function SRF used to estimate the
PRF
2-7. Summary and Conclusions
• The key concept underlying regression analysis is the concept of the population regression function (PRF).
• This book deals with linear PRFs: linear in the unknown parameters. They may or may not be linear in the variables.
2-7. Summary and Conclusions
• For empirical purposes, it is the stochastic PRF that matters. The stochastic disturbance term ui plays a critical role in estimating the PRF.
• The PRF is an idealized concept, since in practice one rarely has access to the entire population of interest. Generally, one has a sample of observations from the population and uses the stochastic sample regression function (SRF) to estimate the PRF.
Basic Econometrics
Chapter 3:
TWO-VARIABLE
REGRESSION MODEL:
The problem of Estimation
3-1. The method of ordinary least squares (OLS)
• Least-squares criterion: minimize Σu^i² = Σ(Yi - Y^i)² = Σ(Yi - ß^1 - ß^2Xi)² (3.1.2)
• Solving the normal equations for ß^1 and ß^2 gives the least-squares estimators [see (3.1.6), (3.1.7)]
• The numerical and statistical properties of OLS are as follows:
3-1. The method of ordinary least squares (OLS)
• The OLS estimators are expressed solely in terms of observable quantities. They are point estimators
• The sample regression line passes through the sample means of X and Y
• The mean value of the estimated Y^ equals the mean value of the actual Y
• The mean value of the residuals u^i is zero
• The u^i are uncorrelated with the predicted Y^i and with Xi: Σu^iY^i = 0 and Σu^iXi = 0
3-2. The assumptions underlying the method of least squares
• Ass 1: Linear regression model (linear in the parameters)
• Ass 2: X values are fixed in repeated sampling
• Ass 3: Zero mean value of ui: E(ui|Xi) = 0
• Ass 4: Homoscedasticity or equal variance of ui: Var(ui|Xi) = σ² [vs. heteroscedasticity]
• Ass 5: No autocorrelation between the disturbances: Cov(ui, uj|Xi, Xj) = 0 for i ≠ j [vs. correlation, + or -]
3-2. The assumptions underlying the method of least squares
• Ass 6: Zero covariance between ui and Xi: Cov(ui, Xi) = E(uiXi) = 0
• Ass 7: The number of observations n must be greater than the number of parameters to be estimated
• Ass 8: Variability in X values; they must not all be the same
• Ass 9: The regression model is correctly specified
• Ass 10: There is no perfect multicollinearity between the X's
3-3. Precision or standard errors of least-squares estimates
• In statistics the precision of an estimate is measured by its standard error (SE)
• var(ß^2) = σ²/Σxi² (3.3.1)
• se(ß^2) = √var(ß^2) (3.3.2)
• var(ß^1) = σ²ΣXi²/(nΣxi²) (3.3.3)
• se(ß^1) = √var(ß^1) (3.3.4)
• σ^² = Σu^i²/(n - 2) (3.3.5)
• σ^ = √σ^² is the standard error of the estimate
3-3. Precision or standard errors of least-squares estimates
• Features of the variances:
+ var(ß^2) is proportional to σ² and inversely proportional to Σxi²
+ var(ß^1) is proportional to σ² and to ΣXi², but inversely proportional to Σxi² and to the sample size n
+ cov(ß^1, ß^2) = -X̄·var(ß^2), so ß^1 and ß^2 are correlated (negatively when X̄ is positive); a numerical sketch follows below
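Not in the original slides: a sketch computing (3.3.1)-(3.3.5) and the covariance above on the Table 2-4 sample; it should reproduce the standard errors quoted in section 5-11.

```python
import numpy as np

X = np.array([80, 100, 120, 140, 160, 180, 200, 220, 240, 260], float)
Y = np.array([70, 65, 90, 95, 110, 115, 120, 140, 155, 150], float)
n = len(Y)
x = X - X.mean()
b2 = (x * (Y - Y.mean())).sum() / (x * x).sum()
b1 = Y.mean() - b2 * X.mean()
u = Y - b1 - b2 * X                                    # residuals
sigma2 = (u * u).sum() / (n - 2)                       # sigma^2-hat (3.3.5)
var_b2 = sigma2 / (x * x).sum()                        # (3.3.1)
var_b1 = sigma2 * (X * X).sum() / (n * (x * x).sum())  # (3.3.3)
cov_b12 = -X.mean() * var_b2                           # cov(b1, b2)
print(np.sqrt(var_b1), np.sqrt(var_b2), cov_b12)       # se(b1), se(b2), cov
```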
3-4. Properties of least-squares estimators: The Gauss-Markov Theorem
• An OLS estimator is said to be BLUE if:
+ It is linear, that is, a linear function of a random variable, such as the dependent variable Y in the regression model
+ It is unbiased, that is, its average or expected value, E(ß^2), is equal to the true value ß2
+ It has minimum variance in the class of all such linear unbiased estimators
An unbiased estimator with the least variance is known as an efficient estimator
3-4. Properties of least-squares estimators: The Gauss-Markov Theorem
• Gauss-Markov theorem: given the assumptions of the classical linear regression model, the least-squares estimators, in the class of linear unbiased estimators, have minimum variance; that is, they are BLUE
3-5. The coefficient of determination r²: A measure of "goodness of fit"
• Yi = Y^i + u^i, or
• Yi - Ȳ = Y^i - Ȳ + u^i, or
• yi = y^i + u^i (note: the mean of Y^ equals the mean of Y)
Squaring both sides and summing =>
• Σyi² = ß^2²Σxi² + Σu^i², or
• TSS = ESS + RSS
3-5. The coefficient of determination r²: A measure of "goodness of fit"
• TSS = Σyi² = total sum of squares
• ESS = Σy^i² = ß^2²Σxi² = explained sum of squares
• RSS = Σu^i² = residual sum of squares
• 1 = ESS/TSS + RSS/TSS, or
• 1 = r² + RSS/TSS, or r² = 1 - RSS/TSS
3-5. The coefficient of determination r²: A measure of "goodness of fit"
• r² = ESS/TSS is the coefficient of determination; it measures the proportion or percentage of the total variation in Y explained by the regression model
• 0 ≤ r² ≤ 1
• r = ±√r² is the sample correlation coefficient
• Some properties of r; a numerical sketch follows below
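Not in the original slides: a sketch of the TSS = ESS + RSS decomposition on the Table 2-4 sample.

```python
import numpy as np

X = np.array([80, 100, 120, 140, 160, 180, 200, 220, 240, 260], float)
Y = np.array([70, 65, 90, 95, 110, 115, 120, 140, 155, 150], float)
x, y = X - X.mean(), Y - Y.mean()
b2 = (x * y).sum() / (x * x).sum()
TSS = (y * y).sum()           # total sum of squares
ESS = b2**2 * (x * x).sum()   # explained sum of squares
RSS = TSS - ESS               # residual sum of squares
r2 = ESS / TSS                # equivalently 1 - RSS/TSS
print(TSS, ESS, RSS, r2)      # r2 ~ 0.96 for this sample
```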
3-5. The coefficient of determination r2: A
measure of “Goodness of fit”
3-6. A numerical Example (pages 80-83)
3-7. Illustrative Examples (pages 83-85)
3-8. Coffee demand Function
3-9. Monte Carlo Experiments (page 85)
3-10. Summary and conclusions (pages
86-87)
Basic Econometrics
Chapter 4:
THE NORMALITY
ASSUMPTION:
Classical Normal Linear
Regression Model
(CNLRM)
4-2. The normality assumption
• CNLRM assumes that each ui is normally distributed: ui ~ N(0, σ²), with:
Mean: E(ui) = 0 (Ass 3)
Variance: E(ui²) = σ² (Ass 4)
Cov(ui, uj) = E(ui uj) = 0 for i ≠ j (Ass 5)
• Note: for two normally distributed variables, zero covariance or correlation means independence, so ui and uj are not only uncorrelated but also independently distributed. Therefore ui ~ NID(0, σ²): normally and independently distributed
4-2. The normality assumption
• Why the normality assumption?
(1) With a few exceptions, the distribution of the sum of a large number of independent and identically distributed random variables tends to a normal distribution as the number of such variables increases indefinitely
(2) Even if the number of variables is not very large or they are not strictly independent, their sum may still be normally distributed
4-2. The normality assumption
• Why the normality assumption?
(3) Under the normality assumption for ui, the OLS estimators ß^1 and ß^2 are also normally distributed
(4) The normal distribution is a comparatively simple distribution involving only two parameters (mean and variance)
4-3. Properties of OLS estimators under the normality assumption
• With the normality assumption, the OLS estimators ß^1, ß^2 and σ^² have the following properties:
1. They are unbiased
2. They have minimum variance. Combining 1 and 2, they are efficient estimators
3. Consistency: as the sample size increases indefinitely, the estimators converge to their true population values
4-3. Properties of OLS estimators under the normality assumption
4. ß^1 is normally distributed: ß^1 ~ N(ß1, σ²ß^1), and Z = (ß^1 - ß1)/σß^1 ~ N(0, 1)
5. ß^2 is normally distributed: ß^2 ~ N(ß2, σ²ß^2), and Z = (ß^2 - ß2)/σß^2 ~ N(0, 1)
6. (n - 2)σ^²/σ² is distributed as χ²(n - 2)
4-3. Properties of OLS estimators under the normality assumption
7. ß^1 and ß^2 are distributed independently of σ^². They have minimum variance in the entire class of unbiased estimators, whether linear or not: they are best unbiased estimators (BUE)
8. If ui ~ N(0, σ²), then Yi ~ N[E(Yi), Var(Yi)] = N[ß1 + ß2Xi, σ²]
Some last points of Chapter 4
4-4. The method of maximum likelihood (ML)
• ML is a point estimation method with some stronger theoretical properties than OLS (Appendix 4.A, pages 110-114)
• The OLS and ML estimators of the ß coefficients are identical
• ML estimator of σ²: Σu^i²/n (a biased estimator)
• OLS estimator of σ²: Σu^i²/(n - 2) (an unbiased estimator)
• As the sample size n gets larger, the two estimators tend to be equal
Some last points of chapter 4
4-5. Probability distributions related
to the Normal Distribution: The t, 2,
and F distributions
See section (4.5) on pages 107-108
with 8 theorems and Appendix A, on
pages 755-776
4-6. Summary and Conclusions
See 10 conclusions on pages 109-110
Basic Econometrics
Chapter 5:
TWO-VARIABLE REGRESSION:
Interval Estimation
and Hypothesis Testing
Chapter 5 TWO-VARIABLE REGRESSION: Interval Estimation and Hypothesis Testing
5-1. Statistical Prerequisites
• See Appendix A for key concepts such as probability, probability distributions, Type I error, Type II error, level of significance, power of a statistical test, and confidence interval
Chapter 5 TWO-VARIABLE REGRESSION: Interval Estimation and Hypothesis Testing
5-2. Interval estimation: Some basic ideas
• How "close" is, say, ß^2 to ß2?
Pr(ß^2 - δ ≤ ß2 ≤ ß^2 + δ) = 1 - α (5.2.1)
• The random interval (ß^2 - δ, ß^2 + δ), if it exists, is known as a confidence interval
• ß^2 - δ is the lower confidence limit
• ß^2 + δ is the upper confidence limit
Chapter 5 TWO-VARIABLE REGRESSION: Interval Estimation and Hypothesis Testing
5-2. Interval estimation: Some basic ideas
• (1 - α) is the confidence coefficient
• 0 < α < 1 is the level of significance
• Equation (5.2.1) does not mean that the probability of ß2 lying between the given limits is (1 - α); it means the probability of constructing an interval that contains ß2 is (1 - α)
• (ß^2 - δ, ß^2 + δ) is a random interval
Chapter 5 TWO-VARIABLE REGRESSION: Interval Estimation and Hypothesis Testing
5-2. Interval estimation: Some basic ideas
• In repeated sampling, the intervals will enclose the true value of the parameter in (1 - α)·100% of the cases
• For a specific sample, one cannot say that the probability is (1 - α) that a given fixed interval includes the true ß2
• If the sampling or probability distributions of the estimators are known, one can make confidence-interval statements like (5.2.1)
Chapter 5 TWO-VARIABLE REGRESSION: Interval Estimation and Hypothesis Testing
5-3. Confidence Intervals for Regression Coefficients
• Z = (ß^2 - ß2)/se(ß^2) = (ß^2 - ß2)·√(Σxi²)/σ ~ N(0, 1) (5.3.1)
• We do not know σ and have to use σ^ instead, so:
t = (ß^2 - ß2)/se(ß^2) = (ß^2 - ß2)·√(Σxi²)/σ^ ~ t(n-2) (5.3.2)
• => Interval for ß2:
Pr[-tα/2 ≤ t ≤ tα/2] = 1 - α (5.3.3)
Chapter 5 TWO-VARIABLE REGRESSION: Interval Estimation and Hypothesis Testing
5-3. Confidence Intervals for Regression Coefficients
• The confidence interval for ß2 is
Pr[ß^2 - tα/2·se(ß^2) ≤ ß2 ≤ ß^2 + tα/2·se(ß^2)] = 1 - α (5.3.5)
• The confidence interval for ß1 is
Pr[ß^1 - tα/2·se(ß^1) ≤ ß1 ≤ ß^1 + tα/2·se(ß^1)] = 1 - α (5.3.7)
A numerical sketch follows below
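Not in the original slides: a sketch of (5.3.5) on the Table 2-4 sample, using the t critical value from scipy.

```python
import numpy as np
from scipy import stats

X = np.array([80, 100, 120, 140, 160, 180, 200, 220, 240, 260], float)
Y = np.array([70, 65, 90, 95, 110, 115, 120, 140, 155, 150], float)
n = len(Y)
x = X - X.mean()
b2 = (x * (Y - Y.mean())).sum() / (x * x).sum()
b1 = Y.mean() - b2 * X.mean()
u = Y - b1 - b2 * X
sigma2 = (u * u).sum() / (n - 2)
se_b2 = np.sqrt(sigma2 / (x * x).sum())

alpha = 0.05
t_crit = stats.t.ppf(1 - alpha / 2, df=n - 2)  # t_{alpha/2, n-2}
lo, hi = b2 - t_crit * se_b2, b2 + t_crit * se_b2
print(f"95% CI for beta2: ({lo:.4f}, {hi:.4f})")
```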
Chapter 5 TWO-VARIABLE REGRESSION: Interval Estimation and Hypothesis Testing
5-4. Confidence Interval for σ²
Pr[(n-2)σ^²/χ²α/2 ≤ σ² ≤ (n-2)σ^²/χ²1-α/2] = 1 - α (5.4.3)
• Interpretation: if we establish (1 - α) confidence limits on σ² and maintain a priori that these limits will include the true σ², we shall be right in the long run (1 - α)·100 percent of the time
Chapter 5 TWO-VARIABLE REGRESSION: Interval Estimation and Hypothesis Testing
5-5. Hypothesis Testing: General Comments
• The stated hypothesis is known as the null hypothesis, H0
• H0 is tested against an alternative hypothesis, H1
5-6. Hypothesis Testing: The confidence-interval approach
One-sided or one-tail test:
H0: ß2 ≤ ß2* versus H1: ß2 > ß2*
Chapter 5 TWO-VARIABLE REGRESSION: Interval Estimation and Hypothesis Testing
Two-sided or two-tail test:
H0: ß2 = ß2* versus H1: ß2 ≠ ß2*
Values of ß2 lying in the interval ß^2 - tα/2·se(ß^2) ≤ ß2 ≤ ß^2 + tα/2·se(ß^2) are plausible under H0 with 100(1 - α)% confidence:
• If ß2* lies in this interval, we do not reject H0 (the finding is statistically insignificant)
• If ß2* falls outside this interval, we reject H0 (the finding is statistically significant)
Chapter 5 TWO-VARIABLE REGRESSION: Interval Estimation and Hypothesis Testing
5-7. Hypothesis Testing: The test-of-significance approach
A test of significance is a procedure by which sample results are used to verify the truth or falsity of a null hypothesis
• Testing the significance of a regression coefficient: the t-test
Pr[ß^2 - tα/2·se(ß^2) ≤ ß2 ≤ ß^2 + tα/2·se(ß^2)] = 1 - α (5.7.2)
Chapter 5 TWO-VARIABLE REGRESSION: Interval Estimation and Hypothesis Testing
• 5-7. Hypothesis Testing: The test-of-significance approach
• Table 5-1: Decision rules for the t-test of significance

Type of hypothesis   H0           H1           Reject H0 if
Two-tail             ß2 = ß2*     ß2 ≠ ß2*     |t| > tα/2,df
Right-tail           ß2 ≤ ß2*     ß2 > ß2*     t > tα,df
Left-tail            ß2 ≥ ß2*     ß2 < ß2*     t < -tα,df
Chapter 5 TWO-VARIABLE REGRESSION: Interval Estimation and Hypothesis Testing
• 5-7. Hypothesis Testing: The test-of-significance approach
Testing the significance of σ²: the χ² test
Under the normality assumption we have:
χ² = (n - 2)σ^²/σ² ~ χ²(n-2) (5.4.1)
From (5.4.2) and (5.4.3) =>
Chapter 5 TWO-VARIABLE REGRESSION: Interval Estimation and Hypothesis Testing
• 5-7. Hypothesis Testing: The test-of-significance approach
• Table 5-2: A summary of the χ² test

H0          H1           Reject H0 if
σ² = σ²0    σ² > σ²0     df·σ^²/σ²0 > χ²α,df
σ² = σ²0    σ² < σ²0     df·σ^²/σ²0 < χ²(1-α),df
σ² = σ²0    σ² ≠ σ²0     df·σ^²/σ²0 > χ²α/2,df or < χ²(1-α/2),df
Chapter 5 TWO-VARIABLE REGRESSION:
Interval Estimation and Hypothesis Testing
5-8. Hypothesis Testing:
Some practical aspects
1) The meaning of “Accepting” or “Rejecting” a Hypothesis
2) The Null Hypothesis and the Rule of
Thumb
3) Forming the Null and Alternative
Hypotheses
4) Choosing , the Level of Significance
Chapter 5 TWO-VARIABLE REGRESSION:
Interval Estimation and Hypothesis Testing
5-8. Hypothesis Testing:
Some practical aspects
5) The Exact Level of Significance:
The p-Value [See page 132]
6) Statistical Significance versus
Practical Significance
7) The Choice between Confidence-
Interval and Test-of-Significance
Approaches to Hypothesis Testing
[Warning: Read carefully pages 117-134 ]
Chapter 5 TWO-VARIABLE REGRESSION: Interval Estimation and Hypothesis Testing
5-9. Regression Analysis and Analysis of Variance
• TSS = ESS + RSS
• F = [MSS of ESS]/[MSS of RSS] = ß^2²Σxi²/σ^² (5.9.1)
• If the ui are normally distributed and H0: ß2 = 0 holds, then F follows the F distribution with 1 and n-2 degrees of freedom
Chapter 5 TWO-VARIABLE REGRESSION: Interval Estimation and Hypothesis Testing
• 5-9. Regression Analysis and Analysis of Variance
• F provides a test statistic for the null hypothesis that the true ß2 is zero: compare this F ratio with the critical F obtained from the F tables at the chosen level of significance, or obtain the p-value of the computed F statistic to make the decision
Chapter 5 TWO-VARIABLE REGRESSION: Interval Estimation and Hypothesis Testing
• 5-9. Regression Analysis and Analysis of Variance
• Table 5-3: ANOVA for the two-variable regression model

Source of variation      Sum of squares (SS)   df     Mean sum of squares (MSS)
ESS (due to regression)  Σy^i² = ß^2²Σxi²      1      ß^2²Σxi²
RSS (due to residuals)   Σu^i²                 n-2    Σu^i²/(n-2) = σ^²
TSS                      Σyi²                  n-1
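Not in the original slides: a sketch of the ANOVA F test of (5.9.1) on the Table 2-4 sample; note that F equals t² for the slope coefficient in the two-variable model.

```python
import numpy as np
from scipy import stats

X = np.array([80, 100, 120, 140, 160, 180, 200, 220, 240, 260], float)
Y = np.array([70, 65, 90, 95, 110, 115, 120, 140, 155, 150], float)
n = len(Y)
x, y = X - X.mean(), Y - Y.mean()
b2 = (x * y).sum() / (x * x).sum()
ESS = b2**2 * (x * x).sum()       # df = 1
RSS = (y * y).sum() - ESS         # df = n - 2
F = (ESS / 1) / (RSS / (n - 2))   # (5.9.1)
p = stats.f.sf(F, 1, n - 2)       # p-value
print(F, p)                       # F ~ 202.87 for this sample
```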
Chapter 5 TWO-VARIABLE REGRESSION: Interval Estimation and Hypothesis Testing
5-10. Application of Regression Analysis: The Problem of Prediction
• From the data of Table 3-2 we obtained the sample regression (3.6.2):
Y^i = 24.4545 + 0.5091Xi,
where Y^i is the estimator of the true E(Yi)
• There are two kinds of prediction:
Chapter 5 TWO-VARIABLE REGRESSION:
Interval Estimation and Hypothesis Testing
5-10. Application of Regression
Analysis: Problem of Prediction
• Mean prediction: Prediction of the conditional mean value of Y
corresponding to a chosen X, say X0, that is the point on the
population regression line itself (see pages 137-138 for details)
• Individual prediction: Prediction of an individual Y value
corresponding to X0 (see pages 138-139 for details)
Chapter 5 TWO-VARIABLE REGRESSION: Interval Estimation and Hypothesis Testing
5-11. Reporting the Results of Regression Analysis
• An illustration:
Y^i = 24.4545 + 0.5091Xi (5.11.1)
se = (6.4138) (0.0357)   r² = 0.9621
t = (3.8128) (14.2405)   df = 8
p = (0.002517) (0.000000289)   F(1,8) = 202.87
Chapter 5 TWO-VARIABLE REGRESSION: Interval Estimation and Hypothesis Testing
5-12. Evaluating the Results of Regression Analysis
• Normality test: the chi-square (χ²) goodness-of-fit test
χ²(N-1-k) = Σ(Oi - Ei)²/Ei (5.12.1)
Oi = observed residuals (u^i) in interval i
Ei = expected residuals in interval i
N = number of classes or groups; k = number of parameters to be estimated. If the p-value of the computed χ²(N-1-k) is high (i.e., χ²(N-1-k) is small), the normality hypothesis cannot be rejected
Chapter 5 TWO-VARIABLE REGRESSION: Interval Estimation and Hypothesis Testing
5-12. Evaluating the Results of Regression Analysis
• Normality test: the chi-square (χ²) goodness-of-fit test
H0: ui is normally distributed
H1: ui is not normally distributed
Computed χ²(N-1-k) = Σ(Oi - Ei)²/Ei (5.12.1)
Decision rule: if the computed χ²(N-1-k) exceeds the critical χ²(N-1-k), then H0 can be rejected
Chapter 5 TWO-VARIABLE REGRESSION: Interval Estimation and Hypothesis Testing
5-12. Evaluating the Results of Regression Analysis
The Jarque-Bera (JB) test of normality
This test first computes the skewness (S) and kurtosis (K) and uses the statistic:
JB = n[S²/6 + (K - 3)²/24] (5.12.2)
mean = x̄ = Σxi/n; SD² = Σ(xi - x̄)²/(n - 1)
S = m3/m2^(3/2); K = m4/m2²; mk = Σ(xi - x̄)^k/n
Chapter 5 TWO-VARIABLE REGRESSION: Interval Estimation and Hypothesis Testing
5-12. (Continued)
Under the null hypothesis H0 that the residuals are normally distributed, Jarque and Bera show that in large samples (asymptotically) the JB statistic given in (5.12.2) follows the chi-square distribution with 2 df. If the p-value of the computed chi-square statistic in an application is sufficiently low, one can reject the hypothesis that the residuals are normally distributed; if the p-value is reasonably high, one does not reject it. A sketch follows below
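Not in the original slides: a sketch of (5.12.2) on simulated (hypothetical) data, showing the statistic stays small for normal draws and blows up for a skewed distribution.

```python
import numpy as np

def jarque_bera(u):
    """JB = n[S^2/6 + (K-3)^2/24] computed on residuals u, as in (5.12.2)."""
    n = len(u)
    d = u - u.mean()
    m2 = (d**2).mean()
    m3 = (d**3).mean()
    m4 = (d**4).mean()
    S = m3 / m2**1.5          # skewness
    K = m4 / m2**2            # kurtosis
    return n * (S**2 / 6 + (K - 3)**2 / 24)  # ~ chi-square(2) in large samples

rng = np.random.default_rng(0)
print(jarque_bera(rng.normal(size=500)))       # small: normality not rejected
print(jarque_bera(rng.exponential(size=500)))  # large: normality rejected
```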
Chapter 5 TWO-VARIABLE REGRESSION:
Interval Estimation and Hypothesis Testing
5-13. Summary and Conclusions
1. Estimation and Hypothesis testing
constitute the two main branches of classical
statistics
2. Hypothesis testing answers this question:
Is a given finding compatible with a stated
hypothesis or not?
3. There are two mutually complementary
approaches to answering the preceding
question: Confidence interval and test of
significance.
Chapter 5 TWO-VARIABLE REGRESSION:
Interval Estimation and Hypothesis Testing
5-13. Summary and Conclusions
4. Confidence-interval approach has a specified probability of
including within its limits the true value of the unknown
parameter. If the null-hypothesized value lies in the
confidence interval, H0 is not rejected, whereas if it lies
outside this interval, H0 can be rejected
Chapter 5 TWO-VARIABLE REGRESSION: Interval Estimation and Hypothesis Testing
5-13. Summary and Conclusions
5. The significance-test procedure develops a test statistic which follows a well-defined probability distribution (such as the normal, t, F, or chi-square). Once a test statistic is computed, its p-value can be easily obtained.
The p-value of a test is the lowest significance level at which we would reject H0. It gives the exact probability of obtaining the estimated test statistic under H0. If the p-value is small, one can reject H0; if it is large, one does not.
Chapter 5 TWO-VARIABLE REGRESSION: Interval Estimation and Hypothesis Testing
5-13. Summary and Conclusions
6. A Type I error is the error of rejecting a true hypothesis; a Type II error is the error of accepting a false hypothesis. In practice, one should be careful in fixing the level of significance α, the probability of committing a Type I error (at arbitrary values such as 1%, 5%, 10%). It is better to quote the p-value of the test statistic.
Chapter 5 TWO-VARIABLE REGRESSION: Interval Estimation and Hypothesis Testing
5-13. Summary and Conclusions
7. This chapter introduced the normality test to find out whether ui follows the normal distribution. Since in small samples the t, F, and chi-square tests require the normality assumption, it is important that this assumption be checked formally
Chapter 5 TWO-VARIABLE REGRESSION: Interval Estimation and Hypothesis Testing
5-13. Summary and Conclusions (ended)
8. If the model is deemed practically adequate, it may be used for forecasting purposes. But one should not go too far outside the sample range of the regressor values; otherwise, forecasting errors can increase dramatically.
Basic Econometrics
Chapter 6
EXTENSIONS OF THE
TWO-VARIABLE LINEAR
REGRESSION MODEL
Chapter 6 EXTENSIONS OF THE TWO-VARIABLE LINEAR REGRESSION MODEL
6-1. Regression through the origin
• The SRF form of the regression:
Yi = ß^2Xi + u^i (6.1.5)
• Comparing the two types of regression:
* regression through the origin, and
* regression with an intercept
Chapter 6 EXTENSIONS OF THE TWO-VARIABLE LINEAR REGRESSION MODEL
6-1. Regression through the origin
Comparing the two types of regression:
ß^2 = ΣXiYi/ΣXi² (6.1.6) [origin]
ß^2 = Σxiyi/Σxi² (3.1.6) [intercept]
var(ß^2) = σ²/ΣXi² (6.1.7) [origin]
var(ß^2) = σ²/Σxi² (3.3.1) [intercept]
σ^² = Σ(u^i)²/(n - 1) (6.1.8) [origin]
σ^² = Σ(u^i)²/(n - 2) (3.3.5) [intercept]
Chapter 6 EXTENSIONS OF THE TWO-VARIABLE LINEAR REGRESSION MODEL
6-1. Regression through the origin
• r² for the regression-through-origin model:
raw r² = (ΣXiYi)²/(ΣXi²·ΣYi²) (6.1.9)
• Note: without a very strong a priori expectation, it is advisable to stick to the conventional, intercept-present model. If the intercept is statistically equal to zero, for practical purposes we have a regression through the origin. If in fact there is an intercept in the model but we insist on fitting a regression through the origin, we would be committing a specification error
Chapter 6
EXTENSIONS OF THE TWO-VARIABLE LINEAR
REGRESSION MODELS
6-1. Regression through the origin
 Illustrative Examples:
1) Capital Asset Pricing Model - CAPM (page 156)
2) Market Model (page 157)
3) The Characteristic Line of Portfolio Theory
(page 159)
Chapter 6 EXTENSIONS OF THE TWO-VARIABLE LINEAR REGRESSION MODEL
6-2. Scaling and units of measurement
• Let Yi = ß^1 + ß^2Xi + u^i (6.2.1)
• Define Y*i = w1Yi and X*i = w2Xi; then:
• ß*^2 = (w1/w2)·ß^2 (6.2.15)
• ß*^1 = w1·ß^1 (6.2.16)
• σ*^² = w1²·σ^² (6.2.17)
• var(ß*^1) = w1²·var(ß^1) (6.2.18)
• var(ß*^2) = (w1/w2)²·var(ß^2) (6.2.19)
• r²xy = r²x*y* (6.2.20)
Chapter 6 EXTENSIONS OF THE TWO-VARIABLE LINEAR REGRESSION MODEL
6-2. Scaling and units of measurement
• From one scale of measurement, one can derive the results based on another scale of measurement. If w1 = w2, the intercept and its standard error are both multiplied by w1. If w2 = 1 and the scale of Y is changed by w1, then all coefficients and standard errors are multiplied by w1. If w1 = 1 and the scale of X is changed by w2, then only the slope coefficient and its standard error are multiplied by 1/w2. Transformation from the (Y, X) to the (Y*, X*) scale does not affect the properties of the OLS estimators
• A numerical example: (pages 161, 163-165)
6-3. Functional forms of regression models
• The log-linear model
• Semi-log models
• Reciprocal models
6-4. How to measure elasticity: The log-linear model
• Exponential regression model:
Yi = ß1·Xi^ß2·e^ui (6.4.1)
Taking logs to base e of both sides:
• lnYi = lnß1 + ß2lnXi + ui; setting lnß1 = α =>
• lnYi = α + ß2lnXi + ui (6.4.3)
(log-log, double-log, or log-linear model)
This can be estimated by OLS by letting
• Y*i = α + ß2X*i + ui, where Y*i = lnYi and X*i = lnXi
ß2 measures the ELASTICITY of Y with respect to X, that is, the percentage change in Y for a given (small) percentage change in X.
6-4. How to measure elasticity: The log-linear model
The elasticity E of a variable Y with respect to a variable X is defined as:
E = (% change in Y)/(% change in X)
  ≈ [(ΔY/Y)·100]/[(ΔX/X)·100]
  = (ΔY/ΔX)·(X/Y) = slope·(X/Y)
• An illustrative example: the coffee demand function (pages 167-168); a simulated sketch follows below
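Not in the original slides: a sketch estimating the elasticity from a log-log regression on hypothetical simulated demand data.

```python
import numpy as np

# Hypothetical demand data: Y = 5 * X^(-0.8) * e^u, so the true elasticity is -0.8
rng = np.random.default_rng(1)
X = rng.uniform(1, 10, 200)
Y = 5 * X**-0.8 * np.exp(rng.normal(0, 0.1, 200))

lx, ly = np.log(X), np.log(Y)                 # log-log transformation (6.4.3)
x, y = lx - lx.mean(), ly - ly.mean()
b2 = (x * y).sum() / (x * x).sum()            # slope = elasticity estimate
print(b2)                                     # close to -0.8
```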
6-5. Semi-log models: Log-lin and lin-log models
• How to measure the growth rate: the log-lin model
• Yt = Y0(1 + r)^t (6.5.1)
• lnYt = lnY0 + t·ln(1 + r) (6.5.2)
• lnYt = ß1 + ß2t, called the constant-growth model (6.5.5), where ß1 = lnY0 and ß2 = ln(1 + r)
• lnYt = ß1 + ß2t + ui (6.5.6)
• This is a semi-log, or log-lin, model. The slope coefficient measures the constant proportional or relative change in Y for a given absolute change in the value of the regressor (t):
• ß2 = (relative change in regressand)/(absolute change in regressor) (6.5.7)
6-5. Semi-log models: Log-lin and lin-log models
• Instantaneous vs. compound rate of growth
• ß2 is the instantaneous rate of growth
• antilog(ß2) - 1 is the compound rate of growth
The linear trend model:
• Yt = ß1 + ß2t + ut (6.5.9)
• If ß2 > 0, there is an upward trend in Y
• If ß2 < 0, there is a downward trend in Y
• Note: (i) one cannot compare the r² values of models (6.5.5) and (6.5.9) because the regressands in the two models are different; (ii) such models may be appropriate only if a time series is stationary. A sketch of the growth-rate calculation follows below.
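Not in the original slides: a sketch recovering the instantaneous and compound growth rates from a log-lin regression on a hypothetical series.

```python
import numpy as np

# Hypothetical series growing at a compound rate r = 5% per period
r, Y0 = 0.05, 100.0
t = np.arange(20, dtype=float)
Y = Y0 * (1 + r)**t * np.exp(np.random.default_rng(2).normal(0, 0.01, 20))

lnY = np.log(Y)                                   # log-lin model (6.5.6)
tt, y = t - t.mean(), lnY - lnY.mean()
b2 = (tt * y).sum() / (tt * tt).sum()
print(b2)               # instantaneous rate of growth ~ ln(1.05) = 0.0488
print(np.exp(b2) - 1)   # compound rate of growth ~ 0.05
```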
6-5. Semi-log models: Log-lin and lin-log models
• The lin-log model:
• Yi = ß1 + ß2lnXi + ui (6.5.11)
• ß2 = (change in Y)/(change in lnX) = (change in Y)/(relative change in X) ≈ ΔY/(ΔX/X) (6.5.12)
• or ΔY = ß2·(ΔX/X) (6.5.13)
• That is, the absolute change in Y equals ß2 times the relative change in X.
6-6. Reciprocal models
The reciprocal model:
• Yi = ß1 + ß2(1/Xi) + ui (6.5.14)
• As X increases indefinitely, the term ß2(1/Xi) approaches zero and Yi approaches the limiting or asymptotic value ß1 (see Figure 6.5 on page 174)
• An illustrative example: the Phillips curve for the United Kingdom, 1950-1966
6-7. Summary of Functional Forms
Table 6.5 (page 178)

Model                 Equation            Slope (dY/dX)   Elasticity (dY/dX)·(X/Y)
Linear                Y = ß1 + ß2X        ß2              ß2·(X/Y) */
Log-linear (log-log)  lnY = ß1 + ß2lnX    ß2·(Y/X)        ß2
Log-lin               lnY = ß1 + ß2X      ß2·Y            ß2·X */
Lin-log               Y = ß1 + ß2lnX      ß2·(1/X)        ß2·(1/Y) */
Reciprocal            Y = ß1 + ß2(1/X)    -ß2·(1/X²)      -ß2·(1/XY) */
6-7. Summary of Functional Forms
• Note: */ indicates that the elasticity coefficient is variable, depending on the value taken by X or Y or both. When no X and Y values are specified, in practice these elasticities are very often measured at the mean values E(X) and E(Y).
-----------------------------------------------
6-8. A note on the stochastic error term
6-9. Summary and conclusions (pages 179-180)
Basic Econometrics
Chapter 7
MULTIPLE REGRESSION ANALYSIS:
The Problem of Estimation
7-1. The Three-Variable Model: Notation and Assumptions
• Yi = ß1 + ß2X2i + ß3X3i + ui (7.1.1)
• ß2, ß3 are partial regression coefficients
• With the following assumptions:
+ Zero mean value of ui: E(ui|X2i, X3i) = 0 for each i (7.1.2)
+ No serial correlation: Cov(ui, uj) = 0, i ≠ j (7.1.3)
+ Homoscedasticity: Var(ui) = σ² (7.1.4)
+ Cov(ui, X2i) = Cov(ui, X3i) = 0 (7.1.5)
+ No specification bias, i.e., the model is correctly specified (7.1.6)
+ No exact collinearity between the X variables (7.1.7) (no multicollinearity; with more explanatory variables, if an exact linear relationship exists, the X variables are said to be linearly dependent)
+ The model is linear in the parameters
7-2. Interpretation of the Multiple Regression
• E(Yi|X2i, X3i) = ß1 + ß2X2i + ß3X3i (7.2.1)
• (7.2.1) gives the conditional mean or expected value of Y, conditional upon the given or fixed values of X2 and X3
7-3. The Meaning of Partial Regression Coefficients
• Yi = ß1 + ß2X2i + ß3X3i + ... + ßkXki + ui
• ßk measures the change in the mean value of Y per unit change in Xk, holding the other explanatory variables constant. It gives the "direct" effect of a unit change in Xk on E(Yi), net of the Xj (j ≠ k)
• How to control for the "true" effect of a unit change in Xk on Y? (read pages 195-197)
7-4. OLS and ML Estimation of the Partial Regression Coefficients
• This section (pages 197-201) provides:
1. The OLS estimators in the case of the three-variable regression Yi = ß1 + ß2X2i + ß3X3i + ui (a matrix sketch follows below)
2. Variances and standard errors of the OLS estimators
3. Eight properties of the OLS estimators (pp. 199-201)
4. Understanding the ML estimators
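Not in the original slides: a sketch of the three-variable OLS estimators in matrix form, b = (X'X)^(-1)X'Y, on hypothetical simulated data.

```python
import numpy as np

# Hypothetical three-variable model: Y = 2 + 1.5*X2 - 0.8*X3 + u
rng = np.random.default_rng(3)
n = 100
X2, X3 = rng.normal(size=n), rng.normal(size=n)
Y = 2 + 1.5 * X2 - 0.8 * X3 + rng.normal(0, 1, n)

X = np.column_stack([np.ones(n), X2, X3])   # design matrix with intercept
b = np.linalg.solve(X.T @ X, X.T @ Y)       # OLS: (X'X)^{-1} X'Y
u = Y - X @ b
sigma2 = (u @ u) / (n - X.shape[1])         # sigma^2-hat, df = n - k
se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))
print(b, se)                                # estimates and standard errors
```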
7-5. The Multiple Coefficient of Determination R² and the Multiple Coefficient of Correlation R
• This section provides:
1. The definition of R² in the context of multiple regression, analogous to r² in the two-variable case
2. R = √R² is the coefficient of multiple correlation; it measures the degree of association between Y and all the explanatory variables jointly
3. The variance of a partial regression coefficient:
Var(ß^k) = (σ²/Σxk²)·(1/(1 - R²k)) (7.5.6)
where ß^k is the partial regression coefficient of regressor Xk and R²k is the R² in the regression of Xk on the remaining regressors
7-6. Example 7.1: The Expectations-Augmented Phillips Curve for the US (1970-1982)
• This section provides an illustration of the ideas introduced in the chapter
• Regression model (7.6.1)
• The data set is in Table 7.1
7-7. Simple Regression in the Context of Multiple Regression: Introduction to Specification Bias
• Running a simple regression when a multiple regression is appropriate causes specification bias, which is discussed in Chapter 13
7-8. R² and the Adjusted R²
• R² is a non-decreasing function of the number of explanatory variables: an additional X variable will not decrease R²
R² = ESS/TSS = 1 - RSS/TSS = 1 - Σu^i²/Σyi² (7.8.1)
• This can point in the wrong direction by rewarding the addition of irrelevant variables, and motivates an adjusted R² (R̄²) that takes account of the degrees of freedom:
R̄² = 1 - [Σu^i²/(n - k)]/[Σyi²/(n - 1)], or (7.8.2)
R̄² = 1 - σ^²/S²Y (S²Y is the sample variance of Y)
k = number of parameters including the intercept term
• Substituting (7.8.1) into (7.8.2) we get
R̄² = 1 - (1 - R²)(n - 1)/(n - k) (7.8.4)
• For k > 1, R̄² < R²; thus, as the number of X variables increases, R̄² increases less than R², and R̄² can be negative; a sketch follows below
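Not in the original slides: a sketch of (7.8.1) and (7.8.4) on hypothetical data, showing that adding an irrelevant regressor never lowers R² but can lower R̄².

```python
import numpy as np

def r2_and_adjusted(Y, X):
    """R^2 (7.8.1) and adjusted R^2 (7.8.4); X includes the intercept column."""
    n, k = X.shape
    b = np.linalg.lstsq(X, Y, rcond=None)[0]
    u = Y - X @ b
    RSS = u @ u
    TSS = ((Y - Y.mean())**2).sum()
    R2 = 1 - RSS / TSS
    R2_bar = 1 - (1 - R2) * (n - 1) / (n - k)
    return R2, R2_bar

rng = np.random.default_rng(4)
n = 30
X2 = rng.normal(size=n)
Y = 1 + 2 * X2 + rng.normal(size=n)
X = np.column_stack([np.ones(n), X2])
print(r2_and_adjusted(Y, X))
# Adding an irrelevant regressor: R^2 rises slightly, adjusted R^2 can fall
print(r2_and_adjusted(Y, np.column_stack([X, rng.normal(size=n)])))
```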
7-8. R² and the Adjusted R²
• Comparing two R² values: to compare, the sample size n and the dependent variable must be the same
• Example 7-2: Coffee demand function revisited (page 210)
• The "game" of maximizing the adjusted R²: choosing the model that gives the highest R̄² may be dangerous, for in regression our objective is not that, but obtaining dependable estimates of the true population regression coefficients and drawing statistical inferences about them
• One should be more concerned about the logical or theoretical relevance of the explanatory variables to the dependent variable and their statistical significance
7-9. Partial Correlation Coefficients
• This section provides:
1. Explanation of simple and partial
correlation coefficients
2. Interpretation of simple and partial
correlation coefficients
(pages 211-214)
7-10. Example 7.3: The Cobb-Douglas Production Function
More on functional form
• Yi = ß1·X2i^ß2·X3i^ß3·e^ui (7.10.1)
By log-transforming this model:
• lnYi = lnß1 + ß2lnX2i + ß3lnX3i + ui = ß0 + ß2lnX2i + ß3lnX3i + ui (7.10.2)
The data set is in Table 7.3
The report of results is on page 216
7-11. Polynomial Regression Models
• Yi = ß0 + ß1Xi + ß2Xi² + ... + ßkXi^k + ui (7.11.3)
• Example 7.4: Estimating the total cost function
• The data set is in Table 7.4
• Empirical results are on page 221; a sketch follows below
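Not in the original slides: a sketch of (7.11.3) on hypothetical total-cost data (loosely in the spirit of Example 7.4). Since the model is linear in the parameters, ordinary OLS applies to the powers of X.

```python
import numpy as np

# Hypothetical cubic total cost data: TC = 300 + 50Q - 10Q^2 + 1.2Q^3 + u
Q = np.arange(1, 11, dtype=float)    # output
TC = (300 + 50*Q - 10*Q**2 + 1.2*Q**3
      + np.random.default_rng(5).normal(0, 5, 10))

X = np.column_stack([np.ones_like(Q), Q, Q**2, Q**3])  # powers of Q
b = np.linalg.lstsq(X, TC, rcond=None)[0]
print(b)   # estimates of beta0..beta3
```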
--------------------------------------------------------------
• 7-12. Summary and Conclusions
(page 221)
Basic Econometrics
Chapter 8
MULTIPLE REGRESSION ANALYSIS:
The Problem of Inference
Chapter 8 MULTIPLE REGRESSION ANALYSIS: The Problem of Inference
8-3. Hypothesis testing in multiple regression:
• Testing hypotheses about an individual partial regression coefficient
• Testing the overall significance of the estimated multiple regression model, that is, finding out whether all the partial slope coefficients are simultaneously equal to zero
• Testing that two or more coefficients are equal to one another
• Testing that the partial regression coefficients satisfy certain restrictions
• Testing the stability of the estimated regression model over time or across different cross-sectional units
• Testing the functional form of regression models
Chapter 8 MULTIPLE REGRESSION ANALYSIS: The Problem of Inference
8-4. Hypothesis testing about individual partial regression coefficients
• With the assumption that ui ~ N(0, σ²), we can use the t-test to test a hypothesis about any individual partial regression coefficient:
H0: ß2 = 0
H1: ß2 ≠ 0
• If the computed |t| value exceeds the critical t value at the chosen level of significance, we may reject the null hypothesis; otherwise, we may not reject it
Chapter 8 MULTIPLE REGRESSION ANALYSIS: The Problem of Inference
8-5. Testing the overall significance of a multiple regression: The F-Test
For Yi = ß1 + ß2X2i + ß3X3i + ... + ßkXki + ui:
• To test the hypothesis H0: ß2 = ß3 = ... = ßk = 0 (all slope coefficients are simultaneously zero) versus H1: not all slope coefficients are simultaneously zero, compute
F = (ESS/df)/(RSS/df) = [ESS/(k - 1)]/[RSS/(n - k)] (8.5.7)
(k = total number of parameters to be estimated, including the intercept)
• If F > Fcritical = Fα(k - 1, n - k), reject H0
• Otherwise do not reject it
Chapter 8 MULTIPLE REGRESSION ANALYSIS: The Problem of Inference
8-5. Testing the overall significance of a multiple regression
• Alternatively, if the p-value of the F obtained from (8.5.7) is sufficiently low, one can reject H0
• An important relationship between R² and F:
F = [ESS/(k - 1)]/[RSS/(n - k)], or
F = [R²/(k - 1)]/[(1 - R²)/(n - k)] (8.5.1)
(see the proof on page 249)
Chapter 8 MULTIPLE REGRESSION ANALYSIS: The Problem of Inference
8-5. Testing the overall significance of a multiple regression in terms of R²
For Yi = ß1 + ß2X2i + ß3X3i + ... + ßkXki + ui:
• To test the hypothesis H0: ß2 = ß3 = ... = ßk = 0 (all slope coefficients are simultaneously zero) versus H1: not all slope coefficients are simultaneously zero, compute
• F = [R²/(k - 1)]/[(1 - R²)/(n - k)] (8.5.13)
(k = total number of parameters to be estimated, including the intercept)
• If F > Fcritical = Fα(k - 1, n - k), reject H0; a sketch follows below
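Not in the original slides: a sketch of (8.5.13), with illustrative (hypothetical) R², n and k values.

```python
from scipy import stats

def overall_F(R2, n, k):
    """F = [R^2/(k-1)] / [(1-R^2)/(n-k)]  (8.5.13); returns (F, p-value)."""
    F = (R2 / (k - 1)) / ((1 - R2) / (n - k))
    return F, stats.f.sf(F, k - 1, n - k)

# Illustrative values only, e.g. R^2 ~ 0.96 with n = 10 and k = 2
print(overall_F(R2=0.96, n=10, k=2))
```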
Chapter 8 MULTIPLE REGRESSION ANALYSIS: The Problem of Inference
8-5. Testing the overall significance of a multiple regression
• Alternatively, if the p-value of the F obtained from (8.5.13) is sufficiently low, one can reject H0
• The "incremental" or "marginal" contribution of an explanatory variable:
Let X be the new (additional) regressor on the right-hand side of a regression. Under the usual assumption of the normality of ui and H0 that the coefficient of X is zero, it can be shown that the following F ratio follows the F distribution with the respective degrees of freedom
Chapter 8 MULTIPLE REGRESSION ANALYSIS: The Problem of Inference
8-5. Testing the overall significance of a multiple regression
Fcom = [(R²new - R²old)/Df1]/[(1 - R²new)/Df2] (8.5.18)
where
Df1 = number of new regressors
Df2 = n - number of parameters in the new model
R²new = coefficient of determination of the new regression (after adding X)
R²old = coefficient of determination of the old regression (before adding X)
Chapter 8 MULTIPLE REGRESSION ANALYSIS: The Problem of Inference
8-5. Testing the overall significance of a multiple regression
Decision rule:
If Fcom > Fα(Df1, Df2), one can reject H0 that the coefficient of X is zero and conclude that adding X to the model significantly increases the ESS, and hence the R² value
• When to add a new variable? When the |t| of its coefficient exceeds 1 (or F = t² of that variable exceeds 1)
• When to add a group of variables? When adding the group gives an F value greater than 1; a sketch follows below
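Not in the original slides: a sketch of the incremental-contribution F ratio (8.5.18), again with illustrative (hypothetical) values.

```python
from scipy import stats

def incremental_F(R2_new, R2_old, n, k_new, m):
    """Fcom = [(R2_new - R2_old)/m] / [(1 - R2_new)/(n - k_new)]  (8.5.18).
    m = number of new regressors (Df1); n - k_new = Df2."""
    F = ((R2_new - R2_old) / m) / ((1 - R2_new) / (n - k_new))
    return F, stats.f.sf(F, m, n - k_new)

# Illustrative: adding one regressor raises R^2 from 0.90 to 0.92 (n = 50)
print(incremental_F(R2_new=0.92, R2_old=0.90, n=50, k_new=3, m=1))
```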
Chapter 8 MULTIPLE REGRESSION ANALYSIS: The Problem of Inference
8-6. Testing the equality of two regression coefficients
Yi = ß1 + ß2X2i + ß3X3i + ß4X4i + ui (8.6.1)
Test the hypotheses:
H0: ß3 = ß4, or ß3 - ß4 = 0 (8.6.2)
H1: ß3 ≠ ß4, or ß3 - ß4 ≠ 0
Under the classical assumptions it can be shown that
t = [(ß^3 - ß^4) - (ß3 - ß4)]/se(ß^3 - ß^4)
follows the t distribution with (n - 4) df, because (8.6.1) is a four-variable model, or more generally with (n - k) df, where k is the total number of parameters estimated, including the intercept term.
se(ß^3 - ß^4) = √[var(ß^3) + var(ß^4) - 2cov(ß^3, ß^4)] (8.6.4)
(see the appendix)
Chapter 8 MULTIPLE REGRESSION ANALYSIS: The Problem of Inference
t = (ß^3 - ß^4)/√[var(ß^3) + var(ß^4) - 2cov(ß^3, ß^4)] (8.6.5)
Steps for testing:
1. Estimate ß^3 and ß^4
2. Compute se(ß^3 - ß^4) through (8.6.4)
3. Obtain the t ratio from (8.6.5) under H0: ß3 = ß4
4. If the computed |t| exceeds the critical t at the designated level of significance for the given df, reject H0; otherwise do not reject it. Alternatively, if the p-value of the t statistic from (8.6.5) is reasonably low, one can reject H0
• Example 8.2: the cubic cost function revisited
Chapter 8 MULTIPLE REGRESSION ANALYSIS: The Problem of Inference
8-7. Restricted least squares: Testing linear equality restrictions
Yi = ß1·X2i^ß2·X3i^ß3·e^ui (7.10.1) = (8.7.1)
Y = output
X2 = labor input
X3 = capital input
In log form:
lnYi = ß0 + ß2lnX2i + ß3lnX3i + ui (8.7.2)
with constant returns to scale: ß2 + ß3 = 1 (8.7.3)
Chapter 8 MULTIPLE REGRESSION ANALYSIS: The Problem of Inference
8-7. Restricted least squares: Testing linear equality restrictions
How to test (8.7.3):
• The t-test approach (unrestricted): the hypothesis H0: ß2 + ß3 = 1 can be tested by the t-test:
t = [(ß^2 + ß^3) - (ß2 + ß3)]/se(ß^2 + ß^3) (8.7.4)
• The F-test approach (restricted least squares, RLS): using, say, ß2 = 1 - ß3 and substituting it into (8.7.2), we get:
ln(Yi/X2i) = ß0 + ß3·ln(X3i/X2i) + ui (8.7.8)
where Yi/X2i is the output/labor ratio and X3i/X2i is the capital/labor ratio
Chapter 8 MULTIPLE REGRESSION ANALYSIS: The Problem of Inference
8-7. Restricted least squares: Testing linear equality restrictions
Let Σu^²UR = RSSUR of the unrestricted regression (8.7.2),
Σu^²R = RSSR of the restricted regression (8.7.7),
m = number of linear restrictions,
k = number of parameters in the unrestricted regression,
n = number of observations.
R²UR and R²R are the R² values obtained from the unrestricted and restricted regressions respectively. Then
F = [(RSSR - RSSUR)/m]/[RSSUR/(n - k)]
  = [(R²UR - R²R)/m]/[(1 - R²UR)/(n - k)] (8.7.10)
follows the F distribution with m, (n - k) df.
Decision rule: if F > Fα(m, n - k), reject H0: ß2 + ß3 = 1
Chapter 8 MULTIPLE REGRESSION ANALYSIS: The Problem of Inference
8-7. Restricted least squares: Testing linear equality restrictions
• Note: R²UR ≥ R²R (8.7.11)
• and Σu^²UR ≤ Σu^²R (8.7.12)
• Example 8.3: the Cobb-Douglas production function for the Taiwanese agricultural sector, 1958-1972 (pages 259-260). Data in Table 7.3 (page 216)
• General F testing (page 260)
• Example 8.4: the demand for chicken in the US, 1960-1982. Data in exercise 7.23 (page 228)
Chapter 8 MULTIPLE REGRESSION ANALYSIS: The Problem of Inference
8-8. Comparing two regressions: Testing for structural stability of regression models
Table 8.8: personal savings and income data, UK, 1946-1963 (millions of pounds)
Savings functions:
• Reconstruction period:
Yt = λ1 + λ2Xt + u1t (t = 1, 2, ..., n1) (8.8.1)
• Post-reconstruction period:
Yt = γ1 + γ2Xt + u2t (t = 1, 2, ..., n2) (8.8.2)
where Y is personal savings, X is personal income, the u's are the disturbance terms in the two equations, and n1, n2 are the numbers of observations in the two periods
Chapter 8
MULTIPLE REGRESSION ANALYSIS:
The Problem of Inference
8-8. Comparing two regressions: Testing for structural stability
of regression models
+ Structural change may mean that the two intercepts differ, the two slopes differ, both differ, or any other suitable combination of the parameters differs. If there is no structural change, we can pool all n1 + n2 observations and estimate a single savings function:
Yt = λ1 + λ2Xt + ut, t = 1, 2, ..., n1 + n2   (8.8.3)
How do we find out whether there is a structural change in the savings-income relationship between the two periods? A popular test is the Chow test, which is simply the F test discussed earlier:
H0: αi = γi for all i, versus H1: αi ≠ γi for at least one i
+ The assumptions underlying the Chow test:
• u1t and u2t ~ N(0, σ²): the two error terms are normally distributed with the same variance
• u1t and u2t are independently distributed
Step 1: Estimate (8.8.3) and obtain its RSS, say S1, with df = (n1 + n2 − k), where k is the number of parameters estimated
Step 2: Estimate (8.8.1) and (8.8.2) individually and obtain their RSS, say S2 and S3, with df = (n1 − k) and (n2 − k) respectively. Let S4 = S2 + S3, with df = (n1 + n2 − 2k)
Step 3: Compute S5 = S1 − S4
Step 4: Given the assumptions of the Chow test, it can be shown that
F = [S5 / k] / [S4 / (n1 + n2 − 2k)]   (8.8.4)
follows the F distribution with df = (k, n1 + n2 − 2k)
Decision rule: If the F computed from (8.8.4) exceeds the critical F at the chosen level of significance α, reject the hypothesis that regressions (8.8.1) and (8.8.2) are the same, that is, reject the hypothesis of structural stability. Alternatively, one can use the p-value of the F obtained from (8.8.4) and reject H0 if the p-value is reasonably low.
+ Applied to the data in Table 8.8
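The four steps translate directly into code. The sketch below uses simulated two-period data rather than the Table 8.8 series:

```python
import numpy as np
import statsmodels.api as sm
from scipy import stats

# Simulated savings (y) and income (x) for two sub-periods
rng = np.random.default_rng(2)
n1, n2 = 9, 9
x1, x2 = rng.normal(10, 2, n1), rng.normal(12, 2, n2)
y1 = 1.0 + 0.05 * x1 + rng.normal(0, 0.2, n1)      # period 1, cf. (8.8.1)
y2 = -1.0 + 0.15 * x2 + rng.normal(0, 0.2, n2)     # period 2, cf. (8.8.2)

def rss(y, x):
    """Residual sum of squares from OLS of y on a constant and x."""
    return sm.OLS(y, sm.add_constant(x)).fit().ssr

k = 2                                              # parameters per regression
S1 = rss(np.r_[y1, y2], np.r_[x1, x2])             # Step 1: pooled RSS, cf. (8.8.3)
S4 = rss(y1, x1) + rss(y2, x2)                     # Step 2: S4 = S2 + S3
S5 = S1 - S4                                       # Step 3
F = (S5 / k) / (S4 / (n1 + n2 - 2 * k))            # Step 4: eq. (8.8.4)
p = stats.f.sf(F, k, n1 + n2 - 2 * k)
print(f"F = {F:.3f}, p = {p:.4f}")                 # small p => reject structural stability
```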
8-9. Testing the functional form of regression:
Choosing between linear and log-linear regression models: the MWD test (MacKinnon, White and Davidson)
H0: Linear model: Y is a linear function of the regressors, the X's
H1: Log-linear model: ln Y is a linear function of the logs of the regressors, the ln X's
Step 1: Estimate the linear model and obtain the estimated Y values; call them Yf (i.e., Ŷ). Take ln Yf
Step 2: Estimate the log-linear model and obtain the estimated ln Y values; call them ln f (i.e., the fitted values of ln Y)
Step 3: Obtain Z1 = (ln Yf − ln f)
Step 4: Regress Y on the X's and Z1. Reject H0 if the coefficient of Z1 is statistically significant by the usual t test
Step 5: Obtain Z2 = (antilog of ln f) − Yf
Step 6: Regress ln Y on the ln X's and Z2. Reject H1 if the coefficient of Z2 is statistically significant by the usual t test
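A minimal sketch of the six steps on simulated single-regressor data (the column names are illustrative):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated data with a single regressor x; y kept strictly positive
rng = np.random.default_rng(3)
n = 40
df = pd.DataFrame({"x": rng.uniform(1, 10, n)})
df["y"] = 5 + 2 * df["x"] + rng.normal(0, 0.5, n)
df["lnx"], df["lny"] = np.log(df["x"]), np.log(df["y"])

lin = smf.ols("y ~ x", data=df).fit()              # Step 1: linear model
loglin = smf.ols("lny ~ lnx", data=df).fit()       # Step 2: log-linear model

yf = lin.fittedvalues                              # Yf
lnf = loglin.fittedvalues                          # ln f
df["z1"] = np.log(yf) - lnf                        # Step 3: Z1 = ln Yf - ln f
step4 = smf.ols("y ~ x + z1", data=df).fit()       # Step 4
df["z2"] = np.exp(lnf) - yf                        # Step 5: Z2 = antilog(ln f) - Yf
step6 = smf.ols("lny ~ lnx + z2", data=df).fit()   # Step 6
# a significant z1 rejects H0 (linear); a significant z2 rejects H1 (log-linear)
print(step4.pvalues["z1"], step6.pvalues["z2"])
```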
Example 8.5: The demand for roses (pages 266-267). Data in exercise 7.20 (page 225)
8-10. Prediction with multiple regression
Follow section 5-10 and the illustration on pages 267-268, using the data set in Table 8.1 (page 241)
8-11. The troika of hypothesis tests: the likelihood ratio (LR), Wald (W) and Lagrange multiplier (LM) tests
8-12. Summary and Conclusions
  • 3. Introduction What is Econometrics?  Definition 1: Economic Measurement  Definition 2: Application of the mathematical statistics to economic data in order to lend empirical support to the economic mathematical models and obtain numerical results (Gerhard Tintner, 1968) 3
  • 4. Introduction What is Econometrics?  Definition 3: The quantitative analysis of actual economic phenomena based on concurrent development of theory and observation, related by appropriate methods of inference (P.A.Samuelson, T.C.Koopmans and J.R.N.Stone, 1954) 4
  • 5. Introduction What is Econometrics?  Definition 4: The social science which applies economics, mathematics and statistical inference to the analysis of economic phenomena (By Arthur S. Goldberger, 1964)  Definition 5: The empirical determination of economic laws (By H. Theil, 1971) 5
  • 6. Introduction What is Econometrics?  Definition 6: A conjunction of economic theory and actual measurements, using the theory and technique of statistical inference as a bridge pier (By T.Haavelmo, 1944)  And the others 6
  • 8. Introduction Why a separate discipline?  Economic theory makes statements that are mostly qualitative in nature, while econometrics gives empirical content to most economic theory  Mathematical economics is to express economic theory in mathematical form without empirical verification of the theory, while econometrics is mainly interested in the later 8
  • 9. Introduction Why a separate discipline?  Economic Statistics is mainly concerned with collecting, processing and presenting economic data. It does not being concerned with using the collected data to test economic theories  Mathematical statistics provides many of tools for economic studies, but econometrics supplies the later with many special methods of quantitative analysis based on economic data 9
  • 11. Introduction Methodology of Econometrics (1) Statement of theory or hypothesis: Keynes stated: ”Consumption increases as income increases, but not as much as the increase in income”. It means that “The marginal propensity to consume (MPC) for a unit change in income is grater than zero but less than unit” 11
  • 12. Introduction Methodology of Econometrics (2) Specification of the mathematical model of the theory Y = ß1+ ß2X ; 0 < ß2< 1 Y= consumption expenditure X= income ß1 and ß2 are parameters; ß1 is intercept, and ß2 is slope coefficients 12
  • 13. Introduction Methodology of Econometrics (3) Specification of the econometric model of the theory Y = ß1+ ß2X + u ; 0 < ß2< 1; Y = consumption expenditure; X = income; ß1 and ß2 are parameters; ß1is intercept and ß2 is slope coefficients; u is disturbance term or error term. It is a random or stochastic variable 13
  • 14. Introduction Methodology of Econometrics (4) Obtaining Data (See Table 1.1, page 6) Y= Personal consumption expenditure X= Gross Domestic Product all in Billion US Dollars 14
  • 15. Introduction Methodology of Econometrics (4) Obtaining Data May 2004 Prof.VuThieu 15 Year X Y 1980 1981 1982 1983 1984 1985 1986 1987 1988 1989 1990 1991 2447.1 2476.9 2503.7 2619.4 2746.1 2865.8 2969.1 3052.2 3162.4 3223.3 3260.4 3240.8 3776.3 3843.1 3760.3 3906.6 4148.5 4279.8 4404.5 4539.9 4718.6 4838.0 4877.5 4821.0
  • 16. Introduction Methodology of Econometrics (5) Estimating the Econometric Model Y^ = - 231.8 + 0.7194 X (1.3.3) MPC was about 0.72 and it means that for the sample period when real income increases 1 USD, led (on average) real consumption expenditure increases of about 72 cents Note: A hat symbol (^) above one variable will signify an estimator of the relevant population value May 2004 Prof.VuThieu 16
  • 17. Introduction Methodology of Econometrics (6) Hypothesis Testing Are the estimates accord with the expectations of the theory that is being tested? Is MPC < 1 statistically? If so, it may support Keynes’ theory. Confirmation or refutation of economic theories based on sample evidence is object of Statistical Inference (hypothesis testing) May 2004 Prof.VuThieu 17
  • 18. Introduction Methodology of Econometrics (7) Forecasting or Prediction  With given future value(s) of X, what is the future value(s) of Y?  GDP=$6000Bill in 1994, what is the forecast consumption expenditure?  Y^= - 231.8+0.7196(6000) = 4084.6  Income Multiplier M = 1/(1 – MPC) (=3.57). decrease (increase) of $1 in investment will eventually lead to $3.57 decrease (increase) in income May 2004 Prof.VuThieu 18
  • 19. Introduction Methodology of Econometrics (8) Using model for control or policy purposes Y=4000= -231.8+0.7194 X  X  5882 MPC = 0.72, an income of $5882 Bill will produce an expenditure of $4000 Bill. By fiscal and monetary policy, Government can manipulate the control variable X to get the desired level of target variable Y May 2004 Prof.VuThieu 19
  • 20. Introduction Methodology of Econometrics Figure 1.4: Anatomy of economic modelling • 1) Economic Theory • 2) Mathematical Model of Theory • 3) Econometric Model of Theory • 4) Data • 5) Estimation of Econometric Model • 6) Hypothesis Testing • 7) Forecasting or Prediction • 8) Using the Model for control or policy purposes May 2004 Prof.VuThieu 20
  • 21. May 2004 Prof.VuThieu 21 Economic Theory Mathematic Model Econometric Model Data Collection Estimation Hypothesis Testing Forecasting Application in control or policy studies
  • 22. Basic Econometrics Chapter 1: THE NATURE OF REGRESSION ANALYSIS May 2004 Prof.VuThieu 22
  • 23. 1-1. Historical origin of the term “Regression” • The term REGRESSION was introduced by Francis Galton • Tendency for tall parents to have tall children and for short parents to have short children, but the average height of children born from parents of a given height tended to move (or regress) toward the average height in the population as a whole (F. Galton, “Family Likeness in Stature”) May 2004 Prof.VuThieu 23
  • 24. 1-1. Historical origin of the term “Regression” • Galton’s Law was confirmed by Karl Pearson: The average height of sons of a group of tall fathers < their fathers’ height. And the average height of sons of a group of short fathers > their fathers’ height. Thus “regressing” tall and short sons alike toward the average height of all men. (K. Pearson and A. Lee, “On the law of Inheritance”) • By the words of Galton, this was “Regression to mediocrity” May 2004 Prof.VuThieu 24
  • 25. 1-2. Modern Interpretation of Regression Analysis • The modern way in interpretation of Regression: Regression Analysis is concerned with the study of the dependence of one variable (The Dependent Variable), on one or more other variable(s) (The Explanatory Variable), with a view to estimating and/or predicting the (population) mean or average value of the former in term of the known or fixed (in repeated sampling) values of the latter. • Examples: (pages 16-19) May 2004 Prof.VuThieu 25
  • 26. Dependent Variable Y; Explanatory Variable Xs 1. Y = Son’s Height; X = Father’s Height 2. Y = Height of boys; X = Age of boys 3. Y = Personal Consumption Expenditure X = Personal Disposable Income 4. Y = Demand; X = Price 5. Y = Rate of Change of Wages X = Unemployment Rate 6. Y = Money/Income; X = Inflation Rate 7. Y = % Change in Demand; X = % Change in the advertising budget 8. Y = Crop yield; Xs = temperature, rainfall, sunshine, fertilizer May 2004 Prof.VuThieu 26
  • 27. 1-3. Statistical vs. Deterministic Relationships • In regression analysis we are concerned with STATISTICAL DEPENDENCE among variables (not Functional or Deterministic), we essentially deal with RANDOM or STOCHASTIC variables (with the probability distributions) May 2004 Prof.VuThieu 27
  • 28. 1-4. Regression vs. Causation: Regression does not necessarily imply causation. A statistical relationship cannot logically imply causation. “A statistical relationship, however strong and however suggestive, can never establish causal connection: our ideas of causation must come from outside statistics, ultimately from some theory or other” (M.G. Kendal and A. Stuart, “The Advanced Theory of Statistics”) May 2004 Prof.VuThieu 28
  • 29. 1-5. Regression vs. Correlation •Correlation Analysis: the primary objective is to measure the strength or degree of linear association between two variables (both are assumed to be random) •Regression Analysis: we try to estimate or predict the average value of one variable (dependent, and assumed to be stochastic) on the basis of the fixed values of other variables (independent, and non-stochastic) May 2004 Prof.VuThieu 29
  • 30. 1-6. Terminology and Notation Dependent Variable  Explained Variable  Predictand  Regressand  Response  Endogenous Explanatory Variable(s)  Independent Variable(s)  Predictor(s)  Regressor(s)  Stimulus or control variable(s)  Exogenous(es) May 2004 Prof.VuThieu 30
  • 31. 1-7. The Nature and Sources of Data for Econometric Analysis 1) Types of Data : • Time series data; • Cross-sectional data; • Pooled data 2) The Sources of Data 3) The Accuracy of Data May 2004 Prof.VuThieu 31
  • 32. 1-8. Summary and Conclusions 1) The key idea behind regression analysis is the statistic dependence of one variable on one or more other variable(s) 2) The objective of regression analysis is to estimate and/or predict the mean or average value of the dependent variable on basis of known (or fixed) values of explanatory variable(s) May 2004 Prof.VuThieu 32
  • 33. 1-8. Summary and Conclusions 3) The success of regression depends on the available and appropriate data 4) The researcher should clearly state the sources of the data used in the analysis, their definitions, their methods of collection, any gaps or omissions and any revisions in the data May 2004 Prof.VuThieu 33
  • 34. Basic Econometrics Chapter 2: TWO-VARIABLE REGRESSION ANALYSIS: Some basic Ideas May 2004 Prof.VuThieu 34
  • 35. 2-1. A Hypothetical Example • Total population: 60 families • Y=Weekly family consumption expenditure • X=Weekly disposable family income • 60 families were divided into 10 groups of approximately the same income level (80, 100, 120, 140, 160, 180, 200, 220, 240, 260) May 2004 Prof.VuThieu 35
  • 36. 2-1. A Hypothetical Example • Table 2-1 gives the conditional distribution of Y on the given values of X • Table 2-2 gives the conditional probabilities of Y: p(YX) • Conditional Mean (or Expectation): E(YX=Xi ) May 2004 Prof.VuThieu 36
  • 37. May 2004 Prof.VuThieu 37 X Y 80 100 120 140 160 180 200 220 240 260 Weekly family consumption expenditure Y ($) 55 65 79 80 102 110 120 135 137 150 60 70 84 93 107 115 136 137 145 152 65 74 90 95 110 120 140 140 155 175 70 80 94 103 116 130 144 152 165 178 75 85 98 108 118 135 145 157 175 180 -- 88 -- 113 125 140 -- 160 189 185 -- -- -- 115 -- -- -- 162 -- 191 Total 325 462 445 707 678 750 685 1043 966 1211 Mean 65 77 89 101 113 125 137 149 161 173 Table 2-2: Weekly family income X ($), and consumption Y ($)
  • 38. 2-1. A Hypothetical Example • Figure 2-1 shows the population regression line (curve). It is the regression of Y on X • Population regression curve is the locus of the conditional means or expectations of the dependent variable for the fixed values of the explanatory variable X (Fig.2-2) May 2004 Prof.VuThieu 38
  • 39. 2-2. The concepts of population regression function (PRF) • E(YX=Xi ) = f(Xi) is Population Regression Function (PRF) or Population Regression (PR) • In the case of linear function we have linear population regression function (or equation or model) E(YX=Xi ) = f(Xi) = ß1 + ß2Xi May 2004 Prof.VuThieu 39
  • 40. 2-2. The concepts of population regression function (PRF) E(YX=Xi ) = f(Xi) = ß1 + ß2Xi • ß1 and ß2 are regression coefficients, ß1is intercept and ß2 is slope coefficient • Linearity in the Variables • Linearity in the Parameters May 2004 Prof.VuThieu 40
  • 41. 2-4. Stochastic Specification of PRF •Ui = Y - E(YX=Xi ) or Yi = E(YX=Xi ) + Ui •Ui = Stochastic disturbance or stochastic error term. It is nonsystematic component •Component E(YX=Xi ) is systematic or deterministic. It is the mean consumption expenditure of all the families with the same level of income •The assumption that the regression line passes through the conditional means of Y implies that E(UiXi ) = 0 May 2004 Prof.VuThieu 41
  • 42. 2-5. The Significance of the Stochastic Disturbance Term •Ui = Stochastic Disturbance Term is a surrogate for all variables that are omitted from the model but they collectively affect Y •Many reasons why not include such variables into the model as follows: May 2004 Prof.VuThieu 42
  • 43. 2-5. The Significance of the Stochastic Disturbance Term Why not include as many as variable into the model (or the reasons for using ui) + Vagueness of theory + Unavailability of Data + Core Variables vs. Peripheral Variables + Intrinsic randomness in human behavior + Poor proxy variables + Principle of parsimony + Wrong functional form May 2004 Prof.VuThieu 43
  • 44. 2-6. The Sample Regression Function (SRF) Table 2-4: A random sample from the population Y X ------------------ 70 80 65 100 90 120 95 140 110 160 115 180 120 200 140 220 155 240 150 260 ------------------ Table 2-5: Another random sample from the population Y X ------------------- 55 80 88 100 90 120 80 140 118 160 120 180 145 200 135 220 145 240 175 260 -------------------- May 2004 Prof.VuThieu 44
  • 45. May 2004 Prof.VuThieu 45 SRF1 SRF2 Weekly Consumption Expenditure (Y) Weekly Income (X)
  • 46. 2-6. The Sample Regression Function (SRF) •Fig.2-3: SRF1 and SRF 2 •Y^i = ^1 + ^2Xi (2.6.1) •Y^i = estimator of E(YXi) •^1 = estimator of 1 •^2 = estimator of 2 •Estimate = A particular numerical value obtained by the estimator in an application •SRF in stochastic form: Yi= ^1 + ^2Xi + u^i or Yi= Y^i + u^i (2.6.3) May 2004 Prof.VuThieu 46
  • 47. 2-6. The Sample Regression Function (SRF) • Primary objective in regression analysis is to estimate the PRF Yi= 1 + 2Xi + ui on the basis of the SRF Yi= ^1 + ^2Xi + ei and how to construct SRF so that ^1 close to 1 and ^2 close to 2 as much as possible May 2004 Prof.VuThieu 47
  • 48. 2-6. The Sample Regression Function (SRF) • Population Regression Function PRF • Linearity in the parameters • Stochastic PRF • Stochastic Disturbance Term ui plays a critical role in estimating the PRF • Sample of observations from population • Stochastic Sample Regression Function SRF used to estimate the PRF May 2004 Prof.VuThieu 48
  • 49. 2-7. Summary and Conclusions • The key concept underlying regression analysis is the concept of the population regression function (PRF). • This book deals with linear PRFs: linear in the unknown parameters. They may or may not linear in the variables. May 2004 Prof.VuThieu 49
  • 50. 2-7. Summary and Conclusions • For empirical purposes, it is the stochastic PRF that matters. The stochastic disturbance term ui plays a critical role in estimating the PRF. • The PRF is an idealized concept, since in practice one rarely has access to the entire population of interest. Generally, one has a sample of observations from population and use the stochastic sample regression (SRF) to estimate the PRF. May 2004 Prof.VuThieu 50
  • 51. Basic Econometrics Chapter 3: TWO-VARIABLE REGRESSION MODEL: The problem of Estimation May 2004 Prof.VuThieu 51
  • 52. 3-1. The method of ordinary least square (OLS)  Least-square criterion:  Minimizing U^2 i = (Yi – Y^i) 2 = (Yi- ^1 - ^2X)2 (3.1.2)  Normal Equation and solving it for ^1 and ^2 = Least-square estimators [See (3.1.6)(3.1.7)]  Numerical and statistical properties of OLS are as follows: May 2004 Prof.VuThieu 52
  • 53. 3-1. The method of ordinary least square (OLS)  OLS estimators are expressed solely in terms of observable quantities. They are point estimators  The sample regression line passes through sample means of X and Y  The mean value of the estimated Y^ is equal to the mean value of the actual Y: E(Y) = E(Y^)  The mean value of the residuals U^i is zero: E(u^i )=0  u^i are uncorrelated with the predicted Y^i and with Xi : That are u^iY^i = 0; u^iXi = 0 May 2004 Prof.VuThieu 53
  • 54. 3-2. The assumptions underlying the method of least squares  Ass 1: Linear regression model (in parameters)  Ass 2: X values are fixed in repeated sampling  Ass 3: Zero mean value of ui : E(uiXi)=0  Ass 4: Homoscedasticity or equal variance of ui : Var (uiXi) = 2 [VS. Heteroscedasticity]  Ass 5: No autocorrelation between the disturbances: Cov(ui,ujXi,Xj ) = 0 with i # j [VS. Correlation, + or - ] May 2004 Prof.VuThieu 54
  • 55. 3-2. The assumptions underlying the method of least squares  Ass 6: Zero covariance between ui and Xi Cov(ui, Xi) = E(ui, Xi) = 0  Ass 7: The number of observations n must be greater than the number of parameters to be estimated  Ass 8: Variability in X values. They must not all be the same  Ass 9: The regression model is correctly specified  Ass 10: There is no perfect multicollinearity between Xs May 2004 Prof.VuThieu 55
  • 56. 3-3. Precision or standard errors of least-squares estimates  In statistics the precision of an estimate is measured by its standard error (SE)  var( ^2) = 2 / x2 i (3.3.1)  se(^2) =  Var(^2) (3.3.2)  var( ^1) = 2 X2 i / n x2 i (3.3.3)  se(^1) =  Var(^1) (3.3.4)  ^ 2 = u^2 i / (n - 2) (3.3.5)  ^ =  ^ 2 is standard error of the estimate May 2004 Prof.VuThieu 56
  • 57. 3-3. Precision or standard errors of least-squares estimates  Features of the variance: + var( ^2) is proportional to 2 and inversely proportional to x2 i + var( ^1) is proportional to 2 and X2 i but inversely proportional to x2 i and the sample size n. + cov ( ^1 , ^2) = - var( ^2) shows the independence between ^1 and ^2 May 2004 Prof.VuThieu 57 X
  • 58. 3-4. Properties of least-squares estimators: The Gauss-Markov Theorem  An OLS estimator is said to be BLUE if : + It is linear, that is, a linear function of a random variable, such as the dependent variable Y in the regression model + It is unbiased , that is, its average or expected value, E(^2), is equal to the true value 2 + It has minimum variance in the class of all such linear unbiased estimators An unbiased estimator with the least variance is known as an efficient estimator May 2004 Prof.VuThieu 58
  • 59. 3-4. Properties of least-squares estimators: The Gauss-Markov Theorem  Gauss- Markov Theorem: Given the assumptions of the classical linear regression model, the least-squares estimators, in class of unbiased linear estimators, have minimum variance, that is, they are BLUE May 2004 Prof.VuThieu 59
  • 60. 3-5. The coefficient of determination r2: A measure of “Goodness of fit”  Yi = i + i or  Yi - = i - i + i or  yi = i + i (Note: = ) Squaring on both side and summing =>   yi 2 = 2 x2 i +  2 i ; or  TSS = ESS + RSS May 2004 Prof.VuThieu 60 Y Y Ŷ Ŷ Ŷ Ŷ Û Û Û ŷ Û 2 β̂ 2 β̂
  • 61. 3-5. The coefficient of determination r2: A measure of “Goodness of fit”  TSS =  yi 2 = Total Sum of Squares  ESS =  Y^ i 2 = ^2 2 x2 i = Explained Sum of Squares  RSS =  u^2 I = Residual Sum of Squares ESS RSS 1 = -------- + -------- ; or TSS TSS RSS RSS 1 = r2 + ------- ; or r2 = 1 - ------- TSS TSS May 2004 Prof.VuThieu 61
  • 62. 3-5. The coefficient of determination r2: A measure of “Goodness of fit”  r2 = ESS/TSS is coefficient of determination, it measures the proportion or percentage of the total variation in Y explained by the regression Model  0  r2  1;  r =  r2 is sample correlation coefficient  Some properties of r May 2004 Prof.VuThieu 62
  • 63. 3-5. The coefficient of determination r2: A measure of “Goodness of fit” 3-6. A numerical Example (pages 80-83) 3-7. Illustrative Examples (pages 83-85) 3-8. Coffee demand Function 3-9. Monte Carlo Experiments (page 85) 3-10. Summary and conclusions (pages 86-87) May 2004 Prof.VuThieu 63
  • 64. Basic Econometrics Chapter 4: THE NORMALITY ASSUMPTION: Classical Normal Linear Regression Model (CNLRM) May 2004 Prof.VuThieu 64
  • 65. 4-2.The normality assumption •CNLR assumes that each u i is distributed normally u i  N(0, 2) with: Mean = E(u i) = 0 Ass 3 Variance = E(u2 i) = 2 Ass 4 Cov(u i , u j ) = E(u i , u j) = 0 (i#j) Ass 5 •Note: For two normally distributed variables, the zero covariance or correlation means independence of them, so u i and u j are not only uncorrelated but also independently distributed. Therefore u i  NID(0, 2) is Normal and Independently Distributed May 2004 Prof.VuThieu 65
  • 66. 4-2.The normality assumption • Why the normality assumption? (1) With a few exceptions, the distribution of sum of a large number of independent and identically distributed random variables tends to a normal distribution as the number of such variables increases indefinitely (2) If the number of variables is not very large or they are not strictly independent, their sum may still be normally distributed May 2004 Prof.VuThieu 66
  • 67. 4-2.The normality assumption • Why the normality assumption? (3) Under the normality assumption for ui , the OLS estimators ^1 and ^2 are also normally distributed (4) The normal distribution is a comparatively simple distribution involving only two parameters (mean and variance) May 2004 Prof.VuThieu 67
  • 68. 4-3. Properties of OLS estimators under the normality assumption • With the normality assumption the OLS estimators ^1 , ^2 and ^2 have the following properties: 1. They are unbiased 2. They have minimum variance. Combined 1 and 2, they are efficient estimators 3. Consistency, that is, as the sample size increases indefinitely, the estimators converge to their true population values May 2004 Prof.VuThieu 68
  • 69. 4-3. Properties of OLS estimators under the normality assumption 4. ^1 is normally distributed  N(1, ^1 2) And Z = (^1- 1)/ ^1 is  N(0,1) 5. ^2 is normally distributed N(2 ,^2 2) And Z = (^2- 2)/ ^2 is  N(0,1) 6. (n-2) ^2/ 2 is distributed as the 2 (n-2) May 2004 Prof.VuThieu 69
  • 70. 4-3. Properties of OLS estimators under the normality assumption 7. ^1 and ^2 are distributed independently of ^2. They have minimum variance in the entire class of unbiased estimators, whether linear or not. They are best unbiased estimators (BUE) 8. Let ui is  N(0, 2 ) then Yi is  N[E(Yi); Var(Yi)] = N[1+ 2X i ; 2] May 2004 Prof.VuThieu 70
  • 71. Some last points of chapter 4 4-4. The method of Maximum likelihood (ML)  ML is point estimation method with some stronger theoretical properties than OLS (Appendix 4.A on pages 110-114) The estimators of coefficients ’s by OLS and ML are  identical. They are true estimators of the ’s  (ML estimator of 2) = u^i 2/n (is biased estimator)  (OLS estimator of 2) = u^i 2/n-2 (is unbiased estimator)  When sample size (n) gets larger the two estimators tend to be equal May 2004 Prof.VuThieu 71
  • 72. Some last points of chapter 4 4-5. Probability distributions related to the Normal Distribution: The t, 2, and F distributions See section (4.5) on pages 107-108 with 8 theorems and Appendix A, on pages 755-776 4-6. Summary and Conclusions See 10 conclusions on pages 109-110 May 2004 Prof.VuThieu 72
  • 73. Basic Econometrics Chapter 5: TWO-VARIABLE REGRESSION: Interval Estimation and Hypothesis Testing May 2004 Prof.VuThieu 73
  • 74. Chapter 5 TWO-VARIABLE REGRESSION: Interval Estimation and Hypothesis Testing 5-1. Statistical Prerequisites • See Appendix A with key concepts such as probability, probability distributions, Type I Error, Type II Error,level of significance, power of a statistic test, and confidence interval May 2004 Prof.VuThieu 74
  • 75. Chapter 5 TWO-VARIABLE REGRESSION: Interval Estimation and Hypothesis Testing 5-2. Interval estimation: Some basic Ideas •How “close” is, say, ^2 to 2 ? Pr (^2 -   2  ^2 + ) = 1 -  (5.2.1) •Random interval ^2 -   2  ^2 +  if exits, it known as confidence interval •^2 -  is lower confidence limit •^2 +  is upper confidence limit May 2004 Prof.VuThieu 75
  • 76. Chapter 5 TWO-VARIABLE REGRESSION: Interval Estimation and Hypothesis Testing 5-2. Interval estimation: Some basic Ideas •(1 - ) is confidence coefficient, •0 <  < 1 is significance level •Equation (5.2.1) does not mean that the Pr of 2 lying between the given limits is (1 - ), but the Pr of constructing an interval that contains 2 is (1 - ) •(^2 -  , ^2 + ) is random interval May 2004 Prof.VuThieu 76
  • 77. Chapter 5 TWO-VARIABLE REGRESSION: Interval Estimation and Hypothesis Testing 5-2. Interval estimation: Some basic Ideas •In repeated sampling, the intervals will enclose, in (1 - )*100 of the cases, the true value of the parameters •For a specific sample, can not say that the probability is (1 - ) that a given fixed interval includes the true 2 •If the sampling or probability distributions of the estimators are known, one can make confidence interval statement like (5.2.1) May 2004 Prof.VuThieu 77
  • 78. Chapter 5 TWO-VARIABLE REGRESSION: Interval Estimation and Hypothesis Testing 5-3. Confidence Intervals for Regression Coefficients •Z= (^2 - 2)/se(^2) = (^2 - 2) x2 i / ~N(0,1) (5.3.1) We did not know  and have to use ^ instead, so: •t= (^2 - 2)/se(^2) = (^2 - 2) x2 i /^ ~ t(n-2) (5.3.2) • => Interval for 2 Pr [ -t /2  t  t /2] = 1-  (5.3.3) May 2004 Prof.VuThieu 78
  • 79. Chapter 5 TWO-VARIABLE REGRESSION: Interval Estimation and Hypothesis Testing 5-3. Confidence Intervals for Regression Coefficients • Or confidence interval for 2 is Pr [^2-t /2se(^2)  2  ^2+t /2se(^2)] = 1-  (5.3.5) • Confidence Interval for 1 Pr [^1-t /2se(^1)  1  ^1+t /2se(^1)] = 1-  (5.3.7) May 2004 Prof.VuThieu 79
  • 80. Chapter 5 TWO-VARIABLE REGRESSION: Interval Estimation and Hypothesis Testing 5-4. Confidence Intervals for 2 Pr [(n-2)^2/ 2 /2  2 (n-2)^2/ 2 1- /2] = 1-  (5.4.3) • The interpretation of this interval is: If we establish (1- ) confidence limits on 2 and if we maintain a priori that these limits will include true 2, we shall be right in the long run (1- ) percent of the time May 2004 Prof.VuThieu 80
  • 81. Chapter 5 TWO-VARIABLE REGRESSION: Interval Estimation and Hypothesis Testing 5-5. Hypothesis Testing: General Comments  The stated hypothesis is known as the null hypothesis: Ho The Ho is tested against and alternative hypothesis: H1 5-6. Hypothesis Testing: The confidence interval approach One-sided or one-tail Test H0: 2  * versus H1: 2 > * May 2004 Prof.VuThieu 81
  • 82. Chapter 5 TWO-VARIABLE REGRESSION: Interval Estimation and Hypothesis Testing Two-sided or two-tail Test H0: 2 = * versus H1: 2 # * ^2 - t /2se(^2)  2  ^2 + t /2se(^2) values of 2 lying in this interval are plausible under Ho with 100*(1- )% confidence. •If 2 lies in this region we do not reject Ho (the finding is statistically insignificant) •If 2 falls outside this interval, we reject Ho (the finding is statistically significant) May 2004 Prof.VuThieu 82
  • 83. Chapter 5 TWO-VARIABLE REGRESSION: Interval Estimation and Hypothesis Testing 5-7. Hypothesis Testing: The test of significance approach A test of significance is a procedure by which sample results are used to verify the truth or falsity of a null hypothesis • Testing the significance of regression coefficient: The t-test Pr [^2-t /2se(^2)  2  ^2+t /2se(^2)]= 1-  (5.7.2) May 2004 Prof.VuThieu 83
  • 84. Chapter 5 TWO-VARIABLE REGRESSION: Interval Estimation and Hypothesis Testing • 5-7. Hypothesis Testing: The test of significance approach •Table 5-1: Decision Rule for t-test of significance May 2004 Prof.VuThieu 84 Type of Hypothesis H0 H1 Reject H0 if Two-tail 2 = 2* 2 # 2* |t| > t/2,df Right-tail 2  2* 2 > 2* t > t,df Left-tail 2 2* 2 < 2* t < - t,df
  • 85. Chapter 5 TWO-VARIABLE REGRESSION: Interval Estimation and Hypothesis Testing • 5-7. Hypothesis Testing: The test of significance approach Testing the significance of 2 : The 2 Test Under the Normality assumption we have: ^2 2 = (n-2) ------- ~ 2 (n-2) (5.4.1) 2 From (5.4.2) and (5.4.3) on page 520 => May 2004 Prof.VuThieu 85
  • 86. Chapter 5 TWO-VARIABLE REGRESSION: Interval Estimation and Hypothesis Testing • 5-7. Hypothesis Testing: The test of significance approach • Table 5-2: A summary of the 2 Test May 2004 Prof.VuThieu 86 H0 H1 Reject H0 if 2 = 2 0 2 > 2 0 Df.(^2)/ 2 0 > 2 ,df 2 = 2 0 2 < 2 0 Df.(^2)/ 2 0 < 2 (1-),df 2 = 2 0 2 # 2 0 Df.(^2)/ 2 0 > 2 /2,df or < 2 (1-/2), df
  • 87. Chapter 5 TWO-VARIABLE REGRESSION: Interval Estimation and Hypothesis Testing 5-8. Hypothesis Testing: Some practical aspects 1) The meaning of “Accepting” or “Rejecting” a Hypothesis 2) The Null Hypothesis and the Rule of Thumb 3) Forming the Null and Alternative Hypotheses 4) Choosing , the Level of Significance May 2004 Prof.VuThieu 87
  • 88. Chapter 5 TWO-VARIABLE REGRESSION: Interval Estimation and Hypothesis Testing 5-8. Hypothesis Testing: Some practical aspects 5) The Exact Level of Significance: The p-Value [See page 132] 6) Statistical Significance versus Practical Significance 7) The Choice between Confidence- Interval and Test-of-Significance Approaches to Hypothesis Testing [Warning: Read carefully pages 117-134 ] May 2004 Prof.VuThieu 88
  • 89. Chapter 5 TWO-VARIABLE REGRESSION: Interval Estimation and Hypothesis Testing 5-9. Regression Analysis and Analysis of Variance • TSS = ESS + RSS • F=[MSS of ESS]/[MSS of RSS] = = 2^2 xi 2/ ^2 (5.9.1) • If ui are normally distributed; H0: 2 = 0 then F follows the F distribution with 1 and n-2 degree of freedom May 2004 Prof.VuThieu 89
  • 90. Chapter 5 TWO-VARIABLE REGRESSION: Interval Estimation and Hypothesis Testing •5-9. Regression Analysis and Analysis of Variance • F provides a test statistic to test the null hypothesis that true 2 is zero by compare this F ratio with the F-critical obtained from F tables at the chosen level of significance, or obtain the p-value of the computed F statistic to make decision May 2004 Prof.VuThieu 90
  • 91. Chapter 5 TWO-VARIABLE REGRESSION: Interval Estimation and Hypothesis Testing • 5-9. Regression Analysis and Analysis of Variance • Table 5-3. ANOVA for two-variable regression model May 2004 Prof.VuThieu 91 Source of Variation Sum of square ( SS) Degree of Freedom - (Df) Mean sum of square ( MSS) ESS (due to regression) y^i 2 = 2^2 xi 2 1 2^2 xi 2 RSS (due to residuals) u^i 2 n-2 u^i 2 /(n-2)=^2 TSS y i 2 n-1
  • 92. Chapter 5 TWO-VARIABLE REGRESSION: Interval Estimation and Hypothesis Testing 5-10. Application of Regression Analysis: Problem of Prediction • By the data of Table 3-2, we obtained the sample regression (3.6.2) : Y^i = 24.4545 + 0.5091Xi , where Y^i is the estimator of true E(Yi) • There are two kinds of prediction as follows: May 2004 Prof.VuThieu 92
  • 93. Chapter 5 TWO-VARIABLE REGRESSION: Interval Estimation and Hypothesis Testing 5-10. Application of Regression Analysis: Problem of Prediction • Mean prediction: Prediction of the conditional mean value of Y corresponding to a chosen X, say X0, that is the point on the population regression line itself (see pages 137-138 for details) • Individual prediction: Prediction of an individual Y value corresponding to X0 (see pages 138-139 for details) May 2004 Prof.VuThieu 93
  • 94. Chapter 5 TWO-VARIABLE REGRESSION: Interval Estimation and Hypothesis Testing 5-11. Reporting the results of regression analysis • An illustration: Y^I= 24.4545 + 0.5091Xi (5.1.1) Se = (6.4138) (0.0357) r2= 0.9621 t = (3.8128) (14.2405) df= 8 P = (0.002517) (0.000000289) F1,2=2202.87 May 2004 Prof.VuThieu 94
  • 95. Chapter 5 TWO-VARIABLE REGRESSION: Interval Estimation and Hypothesis Testing 5-12. Evaluating the results of regression analysis: • Normality Test: The Chi-Square (2) Goodness of fit Test 2 N-1-k =  (Oi – Ei)2/Ei (5.12.1) Oi is observed residuals (u^i) in interval i Ei is expected residuals in interval i N is number of classes or groups; k is number of parameters to be estimated. If p-value of obtaining 2 N-1-k is high (or 2 N-1-k is small) => The Normality Hypothesis can not be rejected May 2004 Prof.VuThieu 95
  • 96. Chapter 5 TWO-VARIABLE REGRESSION: Interval Estimation and Hypothesis Testing 5-12. Evaluating the results of regression analysis: • Normality Test: The Chi-Square (2) Goodness of fit Test H0: ui is normally distributed H1: ui is un-normally distributed Calculated-2 N-1-k =  (Oi – Ei)2/Ei (5.12.1) Decision rule: Calculated-2 N-1-k > Critical-2 N-1-k then H0 can be rejected May 2004 Prof.VuThieu 96
  • 97. Chapter 5 TWO-VARIABLE REGRESSION: Interval Estimation and Hypothesis Testing 5-12. Evaluating the results of regression analysis: The Jarque-Bera (JB) test of normality This test first computes the Skewness (S) and Kurtosis (K) and uses the following statistic: JB = n [S2/6 + (K-3)2/24] (5.12.2) Mean= xbar = xi/n ; SD2 = (xi-xbar)2/(n-1) S=m3/m2 3/2 ; K=m4/m2 2 ; mk= (xi-xbar)k/n May 2004 Prof.VuThieu 97
  • 98. Chapter 5 TWO-VARIABLE REGRESSION: Interval Estimation and Hypothesis Testing 5-12. (Continued) Under the null hypothesis H0 that the residuals are normally distributed Jarque and Bera show that in large sample (asymptotically) the JB statistic given in (5.12.12) follows the Chi-Square distribution with 2 df. If the p-value of the computed Chi-Square statistic in an application is sufficiently low, one can reject the hypothesis that the residuals are normally distributed. But if p-value is reasonable high, one does not reject the May 2004 Prof.VuThieu 98
  • 99. Chapter 5 TWO-VARIABLE REGRESSION: Interval Estimation and Hypothesis Testing 5-13. Summary and Conclusions 1. Estimation and Hypothesis testing constitute the two main branches of classical statistics 2. Hypothesis testing answers this question: Is a given finding compatible with a stated hypothesis or not? 3. There are two mutually complementary approaches to answering the preceding question: Confidence interval and test of significance. May 2004 Prof.VuThieu 99
  • 100. Chapter 5 TWO-VARIABLE REGRESSION: Interval Estimation and Hypothesis Testing 5-13. Summary and Conclusions 4. Confidence-interval approach has a specified probability of including within its limits the true value of the unknown parameter. If the null-hypothesized value lies in the confidence interval, H0 is not rejected, whereas if it lies outside this interval, H0 can be rejected May 2004 Prof.VuThieu 100
  • 101. Chapter 5 TWO-VARIABLE REGRESSION: Interval Estimation and Hypothesis Testing 5-13. Summary and Conclusions 5. Significance test procedure develops a test statistic which follows a well-defined probability distribution (like normal, t, F, or Chi-square). Once a test statistic is computed, its p-value can be easily obtained. The p-value The p-value of a test is the lowest significance level, at which we would reject H0. It gives exact probability of obtaining the estimated test statistic under H0. If p-value is small, one can reject H0, but May 2004 Prof.VuThieu 101
  • 102. Chapter 5 TWO-VARIABLE REGRESSION: Interval Estimation and Hypothesis Testing 5-13. Summary and Conclusions 6. Type I error is the error of rejecting a true hypothesis. Type II error is the error of accepting a false hypothesis. In practice, one should be careful in fixing the level of significance , the probability of committing a type I error (at arbitrary values such as 1%, 5%, 10%). It is better to quote the p-value of the test statistic. May 2004 Prof.VuThieu 102
  • 103. Chapter 5 TWO-VARIABLE REGRESSION: Interval Estimation and Hypothesis Testing 5-13. Summary and Conclusions 7. This chapter introduced the normality test to find out whether ui follows the normal distribution. Since in small samples, the t, F,and Chi-square tests require the normality assumption, it is important that this assumption be checked formally May 2004 Prof.VuThieu 103
  • 104. Chapter 5 TWO-VARIABLE REGRESSION: Interval Estimation and Hypothesis Testing 5-13. Summary and Conclusions (ended) 8. If the model is deemed practically adequate, it may be used for forecasting purposes. But should not go too far out of the sample range of the regressor values. Otherwise, forecasting errors can increase dramatically. May 2004 Prof.VuThieu 104
  • 105. Basic Econometrics Chapter 6 EXTENSIONS OF THE TWO-VARIABLE LINEAR REGRESSION MODEL May 2004 Prof.VuThieu 105
  • 106. Chapter 6 EXTENSIONS OF THE TWO-VARIABLE LINEAR REGRESSION MODELS 6-1. Regression through the origin  The SRF form of regression:  Yi = ^2X i + u^ i (6.1.5)  Comparison two types of regressions: * Regression through-origin model and * Regression with intercept May 2004 Prof.VuThieu 106
  • 107. Chapter 6 EXTENSIONS OF THE TWO-VARIABLE LINEAR REGRESSION MODELS 6-1. Regression through the origin Comparison two types of regressions: ^2 = XiYi/X2 i (6.1.6) O ^2 = xiyi/x2 i (3.1.6) I var(^2) = 2/ X2 i (6.1.7) O var(^2) = 2/ x2 i (3.3.1) I ^2 = (u^i)2/(n-1) (6.1.8) O ^2 = (u^i)2/(n-2) (3.3.5) I May 2004 Prof.VuThieu 107
  • 108. Chapter 6 EXTENSIONS OF THE TWO-VARIABLE LINEAR REGRESSION MODELS 6-1. Regression through the origin  r2 for regression through-origin model Raw r2 = (XiYi)2 /X2 i Y2 i (6.1.9)  Note: Without very strong a priory expectation, well advise is sticking to the conventional, intercept- present model. If intercept equals to zero statistically, for practical purposes we have a regression through the origin. If in fact there is an intercept in the model but we insist on fitting a regression through the origin, we would be committing a specification error May 2004 Prof.VuThieu 108
  • 109. Chapter 6 EXTENSIONS OF THE TWO-VARIABLE LINEAR REGRESSION MODELS 6-1. Regression through the origin  Illustrative Examples: 1) Capital Asset Pricing Model - CAPM (page 156) 2) Market Model (page 157) 3) The Characteristic Line of Portfolio Theory (page 159) May 2004 Prof.VuThieu 109
  • 110. Chapter 6 EXTENSIONS OF THE TWO-VARIABLE LINEAR REGRESSION MODELS 6-2. Scaling and units of measurement  Let Yi = ^1 + ^2Xi + u^ i (6.2.1)  Define Y*i=w 1 Y i and X*i=w 2 X i then:  *^2 = (w1/w2) ^2 (6.2.15)  *^1 = w1^1 (6.2.16)  *^2 = w1 2^2 (6.2.17)  Var(*^1) = w2 1 Var(^1) (6.2.18)  Var(*^2) = (w1/w2)2 Var(^2) (6.2.19)  r2 xy = r2 x*y* (6.2.20) May 2004 Prof.VuThieu 110
  • 111. Chapter 6 EXTENSIONS OF THE TWO-VARIABLE LINEAR REGRESSION MODELS 6-2. Scaling and units of measurement  From one scale of measurement, one can derive the results based on another scale of measurement. If w1= w2 the intercept and standard error are both multiplied by w1. If w2=1 and scale of Y changed by w1, then all coefficients and standard errors are all multiplied by w1. If w1=1 and scale of X changed by w2, then only slope coefficient and its standard error are multiplied by 1/w2. Transformation from (Y,X) to (Y*,X*) scale does not affect the properties of OLS Estimators  A numerical example: (pages 161, 163-165) May 2004 Prof.VuThieu 111
  • 112. 6-3. Functional form of regression model  The log-linear model  Semi-log model  Reciprocal model May 2004 Prof.VuThieu 112
  • 113. 6-4. How to measure elasticity The log-linear model  Exponential regression model:  Yi= 1Xi 2 e u i (6.4.1) By taking log to the base e of both side:  lnYi = ln1 +2lnXi + ui , by setting ln1 =  =>  lnYi =  +2lnXi + ui (6.4.3) (log-log, or double-log, or log-linear model) This can be estimated by OLS by letting  Y*i =  +2X*i + ui , where Y*i=lnYi, X*i=lnXi ; 2 measures the ELASTICITY of Y respect to X, that is, percentage change in Y for a given (small) percentage change in X. May 2004 Prof.VuThieu 113
• 114. 6-4. How to measure elasticity: The log-linear model
- The elasticity E of a variable Y with respect to a variable X is defined as:
E = (% change in Y)/(% change in X) ~ [(ΔY/Y) x 100] / [(ΔX/X) x 100] = (ΔY/ΔX)(X/Y) = slope x (X/Y)
- An illustrative example: the coffee demand function (pages 167-168)
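A minimal sketch (an addition, using simulated data rather than the coffee demand data) of estimating a constant elasticity by OLS on the log-log model (6.4.3):

```python
# Minimal sketch: the slope of the log-log regression is the elasticity.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
X = rng.uniform(1, 20, 60)                               # e.g. a price
Y = 3.0 * X ** (-0.8) * np.exp(rng.normal(0, 0.1, 60))   # true elasticity -0.8

res = sm.OLS(np.log(Y), sm.add_constant(np.log(X))).fit()
alpha_hat, beta2_hat = res.params
print("estimated elasticity:", beta2_hat)   # ~ -0.8: %dY for a 1% dX
```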
• 115. 6-5. Semi-log models: Log-lin and lin-log models
- How to measure the growth rate: the log-lin model
- Yt = Y0(1 + r)^t (6.5.1)
- lnYt = lnY0 + t ln(1 + r) (6.5.2)
- lnYt = ß1 + ß2t, called the constant-growth model (6.5.5), where ß1 = lnY0 and ß2 = ln(1 + r)
- lnYt = ß1 + ß2t + ui (6.5.6)
- This is a semi-log, or log-lin, model. The slope coefficient measures the constant proportional or relative change in Y for a given absolute change in the value of the regressor (t):
- ß2 = (relative change in regressand)/(absolute change in regressor) (6.5.7)
• 116. 6-5. Semi-log models: Log-lin and lin-log models
- Instantaneous versus compound rate of growth:
- ß2 is the instantaneous rate of growth
- antilog(ß2) - 1 is the compound rate of growth
- The linear trend model: Yt = ß1 + ß2t + ut (6.5.9)
- If ß2 > 0, there is an upward trend in Y; if ß2 < 0, a downward trend
- Note: (i) One cannot compare the r² values of models (6.5.5) and (6.5.9) because the regressands in the two models are different; (ii) such models may be appropriate only if a time series is stationary.
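A minimal sketch (an addition) of recovering both growth rates from a simulated series with the log-lin model:

```python
# Minimal sketch: instantaneous vs. compound growth rate from (6.5.6).
import numpy as np
import statsmodels.api as sm

r_true = 0.05                                 # assumed 5% compound growth
t = np.arange(20)
noise = np.random.default_rng(3).normal(0, 0.01, 20)
Y = 100.0 * (1 + r_true) ** t * np.exp(noise)

res = sm.OLS(np.log(Y), sm.add_constant(t)).fit()
b2 = res.params[1]
print("instantaneous rate:", b2)              # ln(1 + r) ~ 0.0488
print("compound rate:", np.exp(b2) - 1)       # antilog(ß2) - 1 ~ 0.05
```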
• 117. 6-5. Semi-log models: Log-lin and lin-log models
- The lin-log model: Yi = ß1 + ß2 lnXi + ui (6.5.11)
- ß2 = (change in Y)/(change in lnX) = (change in Y)/(relative change in X) ~ ΔY/(ΔX/X) (6.5.12)
- or ΔY = ß2 (ΔX/X) (6.5.13)
- That is, the absolute change in Y equals ß2 times the relative change in X.
• 118. 6-6. Reciprocal models
- The reciprocal model: Yi = ß1 + ß2(1/Xi) + ui (6.5.14)
- As X increases indefinitely, the term ß2(1/Xi) approaches zero and Yi approaches the limiting or asymptotic value ß1 (see Figure 6.5, page 174)
- An illustrative example: the Phillips curve for the United Kingdom, 1950-1966
• 119. 6-7. Summary of functional forms (Table 6.5, page 178)

Model                  Equation             Slope = dY/dX    Elasticity = (dY/dX)(X/Y)
Linear                 Y = ß1 + ß2X         ß2               ß2(X/Y) */
Log-linear (log-log)   lnY = ß1 + ß2 lnX    ß2(Y/X)          ß2
Log-lin                lnY = ß1 + ß2X       ß2Y              ß2X */
Lin-log                Y = ß1 + ß2 lnX      ß2(1/X)          ß2(1/Y) */
Reciprocal             Y = ß1 + ß2(1/X)     -ß2(1/X²)        -ß2(1/(XY)) */
• 120. 6-7. Summary of functional forms
- Note: */ indicates that the elasticity coefficient is variable, depending on the value taken by X or Y or both. When no X and Y values are specified, these elasticities are in practice often measured at the mean values E(X) and E(Y).
6-8. A note on the stochastic error term
6-9. Summary and conclusions (pages 179-180)
• 121. Basic Econometrics Chapter 7 MULTIPLE REGRESSION ANALYSIS: The Problem of Estimation
• 122. 7-1. The three-variable model: notation and assumptions
- Yi = ß1 + ß2X2i + ß3X3i + ui (7.1.1)
- ß2, ß3 are partial regression coefficients
- With the following assumptions:
+ Zero mean value of ui: E(ui|X2i, X3i) = 0 for each i (7.1.2)
+ No serial correlation: Cov(ui, uj) = 0, i ≠ j (7.1.3)
+ Homoscedasticity: Var(ui) = σ² (7.1.4)
+ Zero covariance between ui and each X: Cov(ui, X2i) = Cov(ui, X3i) = 0 (7.1.5)
+ No specification bias, i.e., the model is correctly specified (7.1.6)
+ No exact collinearity between the X variables (7.1.7) (no perfect multicollinearity; with more explanatory variables, if an exact linear relationship exists among them, the X variables are said to be linearly dependent)
+ The model is linear in the parameters
• 123. 7-2. Interpretation of the multiple regression
- E(Yi|X2i, X3i) = ß1 + ß2X2i + ß3X3i (7.2.1)
- (7.2.1) gives the conditional mean or expected value of Y, conditional upon the given or fixed values of X2 and X3
• 124. 7-3. The meaning of partial regression coefficients
- Yi = ß1 + ß2X2i + ß3X3i + ... + ßsXsi + ui
- ßk measures the change in the mean value of Y per unit change in Xk, holding the remaining explanatory variables constant. It gives the "direct" effect of a unit change in Xk on E(Yi), net of the other Xj (j ≠ k)
- How to control for the "true" effect of a unit change in Xk on Y? (read pages 195-197)
• 125. 7-4. OLS and ML estimation of the partial regression coefficients
This section (pages 197-201) provides:
1. The OLS estimators in the case of the three-variable regression Yi = ß1 + ß2X2i + ß3X3i + ui
2. Variances and standard errors of the OLS estimators
3. Eight properties of the OLS estimators (pp. 199-201)
4. An introduction to the ML estimators
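The text presents the scalar formulas; the following sketch (an addition) computes the same OLS estimates and standard errors on simulated data via the equivalent matrix formulas b = (X'X)⁻¹X'y and var(b) = s²(X'X)⁻¹:

```python
# Minimal sketch: three-variable OLS by the matrix normal equations.
import numpy as np

rng = np.random.default_rng(4)
n = 100
X2, X3 = rng.normal(size=n), rng.normal(size=n)
y = 1.0 + 2.0 * X2 - 1.5 * X3 + rng.normal(0, 1, n)

X = np.column_stack([np.ones(n), X2, X3])   # design matrix with intercept
b = np.linalg.solve(X.T @ X, X.T @ y)       # [ß^1, ß^2, ß^3]
resid = y - X @ b
s2 = resid @ resid / (n - 3)                # s² = RSS/(n - k), k = 3
se = np.sqrt(np.diag(s2 * np.linalg.inv(X.T @ X)))
print(b, se)
```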
• 126. 7-5. The multiple coefficient of determination R² and the multiple coefficient of correlation R
This section provides:
1. The definition of R² in the context of multiple regression, analogous to r² in the two-variable case
2. R = √R², the coefficient of multiple correlation; it measures the degree of association between Y and all the explanatory variables jointly
3. The variance of a partial regression coefficient:
Var(ß^k) = (σ²/Σx²k) x (1/(1 - R²k)) (7.5.6)
where ß^k is the partial regression coefficient of regressor Xk and R²k is the R² of the regression of Xk on the remaining regressors
• 127. 7-6. Example 7.1: The expectations-augmented Phillips curve for the US (1970-1982)
- This section provides an illustration of the ideas introduced in this chapter
- Regression model (7.6.1)
- Data set in Table 7.1
• 128. 7-7. Simple regression in the context of multiple regression: introduction to specification bias
- This section explains what happens when a simple regression is run where a multiple regression is appropriate: it causes specification bias, which is discussed further in Chapter 13
• 129. 7-8. R² and the adjusted R²
- R² is a non-decreasing function of the number of explanatory variables; an additional X variable will not decrease R²
R² = ESS/TSS = 1 - RSS/TSS = 1 - Σu^i²/Σyi² (7.8.1)
- This can point in the wrong direction by rewarding the addition of irrelevant variables, which motivates the adjusted R² (R² bar) that takes degrees of freedom into account:
R² bar = 1 - [Σu^i²/(n-k)] / [Σyi²/(n-1)] (7.8.2), or
R² bar = 1 - σ^²/S²Y (S²Y is the sample variance of Y)
k = number of parameters including the intercept term
- Substituting (7.8.1) into (7.8.2) gives R² bar = 1 - (1 - R²)(n-1)/(n-k) (7.8.4)
- For k > 1, R² bar < R²; thus as the number of X variables increases, R² bar increases less than R², and R² bar can be negative
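A minimal sketch (an addition) of R² versus adjusted R² when an irrelevant variable is added:

```python
# Minimal sketch: adding a junk regressor never lowers R2 but can lower
# the adjusted R2; also checks (7.8.4) by hand.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 30
x = rng.normal(size=n)
junk = rng.normal(size=n)                 # irrelevant variable
y = 1.0 + 0.5 * x + rng.normal(0, 1, n)

small = sm.OLS(y, sm.add_constant(x)).fit()
big = sm.OLS(y, sm.add_constant(np.column_stack([x, junk]))).fit()
print(small.rsquared, big.rsquared)           # R2 never decreases
print(small.rsquared_adj, big.rsquared_adj)   # R2-bar may decrease

k = 3                                     # parameters in the bigger model
print(1 - (1 - big.rsquared) * (n - 1) / (n - k))   # matches (7.8.4)
```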
• 131. 7-8. R² and the adjusted R²
- Comparing two R² values: to compare, the sample size n and the dependent variable must be the same
- Example 7-2: Coffee demand function revisited (page 210)
- The "game" of maximizing the adjusted R²: choosing the model that gives the highest R² bar may be dangerous, for in regression analysis our objective is not a high R² bar as such but dependable estimates of the true population regression coefficients and statistical inference about them
- One should be more concerned about the logical or theoretical relevance of the explanatory variables to the dependent variable and their statistical significance
• 132. 7-9. Partial correlation coefficients
This section provides:
1. An explanation of simple and partial correlation coefficients
2. The interpretation of simple and partial correlation coefficients (pages 211-214)
• 133. 7-10. Example 7.3: The Cobb-Douglas production function: more on functional form
- Yi = ß1 X2i^ß2 X3i^ß3 e^ui (7.10.1)
- By log-transforming this model:
lnYi = lnß1 + ß2 lnX2i + ß3 lnX3i + ui = ß0 + ß2 lnX2i + ß3 lnX3i + ui (7.10.2)
- Data set in Table 7.3; the report of results is on page 216
• 134. 7-11. Polynomial regression models
- Yi = ß0 + ß1Xi + ß2Xi² + ... + ßkXi^k + ui (7.11.3)
- Example 7.4: Estimating the total cost function
- Data set in Table 7.4; empirical results are on page 221
7-12. Summary and conclusions (page 221)
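A minimal sketch (an addition, with simulated cost data rather than Table 7.4) of a cubic total-cost function estimated as (7.11.3); OLS still applies because the model is linear in the parameters even though it is nonlinear in X:

```python
# Minimal sketch: cubic polynomial regression of total cost on output.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
X = np.arange(1.0, 11.0)                          # output
TC = 300 + 60 * X - 10 * X**2 + 1.0 * X**3 + rng.normal(0, 15, 10)

design = sm.add_constant(np.column_stack([X, X**2, X**3]))
res = sm.OLS(TC, design).fit()
print(res.params)                                 # [ß^0, ß^1, ß^2, ß^3]
```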
• 135. Basic Econometrics Chapter 8 MULTIPLE REGRESSION ANALYSIS: The Problem of Inference
• 136. Chapter 8 MULTIPLE REGRESSION ANALYSIS: The Problem of Inference
8-3. Hypothesis testing in multiple regression:
- Testing hypotheses about an individual partial regression coefficient
- Testing the overall significance of the estimated multiple regression model, that is, finding out if all the partial slope coefficients are simultaneously equal to zero
- Testing that two or more coefficients are equal to one another
- Testing that the partial regression coefficients satisfy certain restrictions
- Testing the stability of the estimated regression model over time or in different cross-sectional units
- Testing the functional form of regression models
• 137. Chapter 8 MULTIPLE REGRESSION ANALYSIS: The Problem of Inference
8-4. Hypothesis testing about individual partial regression coefficients
- Under the assumption that ui ~ N(0, σ²), we can use the t test to test a hypothesis about any individual partial regression coefficient:
H0: ß2 = 0
H1: ß2 ≠ 0
- If the computed t value exceeds the critical t value at the chosen level of significance, we may reject the null hypothesis; otherwise, we may not reject it
• 138. Chapter 8 MULTIPLE REGRESSION ANALYSIS: The Problem of Inference
8-5. Testing the overall significance of a multiple regression: the F test
- For Yi = ß1 + ß2X2i + ß3X3i + ... + ßkXki + ui
- To test the hypothesis H0: ß2 = ß3 = ... = ßk = 0 (all slope coefficients are simultaneously zero) versus H1: not all slope coefficients are simultaneously zero, compute
F = (ESS/df)/(RSS/df) = [ESS/(k-1)] / [RSS/(n-k)] (8.5.7)
(k = total number of parameters to be estimated, including the intercept)
- If F > F critical = Fα(k-1, n-k), reject H0; otherwise do not reject it
• 139. Chapter 8 MULTIPLE REGRESSION ANALYSIS: The Problem of Inference
8-5. Testing the overall significance of a multiple regression
- Alternatively, if the p-value of the F obtained from (8.5.7) is sufficiently low, one can reject H0
- An important relationship between R² and F:
F = [ESS/(k-1)] / [RSS/(n-k)], or
F = [R²/(k-1)] / [(1-R²)/(n-k)] (8.5.1)
(see the proof on page 249)
• 140. Chapter 8 MULTIPLE REGRESSION ANALYSIS: The Problem of Inference
8-5. Testing the overall significance of a multiple regression in terms of R²
- For Yi = ß1 + ß2X2i + ß3X3i + ... + ßkXki + ui
- To test the hypothesis H0: ß2 = ß3 = ... = ßk = 0 (all slope coefficients are simultaneously zero) versus H1: not all slope coefficients are simultaneously zero, compute
F = [R²/(k-1)] / [(1-R²)/(n-k)] (8.5.13)
(k = total number of parameters to be estimated, including the intercept)
- If F > F critical = Fα(k-1, n-k), reject H0
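A minimal sketch (an addition) computing the overall F statistic from R² as in (8.5.13) on simulated data and checking it against the value statsmodels reports:

```python
# Minimal sketch: overall significance F from R2, plus its p-value.
import numpy as np
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(7)
n, k = 50, 3                               # k parameters incl. intercept
X2, X3 = rng.normal(size=n), rng.normal(size=n)
y = 1.0 + 0.8 * X2 + 0.3 * X3 + rng.normal(0, 1, n)

res = sm.OLS(y, sm.add_constant(np.column_stack([X2, X3]))).fit()
R2 = res.rsquared
F = (R2 / (k - 1)) / ((1 - R2) / (n - k))
print(F, res.fvalue)                               # identical
print(stats.f.sf(F, k - 1, n - k), res.f_pvalue)   # p-value of F
```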
• 141. Chapter 8 MULTIPLE REGRESSION ANALYSIS: The Problem of Inference
8-5. Testing the overall significance of a multiple regression
- Alternatively, if the p-value of the F obtained from (8.5.13) is sufficiently low, one can reject H0
- The "incremental" or "marginal" contribution of an explanatory variable: let X be the new (additional) regressor on the right-hand side of a regression. Under the usual assumption of the normality of ui and H0: ß = 0, it can be shown that the following F ratio follows the F distribution with the corresponding degrees of freedom
• 142. Chapter 8 MULTIPLE REGRESSION ANALYSIS: The Problem of Inference
8-5. Testing the overall significance of a multiple regression
F_com = [(R²new - R²old)/df1] / [(1 - R²new)/df2] (8.5.18)
where df1 = number of new regressors and df2 = n - number of parameters in the new model; R²new denotes the coefficient of determination of the new regression (after adding X) and R²old that of the old regression (before adding X)
• 143. Chapter 8 MULTIPLE REGRESSION ANALYSIS: The Problem of Inference
8-5. Testing the overall significance of a multiple regression
- Decision rule: if F_com > Fα(df1, df2), one can reject H0: ß = 0 and conclude that the addition of X to the model significantly increases the ESS and hence the R² value
- When to add a new variable? When the |t| of its coefficient exceeds 1 (or, equivalently, F = t² of that variable exceeds 1)
- When to add a group of variables? When adding the group of variables to the model gives an F value greater than 1
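A minimal sketch (an addition) of the incremental-contribution F ratio (8.5.18) for adding one regressor X3 to a model that already contains X2:

```python
# Minimal sketch: marginal contribution of X3, tested via (8.5.18).
import numpy as np
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(8)
n = 60
X2, X3 = rng.normal(size=n), rng.normal(size=n)
y = 1.0 + 0.6 * X2 + 0.4 * X3 + rng.normal(0, 1, n)

old = sm.OLS(y, sm.add_constant(X2)).fit()
new = sm.OLS(y, sm.add_constant(np.column_stack([X2, X3]))).fit()

df1 = 1                                   # number of new regressors
df2 = n - 3                               # n - parameters in the new model
F_com = ((new.rsquared - old.rsquared) / df1) / ((1 - new.rsquared) / df2)
print(F_com, stats.f.sf(F_com, df1, df2))   # reject ß3 = 0 if p is small
```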
• 144. Chapter 8 MULTIPLE REGRESSION ANALYSIS: The Problem of Inference
8-6. Testing the equality of two regression coefficients
- Yi = ß1 + ß2X2i + ß3X3i + ß4X4i + ui (8.6.1)
- Test the hypotheses:
H0: ß3 = ß4, or ß3 - ß4 = 0 (8.6.2)
H1: ß3 ≠ ß4, or ß3 - ß4 ≠ 0
- Under the classical assumptions it can be shown that
t = [(ß^3 - ß^4) - (ß3 - ß4)] / se(ß^3 - ß^4)
follows the t distribution with (n-4) df, because (8.6.1) is a four-variable model, or, more generally, with (n-k) df, where k is the total number of parameters estimated, including the intercept term
- se(ß^3 - ß^4) = √[var(ß^3) + var(ß^4) - 2cov(ß^3, ß^4)] (8.6.4) (see appendix)
• 145. Chapter 8 MULTIPLE REGRESSION ANALYSIS: The Problem of Inference
t = (ß^3 - ß^4) / √[var(ß^3) + var(ß^4) - 2cov(ß^3, ß^4)] (8.6.5)
Steps for testing:
1. Estimate ß^3 and ß^4
2. Compute se(ß^3 - ß^4) from (8.6.4)
3. Obtain the t ratio from (8.6.5) under H0: ß3 = ß4
4. If computed t > critical t at the designated level of significance for the given df, reject H0; otherwise do not reject it. Alternatively, if the p-value of the t statistic from (8.6.5) is reasonably low, one can reject H0
- Example 8.2: The cubic cost function revisited
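A minimal sketch (an addition) of steps 1-4 on simulated data, pulling var(ß^3), var(ß^4) and cov(ß^3, ß^4) out of the estimated covariance matrix of the coefficients:

```python
# Minimal sketch: testing H0: ß3 = ß4 via (8.6.4)-(8.6.5).
import numpy as np
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(9)
n = 80
X2, X3, X4 = rng.normal(size=(3, n))
y = 1.0 + 0.5 * X2 + 0.9 * X3 + 0.9 * X4 + rng.normal(0, 1, n)  # ß3 = ß4

res = sm.OLS(y, sm.add_constant(np.column_stack([X2, X3, X4]))).fit()
V = res.cov_params()                      # 4x4: const, X2, X3, X4
b3, b4 = res.params[2], res.params[3]
se_diff = np.sqrt(V[2, 2] + V[3, 3] - 2 * V[2, 3])   # (8.6.4)
t = (b3 - b4) / se_diff                               # (8.6.5)
p = 2 * stats.t.sf(abs(t), n - 4)                     # (n - k) df, k = 4
print(t, p)                # H0 should typically not be rejected here
```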
• 146. Chapter 8 MULTIPLE REGRESSION ANALYSIS: The Problem of Inference
8-7. Restricted least squares: testing linear equality restrictions
- Yi = ß1 X2i^ß2 X3i^ß3 e^ui (7.10.1) and (8.7.1)
Y = output, X2 = labor input, X3 = capital input
- In log form: lnYi = ß0 + ß2 lnX2i + ß3 lnX3i + ui (8.7.2)
- With constant returns to scale: ß2 + ß3 = 1 (8.7.3)
• 147. Chapter 8 MULTIPLE REGRESSION ANALYSIS: The Problem of Inference
8-7. Restricted least squares: testing linear equality restrictions
How to test (8.7.3):
- The t test approach (unrestricted): the hypothesis H0: ß2 + ß3 = 1 can be tested by the t test
t = [(ß^2 + ß^3) - (ß2 + ß3)] / se(ß^2 + ß^3) (8.7.4)
- The F test approach (restricted least squares, RLS): using, say, ß2 = 1 - ß3 and substituting into (8.7.2), we get
ln(Yi/X2i) = ß0 + ß3 ln(X3i/X2i) + ui (8.7.8)
where Yi/X2i is the output/labor ratio and X3i/X2i is the capital/labor ratio
• 148. Chapter 8 MULTIPLE REGRESSION ANALYSIS: The Problem of Inference
8-7. Restricted least squares: testing linear equality restrictions
- Let Σu^²UR = RSSUR of the unrestricted regression (8.7.2) and Σu^²R = RSSR of the restricted regression (8.7.8); m = number of linear restrictions, k = number of parameters in the unrestricted regression, n = number of observations. R²UR and R²R are the R² values obtained from the unrestricted and restricted regressions respectively. Then
F = [(RSSR - RSSUR)/m] / [RSSUR/(n-k)] = [(R²UR - R²R)/m] / [(1 - R²UR)/(n-k)] (8.7.10)
follows the F distribution with (m, n-k) df
- Decision rule: if F > Fα(m, n-k), reject H0: ß2 + ß3 = 1
• 149. Chapter 8 MULTIPLE REGRESSION ANALYSIS: The Problem of Inference
8-7. Restricted least squares: testing linear equality restrictions
- Note: R²UR ≥ R²R (8.7.11) and Σu^²UR ≤ Σu^²R (8.7.12)
- Example 8.3: The Cobb-Douglas production function for the Taiwanese agricultural sector, 1958-1972 (pages 259-260); data in Table 7.3 (page 216)
- General F testing (page 260)
- Example 8.4: The demand for chicken in the US, 1960-1982; data in Exercise 7.23 (page 228)
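A minimal sketch (an addition, on simulated Cobb-Douglas data rather than the Taiwanese data) of the RLS F test (8.7.10) of constant returns to scale:

```python
# Minimal sketch: F test of ß2 + ß3 = 1 via restricted least squares.
import numpy as np
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(10)
n = 50
lnL, lnK = rng.normal(size=(2, n))
lnY = 0.5 + 0.6 * lnL + 0.4 * lnK + rng.normal(0, 0.1, n)   # CRS holds

unres = sm.OLS(lnY, sm.add_constant(np.column_stack([lnL, lnK]))).fit()
restr = sm.OLS(lnY - lnL, sm.add_constant(lnK - lnL)).fit()  # (8.7.8)

m, k = 1, 3                               # 1 restriction, 3 parameters
F = ((restr.ssr - unres.ssr) / m) / (unres.ssr / (n - k))    # (8.7.10)
print(F, stats.f.sf(F, m, n - k))   # large p: CRS typically not rejected
```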
• 150. Chapter 8 MULTIPLE REGRESSION ANALYSIS: The Problem of Inference
8-8. Comparing two regressions: testing for structural stability of regression models
- Table 8.8: Personal savings and income data, UK, 1946-1963 (millions of pounds)
- Savings functions:
Reconstruction period: Yt = α1 + α2Xt + u1t, t = 1, 2, ..., n1 (8.8.1)
Post-reconstruction period: Yt = γ1 + γ2Xt + u2t, t = 1, 2, ..., n2 (8.8.2)
where Y is personal savings, X is personal income, the u's are the disturbance terms in the two equations, and n1, n2 are the numbers of observations in the two periods
• 151. Chapter 8 MULTIPLE REGRESSION ANALYSIS: The Problem of Inference
8-8. Comparing two regressions: testing for structural stability of regression models
- A structural change may mean that the two intercepts differ, the two slopes differ, both differ, or any other suitable combination of the parameters differs. If there is no structural change, we can pool all n1 + n2 observations and estimate a single savings function:
Yt = λ1 + λ2Xt + ut, t = 1, 2, ..., n1 + n2 (8.8.3)
- How do we find out whether there is a structural change in the savings-income relationship between the two periods? A popular test is the Chow test; it is simply the F test discussed earlier:
H0: the parameters of (8.8.1) and (8.8.2) are the same, versus H1: they are not
• 152. Chapter 8 MULTIPLE REGRESSION ANALYSIS: The Problem of Inference
8-8. Comparing two regressions: testing for structural stability of regression models
- The assumptions underlying the Chow test: u1t and u2t ~ N(0, σ²), i.e., the two error terms are normally distributed with the same variance, and u1t and u2t are independently distributed
- Step 1: Estimate (8.8.3) and get its RSS, say S1, with df = n1 + n2 - k (k = number of parameters estimated)
- Step 2: Estimate (8.8.1) and (8.8.2) individually and get their RSS, say S2 and S3, with df = n1 - k and n2 - k respectively. Let S4 = S2 + S3, with df = n1 + n2 - 2k
- Step 3: S5 = S1 - S4
• 153. Chapter 8 MULTIPLE REGRESSION ANALYSIS: The Problem of Inference
8-8. Comparing two regressions: testing for structural stability of regression models
- Step 4: Given the assumptions of the Chow test, it can be shown that
F = [S5/k] / [S4/(n1 + n2 - 2k)] (8.8.4)
follows the F distribution with df = (k, n1 + n2 - 2k)
- Decision rule: if the F computed from (8.8.4) > critical F at the chosen level of significance α, reject the hypothesis that regressions (8.8.1) and (8.8.2) are the same, that is, reject the hypothesis of structural stability. Alternatively, one can use the p-value of the F obtained from (8.8.4) and reject H0 if it is reasonably low
- Apply this to the data in Table 8.8
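A minimal sketch (an addition, using a simulated two-period series with a deliberate break rather than the Table 8.8 data) of Chow test steps 1-4:

```python
# Minimal sketch: Chow test for a structural break in the slope.
import numpy as np
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(11)
n1, n2, k = 12, 12, 2
X1, X2 = rng.uniform(50, 100, n1), rng.uniform(50, 100, n2)
Y1 = 2.0 + 0.05 * X1 + rng.normal(0, 1, n1)   # period 1 slope 0.05
Y2 = 2.0 + 0.15 * X2 + rng.normal(0, 1, n2)   # period 2 slope 0.15 (break)

S1 = sm.OLS(np.r_[Y1, Y2], sm.add_constant(np.r_[X1, X2])).fit().ssr  # pooled
S2 = sm.OLS(Y1, sm.add_constant(X1)).fit().ssr
S3 = sm.OLS(Y2, sm.add_constant(X2)).fit().ssr
S4 = S2 + S3
S5 = S1 - S4
F = (S5 / k) / (S4 / (n1 + n2 - 2 * k))       # (8.8.4)
print(F, stats.f.sf(F, k, n1 + n2 - 2 * k))   # small p suggests rejecting stability
```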
• 154. Chapter 8 MULTIPLE REGRESSION ANALYSIS: The Problem of Inference
8-9. Testing the functional form of regression: choosing between linear and log-linear regression models: the MWD test (MacKinnon, White and Davidson)
H0: Linear model: Y is a linear function of the regressors, the X's
H1: Log-linear model: lnY is a linear function of the logs of the regressors, the lnX's
• 155. Chapter 8 MULTIPLE REGRESSION ANALYSIS: The Problem of Inference
8-9. Testing the functional form of regression:
Step 1: Estimate the linear model and obtain the estimated Y values; call them Yf (i.e., Y^)
Step 2: Estimate the log-linear model and obtain the estimated lnY values; call them lnf (i.e., ln^Y)
Step 3: Obtain Z1 = ln(Yf) - lnf
Step 4: Regress Y on the X's and Z1; reject H0 if the coefficient of Z1 is statistically significant by the usual t test
Step 5: Obtain Z2 = antilog(lnf) - Yf
Step 6: Regress lnY on the lnX's and Z2; reject H1 if the coefficient of Z2 is statistically significant by the usual t test
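A minimal sketch (an addition) of the six MWD steps on simulated data whose true form is log-linear, so the test should tend to favor H1; note that Step 3 requires the fitted Y values from the linear model to be positive so their logs exist:

```python
# Minimal sketch: MWD test, linear vs. log-linear functional form.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(12)
n = 100
X = rng.uniform(1, 10, n)
Y = 10.0 * X ** 0.5 * np.exp(rng.normal(0, 0.05, n))   # true model is log-log

Xc, lnXc = sm.add_constant(X), sm.add_constant(np.log(X))
lin = sm.OLS(Y, Xc).fit()                    # Step 1: linear model
loglin = sm.OLS(np.log(Y), lnXc).fit()       # Step 2: log-linear model
Yf, lnf = lin.fittedvalues, loglin.fittedvalues

Z1 = np.log(Yf) - lnf                        # Step 3 (needs Yf > 0)
step4 = sm.OLS(Y, np.column_stack([Xc, Z1])).fit()            # Step 4
print("t on Z1:", step4.tvalues[-1])         # significant -> reject H0

Z2 = np.exp(lnf) - Yf                        # Step 5: antilog(lnf) - Yf
step6 = sm.OLS(np.log(Y), np.column_stack([lnXc, Z2])).fit()  # Step 6
print("t on Z2:", step6.tvalues[-1])         # insignificant -> keep H1
```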
• 156. Chapter 8 MULTIPLE REGRESSION ANALYSIS: The Problem of Inference
- Example 8.5: The demand for roses (pages 266-267); data in Exercise 7.20 (page 225)
8-10. Prediction with multiple regression: follow section 5-10 and the illustration on pages 267-268, using the data set in Table 8.1 (page 241)
8-11. The troika of hypothesis tests: the likelihood ratio (LR), Wald (W) and Lagrange multiplier (LM) tests
8-12. Summary and conclusions