# 2.3 The Simple Regression Model


**Expected Values and Variances of OLS Estimators:** We defined the population model $y = \beta_0 + \beta_1 x + u$, and we claimed that the key assumption for simple regression analysis to be useful is that the expected value of $u$ given any value of $x$ is zero. We discussed the algebraic properties of OLS estimation. We now return to the population model and study the statistical properties of OLS. In other words, we now view $\hat\beta_0$ and $\hat\beta_1$ as estimators for the parameters $\beta_0$ and $\beta_1$ that appear in the population model.

Assumptions for unbiased OLS:

**SLR.1 (Linear in parameters):** In the population model, $y = \beta_0 + \beta_1 x + u$.

**SLR.2 (Random sampling):** We use a random sample of size $n$, $\{(x_i, y_i): i = 1, 2, \dots, n\}$, from the population model, so that $y_i = \beta_0 + \beta_1 x_i + u_i$, where $u_i$ is the error or disturbance for observation $i$ (for example, person $i$, firm $i$, city $i$, etc.). Thus $u_i$ contains the unobservables for observation $i$ that affect $y_i$. The error $u_i$ should not be confused with the residual $\hat{u}_i$ that was defined earlier; the distinction between errors and residuals will be covered later.

**SLR.3 (Zero conditional mean):** $E(u \mid x) = 0$. For a random sample this assumption implies $E(u_i \mid x_i) = 0$ for all $i = 1, 2, \dots, n$.
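A minimal numerical sketch of this setup (all parameter values and variable names below are illustrative, not from the slides): simulate one random sample from a population model satisfying SLR.1–SLR.3 and compute the OLS estimates from the usual first-order-condition formulas.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative population parameters (assumed values, not from the slides)
beta0, beta1 = 1.0, 2.0
n = 1000

# SLR.2: a random sample from the population model
x = rng.normal(5.0, 2.0, size=n)
u = rng.normal(0.0, 1.0, size=n)   # SLR.3: u drawn independently of x, mean 0
y = beta0 + beta1 * x + u

# OLS estimates: slope from sample covariance over sample variation in x,
# intercept from the point-of-means condition
b1_hat = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
b0_hat = y.mean() - b1_hat * x.mean()

print(b0_hat, b1_hat)  # close to the true values 1.0 and 2.0 for large n
```

With a single sample the estimates are not exactly the true parameters; the next slides quantify how far off they are on average.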
**SLR.4 (Sample variation in the independent variable):** The sample outcomes $x_i$ are not all equal to the same constant. This means that if $y$ = wage and $x$ = education, then SLR.4 fails only if everyone in the sample has the same amount of education. This is hardly ever true!

Using SLR.1–SLR.4, for any values of $\beta_0$ and $\beta_1$,

$$E(\hat\beta_0) = \beta_0, \qquad E(\hat\beta_1) = \beta_1.$$

In other words, $\hat\beta_0$ and $\hat\beta_1$ are unbiased estimators of $\beta_0$ and $\beta_1$.

**Variance of the OLS estimators:** Once we know that $\hat\beta_0$ and $\hat\beta_1$ are unbiased estimators of $\beta_0$ and $\beta_1$, we must also know how far we expect them to be from $\beta_0$ and $\beta_1$ on average. Among other things, this allows us to choose the best estimator among all, or at least a broad class of, unbiased estimators.

**SLR.5 (Homoskedasticity):** $\mathrm{Var}(u \mid x) = \sigma^2$. This assumption plays no role in showing that $\hat\beta_0$ and $\hat\beta_1$ are unbiased estimators of $\beta_0$ and $\beta_1$. $\sigma^2$ is often called the error variance or disturbance variance.
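The unbiasedness claim can be checked by simulation: under SLR.1–SLR.4, the average of $\hat\beta_1$ over many independent random samples should be close to $\beta_1$. A sketch, with made-up parameter values:

```python
import numpy as np

rng = np.random.default_rng(1)
beta0, beta1, n, reps = 1.0, 2.0, 50, 5000  # illustrative values

estimates = []
for _ in range(reps):
    x = rng.uniform(0.0, 10.0, size=n)       # SLR.4: x varies in the sample
    u = rng.normal(0.0, 1.0, size=n)         # SLR.3: zero-mean error
    y = beta0 + beta1 * x + u
    b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
    estimates.append(b1)

# Unbiasedness: the sampling distribution of b1_hat centers on the true beta1
print(np.mean(estimates))  # approximately 2.0
```

Each individual estimate misses $\beta_1$, but the misses average out to zero across samples; that is exactly what $E(\hat\beta_1) = \beta_1$ says.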
**Note:** Under Assumptions SLR.1 through SLR.5,

$$\mathrm{Var}(\hat\beta_1) = \frac{\sigma^2}{\sum_{i=1}^{n}(x_i - \bar{x})^2} = \frac{\sigma^2}{\mathrm{SST}_x}, \qquad \mathrm{Var}(\hat\beta_0) = \frac{\sigma^2 \, n^{-1}\sum_{i=1}^{n} x_i^2}{\sum_{i=1}^{n}(x_i - \bar{x})^2},$$

where these variances are conditional on the sample values $\{x_1, \dots, x_n\}$.

Note: all of the quantities entering the preceding equations, except $\sigma^2$, can be estimated from the data.
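The slope-variance formula can likewise be checked numerically. Because the formula is conditional on the sample values of $x$, we hold the regressors fixed across replications and compare the simulated variance of $\hat\beta_1$ with $\sigma^2/\mathrm{SST}_x$. All numbers below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
beta0, beta1, sigma, n, reps = 1.0, 2.0, 1.5, 40, 20000  # assumed values

# Fix the regressor values: the variance formula conditions on them
x = rng.uniform(0.0, 10.0, size=n)
sst_x = np.sum((x - x.mean()) ** 2)

b1_draws = np.empty(reps)
for r in range(reps):
    u = rng.normal(0.0, sigma, size=n)       # SLR.5: constant error variance
    y = beta0 + beta1 * x + u
    b1_draws[r] = np.sum((x - x.mean()) * (y - y.mean())) / sst_x

theoretical = sigma ** 2 / sst_x             # Var(b1_hat) under SLR.1-SLR.5
simulated = b1_draws.var()

print(theoretical, simulated)                # the two should be close
```

The formula also shows why sample variation in $x$ matters: a larger $\mathrm{SST}_x$ shrinks the variance of the slope estimator.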
But the error variance $\sigma^2$ can be estimated using the following approach (if you want more detail, refer to Appendix 3.A).

**Estimating the error variance:** So far we know that $E(\hat\beta_1) = \beta_1$ and $\mathrm{Var}(\hat\beta_1) = \sigma^2/\mathrm{SST}_x$. The difference between errors (or disturbances) and residuals is crucial for constructing an estimator of $\sigma^2$.

The population model in terms of a randomly observed sample can be written as

$$y_i = \beta_0 + \beta_1 x_i + u_i,$$

and $u_i$ is the ERROR for observation $i$. $y_i$ in terms of its fitted value can be expressed as

$$y_i = \hat\beta_0 + \hat\beta_1 x_i + \hat{u}_i,$$

and $\hat{u}_i$ is the RESIDUAL for observation $i$. Thus

$$\hat{u}_i = u_i - (\hat\beta_0 - \beta_0) - (\hat\beta_1 - \beta_1)x_i.$$

We saw previously that for OLS to be unbiased, $E(u_i \mid x_i) = 0$. But $\hat{u}_i \neq u_i$: in any given sample the difference between them is generally nonzero. Now, returning to $\sigma^2$: note that $\sigma^2 = E(u^2)$.
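The error/residual distinction is easy to see numerically: the OLS residuals satisfy $\sum_i \hat{u}_i = 0$ and $\sum_i x_i\hat{u}_i = 0$ exactly by construction (the first-order conditions), while the unobserved errors in the same sample do not. A sketch with made-up values:

```python
import numpy as np

rng = np.random.default_rng(3)
beta0, beta1, n = 1.0, 2.0, 30           # illustrative values

x = rng.uniform(0.0, 10.0, size=n)
u = rng.normal(0.0, 1.0, size=n)         # errors: unobservable in practice
y = beta0 + beta1 * x + u

b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
b0 = y.mean() - b1 * x.mean()
u_hat = y - (b0 + b1 * x)                # residuals: computed from the data

print(np.sum(u_hat), np.sum(x * u_hat))  # both zero up to rounding error
print(np.sum(u), np.sum(x * u))          # generally nonzero
```

The residuals are forced to obey the two OLS restrictions in every sample; the errors are not, which is why the two must not be confused.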
Thus an unbiased "estimator" for $\sigma^2$ would be

$$\frac{1}{n}\sum_{i=1}^{n} u_i^2.$$

But since we do not observe the errors $u_i$, and observe only the OLS residuals $\hat{u}_i$, we can replace the errors with the residuals:

$$\tilde{\sigma}^2 = \frac{1}{n}\sum_{i=1}^{n}\hat{u}_i^2 = \frac{\mathrm{SSR}}{n}.$$

This is a true estimator, because it gives a complete rule for any sample of data on $x$ and $y$. One slight drawback to this estimator is that it turns out to be biased (although for large $n$ the bias is small). It is biased only because it does not account for two restrictions that the OLS residuals satisfy:

$$\sum_{i=1}^{n}\hat{u}_i = 0, \qquad \sum_{i=1}^{n} x_i\hat{u}_i = 0.$$

Because of these restrictions, there are only $n-2$ degrees of freedom in the OLS residuals (as opposed to $n$ degrees of freedom in the errors); if we replaced the residuals with the errors, the restrictions above would no longer hold. The unbiased estimator of $\sigma^2$ that we will use makes a degrees-of-freedom adjustment:

$$\hat{\sigma}^2 = \frac{1}{n-2}\sum_{i=1}^{n}\hat{u}_i^2 = \frac{\mathrm{SSR}}{n-2}.$$

This estimator is also denoted $s^2$.

**Properties of OLS estimators:** If Assumptions SLR.1 through SLR.5 hold, then the estimators $\hat\beta_0$ and $\hat\beta_1$ determined by OLS are known as the Best Linear Unbiased Estimators (BLUE). What does BLUE stand for?
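The degrees-of-freedom adjustment can be verified by simulation: across many samples, the average of $\mathrm{SSR}/(n-2)$ should sit at $\sigma^2$, while $\mathrm{SSR}/n$ is biased downward (its expectation is $\sigma^2(n-2)/n$). A sketch with assumed values, using a small $n$ so the bias is visible:

```python
import numpy as np

rng = np.random.default_rng(4)
beta0, beta1, sigma, n, reps = 1.0, 2.0, 1.0, 10, 20000  # illustrative

adjusted, unadjusted = [], []
for _ in range(reps):
    x = rng.uniform(0.0, 10.0, size=n)
    u = rng.normal(0.0, sigma, size=n)
    y = beta0 + beta1 * x + u
    b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
    b0 = y.mean() - b1 * x.mean()
    ssr = np.sum((y - b0 - b1 * x) ** 2)
    adjusted.append(ssr / (n - 2))   # unbiased: divides by n - 2
    unadjusted.append(ssr / n)       # biased downward: divides by n

print(np.mean(adjusted), np.mean(unadjusted))  # ~1.0 versus ~0.8
```

With $n = 10$ and $\sigma^2 = 1$, the unadjusted estimator's expectation is $0.8$, so the two restrictions on the residuals cost a noticeable amount in small samples.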
**Estimator** – $\hat\beta_1$ is an estimator of the true value of $\beta_1$.

**Linear** – $\hat\beta_1$ is a linear function of the sample values of the dependent variable.

**Unbiased** – On average, the estimators equal the true values: $E(\hat\beta_0) = \beta_0$ and $E(\hat\beta_1) = \beta_1$.

**Best** – The OLS estimator has minimum variance among the class of linear unbiased estimators. The Gauss–Markov theorem provides the proof that the OLS estimator is best.
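The "best" property can be illustrated by pitting OLS against another linear unbiased estimator of the slope. The grouping (Wald-type) estimator below, which splits the sample at the median of $x$ and connects the two group means of $y$, is a hypothetical alternative chosen purely for illustration; all parameter values are made up. It is linear in the $y_i$ and unbiased, yet its variance exceeds that of OLS, as the Gauss–Markov theorem guarantees:

```python
import numpy as np

rng = np.random.default_rng(5)
beta0, beta1, n, reps = 1.0, 2.0, 40, 10000  # illustrative values

x = np.sort(rng.uniform(0.0, 10.0, size=n))  # fixed regressors across reps
lo, hi = x[: n // 2], x[n // 2 :]

ols, grouped = [], []
for _ in range(reps):
    u = rng.normal(0.0, 1.0, size=n)
    y = beta0 + beta1 * x + u
    b1_ols = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
    # Grouping estimator: slope of the line through the two half-sample means.
    # Also linear in y and unbiased, but not "best".
    b1_grp = (y[n // 2 :].mean() - y[: n // 2].mean()) / (hi.mean() - lo.mean())
    ols.append(b1_ols)
    grouped.append(b1_grp)

print(np.mean(ols), np.mean(grouped))  # both center on 2.0 (unbiased)
print(np.var(ols), np.var(grouped))    # OLS variance is the smaller one
```

Both estimators are centered on the true slope, so unbiasedness alone does not distinguish them; the minimum-variance property is what singles out OLS within this class.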