Heteroscedasticity
Remedial
Measures
Devendra Patil
M.Sc. Applied Statistics (Sem-IV)
Roll no.: 05
Introduction
We can define heteroscedasticity as the condition in which the variance of the error term (the residual term) in a regression model varies across observations. As the accompanying diagram shows, under homoscedasticity the data points are equally scattered, while under heteroscedasticity they are not.
Possible reasons for heteroscedasticity:
1. It often occurs in data sets with a large range between the largest and smallest observed values, i.e., when there are outliers.
2. When the model is not correctly specified.
3. When observations measured on different scales are mixed together.
4. When an incorrect transformation of the data is used to perform the regression.
5. Skewness in the distribution of a regressor, among other possible sources.
Effects of Heteroscedasticity
• OLS (Ordinary Least Squares) estimators are no longer the Best Linear Unbiased Estimators (BLUE): their variance is not the lowest among all unbiased linear estimators.
• The estimators remain unbiased but are no longer best/efficient.
• Tests of hypotheses (such as the t-test and F-test) are no longer valid, because the usual estimator of the covariance matrix of the estimated regression coefficients is inconsistent.
Remedial Measures
Weighted Least Squares (WLS) Estimator
• The Weighted Least Squares estimator is the OLS estimator applied to a transformed model, obtained by multiplying each term on both sides of the regression equation by a "weight", denoted wᵢ. For instance, consider the following general linear regression model with heteroscedasticity:
• Yᵢ = β₀ + β₁Xᵢ₁ + uᵢ ; i = 1, 2, …, n
• Var(uᵢ) = σ²Zᵢ² ; where Zᵢ is some function of Xᵢ
• To obtain the WLS estimator, the transformed model is:
• wᵢYᵢ = β₀wᵢ + β₁(wᵢXᵢ₁) + wᵢuᵢ ; i = 1, 2, …, n
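The transformation above can be sketched numerically. This is a minimal illustration with made-up data, assuming Zᵢ = Xᵢ (so wᵢ = 1/Xᵢ) and hypothetical coefficients β₀ = 3, β₁ = 1.5; it runs OLS on the weighted terms with plain NumPy rather than a dedicated WLS routine:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
X = rng.uniform(1.0, 10.0, n)
Z = X                                  # assumed: Z_i = X_i, so Var(u_i) = sigma^2 * X_i^2
u = rng.normal(size=n) * 2.0 * Z       # heteroscedastic errors with sigma = 2
y = 3.0 + 1.5 * X + u                  # made-up beta0 = 3, beta1 = 1.5

# Multiply every term by the weight w_i = 1/Z_i, then run OLS on the transformed model
w = 1.0 / Z
A = np.column_stack([w, w * X])        # transformed regressors: w_i * 1 and w_i * X_i
b0, b1 = np.linalg.lstsq(A, w * y, rcond=None)[0]
```

Despite the strongly heteroscedastic errors, the weighted regression recovers both coefficients, because the transformed error wᵢuᵢ has constant variance.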
Question: For the model Yᵢ = βXᵢ + uᵢ with variance Var(uᵢ) = σ²Zᵢ², prove that the WLS estimator of β has lower variance than its OLS estimator, where the weight is wᵢ = 1/Zᵢ.

Answer: For the model Yᵢ = βXᵢ + uᵢ with variance Var(uᵢ) = σ²Zᵢ²,

the OLS estimator of β is β̂ = ΣXᵢYᵢ / ΣXᵢ², with

Var(β̂) = ΣXᵢ² Var(uᵢ) / (ΣXᵢ²)² = σ² ΣXᵢ²Zᵢ² / (ΣXᵢ²)².

If we divide the entire equation by Zᵢ,

Yᵢ/Zᵢ = β(Xᵢ/Zᵢ) + uᵢ/Zᵢ, or yᵢ = βxᵢ + vᵢ.

Here Var(vᵢ) = Var(uᵢ)/Zᵢ² = σ²Zᵢ²/Zᵢ² = σ² (a constant, i.e., homoscedasticity).

The WLS estimator of β is β* = Σxᵢyᵢ / Σxᵢ² = Σxᵢ(βxᵢ + vᵢ) / Σxᵢ² = β + Σxᵢvᵢ / Σxᵢ².

Var(β*) = E(β* − β)² = Σxᵢ² E(vᵢ²) / (Σxᵢ²)² = σ² Σxᵢ² / (Σxᵢ²)² = σ² / Σxᵢ²   [using E(uᵢuⱼ) = 0 for i ≠ j, since the uᵢ's are independent].
Hence,

Var(β*) / Var(β̂) = (σ² / Σxᵢ²) / (σ² ΣXᵢ²Zᵢ² / (ΣXᵢ²)²) = (ΣXᵢ²)² / [(ΣXᵢ²Zᵢ²)(Σxᵢ²)] = (ΣXᵢ²)² / [(ΣXᵢ²Zᵢ²)(Σ(Xᵢ/Zᵢ)²)].

Let XᵢZᵢ = aᵢ and Xᵢ/Zᵢ = bᵢ, so that aᵢbᵢ = Xᵢ². Hence,

Var(β*) / Var(β̂) = (Σaᵢbᵢ)² / [(Σaᵢ²)(Σbᵢ²)].

By the Cauchy–Schwarz inequality, (Σaᵢbᵢ)² ≤ (Σaᵢ²)(Σbᵢ²), with equality only when aᵢ = θbᵢ for all i. Hence Var(β*) ≤ Var(β̂), and whenever the Zᵢ are not all equal the inequality is strict: Var(β*) < Var(β̂).

When aᵢ = θbᵢ, Var(β*)/Var(β̂) = (Σaᵢbᵢ)² / [(Σaᵢ²)(Σbᵢ²)] = 1, i.e., Var(β*) = Var(β̂). In that case

aᵢ/bᵢ = (XᵢZᵢ) / (Xᵢ/Zᵢ) = Zᵢ² = θ,

i.e., Var(uᵢ) = σ²Zᵢ² = σ²θ, a constant: the model is in fact homoscedastic, and WLS reduces to OLS.
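The variance comparison can be checked numerically against the two variance expressions derived above for the OLS and WLS estimators. The choice Zᵢ = √Xᵢ below is arbitrary; any non-constant positive function of Xᵢ gives a strict inequality:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.uniform(1.0, 5.0, 100)
Z = np.sqrt(X)        # hypothetical Z_i: any non-constant positive function of X_i
sigma2 = 1.0

# Var(beta_OLS) = sigma^2 * sum(X^2 Z^2) / (sum X^2)^2
var_ols = sigma2 * np.sum(X**2 * Z**2) / np.sum(X**2) ** 2
# Var(beta_WLS) = sigma^2 / sum((X/Z)^2)
var_wls = sigma2 / np.sum((X / Z) ** 2)
```

Because Z here is not constant, Cauchy–Schwarz guarantees var_wls < var_ols for any such draw.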
FEASIBLE GLS
• Here we do not know the nature of the heteroscedasticity.
• Step 1: Estimate Y = β₀ + β₁X₁ + … + βₖXₖ + u by OLS and obtain the residuals û; compute û² (and then log(û²)).
• Step 2: Run the regression log(û²) = δ₀ + δ₁X₁ + … + δₖXₖ + error and save the fitted values ĝ.
• Step 3: Compute ĥ = exp(ĝ).
• Step 4: Re-estimate Y = β₀ + β₁X₁ + … + βₖXₖ by WLS, using 1/ĥ as the weights.
The model is Y = β₀ + β₁X₁ + … + βₖXₖ + u, with
Var(u|x) = σ²h(x), where h(x) = exp(δ₀ + δ₁X₁ + … + δₖXₖ), so that
Var(u|x) = σ²exp(δ₀ + δ₁X₁ + … + δₖXₖ).
Estimating log(û²) = δ₀ + δ₁X₁ + … + δₖXₖ + error gives ĥᵢ = exp(δ̂₀ + δ̂₁X₁ + … + δ̂ₖXₖ), and 1/ĥᵢ serves as the weight in WLS.
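The four steps can be sketched in NumPy. The coefficients below (β₀ = 2, β₁ = 0.8, and the δ's in the error process) are made up for the simulation, not values from the slides:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 400
X = rng.uniform(1.0, 10.0, n)
u = rng.normal(size=n) * np.exp(0.2 + 0.15 * X)  # error s.d. grows with X (made-up deltas)
y = 2.0 + 0.8 * X + u                            # made-up beta0 = 2, beta1 = 0.8
D = np.column_stack([np.ones(n), X])

# Step 1: OLS and squared residuals
beta_ols = np.linalg.lstsq(D, y, rcond=None)[0]
u2 = (y - D @ beta_ols) ** 2

# Step 2: regress log(u-hat^2) on the regressors, keep fitted values g-hat
g = D @ np.linalg.lstsq(D, np.log(u2), rcond=None)[0]

# Step 3: h-hat = exp(g-hat)
h = np.exp(g)

# Step 4: WLS, weighting each term by 1/sqrt(h-hat)
w = 1.0 / np.sqrt(h)
beta_fgls = np.linalg.lstsq(D * w[:, None], w * y, rcond=None)[0]
```

Note that the exponential form of ĥ keeps the estimated variances strictly positive, which is why the skedastic function is modeled on the log scale.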
Remedial measures when the true error variance (σᵢ²) is unknown
• The WLS method implicitly assumes that the true error variance σᵢ² is known. In practice, however, it rarely is, so we need other methods to obtain a consistent estimate of the variance of the error term.
• In these methods, we make an assumption about the true error variance σᵢ² and transform the original regression model so that the transformed model satisfies the homoscedasticity assumption. Say the original regression model is:
Yᵢ = β₁ + β₂Xᵢ + uᵢ, with Var(uᵢ) = σᵢ² ; i = 1, 2, …, n
When the error variance is proportional to Xᵢ

Run the original OLS regression and obtain the residuals. Plot the squared residuals ûᵢ² against the explanatory variable X. If the plot shows a pattern like figure 1, we say the error variance is proportional to (linearly related to) Xᵢ, with σ² the constant factor of proportionality. Symbolically, E(uᵢ²) = σ²Xᵢ ; i = 1, 2, …, n.

We transform the original regression model by dividing the equation by √Xᵢ:

Yᵢ/√Xᵢ = β₁/√Xᵢ + β₂(Xᵢ/√Xᵢ) + uᵢ/√Xᵢ = β₁/√Xᵢ + β₂√Xᵢ + Vᵢ ; i = 1, 2, …, n.

Here Vᵢ = uᵢ/√Xᵢ and Xᵢ > 0. This transformed regression equation is called the "square root transformation", and the error Vᵢ is homoscedastic.

Proof: E(Vᵢ²) = E(uᵢ/√Xᵢ)² = E(uᵢ²)/Xᵢ = σ².
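A quick simulation illustrates the proof. With E(uᵢ²) = σ²Xᵢ (σ² = 4 assumed here), the transformed errors uᵢ/√Xᵢ show roughly the same variance in the low-X and high-X halves of the sample:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 2000
X = rng.uniform(1.0, 16.0, n)
u = rng.normal(size=n) * np.sqrt(4.0 * X)   # E(u_i^2) = sigma^2 * X_i with sigma^2 = 4

V = u / np.sqrt(X)                          # transformed error V_i = u_i / sqrt(X_i)
low_var = V[X < 8].var()                    # variance where X is small
high_var = V[X >= 8].var()                  # variance where X is large
```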
When the error variance is proportional to Xᵢ²

Run the original OLS regression and obtain the residuals. Plot the squared residuals ûᵢ² against the explanatory variable X. If the plot shows a pattern like figure 2, we say the error variance is proportional to Xᵢ² (non-linearly related to Xᵢ), with σ² the constant factor of proportionality. Symbolically, E(uᵢ²) = σ²Xᵢ² ; i = 1, 2, …, n.

We transform the original regression model by dividing the equation by Xᵢ:

Yᵢ/Xᵢ = β₁/Xᵢ + β₂(Xᵢ/Xᵢ) + uᵢ/Xᵢ = β₁/Xᵢ + β₂ + Vᵢ ; i = 1, 2, …, n.

Here Vᵢ = uᵢ/Xᵢ and Xᵢ > 0. This transformed regression equation is called the "square transformation", and the error Vᵢ is homoscedastic.

Proof: E(Vᵢ²) = E(uᵢ/Xᵢ)² = E(uᵢ²)/Xᵢ² = σ².
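The same check applies in this case. With E(uᵢ²) = σ²Xᵢ² (σ = 2 assumed), dividing the errors by Xᵢ yields a sample variance close to σ² = 4:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 2000
X = rng.uniform(1.0, 16.0, n)
u = rng.normal(size=n) * 2.0 * X   # E(u_i^2) = sigma^2 * X_i^2 with sigma = 2

V = u / X                          # transformed error V_i = u_i / X_i
```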
When the error variance is proportional to the square of the mean value of Y

According to this assumption, the error variance is proportional to the square of the mean value of Y, with σ² a constant. Symbolically, E(uᵢ²) = σ²[E(Yᵢ)]² ; i = 1, 2, …, n.

We transform the original regression model by dividing it by E(Yᵢ), where E(Yᵢ) = β₁ + β₂Xᵢ:

Yᵢ/E(Yᵢ) = β₁/E(Yᵢ) + β₂Xᵢ/E(Yᵢ) + uᵢ/E(Yᵢ) = β₁/E(Yᵢ) + β₂Xᵢ/E(Yᵢ) + Vᵢ ; i = 1, 2, …, n,

where Vᵢ = uᵢ/E(Yᵢ). We can show that the error Vᵢ is homoscedastic.

Proof: E(Vᵢ²) = E(uᵢ/E(Yᵢ))² = E(uᵢ²)/[E(Yᵢ)]² = σ².
E(Yᵢ) depends on β₁ and β₂, which are unknown. We can use

Ŷᵢ = β̂₁ + β̂₂Xᵢ,

which is an estimator of E(Yᵢ). First, we run the usual OLS regression, disregarding the heteroscedasticity problem, and obtain Ŷᵢ; then, using the estimated Ŷᵢ, we transform the model:

Yᵢ/Ŷᵢ = β₁(1/Ŷᵢ) + β₂(Xᵢ/Ŷᵢ) + uᵢ/Ŷᵢ ; i = 1, 2, …, n.

The transformation will perform satisfactorily in practice if the sample size is reasonably large.
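The two-step procedure can be sketched as follows; the coefficients (β₁ = 5, β₂ = 2) and the 10% coefficient of variation in the errors are made up for the simulation:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 1000
X = rng.uniform(1.0, 10.0, n)
mean_y = 5.0 + 2.0 * X                     # hypothetical beta1 = 5, beta2 = 2
u = rng.normal(size=n) * 0.1 * mean_y      # E(u_i^2) proportional to [E(Y_i)]^2
y = mean_y + u
D = np.column_stack([np.ones(n), X])

# Step 1: usual OLS, disregarding heteroscedasticity, to get fitted values Y-hat
beta_ols = np.linalg.lstsq(D, y, rcond=None)[0]
yhat = D @ beta_ols

# Step 2: divide every term of the model by Y-hat and re-estimate
w = 1.0 / yhat
beta = np.linalg.lstsq(D * w[:, None], w * y, rcond=None)[0]
```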
Log Transformation

A log transformation of the original regression model can help reduce the problem of heteroscedasticity. Symbolically,

log(Yᵢ) = β₁ + β₂log(Xᵢ) + uᵢ ; i = 1, 2, …, n.
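For instance, with a hypothetical data-generating process whose error is multiplicative and lognormal on the original scale, taking logs yields additive, homoscedastic errors:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 1000
X = rng.uniform(1.0, 100.0, n)
# multiplicative lognormal error: becomes additive and homoscedastic after taking logs
y = 4.0 * X**0.5 * np.exp(rng.normal(0.0, 0.3, n))

D = np.column_stack([np.ones(n), np.log(X)])
b0, b1 = np.linalg.lstsq(D, np.log(y), rcond=None)[0]
```

Here b₁ estimates the elasticity 0.5 and b₀ estimates log 4, the log of the scale constant.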
Among the fitted models, the best is the one whose heteroscedasticity test gives the highest p-value, i.e., the least evidence of remaining heteroscedasticity: Remedial Measure 1 is the best, with p-value = 0.6251.
(Figure: residual plots contrasting heteroscedasticity with homoscedasticity.)
Some "problems" associated with these transformation methods
• In multiple regression models, it may not be clear which of the X variables should be chosen for transforming the data.
• The log transformation is not applicable if some of the Y or X values are zero or negative.
• The ratios of variables may turn out to be correlated even though the original variables are uncorrelated or random. For instance, in the model Yᵢ = β₁ + β₂Xᵢ + uᵢ, Y and X may be uncorrelated, but in the transformed model Yᵢ/Xᵢ = β₁(1/Xᵢ) + β₂ + uᵢ/Xᵢ, the variables Yᵢ/Xᵢ and 1/Xᵢ are often found to be correlated. Hence there is a problem of spurious correlation.
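The spurious-correlation point is easy to demonstrate: generate independent positive Y and X, then correlate the ratio variables that the transformation produces:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 5000
X = rng.uniform(1.0, 10.0, n)
Y = rng.uniform(1.0, 10.0, n)     # Y and X independent by construction

r_raw = np.corrcoef(Y, X)[0, 1]              # essentially zero
r_ratio = np.corrcoef(Y / X, 1.0 / X)[0, 1]  # clearly positive
```

Both ratios share the common factor 1/X, which is what induces the correlation even though Y and X are independent.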
Summary
All of the remedial measures discussed above are just ways to speculate about the nature of the population error variance σᵢ²; which method to use depends on the nature of the problem and the severity of heteroscedasticity.
Thank you
DEVENDRA PATIL
EMAIL:
devendrapatil1631@gmail.com
LinkedIn:
linkedin.com/in/devendrapatil161299
