
- 1. Factors influencing the Human Development Index (HDI) using multiple linear regression
  Aditya Panuganti (1202062944), Industrial Engineering
  Year of data: 2008. Source: UN Development Programme database.
- 2. Objective and dataset description
  The objective is to find which of the candidate variables have an effect on the Human Development Index (HDI).
- 3. Fitting the full model without interaction terms
  The regression equation for the full model is
  y = 0.0596 + 0.00440 LIF + 0.000007 GDP - 0.000748 GRO + 0.0158 SCH + 0.0080 GEN + 0.0159 EXP - 0.000004 GNI + 0.000003 MAT - 0.000051 HOM - 0.000540 MOR + 0.000176 LIT - 0.0185 DEP + 0.0023 CON1 - 0.0117 CON2 - 0.0100 CON3 + 0.00431 CON4 - 0.0268 CON5
  The coefficients of this equation are difficult to interpret because the predictors are on very different scales, so the regression coefficients were standardized using unit normal scaling.
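Unit normal scaling, as used here, centres each predictor and divides by its sample standard deviation. A minimal sketch (the data below is synthetic, not the deck's HDI dataset, and the original fit was done in a stats package rather than by hand):

```python
import numpy as np

def unit_normal_scale(X):
    """Unit normal scaling: centre each column and divide by its sample
    standard deviation, so every scaled column has mean 0 and s.d. 1."""
    X = np.asarray(X, dtype=float)
    return (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)

# Illustrative data: two predictors on very different scales, as with
# LIF (tens of years) vs. GDP per capita (thousands of dollars).
rng = np.random.default_rng(0)
X = np.column_stack([rng.uniform(40, 85, 30),        # e.g. life expectancy
                     rng.uniform(500, 60000, 30)])   # e.g. GDP per capita
Z = unit_normal_scale(X)
print(Z.mean(axis=0).round(6), Z.std(axis=0, ddof=1).round(6))
```

After scaling, coefficients are comparable in magnitude, which is why the standardized equation on the next slide is easier to read.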
- 4. Fitting the full model after standardization
  The regression equation is
  y = 0.684 + 0.0404 LIF + 0.100 GDP - 0.0117 GRO + 0.0408 SCH + 0.00136 GEN + 0.0443 EXP - 0.0627 GNI + 0.00089 MAT - 0.00068 HOM - 0.0196 MOR + 0.00259 LIT - 0.0185 DEP + 0.0023 CON1 - 0.0117 CON2 - 0.0100 CON3 + 0.00431 CON4 - 0.0268 CON5
  Model statistics: R-sq = 98.5%, R-sq(adj) = 98.2%
  Analysis of variance (ANOVA):
  Source          DF    SS       MS       F       P
  Regression       17   2.21784  0.13046  325.49  0.000
  Residual Error   84   0.03367  0.00040
  Total           101   2.25150
- 5. Signs of multicollinearity
  Inference from variance inflation factors (VIFs):
  VIF of GDP = 560.116 and VIF of GNI = 533.109 (indicating severe multicollinearity)
  VIF of EXP = 18.368 and VIF of GRO = 16.456 (just over 10, indicating multicollinearity)
  Inference from the correlation matrix:
            LIF     GDP     GRO     SCH     GEN     EXP
  GDP     0.595
  GRO     0.719   0.630
  SCH     0.603   0.553   0.776
  GEN    -0.677  -0.705  -0.758  -0.743
  EXP     0.692   0.636   0.956   0.774  -0.798
  GNI     0.584   0.999   0.618   0.539  -0.688   0.620
  GNI (correlation 0.999 with GDP) was dropped from the model.
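The VIF for predictor j is 1/(1 - R²_j), where R²_j comes from regressing predictor j on all the others; values above about 10 flag multicollinearity, as on this slide. A sketch on synthetic data (not the HDI dataset), where one predictor nearly duplicates another the way GNI duplicates GDP:

```python
import numpy as np

def vif(X):
    """Variance inflation factors: VIF_j = 1 / (1 - R²_j), where R²_j
    is from regressing column j on all other columns (with intercept)."""
    X = np.asarray(X, dtype=float)
    n, p = X.shape
    out = []
    for j in range(p):
        y = X[:, j]
        others = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(others, y, rcond=None)
        resid = y - others @ beta
        r2 = 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))
        out.append(1.0 / (1.0 - r2))
    return np.array(out)

# x2 is a near copy of x1 (like GNI vs. GDP), so both get a huge VIF;
# the independent x3 stays near 1.
rng = np.random.default_rng(1)
x1 = rng.normal(size=100)
x2 = x1 + rng.normal(scale=0.01, size=100)
x3 = rng.normal(size=100)
print(vif(np.column_stack([x1, x2, x3])).round(1))
```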
- 6. R-sq and R-sq(adj) are unchanged after dropping GNI: R-sq = 98.5%, R-sq(adj) = 98.2%.
  To confirm the multicollinearity between EXP and GRO, a further analysis was done using principal component analysis.
  The condition number λmax/λmin = 7.8001/0.0327 = 238.53 is greater than 100, indicating moderate multicollinearity.
  EXP was therefore also dropped from the model; the summary statistics show only a slight reduction in R-sq and R-sq(adj).
  Residual plots and model adequacy: both the normal probability plot and the residuals-vs-fitted plot look good, satisfying the normality and constant-variance conditions.
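The condition number used here is the ratio of the largest to smallest eigenvalue of the correlation matrix of the (scaled) predictors. A sketch on synthetic data, assuming the same >100 rule of thumb as the slide:

```python
import numpy as np

def condition_number(X):
    """Eigenvalue-based collinearity check: λ_max / λ_min of the
    correlation matrix of the predictors; > 100 flags multicollinearity."""
    corr = np.corrcoef(X, rowvar=False)
    eig = np.linalg.eigvalsh(corr)
    return eig.max() / eig.min()

# Near-collinear columns give a tiny λ_min and hence a huge ratio;
# independent columns keep the ratio small.
rng = np.random.default_rng(1)
x1 = rng.normal(size=100)
x2 = x1 + rng.normal(scale=0.01, size=100)   # near copy of x1
x3 = rng.normal(size=100)
near_collinear = np.column_stack([x1, x2, x3])
independent = np.column_stack([x1, x3, rng.normal(size=100)])
print(round(condition_number(near_collinear)), round(condition_number(independent), 2))
```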
- 7. Indicator interactions
  Considered interaction terms of DEP with the other numerical variables: 24 variables in all, including the interaction terms.
  S = 0.0220704, R-sq = 98.3%, R-sq(adj) = 97.8%, R-sq(pred) = 96.80%
  Residual plots: (shown on the slide)
- 8. Outliers and influential points
- 9. Other outliers in the graph
  Refit the model with each of data points 45, 50, and 80 removed, and check whether the summary statistics change.
  These points contribute little leverage and are not influential; they are merely outliers, and R-sq changes little without them, so they are left in the model.
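Leverage (the hat-matrix diagonal) and Cook's distance are the standard diagnostics behind this kind of outlier/influence check. A minimal sketch on a toy line (not the deck's data), with the last point deliberately corrupted so it is both high-leverage and a large residual:

```python
import numpy as np

def leverage_and_cooks(X, y):
    """Hat-matrix diagonal (leverage h_i) and Cook's distance
    D_i = e_i² / (p·MSE) · h_i / (1 - h_i)² for an OLS fit with intercept."""
    n = len(y)
    A = np.column_stack([np.ones(n), X])
    H = A @ np.linalg.inv(A.T @ A) @ A.T   # hat matrix
    h = np.diag(H)
    e = y - H @ y                           # residuals
    p = A.shape[1]
    mse = e @ e / (n - p)
    return h, e**2 / (p * mse) * h / (1 - h) ** 2

# Straight line with the last point corrupted: extreme x (high leverage)
# plus a big residual makes its Cook's distance dominate.
rng = np.random.default_rng(4)
x = np.arange(20.0)
y = 2 + 3 * x + rng.normal(scale=0.5, size=20)
y[19] -= 30.0
h, cooks = leverage_and_cooks(x.reshape(-1, 1), y)
print(int(np.argmax(cooks)))
```

Points like 45, 50, and 80 on the slide are the opposite case: large residual but low leverage and low Cook's distance, so removing them barely moves the fit.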
- 10. Residual plots after removing the outliers and influential points
  The normal probability plot looks good, but the residuals-vs-fitted plot shows a double-bow shape.
- 11. To confirm this, a Box-Cox analysis was run, which showed that a transformation of y is needed.
  Box-Cox suggests λ = 2, implying the transformation y → y².
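The Box-Cox procedure picks λ by maximizing a profile log-likelihood over a grid, which is what the stats package's Box-Cox plot does behind the scenes. A sketch on toy lognormal data (not the deck's response, where the same search suggested λ = 2; for lognormal data the true answer is λ = 0, i.e. a log transform):

```python
import numpy as np

def boxcox_loglik(y, lam):
    """Profile log-likelihood (up to a constant) of the Box-Cox
    transform y -> (y^λ - 1)/λ, with log y at λ = 0; y must be positive."""
    n = len(y)
    yt = np.log(y) if lam == 0 else (y ** lam - 1) / lam
    return -n / 2 * np.log(yt.var()) + (lam - 1) * np.log(y).sum()

# Grid search for λ, as a Box-Cox plot does.
rng = np.random.default_rng(2)
y = np.exp(rng.normal(size=500))          # lognormal: true λ is 0
grid = np.arange(-20, 31) / 10.0          # λ from -2.0 to 3.0 in steps of 0.1
best = grid[np.argmax([boxcox_loglik(y, lam) for lam in grid])]
print(best)
```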
- 12. Residual plots after transformation
  Some outliers can be seen in the normal probability plot.
- 13. Outliers and influential points
- 14. Residual plots after removing the outliers and influential points
  No further transformation is needed; Box-Cox suggests λ = 1.
- 15. Variable selection and model building
- 16. Fit the selected model
  Regression equation:
  y² = 0.476 - 0.0164 GEN + 0.0403 GRO + 0.0422 LIF + 0.0557 GDP + 0.0449 SCH - 0.0181 CON2 - 0.0388 MOR + 0.0523 GDP_D + 0.0289 CON5 + 0.0412 MOR_D - 0.0476 HOM_D
  Multicollinearity was detected using principal component analysis: condition number = 134.837 (greater than 100, moderate multicollinearity).
  Linear-dependency relation: 0.107 GRO + 0.337 LIF + 0.798 MOR - 0.467 MOR_D (a near-dependency among these variables).
  The correlation matrix shows that MOR is highly correlated with LIF and with MOR_D.
  Dropping MOR removed the multicollinearity (condition number = 39.046, below 100: no multicollinearity).
- 17. Residual plots after dropping MOR
  An outlier is present: data point 72.
- 18. No further transformation is needed; Box-Cox suggests λ = 1.
  Fitting the model after removing the outlier, the regression equation is
  y² = 0.482 - 0.0221 GEN + 0.0436 GRO + 0.0576 LIF + 0.0528 GDP + 0.0483 SCH - 0.0115 CON2 + 0.0556 GDP_D + 0.0182 CON5 + 0.0169 MOR_D - 0.0538 HOM_D
  R-sq = 99.1%, R-sq(adj) = 99.0%, R-sq(pred) = 98.73%
- 19. Model validation
  118 countries were considered for modelling: 102 as estimation data and 16 as prediction data.
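The validation scheme is a holdout split: fit on the estimation rows, then score R² on the held-out prediction rows. A sketch mirroring the 102/16 split on synthetic data (not the real country dataset):

```python
import numpy as np

def holdout_r2(X, y, n_est):
    """Fit OLS on the first n_est rows (estimation data) and report
    R² on the remaining rows (prediction data)."""
    A = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(A[:n_est], y[:n_est], rcond=None)
    e = y[n_est:] - A[n_est:] @ beta                     # holdout residuals
    ss_tot = ((y[n_est:] - y[n_est:].mean()) ** 2).sum()
    return 1 - (e ** 2).sum() / ss_tot

# 118 rows, 102 for estimation, 16 held out, as in the deck.
rng = np.random.default_rng(3)
X = rng.normal(size=(118, 3))
y = 0.5 + X @ np.array([0.3, -0.2, 0.1]) + rng.normal(scale=0.05, size=118)
print(round(holdout_r2(X, y, 102), 3))
```

A holdout R² close to the fitted R-sq(pred) is the sign that the model generalizes rather than overfits.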
- 20. Conclusion
  The reduced model has a better R-sq than the full model, and most of its variables are significant (low p-values).
  The following variables were found to be significant:
  - Gender inequality index
  - Combined gross enrolment
  - Life expectancy at birth
  - GDP
  - Mean schooling years
  - Countries in continent 2
  - GDP × intensity of deprivation
  - Under-5 mortality rate × intensity of deprivation
  - Homicide rate × intensity of deprivation
- 21. Possible improvements
  - More data points
  - Ridge regression, to mitigate multicollinearity
  - Robust regression, to down-weight outlying data points so they can be retained in the model
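Ridge regression, the first suggested improvement, adds a penalty k to the diagonal of Z'Z and shrinks coefficients, which stabilizes them when predictors are nearly collinear (as GDP and GNI were here). A minimal sketch on synthetic data, assuming the usual standardized-predictor formulation:

```python
import numpy as np

def ridge(X, y, k):
    """Ridge estimate on unit-normal-scaled predictors:
    beta(k) = (Z'Z + k I)^(-1) Z'y_c; larger k means more shrinkage."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
    yc = y - y.mean()
    p = Z.shape[1]
    return np.linalg.solve(Z.T @ Z + k * np.eye(p), Z.T @ yc)

# Near-duplicate predictors (like GDP and GNI): the near-OLS solution
# (k -> 0) has unstable, inflated coefficients; a ridge penalty pulls
# them back toward sensible values.
rng = np.random.default_rng(5)
x1 = rng.normal(size=100)
x2 = x1 + rng.normal(scale=0.01, size=100)
y = 1 + 2 * x1 + rng.normal(scale=0.1, size=100)
X = np.column_stack([x1, x2])
print(ridge(X, y, 1e-8).round(2), ridge(X, y, 5.0).round(2))
```

The coefficient norm of the ridge estimate decreases monotonically in k, which is exactly the stabilizing effect the slide is after; k is usually chosen from a ridge trace or by cross-validation.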
