Fitting and understanding multilevel models (Andrew Gelman)


- 1. Fitting and understanding multilevel (hierarchical) models
  Andrew Gelman
  Department of Statistics and Department of Political Science, Columbia University
  8 December 2004
- 2. Making more use of existing information
  The problem: not enough data to estimate effects with confidence
  The solution: make your studies broader and deeper
    Broader: extend to other countries, other years, other outcomes, ...
    Deeper: inferences for individual states, demographic subgroups, components of outcomes, ...
  The solution: multilevel modeling
    Regression with coefficients grouped into batches
    No such thing as "too many predictors"
- 10. Fitting and understanding multilevel models
  The effectiveness of multilevel models
  Multilevel models in unexpected places
  Multilevel models as a way of life
  Collaborators:
    Iain Pardoe, Dept of Decision Sciences, University of Oregon
    David Park, Dept of Political Science, Washington University
    Joe Bafumi, Dept of Political Science, Columbia University
    Boris Shor, Dept of Political Science, Columbia University
    Noah Kaplan, Dept of Political Science, University of Houston
    Shouhao Zhao, Dept of Statistics, Columbia University
    Zaiying Huang, Circulation, New York Times
- 15. Outline of talk
  The effectiveness of multilevel models
    State-level opinions from national polls (crossed multilevel modeling and poststratification)
  Multilevel models in unexpected places
    Estimating incumbency advantage and its variation
    Before-after studies
  Multilevel models as a way of life
    Building and fitting models
    Displaying and summarizing inferences
- 24. National opinion trends
  [Figure: percentage support for the death penalty by year, 1940–2000.]
- 25. State-level opinion trends
  Goal: estimating time series within each state
  One poll at a time: small-area estimation
    It works! Validated for pre-election polls
  Combining surveys: model for parallel time series
  Multilevel modeling + poststratification
    Poststratification cells: sex × ethnicity × age × education × state
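As a quick check of the cell structure just described: with two sexes, two ethnicity categories, four age groups, and four education groups (category counts assumed here to be consistent with the 64-cells-per-state and 3264-cell totals quoted later in the talk), crossed with 51 state units (50 states plus DC, an assumption since 64 × 51 = 3264), the counts work out as follows:

```python
from itertools import product

# Assumed category labels; only the counts matter. 2 x 2 x 4 x 4 = 64
# demographic cells per state, matching the figures quoted in the talk.
sexes = ["male", "female"]
ethnicities = ["black", "non-black"]
ages = ["18-29", "30-44", "45-64", "65+"]
educations = ["no HS", "HS grad", "some college", "college grad"]
n_states = 51  # 50 states plus DC (assumption: 64 * 51 = 3264)

cells_per_state = len(list(product(sexes, ethnicities, ages, educations)))
total_cells = cells_per_state * n_states
print(cells_per_state, total_cells)  # 64 3264
```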
- 31. Multilevel modeling of opinions
  Logistic regression: Pr(y_i = 1) = logit^-1((Xβ)_i)
  X includes demographic and geographic predictors
    Group-level model for the 16 age × education predictors
    Group-level model for the 50 state predictors
  Bayesian inference, summarized by posterior simulations of β:

    Simulation   θ_1   ...   θ_75
        1         **   ...    **
       ...       ...   ...   ...
      1000        **   ...    **
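The simulation table above can be summarized numerically: each column of the 1000 × 75 draw matrix gives the posterior distribution of one coefficient. A minimal sketch with stand-in random draws (in practice the draws would come from the fitted multilevel model, not random noise):

```python
import random
import statistics

random.seed(0)
n_sims, n_coefs = 1000, 75

# Stand-in for the simulation table: row s, column k holds draw s of
# coefficient theta_k. Real draws come from the fitted model.
draws = [[random.gauss(0.0, 1.0) for _ in range(n_coefs)] for _ in range(n_sims)]

def summarize(k):
    """Posterior mean and central 95% interval for coefficient k."""
    vals = sorted(row[k] for row in draws)
    mean = statistics.fmean(vals)
    lo = vals[int(0.025 * n_sims)]       # 2.5% quantile (simple index rule)
    hi = vals[int(0.975 * n_sims) - 1]   # 97.5% quantile
    return mean, lo, hi

mean0, lo0, hi0 = summarize(0)
```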
- 37. Interlude: why "multilevel" rather than "hierarchical"
  Logistic regression: Pr(y_i = 1) = logit^-1((Xβ)_i)
  X includes demographic and geographic predictors
    Group-level model for the 16 age × education predictors
    Group-level model for the 50 state predictors
  Crossed (nonnested) structure of age, education, state
  Several overlapping "hierarchies"
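The crossed structure can be made concrete: each respondent's linear predictor adds an age effect, an education effect, and a state effect, each drawn from its own group-level distribution, and no batch nests inside another. A sketch (group sizes and scales are illustrative assumptions, not values from the talk):

```python
import math
import random

random.seed(1)
n_age, n_edu, n_state = 4, 4, 50

# Crossed (nonnested) batches of effects: every combination of age,
# education, and state can occur, so no batch sits "inside" another.
age_eff = [random.gauss(0, 0.3) for _ in range(n_age)]
edu_eff = [random.gauss(0, 0.3) for _ in range(n_edu)]
state_eff = [random.gauss(0, 0.4) for _ in range(n_state)]

def inv_logit(x):
    return 1.0 / (1.0 + math.exp(-x))

def prob_yes(intercept, age, edu, state):
    # Pr(y_i = 1) = logit^-1(intercept + a[age] + b[edu] + c[state])
    return inv_logit(intercept + age_eff[age] + edu_eff[edu] + state_eff[state])

p = prob_yes(0.2, 1, 3, 10)
```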
- 44. Poststratification to estimate state opinions
  Implied inference for θ_j = logit^-1(Xβ)_j in each of 3264 cells j
    (e.g., black female, age 18-29, college graduate, Massachusetts)
  Poststratification: within each state s, average over its 64 cells:
    θ_s = (Σ_{j∈s} N_j θ_j) / (Σ_{j∈s} N_j)
  N_j = population in cell j (from the Census)
  The 1000 simulation draws propagate the uncertainty in each θ_j to the state estimates
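Applying the weighted average above once per simulation draw turns cell-level inferences into a state-level estimate that carries the uncertainty forward. A sketch for one state, using hypothetical census counts and stand-in cell estimates (abbreviated to 4 cells rather than 64 for clarity):

```python
import random

random.seed(2)

# Hypothetical census counts N_j for one state's cells, and 1000
# simulated cell-level estimates theta_j (stand-ins for model output).
N = [1200, 800, 500, 300]
n_sims = 1000
theta_draws = [[random.uniform(0.3, 0.7) for _ in N] for _ in range(n_sims)]

def poststratify(theta):
    # theta_s = sum_{j in s} N_j * theta_j / sum_{j in s} N_j
    return sum(n * t for n, t in zip(N, theta)) / sum(N)

# One state-level value per simulation draw; the spread of these
# draws is the propagated uncertainty.
state_draws = [poststratify(theta) for theta in theta_draws]
state_mean = sum(state_draws) / n_sims
```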
- 50. CBS/New York Times pre-election polls from 1988
  Validation study: fit the model on poll data and compare to election results
  Competing estimates:
    No pooling: separate estimate within each state
    Complete pooling: no state predictors
    Hierarchical model and poststratify
  Mean absolute state errors:
    No pooling: 10.4%
    Complete pooling: 5.4%
    Hierarchical model with poststratification: 4.5%
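The three competing estimators can be illustrated on toy grouped data. The partial-pooling line below uses the standard precision-weighted shrinkage formula for the normal model; the data and variance values are assumptions for illustration, not numbers from the talk:

```python
# Toy data: observed values grouped by state (values are made up).
groups = {"A": [0.52, 0.58, 0.55], "B": [0.40], "C": [0.61, 0.63]}
all_ys = [y for ys in groups.values() for y in ys]
grand_mean = sum(all_ys) / len(all_ys)

sigma2 = 0.010  # within-group variance (assumed)
tau2 = 0.005    # between-group variance (assumed)

def estimates(ys):
    """No-pooling, complete-pooling, and partial-pooling estimates."""
    n = len(ys)
    group_mean = sum(ys) / n
    no_pool = group_mean                 # separate estimate per group
    complete_pool = grand_mean           # ignore the grouping entirely
    w = (n / sigma2) / (n / sigma2 + 1.0 / tau2)  # shrinkage weight
    partial_pool = w * group_mean + (1 - w) * grand_mean
    return no_pool, complete_pool, partial_pool

np_b, cp_b, pp_b = estimates(groups["B"])  # small group: pulled toward the mean
```

Note how the partial-pooling estimate always lands between the other two, with small groups shrunk more strongly toward the grand mean.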
- 60. Validation study: comparison of state errors
  [Figure: three scatterplots of actual 1988 election outcome vs. estimated Bush support, one each for no pooling of state effects, complete pooling (no state effects), and the multilevel model.]
- 61. EﬀectivenessUbiquityWay of lifeGeneral frameworkEstimating incumbency advantage and its variationInteractions in before-after studiesMultilevel models alwaysAnything worth doing is worth doing repeatedlyA “method” is any procedure applied more than onceCity planningOutward expansion: ﬁtting a model to other countries, otheryears, other outcomes, . . .Inﬁlling: inferences for individual states, demographicsubgroups, components of data, . . .“Frequentist” statistical theory of repeated inferencesAndrew Gelman Fitting and understanding multilevel models
- 67. Incumbency advantage in U.S. House elections
  - Regression approach (Gelman and King, 1990): for any year, compare districts with and without incumbents running
  - Control for the vote in the previous election and for the incumbent party:
    v_it = β0 + β1 v_{i,t−1} + β2 P_it + ψ I_it + ε_it
  - Other estimates (sophomore surge, etc.) have selection bias
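The lagged regression above is ordinary least squares with four coefficients. A self-contained sketch on simulated districts — the coefficient values, noise level, and district counts are invented for illustration (the real analysis uses actual House returns); incumbency is coded I = +1/0/−1 for Democratic incumbent / open seat / Republican incumbent:

```python
import random

random.seed(2)

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def ols(X, y):
    """Least-squares fit via the normal equations: beta = (X'X)^{-1} X'y."""
    k = len(X[0])
    XtX = [[sum(row[i] * row[j] for row in X) for j in range(k)] for i in range(k)]
    Xty = [sum(row[i] * yi for row, yi in zip(X, y)) for i in range(k)]
    return solve(XtX, Xty)

# Simulate districts with invented true values:
# beta0 = 0.10, beta1 = 0.75, beta2 = 0.05, psi = 0.08.
n = 400
X, y = [], []
for _ in range(n):
    v_prev = random.uniform(0.3, 0.7)   # previous Democratic vote share
    P = random.choice([-1, 1])          # incumbent party
    I = random.choice([-1, 0, 1])       # Rep incumbent / open / Dem incumbent
    v = 0.10 + 0.75 * v_prev + 0.05 * P + 0.08 * I + random.gauss(0, 0.02)
    X.append([1.0, v_prev, P, I])
    y.append(v)

b0, b1, b2, psi = ols(X, y)
```

Here the fitted ψ recovers the simulated incumbency advantage; with this design there is no selection bias, which is exactly what the sophomore-surge comparison lacks.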
- 73. Estimated incumbency advantage from lagged regressions. [Figure: estimate vs. year, 1900–2000; vertical scale 0.0–0.15.]
- 74. Can we do better?
  - Regression estimate: v_it = β0 + β1 v_{i,t−1} + β2 P_it + ψ I_it + ε_it
  - "Political science" problem: ψ is assumed to be the same in all districts
  - "Statistics" problem: the model doesn't fit the data
  - We'll show pictures of the model not fitting, then set up a model allowing the incumbency advantage to vary
- 79. Model misfit. Under the model, parallel lines are fitted to the circles (open seats) and dots (incumbents running for reelection). [Figures: Democratic vote in 1988 vs. Democratic vote in 1986; coefficients for lagged vote by year, 1900–2000, shown separately for incumbents running and for open seats.]
- 80. Multilevel model
  - For t = 1, 2: v_it = 0.5 + δ_t + α_i + φ_it I_it + ε_it
  - δ2 − δ1 is the national vote swing
  - α_i is the "normal vote" for district i: mean 0, sd σ_α
  - φ_it is the incumbency advantage in district i at time t: mean ψ, sd σ_φ
  - ε_it are independent errors: mean 0, sd σ_ε
  - Candidate-level incumbency effects: if the same incumbent is running in years 1 and 2, then φ_i2 ≡ φ_i1; otherwise φ_i1 and φ_i2 are independent
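The model on this slide can be simulated directly. A minimal sketch (all variance components and the swing are invented values, and only the open-seat-to-new-incumbent transition is shown, so the same-incumbent constraint φ_i2 ≡ φ_i1 never binds): differencing v_i2 − v_i1 removes the district effect α_i, and comparing districts that gain an incumbent with districts that stay open recovers the average incumbency advantage ψ.

```python
import random
import statistics

random.seed(3)

# Invented parameter values; psi is the average incumbency advantage.
psi, sd_alpha, sd_phi, sd_eps = 0.08, 0.07, 0.03, 0.02
delta = [0.0, 0.02]   # national swing: delta2 - delta1 = 0.02

def district(I1, I2):
    """Simulate (v_i1, v_i2) for one district under the slide's model."""
    alpha = random.gauss(0, sd_alpha)          # district "normal vote"
    v = []
    for t, I in enumerate((I1, I2)):
        phi = random.gauss(psi, sd_phi)        # district-year inc. advantage
        v.append(0.5 + delta[t] + alpha + phi * I + random.gauss(0, sd_eps))
    return v

n = 2000
open_both = [district(0, 0) for _ in range(n)]   # open seat in both years
new_inc = [district(0, 1) for _ in range(n)]     # open, then incumbent runs

# alpha_i cancels in v2 - v1; the extra shift for new-incumbent districts
# estimates psi.
swing_open = statistics.mean(v2 - v1 for v1, v2 in open_both)
swing_inc = statistics.mean(v2 - v1 for v1, v2 in new_inc)
psi_hat = swing_inc - swing_open
```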
- 88. Fitting the multilevel model
  - Bayesian inference
  - Linear parameters: national vote swings, district effects, incumbency effects
  - Three variance parameters: district effects, incumbency effects, residual errors
  - Need to model a selection effect: information provided by the incumbent party at time 1; solve analytically for Pr(inclusion) and include that factor in the likelihood
  - Gibbs–Metropolis sampling, programmed in Splus
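To show the Gibbs mechanics in miniature, here is a toy pure-Python sampler for a much-simplified hierarchical model — known variances, no selection effect, flat prior on the population mean — where both conditional updates are conjugate normals. This is only a sketch of the idea; the actual fit samples the three variance parameters and the selection term as well:

```python
import random
import statistics

random.seed(7)

# Toy data: J group means y_j, each measured with known noise sd sigma_y,
# drawn from a population N(mu, tau^2). All numbers are invented.
J, mu_true, tau, sigma_y = 30, 0.10, 0.05, 0.05
theta_true = [random.gauss(mu_true, tau) for _ in range(J)]
y = [random.gauss(t, sigma_y) for t in theta_true]

# Gibbs sampler alternating between theta_1..theta_J and mu.
prec = 1 / sigma_y**2 + 1 / tau**2
mu = statistics.mean(y)                  # initialize at the raw mean
mu_draws = []
for it in range(2000):
    # theta_j | mu, y_j ~ N(precision-weighted mean, 1/prec)
    theta = [
        random.gauss((yj / sigma_y**2 + mu / tau**2) / prec, (1 / prec) ** 0.5)
        for yj in y
    ]
    # mu | theta ~ N(mean(theta), tau^2 / J)   (flat prior on mu)
    mu = random.gauss(statistics.mean(theta), tau / J**0.5)
    if it >= 500:                        # discard warmup iterations
        mu_draws.append(mu)

mu_hat = statistics.mean(mu_draws)       # posterior mean estimate of mu
```

A Metropolis step would replace a conditional draw wherever the full conditional is not a standard distribution, which is what happens once the variance and selection parameters enter.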
- 94. Estimated incumbency advantage and its variation. [Figure: four panels by year, 1900–2000 — average incumbency advantage, SD of district effects, SD of incumbency advantage, and residual SD of election results.]
- 95. Compare old and new estimates. [Figure: estimated incumbency advantage from the lagged regressions alongside the average incumbency advantage from the multilevel model, by year, 1900–2000.]
- 96. No-interaction model
  - Before-after data with treatment and control groups
  - Default model: constant treatment effects
  - Fisher's classical null hypothesis: the effect is zero for all cases
  - Regression model: y_i = T_i θ + X_i β + ε_i
  - [Figure: "after" measurement y vs. "before" measurement x, with parallel lines for control and treatment groups.]
- 100. Actual data show interactions
  - Treatment interacts with the "before" measurement: the before-after correlation is higher for controls than for treated units
  - Examples: an observational study of legislative redistricting; an experiment with pre-test, post-test data; congressional elections with incumbents and open seats
- 106. Observational study of legislative redistricting: before-after data. [Figure: estimated partisan bias (adjusted for state) vs. estimated partisan bias in the previous election, with points marked as no redistricting, bipartisan redistricting, Democratic redistricting, or Republican redistricting; positive values favor Democrats, negative values favor Republicans.]
- 107. Experiment: correlation between pre-test and post-test data for controls and for treated units. [Figure: correlation vs. grade, 1–4, plotted separately for controls and treated units; correlations fall roughly between 0.8 and 1.0.]
- 108. Correlation between two successive Congressional elections, for incumbents running (controls) and open seats (treated). [Figure: correlation vs. year, 1900–2000, plotted separately for incumbents and open seats.]
- 109. Interactions as variance components
  - Unit-level "error term" η_i
  - For control units, η_i persists from time 1 to time 2
  - For treatment units, η_i changes:
    - Subtractive treatment error (η_i only at time 1)
    - Additive treatment error (η_i only at time 2)
    - Replacement treatment error
  - Under all these models, the before-after correlation is higher for controls than for treated units
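The claim on this slide is easy to check by simulation. A minimal sketch of the "replacement" case (variance values invented): for controls the unit-level term η persists across both time points, while for treated units it is drawn anew at time 2, which pulls the before-after correlation down.

```python
import random
import statistics

random.seed(4)

def corr(xs, ys):
    """Pearson correlation of two equal-length lists."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    sx, sy = statistics.pstdev(xs), statistics.pstdev(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) * sx * sy)

n = 2000
sigma_eta, sigma_eps = 1.0, 0.5   # invented variance components

# Controls: eta_i persists from time 1 to time 2.
ctrl1, ctrl2 = [], []
for _ in range(n):
    eta = random.gauss(0, sigma_eta)
    ctrl1.append(eta + random.gauss(0, sigma_eps))
    ctrl2.append(eta + random.gauss(0, sigma_eps))

# Treated, replacement error: eta_i is redrawn at time 2.
trt1, trt2 = [], []
for _ in range(n):
    trt1.append(random.gauss(0, sigma_eta) + random.gauss(0, sigma_eps))
    trt2.append(random.gauss(0, sigma_eta) + random.gauss(0, sigma_eps))

r_control = corr(ctrl1, ctrl2)   # near sigma_eta^2/(sigma_eta^2+sigma_eps^2)
r_treated = corr(trt1, trt2)     # near zero
```

The subtractive and additive variants drop η from one time point instead of redrawing it; they attenuate the treated-unit correlation the same way, just less severely.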
- 116. Some new tools: building and fitting multilevel models; displaying and summarizing inferences
- 119. Building and fitting multilevel models
  - A reparameterization can change a model (even if it leaves the likelihood unchanged)
  - Redundant additive parameterization
  - Redundant multiplicative parameterization
  - Weakly informative prior distributions for group-level variance parameters
- 124. Redundant parameterization
  - Data model: Pr(y_i = 1) = logit^{−1}(β0 + βage_{age(i)} + βstate_{state(i)})
  - Usual model for the coefficients: βage_j ~ N(0, σ²_age) for j = 1, ..., 4; βstate_j ~ N(0, σ²_state) for j = 1, ..., 50
  - Additively redundant model: βage_j ~ N(μ_age, σ²_age) for j = 1, ..., 4; βstate_j ~ N(μ_state, σ²_state) for j = 1, ..., 50
  - Why add the redundant μ_age, μ_state? The iterative algorithm moves more smoothly
- 130. Redundant additive parameterization. Model: Pr(y_i = 1) = logit^(-1)(β0 + βage_{age(i)} + βstate_{state(i)}), with βage_j ~ N(µage, σ²age) for j = 1, ..., 4 and βstate_j ~ N(µstate, σ²state) for j = 1, ..., 50. Identify using centered parameters: β̃age_j = βage_j − mean(βage) for j = 1, ..., 4; β̃state_j = βstate_j − mean(βstate) for j = 1, ..., 50. Redefine the constant term: β̃0 = β0 + mean(βage) + mean(βstate).
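The centering transformation on this slide is easy to check numerically. Here is a minimal NumPy sketch, using simulated coefficient draws rather than the talk's survey data, verifying that centering the batches and absorbing their means into the constant term leaves the linear predictor unchanged:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated draws from a redundantly parameterized fit:
# 4 age effects and 50 state effects, each batch with a free mean.
beta0 = 0.4
beta_age = rng.normal(0.3, 0.2, size=4)      # mean mu_age absorbed in draws
beta_state = rng.normal(-0.1, 0.4, size=50)  # mean mu_state absorbed in draws

# Identify by centering each batch and moving the batch means
# into the constant term.
beta_age_c = beta_age - beta_age.mean()
beta_state_c = beta_state - beta_state.mean()
beta0_c = beta0 + beta_age.mean() + beta_state.mean()

# The linear predictor for any (age, state) cell is unchanged.
age, state = 2, 17
lp_raw = beta0 + beta_age[age] + beta_state[state]
lp_centered = beta0_c + beta_age_c[age] + beta_state_c[state]
assert np.isclose(lp_raw, lp_centered)
```

This is why the redundant parameterization is harmless for inference: the identified quantities can always be recovered deterministically from the redundant ones.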
- 133. Redundant multiplicative parameterization. New model: Pr(y_i = 1) = logit^(-1)(β0 + ξage·βage_{age(i)} + ξstate·βstate_{state(i)}), with βage_j ~ N(µage, σ²age) for j = 1, ..., 4 and βstate_j ~ N(µstate, σ²state) for j = 1, ..., 50. Identify using centered and scaled parameters: β̃age_j = ξage·(βage_j − mean(βage)); β̃state_j = ξstate·(βstate_j − mean(βstate)). Faster convergence; a more general model, with connections to factor analysis.
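The centered-and-scaled transformation for the multiplicative version can be checked the same way; a small NumPy sketch with hypothetical, simulated values (not fitted coefficients):

```python
import numpy as np

rng = np.random.default_rng(1)

beta0 = 0.4
xi_age, xi_state = 0.7, 1.6            # redundant multiplicative factors
beta_age = rng.normal(0.3, 0.2, size=4)
beta_state = rng.normal(-0.1, 0.4, size=50)

# Centered-and-scaled (identified) parameters: scale by xi and
# center each batch, absorbing the scaled means into the intercept.
b_age = xi_age * (beta_age - beta_age.mean())
b_state = xi_state * (beta_state - beta_state.mean())
b0 = beta0 + xi_age * beta_age.mean() + xi_state * beta_state.mean()

# The linear predictor is the same in either parameterization.
age, state = 1, 30
lp_raw = beta0 + xi_age * beta_age[age] + xi_state * beta_state[state]
lp_id = b0 + b_age[age] + b_state[state]
assert np.isclose(lp_raw, lp_id)
```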
- 137. Weakly informative prior distribution for the multilevel variance parameter. Redundant multiplicative parameterization: Pr(y_i = 1) = logit^(-1)(β0 + ξage·βage_{age(i)} + ξstate·βstate_{state(i)}), with βage_j ~ N(µage, σ²age) for j = 1, ..., 4 and βstate_j ~ N(µstate, σ²state) for j = 1, ..., 50. Separate prior distributions on the ξ and σ parameters: normal on ξ, inverse-gamma on σ². This generalizes, and fixes problems with, the standard choices of prior distributions.
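A normal prior on ξ combined with an inverse-gamma prior on σ² induces a folded-t prior on the implied scale |ξ|·σ of the batch of effects. A quick simulation sketch, with unit-scale hyperparameters chosen purely for illustration, recovers the half-Cauchy special case:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000

# Parameter-expanded prior: standard normal on the multiplier xi,
# inverse-gamma on sigma^2 -- here InvGamma(1/2, 1/2), i.e. an
# inverse-chi^2 with one degree of freedom.
xi = rng.normal(0.0, 1.0, size=n)
sigma2 = 1.0 / rng.gamma(shape=0.5, scale=2.0, size=n)  # InvGamma(0.5, 0.5)

# The implied scale |xi| * sigma is then |N(0,1)| / sqrt(chi^2_1),
# i.e. the absolute value of a t_1 variable: a half-Cauchy.
implied_scale = np.abs(xi) * np.sqrt(sigma2)

# The median of a standard half-Cauchy is 1.
print(np.median(implied_scale))
```

With other degrees of freedom for the inverse-gamma, the same construction gives the wider half-t family of weakly informative priors for group-level scales.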
- 142. Displaying and summarizing inferences: display parameters in groups rather than as a long list; average predictive effects; R² and partial pooling factors; analysis of variance.
- 147. Raw display of inference:

                    mean     sd    2.5%     25%     50%     75%   97.5%   Rhat  n.eff
  B.0              0.402  0.147   0.044   0.326   0.413   0.499   0.652  1.024    110
  b.female        -0.094  0.102  -0.283  -0.162  -0.095  -0.034   0.107  1.001   1000
  b.black         -1.701  0.305  -2.323  -1.910  -1.691  -1.486  -1.152  1.014    500
  b.female.black  -0.143  0.393  -0.834  -0.383  -0.155   0.104   0.620  1.007   1000

  [The slide continues in the same format with rows for B.age[1]-B.age[4], B.edu[1]-B.edu[4], all 16 B.age.edu cells, and B.state[1]-B.state[2]: a long, hard-to-read list of posterior summaries.]
- 148. Raw graphical display: [default Bugs summary plot showing the 80% interval for each chain and R-hat, plus medians and 80% intervals, for B.0, b.female, b.black, b.female.black, B.age, B.edu, B.age.edu, B.state, B.region, Sigma.age, Sigma.edu, and Sigma.age.edu. Caption: Bugs model at "C:/books/multilevel/election88/model4.bug", 3 chains, each with 2001 iterations.]
- 149. Better graphical display 1: demographics. [Dot plot of coefficient estimates with intervals for female, black, female × black, the four age groups (18-29, 30-44, 45-64, 65+), the four education groups (no h.s., high school, some college, college grad), and the 16 age × education interactions.]
- 150. Better graphical display 2: within states. [Panels plotting Pr(support Bush) against the linear predictor for Alaska, Arizona, Arkansas, California, Colorado, Connecticut, Delaware, and the District of Columbia.]
- 151. Better graphical display 3: between states. [Four panels (Northeast, Midwest, South, West) plotting each state's regression intercept against the Republican vote share in previous elections, with states labeled by postal abbreviation.]
- 152. Average predictive effects. What is E(y | x1 = high) − E(y | x1 = low), with all other x's held constant? In general the difference can depend on x, so average over the distribution of x in the data; you can't just use a central value of x. Compute the APE for each input variable x. Multilevel factors are categorical input variables. [Diagram of inputs u, v and outcome y.]
- 159. APE: why you can't just use a central value of x. [Two panels plotting E(y | u, v) against v for u = 0 and u = 1, with the observed v's marked along the axis. Left panel: avg pred effect = 0.03, pred effect at E(v) = 0.24. Right panel: avg pred effect = 0.11, pred effect at E(v) = 0.03.]
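The computation behind these plots can be sketched in a few lines. Here is a hypothetical logistic example (the coefficients a, b, c and the distribution of v are made up for illustration) contrasting the average predictive effect of a binary input u with the effect evaluated only at the mean of v:

```python
import numpy as np

rng = np.random.default_rng(2)

def invlogit(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical model: E(y | u, v) = invlogit(a + b*u + c*v),
# with u the binary input of interest and v another input.
a, b, c = -1.0, 0.8, 1.5
v = rng.normal(0.0, 2.0, size=1000)  # observed values of the other input

# Average predictive effect of switching u from 0 to 1:
# average the per-observation difference over the data's v's.
ape = np.mean(invlogit(a + b + c * v) - invlogit(a + c * v))

# Contrast: the predictive effect evaluated only at the mean of v.
pe_at_mean = invlogit(a + b + c * v.mean()) - invlogit(a + c * v.mean())

print(round(ape, 3), round(pe_at_mean, 3))
```

Because the logistic curve is flat far from its center, observations with extreme v contribute almost nothing to the effect, so the APE here comes out well below the effect computed at a central value of v, which is the point of the slide.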
- 160. Understanding sources of variation: a generalization of R² (explained variance), defined at each level of the model; a partial pooling factor, defined at each level; analysis of variance, to summarize the scale of each batch of predictors and to go beyond the classical null-hypothesis-testing framework. Open question: how to construct models with deep interaction structures?
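One standard way to define the per-group partial pooling factor is via the classical shrinkage weights for a normal hierarchical model; a sketch with made-up group means and sample sizes (the function name and all numbers are illustrative, not from the talk):

```python
import numpy as np

def partial_pool(ybar, n, sigma_y, sigma_alpha, mu):
    """Shrink per-group means ybar (sample sizes n) toward mu.

    The pooling factor for group j is
        lambda_j = (sigma_y^2 / n_j) / (sigma_alpha^2 + sigma_y^2 / n_j),
    so groups with little data are pooled harder toward mu.
    """
    lam = (sigma_y**2 / n) / (sigma_alpha**2 + sigma_y**2 / n)
    return lam * mu + (1 - lam) * ybar, lam

# Three groups: one large, one medium, one tiny.
ybar = np.array([2.0, 0.5, -1.0])
n = np.array([100, 10, 2])
est, lam = partial_pool(ybar, n, sigma_y=1.0, sigma_alpha=0.5, mu=0.0)

# The smallest group gets the largest pooling factor.
assert lam[2] > lam[1] > lam[0]
print(np.round(lam, 3), np.round(est, 3))
```

Summarizing these lambda's per batch gives the "partial pooling factor defined at each level" mentioned on the slide: lambda near 0 means the batch is essentially unpooled, lambda near 1 means it is pooled almost completely.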
- 167. Conclusions: Multilevel modeling is not just for grouped data. New ideas are needed to fit, understand, display, and summarize each level of the model. It provides a general framework for modeling treatment effects that vary. It's not just about "data fitting" or "getting the right standard errors."
