Ragui Assaad - University of Minnesota
Caroline Krafft - St. Catherine University
ERF Training on Applied Micro-Econometrics and Public Policy Evaluation
Cairo, Egypt July 25-27, 2016
www.erf.org.eg
Impact evaluation in-depth: More on methods and example of impact evaluation ...
Presented by Colas Chervier (CIRAD) at "Workshop on impact evaluation methods and research collaboration kick-off", Samarinda, Indonesia, on 10 October 2022
- Propensity score matching (PSM) and weighting methods can be used to estimate treatment effects when selection into a treatment is based on observable characteristics.
- PSM involves matching treated units to untreated units with a similar propensity score, the predicted probability of receiving treatment based on observables. Weighting assigns each unit a weight inversely proportional to the probability of receiving the treatment it actually received.
- Both methods rely on the assumption that conditioning on observables eliminates selection bias, but there may still be bias from unobservables. Sensitivity analysis is used to check the robustness of results. A minimal code sketch of both approaches follows below.
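To make the mechanics concrete, here is a minimal sketch in Python of both estimators, assuming a simulated dataset with covariates x1 and x2, a binary treatment, and an outcome y (all names and effect sizes are hypothetical); a real analysis would add balance diagnostics and the sensitivity checks mentioned above.

```python
# Propensity score matching and inverse probability weighting on toy data.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({"x1": rng.normal(size=n), "x2": rng.normal(size=n)})
# Selection on observables: treatment probability depends only on x1, x2.
p = 1 / (1 + np.exp(-(0.8 * df["x1"] - 0.5 * df["x2"])))
df["treated"] = rng.binomial(1, p)
df["y"] = 2.0 * df["treated"] + df["x1"] + 0.5 * df["x2"] + rng.normal(size=n)

# 1) Estimate propensity scores from the observables.
ps_model = LogisticRegression().fit(df[["x1", "x2"]], df["treated"])
df["ps"] = ps_model.predict_proba(df[["x1", "x2"]])[:, 1]

# 2) Nearest-neighbor matching on the propensity score (with replacement).
treated = df[df["treated"] == 1]
control = df[df["treated"] == 0]
nearest = np.abs(control["ps"].values[None, :]
                 - treated["ps"].values[:, None]).argmin(axis=1)
att = (treated["y"].values - control["y"].values[nearest]).mean()
print(f"Matching ATT estimate: {att:.2f} (true effect is 2.0)")

# 3) Weighting alternative: inverse probability of the received treatment.
mask = df["treated"].values == 1
w = np.where(mask, 1 / df["ps"], 1 / (1 - df["ps"]))
ate = (np.average(df["y"].values[mask], weights=w[mask])
       - np.average(df["y"].values[~mask], weights=w[~mask]))
print(f"IPW ATE estimate: {ate:.2f}")
```

Matching with replacement keeps the closest control for every treated unit at the cost of reusing some controls; trimming extreme propensity scores is a common safeguard for the weighting estimator.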
Ragui Assaad - University of Minnesota
Caroline Krafft - St. Catherine University
ERF Training on Applied Micro-Econometrics and Public Policy Evaluation
Cairo, Egypt July 25-27, 2016
www.erf.org.eg
The document summarizes the regression discontinuity method used to evaluate the impact of Morocco's National Human Development Initiative (INDH) poverty reduction program. Key points:
- INDH targeted communities with poverty rates over 30% for additional funding. This threshold was used to compare outcomes just above and below the cutoff in a regression discontinuity design (a generic sketch of the approach follows this list).
- Panel survey data from 2008, 2011, and 2013 were used to analyze economic outcomes like income, consumption, and assets at the household level around the threshold.
- Regression models found INDH caused a 12.5% increase in consumption in 2008 and 20.7% in 2011, but no significant effects on income or assets.
- The analysis is
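The INDH analysis itself is not reproduced here, but the sketch below shows the generic sharp RD recipe the bullets describe: a running variable with a cutoff at a 30% poverty rate, and a local linear regression with separate slopes on each side of the cutoff (data, bandwidth, and effect size are all hypothetical).

```python
# Generic sharp regression discontinuity sketch (not the actual INDH data).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 2000
poverty_rate = rng.uniform(0.10, 0.50, n)        # running variable
treated = (poverty_rate > 0.30).astype(float)    # funding cutoff at 30%
# Outcome is smooth in the running variable plus a jump at the cutoff.
y = 1.0 + 2.0 * poverty_rate + 0.15 * treated + rng.normal(0, 0.1, n)

# Local linear regression within a bandwidth h around the cutoff,
# allowing a different slope on each side.
h = 0.05
in_bw = np.abs(poverty_rate - 0.30) < h
r = poverty_rate[in_bw] - 0.30
X = sm.add_constant(np.column_stack([treated[in_bw], r, r * treated[in_bw]]))
fit = sm.OLS(y[in_bw], X).fit()
print(f"Estimated jump at the cutoff: {fit.params[1]:.3f} (true jump is 0.15)")
```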
Canonical correlation analysis (CCA) was used to detect potential bias in faculty promotion scoring at the American University of Nigeria (AUN). Three committees independently scored candidates on teaching, research, and service. CCA discriminated between promotable and non-promotable candidates at the 90% confidence level, and it found no significant differences in scoring between committees and no evidence that any individual assessor's scores disproportionately influenced outcomes. The results suggest CCA is an effective tool for AUN to analyze scoring and ensure fairness in its promotion process.
This slide is about Analysis of Covariance. Analysis of covariance provides a way of statistically controlling the (linear) effect of variables one does not want to examine in a study.
ANCOVA is the statistical technique that combines regression and ANOVA.
Residuals represent variation in the data that cannot be explained by the model.
Residual plots are useful for discovering patterns, outliers, or misspecifications of the model. Systematic patterns discovered may suggest how to reformulate the model.
If the residuals exhibit no pattern, then this is a good indication that the model is appropriate for the particular data.
This document provides an overview of multinomial logistic regression. It discusses how multinomial logistic regression is used when the dependent variable has more than two nominal categories. An example is presented where voting behavior is predicted based on age, gender, economic beliefs, and religious beliefs, with the dependent variable having four categories for different candidates. The document walks through setting up and interpreting the results of a multinomial logistic regression analysis in SPSS for this example. Key results shown include the regression coefficients, odds ratios, goodness of fit statistics, and classification accuracy for each category of the dependent variable.
This document provides information about the second edition of the book "Propensity Score Analysis: Statistical Methods and Applications" by Shenyang Guo and Mark W. Fraser. It is part of the Advanced Quantitative Techniques in the Social Sciences book series. The book covers statistical methods for estimating causal treatment effects from observational data using propensity score analysis techniques, including propensity score matching, weighting, and subclassification. It discusses assumptions, models, software for implementation, and examples.
Survival analysis is a branch of statistics used to analyze time-to-event data, such as time until death or failure. It estimates the probability that an individual survives past a given time and compares survival times between groups. Objectives include estimating survival probabilities, comparing survival between groups, and assessing how covariates relate to survival time. Survival data can be complete or censored. The Kaplan-Meier estimator is used to estimate survival when there is censoring. The log-rank test compares survival curves between treatment groups, and Cox regression incorporates covariates to predict survival probabilities.
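As a concrete illustration of the product-limit idea, here is a hand-rolled Kaplan-Meier estimator on a toy right-censored sample (the times and event flags below are made up); in practice a library such as lifelines or R's survival package would be used.

```python
# Minimal Kaplan-Meier estimator on toy right-censored data.
import numpy as np

# Hypothetical follow-up times (months); event = 1, censored = 0.
times = np.array([2, 3, 3, 5, 6, 7, 8, 8, 9, 12])
events = np.array([1, 1, 0, 1, 0, 1, 1, 0, 1, 0])

surv = 1.0
for t in np.unique(times[events == 1]):       # distinct event times
    at_risk = np.sum(times >= t)              # still under observation at t
    d = np.sum((times == t) & (events == 1))  # events occurring at t
    surv *= 1 - d / at_risk                   # product-limit update
    print(f"t = {t:>2}: S(t) = {surv:.3f}")
```

Censored subjects drop out of the risk set after their censoring time but never trigger a step down in the curve, which is how the estimator handles incomplete observations.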
This document provides an overview of analysis of variance (ANOVA) techniques. It discusses one-way ANOVA, which evaluates differences between three or more population means. Key aspects covered include partitioning total variation into between- and within-group components, assumptions of normality and equal variances, and using the F-test to test for differences. Randomized block ANOVA and two-factor ANOVA are also introduced as extensions to control for additional variables. Post-hoc tests like Tukey and Fisher's LSD are described for determining specific mean differences.
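A minimal one-way ANOVA sketch, assuming three hypothetical groups drawn with equal variances (as the test assumes):

```python
# One-way ANOVA: does at least one of three group means differ?
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(2)
g1 = rng.normal(10.0, 2.0, 30)
g2 = rng.normal(10.5, 2.0, 30)
g3 = rng.normal(12.0, 2.0, 30)

f_stat, p_value = f_oneway(g1, g2, g3)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
# A small p says some means differ; a post-hoc test (e.g. Tukey) is then
# needed to locate which specific pairs differ.
```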
The "Instrumental Variables" webinar, presented by Peter Lance, was the fifth and final webinar in a series of discussions on the popular MEASURE Evaluation manual, How Do We Know If a Program Made a Difference? A Guide to Statistical Methods for Program Impact Evaluation.
This document contains 26 slides presented by Dr. Rizwan S A on cohort studies. It defines cohort studies as prospective longitudinal studies that follow healthy populations over time to determine the causes of diseases. Key aspects covered include classifying cohort studies as prospective, retrospective or combined; describing the elements of cohort studies such as selecting and following subjects, measuring exposure and outcomes, and analyzing results using measures like relative risk, risk difference and attributable risk. Examples of famous cohort studies on smoking, heart disease and oral contraceptives are also provided.
This document discusses effect modification and how it differs from confounding. It defines effect modification as a change in the magnitude of the effect of an exposure on an outcome according to levels of a third variable. Effect modification provides a more detailed description of the relationship between exposure and outcome, whereas confounding is a bias to be eliminated. The document contrasts effect modification and confounding, and provides examples to illustrate the concepts. It also discusses testing for effect modification using tests of homogeneity and how the interpretation of effect modification depends on the choice of effect measure.
This document discusses multiple linear regression analysis conducted to assess staff satisfaction levels at an educational institution. A questionnaire was administered to staff across multiple locations. Factor analysis was used to identify the variables that best predict overall satisfaction. A regression model was developed using satisfaction as the dependent variable and questions regarding workplace expectations, resources, communication, recognition, development opportunities, and opinions as independent variables. The model was analyzed in SPSS and showed high explanatory power, with no issues of multicollinearity between predictors.
1. Multinomial logistic regression allows modeling of nominal outcome variables with more than two categories by calculating multiple logistic regression equations to compare each category's probability to a reference category.
2. The document provides an example of using multinomial logistic regression to model student program choice (academic, general, vocational) based on writing score and socioeconomic status.
3. The model results show that writing score significantly impacts the choice between academic and general/vocational programs, while socioeconomic status also influences general versus academic program choice (a toy sketch of this model setup follows below).
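A toy sketch of this model setup, using statsmodels' MNLogit with stand-in variables write and ses (the outcome here is random, so the coefficients are meaningless; the point is the mechanics of one equation per non-reference category):

```python
# Multinomial logit sketch mirroring the program-choice example (toy data).
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 300
df = pd.DataFrame({
    "write": rng.normal(52, 9, n),    # stand-in writing score
    "ses": rng.integers(1, 4, n),     # stand-in socioeconomic status (1-3)
})
# Toy outcome: 0 = academic (reference), 1 = general, 2 = vocational.
df["prog"] = rng.integers(0, 3, n)

X = sm.add_constant(df[["write", "ses"]])
fit = sm.MNLogit(df["prog"], X).fit(disp=False)
print(fit.summary())       # one coefficient block per non-reference category
print(np.exp(fit.params))  # exponentiated coefficients: relative-risk ratios
```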
The document provides an overview of design of experiments (DOE) and factorial experiments. It defines key terms like factors, levels, treatments, responses, and noise. It explains the objectives of conducting experiments and the different types of experiments. It provides examples of 2-factor and 3-factor factorial experiments and how to analyze them. It discusses the principles of replication, randomization, and blocking. Finally, it demonstrates how to set up and analyze a general full factorial design with factors having more than two levels.
The document provides an introduction to regression analysis and performing regression using SPSS. It discusses key concepts like dependent and independent variables, assumptions of regression like linearity and homoscedasticity. It explains how to calculate regression coefficients using the method of least squares and how to perform regression analysis in SPSS, including selecting variables and interpreting the output.
Probability Distributions for Continuous Variables
The document discusses probability distributions for continuous variables, explaining that continuous variables can take any value within a range and probability distributions depict the relative likelihood of these values being observed, with examples given of uniform and normal distributions and how they are characterized by parameters like mean and standard deviation. It also provides examples of how uniform and normal distributions can model real-world scenarios involving continuous variables like time or test scores.
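A short sketch of the two distributions mentioned, with hypothetical parameters (a wait time uniform on [0, 30] minutes and test scores normal with mean 70 and SD 10):

```python
# Interval probabilities for uniform and normal continuous variables.
from scipy.stats import norm, uniform

wait = uniform(loc=0, scale=30)      # equally likely anywhere in [0, 30]
print("P(10 <= wait <= 20) =", wait.cdf(20) - wait.cdf(10))  # exactly 1/3

score = norm(loc=70, scale=10)       # bell curve centered at 70
print("P(score > 85) =", 1 - score.cdf(85))
print("density at the mean =", score.pdf(70))
```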
This document provides examples of using Eviews to estimate econometric models. It discusses how to:
1. Estimate bivariate regressions to determine optimal hedge ratios between spot and futures prices.
2. Estimate CAPM models by running regressions of stock returns against market returns.
3. Estimate APT-style models by regressing stock returns against macroeconomic variables.
4. Diagnose econometric models by testing for heteroscedasticity, autocorrelation, non-normality, and multicollinearity.
5. Construct ARMA models to forecast time series data and use information criteria to select optimal models.
6. Estimate simultaneous equation models and vector autoregressive (VAR) models.
Here are the key steps and results:
1. Load the data and run a multiple linear regression with x1 as the target and x2, x3 as predictors: R-squared is 0.89.
2. Add x4, x5 as additional predictors: R-squared increases to 0.94.
3. Add x6, x7 as additional predictors: R-squared further increases to 0.98.
So as more predictors are added, the R-squared value increases, indicating more of the variation in x1 is explained by the model. However, adding too many predictors can lead to overfitting.
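The pattern is easy to reproduce. Below is a sketch, assuming y depends on only one of six hypothetical predictors, that also prints adjusted R-squared, which penalizes uninformative additions:

```python
# R-squared never falls as predictors are added, even useless ones.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 60
X_all = rng.normal(size=(n, 6))                    # stand-ins for x2..x7
y = 1.0 + 2.0 * X_all[:, 0] + rng.normal(size=n)   # only the first matters

for k in (2, 4, 6):  # add predictors in pairs, as in the steps above
    fit = sm.OLS(y, sm.add_constant(X_all[:, :k])).fit()
    print(f"{k} predictors: R2 = {fit.rsquared:.3f}, "
          f"adj. R2 = {fit.rsquared_adj:.3f}")
# Plain R2 creeps upward with every added column; adjusted R2 (or AIC/BIC,
# or cross-validation) flags the noise terms that cause overfitting.
```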
This document contains slides from a presentation on simple linear regression and correlation. It introduces simple linear regression modeling, including estimating the regression line using the method of least squares. It discusses the assumptions of the simple linear regression model and defines key terms like the regression coefficients (intercept and slope), error variance, standard errors of the estimates, and how to perform hypothesis tests and construct confidence intervals for the regression parameters. Examples are provided to demonstrate calculating quantities like sums of squares, estimating the regression line, and evaluating the fit of the regression model.
The document provides an overview of chi-square tests, including chi-square tests for goodness of fit and tests of independence. It explains that chi-square tests are used with categorical or classified data rather than numerical data. For a chi-square test of goodness of fit, the null hypothesis specifies the expected proportions in different categories. Observed and expected frequencies are calculated and compared using the chi-square statistic. A chi-square test of independence examines whether two categorical variables are related by comparing observed and expected joint frequencies.
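A minimal sketch of both tests using scipy, with made-up frequencies:

```python
# Chi-square: goodness of fit, then a test of independence.
import numpy as np
from scipy.stats import chisquare, chi2_contingency

# Goodness of fit: do 120 observations match expected proportions 50/30/20?
observed = np.array([70, 31, 19])
expected = np.array([0.5, 0.3, 0.2]) * observed.sum()
chi2, p = chisquare(observed, f_exp=expected)
print(f"goodness of fit: chi2 = {chi2:.2f}, p = {p:.4f}")

# Independence: observed joint frequencies of two categorical variables.
table = np.array([[30, 10],
                  [20, 40]])
chi2, p, dof, expected_joint = chi2_contingency(table)
print(f"independence: chi2 = {chi2:.2f}, p = {p:.4f}, dof = {dof}")
```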
Regression analysis is a statistical technique used to investigate relationships between variables. It allows one to determine the strength of the relationship between a dependent variable (usually denoted by Y) and one or more independent variables (denoted by X). Multiple regression extends this to analyze the relationship between a dependent variable and multiple independent variables. The goals of regression analysis are to understand how the dependent variable changes with the independent variables and to use the independent variables to predict the value of the dependent variable. It requires the dependent variable to be continuous and the independent variables can be either continuous or categorical.
This presentation explores the strengths and weaknesses of ordinary least squares and propensity score matching. Matching alone cannot solve endogeneity problems faced by OLS. The presentation shows how PSM and OLS can be combined to yield less-biased estimators than either method alone.
Cronbach's alpha is a measure of internal consistency, used to determine whether the items in a survey or questionnaire reliably measure the same concept. It ranges from 0 to 1, with higher values indicating greater reliability; an acceptable alpha is between 0.7 and 0.95. Cronbach's alpha measures how well items correlate with each other and with the total test. It is reported along with the mean to indicate the reliability of a scale. The reliability statistics table in SPSS shows the actual alpha value and whether removing any item would increase or decrease it.
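Because the formula is simple, alpha can be computed from scratch; the sketch below uses simulated 5-item data (the shared latent factor is a hypothetical construction to make the items correlate):

```python
# Cronbach's alpha = k/(k-1) * (1 - sum(item variances) / var(total score)).
import numpy as np

rng = np.random.default_rng(5)
latent = rng.normal(size=200)
items = latent[:, None] + rng.normal(0, 0.8, size=(200, 5))  # 5 related items

k = items.shape[1]
item_vars = items.var(axis=0, ddof=1)
total_var = items.sum(axis=1).var(ddof=1)
alpha = k / (k - 1) * (1 - item_vars.sum() / total_var)
print(f"Cronbach's alpha = {alpha:.3f}")  # lands in the 0.7-0.95 band here
```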
This document provides an overview of discrete choice models and conjoint analysis. It discusses:
- The differences between stated preference surveys and revealed preference data in choice modeling.
- How discrete choice models use logit and probit links to transform categorical dependent variables into continuous latent variables for regression analysis.
- Applications of discrete choice models like logistic regression, logit models, ordered models, and multinomial models.
- How conjoint analysis is used to study consumer preferences for product attributes through experimental designs and choice-based surveys. It decomposes overall choices to infer part-worth utilities of individual attributes.
Ragui Assaad - University of Minnesota
Caroline Krafft - St. Catherine University
ERF Training on Applied Micro-Econometrics and Public Policy Evaluation
Cairo, Egypt July 25-27, 2016
www.erf.org.eg
This document discusses causal inference and program evaluation. It notes that evaluating programs requires estimating the counterfactual outcome for participants in the absence of the program, which is difficult. Common problems in evaluation include selection bias if participants differ from non-participants in unobserved ways, spillover effects, and impact heterogeneity. Internal validity assesses if the true impact is measured, while external validity examines generalizability. Estimating average treatment effects requires addressing non-random selection into programs.
Eduard Ponarin - Higher School of Economics, Russia
ERF Training Workshop on Opinion Poll Data Analysis Using Multilevel Models
Beirut, Lebanon August 22-23, 2016
www.erf.org.eg
Eduard Ponarin - Higher School of Economics
Veronica Kostenko - The National Research University
ERF Training Workshop on Opinion Poll Data Analysis Using Multilevel Models
Beirut, Lebanon August 22-23, 2016
www.erf.org.eg
This document provides guidance on how to conduct and publish research in economics. It discusses finding a research topic by exploring issues that interest you in the literature. It emphasizes developing a theoretical model and testing implications empirically. For writing, it recommends being concise and telling your idea in an introduction, body, and conclusion. When presenting, the focus should be advertising your idea through clear structure and examples. For publishing, it advises assessing journal fit and thoroughly addressing reviewer feedback through revisions. The overall message is pursuing rigorous yet accessible research and effectively communicating new contributions.
This document discusses three questions: 1) Is democracy important for development, especially in oil-dependent regions? 2) Why has the Arab Spring been a "late awakening"? 3) Why has the Arab Spring been so violent? Regarding the first question, the literature suggests democracy promotes development by increasing stability, equitable societies, and human capital investment. However, in socially and ethnically polarized regions like the Arab world, inclusive democracy is important to manage tensions. For the second question, the Arab world has experienced high persistence of autocracy and few democratic transitions due to factors like oil rents, conflicts, and country-specific determinants. For the third question, the violence is partly explained by social polarization and autocrats choosing repression over
Special Session on Transition in the Life Course in MENA: Discussion of Pape...
The document summarizes three papers presented at a special session on transitions in the life course in the Middle East and North Africa region.
The first paper by Assaad et al. examines the effect of early marriage (defined as before the median age at first marriage) on women's employment outcomes in Egypt, Jordan and Tunisia. It finds early marriage significantly reduces the probability of women working, particularly in private sector jobs.
The second paper by Nazim and Ramadan assesses the influence of pre-marital bargaining power on post-marital bargaining power, as measured by decision-making, across the same three countries. It finds the association is context-specific.
The third paper by Kra
Correspondence Studies on Gender, Ethnicity and Religiosity Discrimination in...
Haluk Levent - Istanbul Kemerburgaz University
Seyit Mümin Cilasun - Atılım University
Binnur Balkan - Bilkent University
ERF Workshop on The Political Economy of Contemporary Arab Societies
Beirut, Lebanon August 24-25, 2016
www.erf.org.eg
Veronica Kostenko - Higher School of Economics
Eduard Ponarin - Higher School of Economics
Musa Shteiwi - Center for Strategic Studies, University of Jordan
Olga Strebkova - Laboratory for Comparative Social Research
ERF Workshop on The Political Economy of Contemporary Arab Societies
Beirut, Lebanon August 24-25, 2016
www.erf.org.eg
Ishac Diwan - Paris Sciences et Lettres
Michele Tuccio - University of Southampton
Jackline Wahba - University of Southampton
ERF Workshop on The Political Economy of Contemporary Arab Societies
Beirut, Lebanon August 24-25, 2016
www.erf.org.eg
This document discusses a study of religious fundamentalism and attitudes towards veiling in seven Middle Eastern and North African countries. The study used cross-national surveys to examine how illiberal values and religious fundamentalism relate to preferences for veiling among women in Egypt, Iraq, Lebanon, Morocco, Saudi Arabia, Tunisia, and the United Arab Emirates.
May Gadallah - Cairo University
Maia Sieverding - American University of Beirut
Rania Roushdy - Population Council Egypt
ERF Workshop on The Political Economy of Contemporary Arab Societies
Beirut, Lebanon August 24-25, 2016
www.erf.org.eg
The use of opinion polls data in the Arab Human Development Report 2016
Jad Chaaban - American University of Beirut
ERF Training Workshop on Opinion Poll Data Analysis Using Multilevel Models
Beirut, Lebanon August 22-23, 2016
www.erf.org.eg
This presentation is for educational purposes only. I do not own the rights to the written material, pictures, or illustrations used.
It is being uploaded for students who are in search of, or trying to understand, what a quasi-experimental research design should look like.
Czarnitzki - Towards a portfolio of additionality indicators
This document discusses current practices in evaluating science, technology, and innovation (STI) policies using econometric methods and identifies areas for future improvement. It notes that methods like matching, difference-in-differences, and instrumental variables are now commonly used to establish control groups and estimate policy impacts. However, greater emphasis is needed on exploring heterogeneous treatment effects across different policy instruments, firm characteristics, and over time to better inform policy design. Indirect effects also need consideration. Improving identification strategies through techniques like regression discontinuity and exploiting natural experiments can further strengthen evaluations.
This document discusses evaluation research and problem analysis in policymaking. It explains that evaluation research seeks to evaluate the impact of interventions and policies by determining if the intended results were achieved. Problem analysis is used to help policymakers choose between alternative policy options. The document also outlines different evaluation research methods, such as randomized evaluation designs and quasi-experimental designs, and important considerations for conducting rigorous evaluations, such as clearly specifying goals and measuring outcomes.
International Food Policy Research Institute (IFPRI) organized a three-day training workshop on ‘Monitoring and Evaluation Methods’ on 10-12 March 2014 in New Delhi, India. The workshop is part of an IFAD grant to IFPRI to partner in the monitoring and evaluation component of ongoing projects in the region. The three-day workshop is intended to be a collaborative affair between project directors, M&E leaders, and M&E experts. As part of the workshop, detailed interaction will take place on evaluation routines involving sampling, questionnaire development, data collection and management techniques, and production of an evaluation report. The workshop is designed to better understand the M&E needs of various projects that are at different stages of implementation. Both the generic issues involved in M&E programs and project-specific needs will be addressed. The objective of the workshop is to come up with a work plan for M&E domains in the IFAD projects and determine the possibilities of collaboration between IFPRI and project leaders.
Lack of Transparency and Quality Barriers to Modeling Utilization.docx
Lack of Transparency and Quality Barriers to Modeling Utilization
Modeling Challenges
· Does not always lead to choices that maximize public health given limited resources
· Is complicated and not as easily understood as traditional randomized clinical trials
· Represents a relatively new field
Questions for the Decision-Makers
· Are the results helpful?
· Are the methods appropriate?
· Are the results valid?
· Do valid results apply to my decision context?
Transparency
· Indicates compliance with established quality standards
· Refers to clear description of the model structure, equations, parameters, and assumptions
Transparent Documentation
· Lay summary, accessible to any interested reader
· Model type and intended application; funding sources; model structure; model inputs, outputs, and components; and validation methods, results, and limitations
· Detailed technical document, allowing expert evaluation and potential recreation
While meeting technical and lay reporting guidelines does not ensure that the model is correct, validation is not possible without a clear understanding of the model.
Defining a Study Question, Perspective, and Scope
What's a Good Question?
· Is well defined and in answerable form
· Clearly identifies the alternatives being compared
· Identifies the perspective from which the comparison is made
Examples of Bad Questions (no alternative is specified)
· Is an active PE policy intervention worth it?
· Will a smoking-cessation intervention do any good?
· How much does it cost to run our syringe-exchange program?
· What are the costs and outcomes of the school wellness policy?
Quality Checklist
Each quality dimension pairs good-practice attributes with critical appraisal questions.
S1: Statement of decision problem/objective
· A clearly stated decision problem. Appraisal: Is the decision problem clearly stated?
· A defined evaluation and model objective. Appraisal: Is the evaluation and model objective specified and consistent with the decision problem?
· A clearly stated primary decision-maker. Appraisal: Is the primary decision-maker specified?
S2: Statement of scope/perspective
· A clearly stated model perspective (relevant costs and consequences) and model inputs consistent with the stated perspective and overall model objective. Appraisal: Is the model perspective stated clearly? Are the model inputs consistent with the stated perspective?
· A specified and justified decision model scope. Appraisal: Is the model scope stated and justified?
· Model outcomes that reflect its perspective and scope and are consistent with the objective. Appraisal: Are the model outcomes consistent with its perspective, scope, and overall objective?
A Good Question
From the perspective of (a) both the Ministry of Health and the Ministry of Community and Social Services budgets and (b) patients incurring out-of-pocket costs, is a chronic home care program preferable to the existing program of institutionalized, extended care in design ...
This document discusses different methods for conducting random effects meta-analyses when study effects are non-normally distributed. It simulates various non-normal distributions for true effects and compares the performance of fixed effects, DerSimonian-Laird, maximum likelihood, profile likelihood, permutations, and t-test methods. The results show that the performance of meta-analysis methods is robust to non-normal distributions. However, in the presence of heterogeneity, permutations and profile likelihood methods maintain accurate coverage even with small sample sizes, making them preferable choices.
This document provides an overview of a presentation on how to randomize participation and ensure regulatory compliance in impact evaluations using randomized control trials. It discusses options for the unit of randomization like individual vs group levels. It also covers real-world constraints to consider like resources, politics, contamination, and logistics. Methods of randomization presented include basic lotteries, phase-in designs where the treatment is rolled out over time, and encouragement designs for situations where full randomization is not possible. The document also discusses multi-arm RCTs, varying treatment levels, and stratification.
This document summarizes a research study on complex problem solving using a computer simulation called Syntex. The study involved 54 students from Zhejiang University divided into 18 groups. The groups made management decisions for a simulated company each month. The researchers found differences between the "best" and "worst" groups based on company performance. Best groups increased capital and hired more employees over time compared to worst groups. The researchers also analyzed decision making processes and information gathering between the groups. They explored perspectives including the problem solving process, group interactions, and cross-cultural differences. The document discusses research design, validity, variables measured, and limitations of generalizing the results.
This document provides an introduction to quantitative impact evaluation methods. It discusses why impact evaluations are important, how to design an evaluation, and common evaluation tools and methodologies. Key points include: impact evaluations measure a program's causal effects, require a comparison group to estimate counterfactual outcomes, and use methods like randomization, matching, regression discontinuity, and difference-in-differences to construct valid comparisons. The goals of evaluations are to measure impacts, assess cost-effectiveness, and explain which program components are most effective.
(1) Impact evaluations should focus more on understanding dynamics of change and people's responses to incentives, rather than just registering outcomes. (2) Evaluation provides critical information on policy relevance and is a progressive learning exercise. (3) Novel approaches are needed to evaluate new development strategies.
1. The document discusses staffing decisions and processes, including recruiting, selecting, promoting, and separating employees. It covers conceptual issues, the staffing process, outcomes evaluation, practical issues, and legal considerations.
2. Key aspects of staffing discussed include validity, establishing cut scores, combining information from multiple predictors, and addressing adverse impact and potential discrimination in the staffing process.
3. Staffing aims to match applicant attributes to job demands while avoiding discrimination, and considerations include large versus small staffing projects and selection versus placement decisions.
This document discusses assessing risk of bias during systematic reviews. It defines bias as systematic error, a deviation from the truth that can lead to over- or underestimating effects. Assessing bias in included studies is important because consistent results may still reflect shared flaws. Bias is assessed across seven domains, grouped as selection bias (sequence generation and allocation concealment), performance, detection, attrition, reporting, and other biases. Risk of bias is assessed by reviewing study methods, looking for missing information, and making judgments against pre-specified criteria about how likely each study was affected by bias in each domain. Tools like risk-of-bias tables are used to categorize judgments of low, high, or unclear risk of bias in individual studies.
This document introduces causal inference methods for determining whether a treatment or experience causes an outcome. It discusses five econometric methods: controlled regression, regression discontinuity design, difference-in-differences, fixed effects regression, and instrumental variables. For each method, it provides an example, explanation of the identifying assumptions, and tips for checking the internal validity of the method. The document emphasizes the importance of experimental design and testing assumptions to make causal inferences from non-experimental data.
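As one worked example of the methods listed, here is a minimal two-period difference-in-differences sketch on simulated data (variable names and the treatment effect are hypothetical); the identifying assumption is parallel trends, which holds here by construction:

```python
# Two-period difference-in-differences on toy data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(6)
n = 500
df = pd.DataFrame({"treated": rng.integers(0, 2, 2 * n),
                   "post": np.repeat([0, 1], n)})
# Common time shock + fixed group gap + effect only for treated-after.
df["y"] = (1.0 + 0.5 * df["treated"] + 0.8 * df["post"]
           + 1.5 * df["treated"] * df["post"] + rng.normal(0, 1, 2 * n))

fit = smf.ols("y ~ treated * post", data=df).fit()
print(fit.params["treated:post"])  # DiD estimate; true effect is 1.5
```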
This document provides an outline of a conference presentation on evaluating moderation and mediation in personalized therapies. The presentation aims to: 1) introduce key concepts of personalized therapy/stratified medicine, causal effects, confounding and RCTs; 2) recap and develop ideas on correct and incorrect approaches to evaluating treatment effect moderation and using interactions to study mediation; and 3) briefly describe the research program evaluating efficacy and mechanisms of complex interventions funded by the MRC Methodology Research Programme.
Chapter 10: Data Interpretation Issues
Learning Objectives
• Distinguish between random and systematic errors
• State and describe sources of bias
• Identify techniques to reduce bias at the design and analysis phases of a study
• Define what is meant by the term confounding and provide three examples
• Describe methods to control confounding
Validity of Study Designs
• The degree to which the inference drawn from a study is warranted when account is taken of the study methods, the representativeness of the study sample, and the nature of the population from which it is drawn.
• Two components of validity:
– Internal validity
– External validity
Internal Validity
• A study is said to have internal validity when there has been proper selection of study groups and a lack of error in measurement.
• Concerned with the appropriate measurement of exposure, outcome, and the association between exposure and disease.
External Validity
• External validity implies the ability to generalize beyond a set of observations to some universal statement.
• A study is externally valid, or generalizable, if it allows unbiased inferences regarding some other target population beyond the subjects in the study.
Sources of Error in Epidemiologic Research
• Random errors
• Systematic errors (bias)
Random Errors
• Reflect fluctuations around a true value of a parameter because of sampling variability.
Factors That Contribute to Random Error
• Poor precision
• Sampling error
• Variability in measurement
Poor Precision
• Occurs when the factor being measured is not measured sharply.
• Analogous to aiming a rifle at a target that is not in focus.
• Precision can be increased by increasing the sample size or the number of measurements.
• Example: Bogalusa Heart Study
Sampling Error
• Arises when obtained sample values (statistics) differ from the values (parameters) of the parent population.
• Although there is no way to prevent a non-representative sample from occurring, increasing the sample size can reduce the likelihood of its happening.
Variability in Measurement
• The lack of agreement in results from time to time reflects random error inherent in the type of measurement procedure employed.
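A quick simulation, with a hypothetical mean and SD, makes the sample-size point concrete: the spread of sample means shrinks roughly as one over the square root of n.

```python
# Sampling error shrinks as sample size grows (roughly 1/sqrt(n)).
import numpy as np

rng = np.random.default_rng(7)
for n in (25, 100, 400):
    sample_means = rng.normal(100.0, 15.0, size=(5000, n)).mean(axis=1)
    print(f"n = {n:>3}: SD of sample means = {sample_means.std():.2f}")
# Quadrupling n roughly halves the spread, so a badly unrepresentative
# sample becomes less likely, as noted above.
```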
Bias (Systematic Errors)
• “Deviation of results or inferences from the truth, or processes leading to such deviation. Any trend in the collection, analysis, interpretation, publication, or review of data that can lead to conclusions that are systematically different from the truth.”
Factors That Contribute to Systematic Errors
• Selection bias
• Information bias
• Confounding
Selection Bias
• Refers to distortions that result from procedures used to select subjects and from factors that influence participation in the study.
• Arises when the relation between exposure and disease is different for th ...
This document discusses key data gaps in labor supply and demand in North Africa. For labor supply, it notes that while youth unemployment rates exist, they are not sufficiently highlighted. For labor demand, the biggest gap is data on job creation and losses within business sectors, including gains and losses from new, expanding, contracting, and closing establishments. It also outlines statistical development efforts in Egypt to improve labor force and establishment surveys to better measure employment, unemployment, wages, and the reconciliation of survey data.
The document discusses microsimulation techniques used at the Institut des politiques publiques (IPP) research center in Paris. It provides background on IPP, which uses microsimulation models like TAXIPP, TAXIPP-LIFE, and TAXIPP-FIRM to evaluate policies. These models use administrative data at the individual/household level and simulate policies. The document outlines the history and advantages of microsimulation, and how IPP utilizes administrative data and open-source tools in its microsimulation methodology.
Session 3: M.A. Marouani, structural change, skills demand and job quality
This document discusses structural changes in labor demand and skills mismatches in the Middle East and North Africa region. It explores how the expansion of less knowledge-intensive industries has led to weak demand for educated labor compared to a lack of skill-biased technical change. The dynamics of skilled versus unskilled labor demand, empirical measures of these concepts, and the impact on inequality are examined. Education to job mismatches and overeducation are also discussed, along with their determinants and effects on wages and job satisfaction.
This document discusses bridging micro and macro approaches to understanding labor market outcomes. At the micro level, surveys and censuses are used to characterize behaviors and distributions. Meso analysis uses sector-wide data. It is difficult to establish causal links between macro hypotheses about the forces shaping equilibria and micro observations. Bridging micro and macro requires identification techniques like event studies and instrumental variables. Examples from the MENA region show that politically connected sectors are associated with less job creation. Future research avenues include examining the impacts of cronyism, education quality and access, technical change, gender norms, and rentierism on labor markets. Causally linking micro behaviors to macro phenomena remains a challenge.
This document provides a framework for a World Bank report on economic transformation, job creation, and market contestability in the Middle East and North Africa region. The report will focus on how to spur job creation through increasing demand in the private sector. It will explore how technology and digital adoption can create new jobs and drive structural transformation away from traditional sectors. The report aims to establish facts about these issues, generate new data, and highlight case studies of successful reforms to inform policy discussions.
The document summarizes insights from Sudan on labor market data availability. It discusses structural problems in Sudan's labor market like inconsistent sector distribution, low participation rates, and gender disparities. It then evaluates Sudan's ability to calculate various labor market measures according to international definitions. Many measures like unemployment rates, earnings, social protection coverage, and occupational safety cannot be accurately calculated due to limited data availability. The document concludes there is a need for more updated labor market data and a new comprehensive labor force survey to provide indicators and learn from other countries' experiences.
This document outlines the availability of data in Egypt for measuring labor market outcomes according to 6 categories: 1) labor underutilization, 2) type of employment, 3) regularity of employment and working time, 4) earnings and non-wage benefits, 5) social protection, and 6) safety and health at work. It finds that most indicators can be measured using Egypt's Labor Force Surveys or Labor Market Panel Surveys, but some data like fatal occupational injuries are not available. It concludes by identifying ways to improve data collection, such as making the LFS more consistent over time and collecting additional information on earnings, benefits, and union membership.
This document discusses using administrative and survey data from Algeria to measure labor market outcomes based on an expert group meeting questionnaire. It analyzes the ability to calculate various labor market measures using available Algerian data sources. For many measures, the labor force survey and household surveys can provide data to calculate definitions. However, some measures would require adding new questions to collect additional information, such as on earnings, occupational injuries, collective bargaining, and union membership. Administrative records from social security and unemployment insurance organizations also provide some supplemental data.
According to the document:
- Nearly half of Tunisia's working age population is inactive, with 28% working in informal employment, 16% in formal sector jobs, and 7% unemployed.
- Unemployment rates are highest among youth, women, those with a secondary education or less, and those with technical or social science degrees.
- Long-term unemployment is the most prevalent form of unemployment, and the employed population is dominated by informal wage work and self-employment.
- Labor market transitions for youth aged 15-34 are inefficient, and prior to the 2010 revolution most new jobs were created in low-productivity sectors.
This document discusses the need to move beyond just measuring unemployment rates when assessing labor market outcomes in North Africa. It proposes measuring seven additional indicators: 1) labor underutilization, 2) type of employment, 3) regularity of employment, 4) earnings and benefits, 5) social protection, 6) safety and health, and 7) industrial relations. These provide a more comprehensive view of the challenges faced by different groups. Stylized facts about North African labor markets show very low female participation rates, declining participation for both men and women, high unemployment, and a large increase in youth unemployment after the Arab Spring.
The document discusses an expert group meeting on jobs and growth in North Africa. It notes that while unemployment rates decreased and growth indicators were positive in the decade before the Arab Spring, this growth did not necessarily improve access to jobs or working conditions. The group aims to better understand how economies can reach their full potential and make good use of their workforce. Key questions are discussed around the role of the state, impact of public and private investment, education systems, and financing of productive projects. A proposed 4-year work plan includes annual regional reports on jobs and growth, calls for research papers on selected issues, and conferences to discuss findings and define future research agendas.
Aly Rashed - Economic Research Forum
ERF 25th Annual Conference
Knowledge, Research Networks & Development Policy
10-12 March, 2019
Kuwait City, Kuwait
The Future of Jobs is Facing the Biggest Policy Induced Price Distortion in History - Economic Research Forum
The document discusses how barriers to low-skilled labor mobility between countries create one of the largest price distortions in history. This motivates innovation that displaces low-skilled labor through technology. It presents data showing that the wage gains from mobility into rich countries for low-skilled workers from places like Yemen and Nigeria would exceed 1,000%. Border barriers to labor are two orders of magnitude higher than any tariffs. Technological change is often biased toward replacing low-skilled jobs. Developing countries, with very low-skilled labor forces, face challenges employing youth and generating exports against these trends.
Massoud Karshenas - University of London
ERF 25th Annual Conference
Knowledge, Research Networks & Development Policy
10-12 March, 2019
Kuwait City, Kuwait
Rediscovering Industrial Policy for the 21st Century: Where to Start? - Economic Research Forum
Rohinton P. Medhora - Centre for International Governance & Innovation
ERF 25th Annual Conference
Knowledge, Research Networks & Development Policy
10-12 March, 2019
Kuwait City, Kuwait
Rana Hendy - Doha Institute
Mahmoud Mohieldin - World Bank
ERF 25th Annual Conference
Knowledge, Research Networks & Development Policy
10-12 March, 2019
Kuwait City, Kuwait
Ibrahim Elbadawi - Economic Research Forum
ERF 25th Annual Conference
Knowledge, Research Networks & Development Policy
10-12 March, 2019
Kuwait City, Kuwait
Potential Solutions to the Fundamental Problem of Causal Inference: An Overview
1. Potential Solutions to the Fundamental Problem of Causal Inference: An Overview
Day 1, Lecture 2
By Caroline Krafft
Training on Applied Micro-Econometrics and Public Policy Evaluation
July 25-27, 2016
Economic Research Forum
2. The fundamental problem
• In program evaluation, we want to know the impact of the program ("treatment") on participant outcomes
• In the real world, the impact of programs and public policies on participants is difficult to identify
• Participation is likely to be related to characteristics that also affect outcomes
• Endogeneity: assignment to treatment is not random (formalized below)
• It not only depends on observables, but may also depend on unobservables
• Both observables and unobservables may affect the outcome
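One standard way to state this formally, in potential-outcomes notation (the notation is mine, not the slides'): each unit has potential outcomes Y_1 (treated) and Y_0 (untreated), but only one is ever observed,

\[ Y = T\,Y_1 + (1 - T)\,Y_0 \]

so a naive comparison of participants and non-participants mixes the true effect with selection bias:

\[ E[Y \mid T=1] - E[Y \mid T=0] = \underbrace{E[Y_1 - Y_0 \mid T=1]}_{\text{effect on the treated}} + \underbrace{E[Y_0 \mid T=1] - E[Y_0 \mid T=0]}_{\text{selection bias}} \]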
3. Solutions
• Random assignment
• Quasi-experimental solutions
• Type I: Conditional exogeneity of placement
• Difference-in-difference
• Panel data (fixed and random effects)
• Propensity score matching
• Type II: Rules or instruments of placement
• Control function and instrumental variables techniques
• Regression discontinuity design
5. Random experiments
• Random experiments are often referred to as Randomized Controlled Trials (RCTs)
• Random allocation of intervention to program beneficiaries such that all units (within a defined set) have equal chance ex ante of receiving the treatment
• Assignment process creates treatment and control groups that are directly comparable
• Should not have any observable or unobservable differences
• By eliminating selection bias, randomization allows direct comparison of participants and non-participants to detect impact of program
• Observed ex post differences in mean outcomes between treatment and control group can be attributed to program (see the sketch below)
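As a minimal sketch of why this works (simulated data, not the slides' example), the difference in means recovers a known effect even though an unobservable drives outcomes:

    # Simulated randomized experiment: random assignment is independent of
    # the unobservable, so the difference in means is unbiased for the effect.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n = 10_000
    ability = rng.normal(size=n)              # unobservable affecting outcomes
    treat = rng.binomial(1, 0.5, size=n)      # random assignment
    outcome = 1.0 + 2.0 * treat + ability + rng.normal(size=n)

    diff = outcome[treat == 1].mean() - outcome[treat == 0].mean()
    t_stat, p_val = stats.ttest_ind(outcome[treat == 1], outcome[treat == 0])
    print(f"estimated effect = {diff:.3f} (true = 2.0), p = {p_val:.3g}")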
6. Problems with Experimental Designs
• Ethical and political obstacles
• Difficult to randomize at level of individual beneficiaries
• Those assigned to treatment group may decline to participate or participate in a partial manner
• This is referred to as selective compliance
• Selection bias gets introduced through this self-selection process
• Those not selected and assigned to the control group may try to find alternative ways to get benefit of program
• "Contamination" of control group
7. Case Study: The Labor Market Impact of Youth Training in the Dominican Republic: Evidence from a Randomized Evaluation
Card et al. (2007)
8. Case study of a randomized evaluation
• From 2001 to 2005 the government of the Dominican Republic operated a subsidized training program, Juventud y Empleo (JE)
• Targeted low-income youth (18-29) with less than a secondary education in urban areas
• Several weeks of classroom training (basic skills & vocational skills) by private training institutions
• Followed by an internship at a private sector firm
• Program was evaluated in:
• Card, David, Pablo Ibarraran, Ferdinando Regalia, David Rosas, and Yuri Soares (2007). "The Labor Market Impact of Youth Training in the Dominican Republic: Evidence from a Randomized Evaluation." National Bureau of Economic Research Working Paper 12883.
9. Structure of the evaluation
• JE program was unique in incorporating a randomized design
• Each time 30 eligible applicants were recruited, 20 of the 30 were assigned to training (treatment), 10 to control.
• Up to 5 individuals from control could be re-assigned to treatment if those assigned to treatment failed to show up for training (no-shows) or dropped out in the first two weeks (dropouts)
• Evaluation looks at the second cohort of the JE program
• Trained in early 2004
• Baseline data from registration form (prior to randomization)
• Follow-up survey in summer 2005 (~1 year after training)
10. Sample of the evaluation
• Second cohort consisted of 8,391 eligible applicants
• 5,802 (69.1%) assigned to treatment
• 1,011 dropouts or no-shows
• 2,589 (30.9%) controls
• 966 reassigned
• Led to realized treatment group of 5,757 and realized control group of 1,623 (arithmetic check below)
• Only these groups have follow-up data
• Evaluation based on stratified sampling of realized treatment and control
• Problem of missing post-program data on no-shows and dropouts
• Will bias results if this group is non-random.
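A quick arithmetic check (figures as reported on the slide) confirms the realized group sizes are internally consistent:

    # Sample accounting for the second JE cohort (Card et al. 2007 figures)
    assigned_treatment = 5802     # 69.1% of 8,391 eligible applicants
    assigned_control = 2589       # 30.9%
    no_shows_dropouts = 1011      # left the treatment group
    reassigned = 966              # controls moved into treatment

    realized_treatment = assigned_treatment - no_shows_dropouts + reassigned
    realized_control = assigned_control - reassigned
    print(realized_treatment, realized_control)   # 5757 1623, matching the slide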
11. Outcomes
• Labor market outcomes examined:
• Employment
• Hours of work
• Hourly wages
• Job with health insurance
12. • Table 1: comparison of mean characteristics of realized treatment and control groups
• Compared to 2004 labor force survey data for a comparable sample
• Some differences in education
13. • Check the initial assignment and re-assignment to see if realized status is "as good as random"
• Multinomial logit model for being in each group shows some significant age effects but small explanatory power (see the sketch below)
• Present re-weighted ("balanced") results as well as unadjusted
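A hedged sketch of this kind of balance check on simulated data (statsmodels' MNLogit; the covariate names are illustrative, not the evaluation's actual variables):

    # Multinomial logit of realized group status on baseline covariates;
    # a near-zero pseudo R-squared suggests status is "as good as random".
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n = 1_000
    df = pd.DataFrame({
        "age": rng.integers(18, 30, size=n).astype(float),
        "female": rng.binomial(1, 0.5, size=n).astype(float),
        "educ_years": rng.integers(6, 12, size=n).astype(float),
        "group": rng.integers(0, 3, size=n),  # 0=control, 1=treated, 2=no-show
    })

    X = sm.add_constant(df[["age", "female", "educ_years"]])
    fit = sm.MNLogit(df["group"], X).fit(disp=0)
    print(fit.summary())                  # coefficients by outcome category
    print("pseudo R2:", fit.prsquared)    # small value => little selection on X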
14. • Examine employment rates
• See no impact on participant employment rate
• 57% of treatment v. 56% of controls
• No differences among sub-groups
15. • No impact on employment or hours of work; some differences in monthly earnings
• Some marginally significant impact on hourly wages of about 10%
• No significant differences in health insurance
16. Lessons from Card et al. 2007
• Randomization is the "gold standard" but reality of randomization usually less than perfect
• Imperfect compliance
• "Contamination" (reassignment) of controls
• Potential selection bias due to no-shows and drop-outs
• Potential dilution of impact for partial participation
• Still have to check assumptions and correct for selection in many randomized evaluations.
18. Causal Effects in Non-Experimental Evaluations
• We want to identify the causal impact of a program or policy
• Typically we do not have experimental data (undertaking a non-experimental evaluation)
• Referred to as quasi-experiments
• To estimate a causal effect in non-experimental evaluations we need "identifying assumptions"
• Non-experimental methods can be classified into two types depending on the identification assumptions they make.
• Type I: "conditional exogeneity of placement" or "conditional exogeneity of placement to changes in outcomes"
• Type II: instrumental variables or discontinuities that can explain placement can be found
19. Non-Experimental Methods
• Type I Non-Experimental Methods
• 1- Regression Methods
• 2- Propensity Score Methods
• 3- Difference in Difference Methods
• 4- Panel data (fixed or random effects) models
• Type II Non-Experimental Methods
• 5- Instrumental Variable Methods
• 6- Regression Discontinuity Design Methods (RD or RDD)
20. Causal Inference in Type I Non-Experimental Methods
• Type I non-experimental methods make the following identification assumptions (formalized below):
• Conditional exogeneity of placement (i.e. that placement only depends on exogenous observable characteristics X and not on unobservables)
• Often referred to as "selection on observables"
OR
• Exogeneity of placement with respect to changes in outcomes (i.e. that unobservable factors affecting changes in outcomes do not affect the probability of placement)
• Unobservables that determine placement can affect initial conditions but are assumed not to affect changes in outcomes over time
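In symbols (again my notation, not the slides'), the two assumptions can be written:

\[ (Y_0, Y_1) \perp T \mid X \qquad \text{(selection on observables)} \]

\[ E[Y_{0,\text{post}} - Y_{0,\text{pre}} \mid T=1] = E[Y_{0,\text{post}} - Y_{0,\text{pre}} \mid T=0] \qquad \text{(parallel trends)} \]

The second is the "parallel trends" condition that underlies the difference-in-difference approach on the next slide.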
21. Type I Methods: First and second differences
• Under "conditional exogeneity of placement", all we need to do is compare outcomes for a treatment and control group at one point in time, controlling for observables X
– This is called a first difference approach (see D(X) estimator)
• Under the weaker "exogeneity of placement to changes in outcomes" we need to compare the difference from before and after the program for a treatment group to the same difference for a control group.
– This is called difference-in-difference or a second difference approach (see the sketch below)
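A minimal difference-in-difference sketch on simulated data (all names and numbers are illustrative): the coefficient on the treated-by-post interaction is the second-difference estimate.

    # DiD: treated units start at a different level (selection on levels is
    # allowed), but absent treatment both groups share the same trend.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(2)
    n = 2_000
    df = pd.DataFrame({
        "treated": rng.binomial(1, 0.5, size=n),
        "post": np.tile([0, 1], n // 2),
    })
    df["y"] = (1.0 + 0.8 * df.treated + 0.5 * df.post
               + 1.5 * df.treated * df.post + rng.normal(size=n))

    fit = smf.ols("y ~ treated * post", data=df).fit()
    print(fit.params["treated:post"])   # ~1.5, the DiD estimate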
22. Type I Methods: Propensity Score Matching & Weighting
• Assumes conditional exogeneity of placement (selection on observables)
• Models that selection process with a probit or logit model to predict the probability of participation, Pr(T=1), based on observable characteristics (X)
• Creates "matched" treatment and control groups
• After matching or weighting, no observable differences between groups
• Can then estimate program impacts by looking at mean differences (ATT) between matched/weighted T and C groups (see the sketch below)
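A hedged sketch of the weighting variant on simulated data (logit first stage; weighting controls by p/(1-p) is one standard way to target the ATT, though not necessarily the estimator used in the training):

    # Propensity-score weighting for the ATT: fit Pr(T=1|X) by logit, then
    # reweight controls so their covariates match the treated distribution.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(3)
    n = 5_000
    x = rng.normal(size=n)
    t = rng.binomial(1, 1 / (1 + np.exp(-x)))          # selection on observable x
    y = 1.0 + 2.0 * t + 1.5 * x + rng.normal(size=n)   # naive comparison is biased

    p = sm.Logit(t, sm.add_constant(x)).fit(disp=0).predict(sm.add_constant(x))
    w = p / (1 - p)                                     # ATT weights for controls
    att = y[t == 1].mean() - np.average(y[t == 0], weights=w[t == 0])
    print(f"naive = {y[t == 1].mean() - y[t == 0].mean():.2f}, weighted ATT = {att:.2f}")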
23. Type I Methods: Random and Fixed Effects
• Often concerned about unobservables that are going to be related to an observable unit (school, family, city)
• Panel data models assume that after controlling for the effect of that unit, the remainder of selection is fully observable
• Random effects (RE) models assume the unobservable effects have some underlying (normal) distribution
• REs assumed to be unrelated to observable X
• Fixed effects (FE) models do not require parametric assumptions
• FEs can be related to observable X (see the within-estimator sketch below)
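A minimal fixed-effects sketch on simulated panel data, using the within (demeaning) transformation so the unit-level unobservable drops out even though it is correlated with X (names illustrative):

    # Within estimator: subtract each unit's mean; the fixed effect cancels.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(4)
    units, periods = 200, 5
    df = pd.DataFrame({
        "unit": np.repeat(np.arange(units), periods),
        "t": np.tile(np.arange(periods), units),
    })
    alpha = rng.normal(size=units)[df["unit"]]          # unit fixed effect
    df["x"] = 0.5 * alpha + rng.normal(size=len(df))    # x correlated with the FE
    df["y"] = 2.0 * df["x"] + alpha + rng.normal(size=len(df))

    for v in ("y", "x"):
        df[v + "_dm"] = df[v] - df.groupby("unit")[v].transform("mean")

    print(smf.ols("y_dm ~ x_dm - 1", data=df).fit().params)   # ~2.0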
24. Causal Inference in Type II methods
• Identifying assumptions:
– There exists at least one (instrumental) variable (IV) that affects participation (placement) but that does not affect the outcome conditional on participation and other covariates (X)
– i.e. that the IV can be excluded from the outcome regression without causing omitted variable bias. This is called an "identifying restriction"
– To be valid this IV must be exogenous
– This is called the instrumental variables approach (see the 2SLS sketch below)
• Regression discontinuity design (RD or RDD) is based on a similar assumption.
• The instrument is some cutoff for eligibility/participation in the program
• RD focuses on the differences in outcomes around that cutoff to model program impacts
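A hedged two-stage least squares sketch on simulated data (the instrument z stands in for something like an eligibility rule; this manual two-step gets the point estimate right, but in practice a dedicated IV routine should be used so the standard errors are valid):

    # 2SLS: z shifts participation but affects y only through participation.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(5)
    n = 5_000
    z = rng.binomial(1, 0.5, size=n).astype(float)     # instrument
    u = rng.normal(size=n)                              # unobservable selection
    t = (0.8 * z + u + rng.normal(size=n) > 0.5).astype(float)
    y = 1.0 + 2.0 * t + u + rng.normal(size=n)

    t_hat = sm.OLS(t, sm.add_constant(z)).fit().predict()     # first stage
    ols = sm.OLS(y, sm.add_constant(t)).fit().params[1]       # biased upward by u
    tsls = sm.OLS(y, sm.add_constant(t_hat)).fit().params[1]  # ~2.0
    print(f"OLS = {ols:.2f}, 2SLS = {tsls:.2f}")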