
IDS Impact, Innovation and Learning Workshop March 2013: Day 2, Paper session 4 Barbara Befani




  1. Set-theoretic, diagnostic and Bayesian approaches to impact evidence
     Barbara Befani
     Impact, Innovation and Learning: Towards a Research and Practice Agenda for the Future
     Brighton, 26-27 March 2013 (presented 27/03/2013)
  2. Outline
     Set-theoretic methods (e.g. QCA) and the new challenges
     – Uncertainty (equifinality)
     – Causal contribution (multiple-conjunctural causality)
     – Causal asymmetry (necessity and sufficiency)
     Diagnostic and Bayesian approaches
     – Uncertainty (can be quantified with probabilities)
     – The strength of qualitative evidence can be measured
  3. Defining & explaining events with Set Theory
     In uncertain and emergent contexts, we cannot define "impact" (or success) precisely.
     Instead there is an impact "space" of possible events, all desirable
     – All compatible with given values and goals
     Success is likely to look like ANY of a number of events = a LOGICAL UNION
     Success looks more like "being on the right track" than achieving a specific goal
     Being "on the right track" means avoiding a number of pitfalls / dead ends
     Sets can be defined as the NEGATION of other sets
     The three main operators in set theory are
     – Negation, union, intersection
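The slide's three set operators map directly onto Python's built-in set type. A minimal sketch, with hypothetical event names standing in for the impact "space":

```python
# Hypothetical impact "space" of possible events (names are illustrative only).
universe = {"scaled_up", "policy_change", "capacity_built", "stalled", "abandoned"}

# Success as a LOGICAL UNION: ANY of these events counts as "on the right track".
desirable = {"scaled_up"} | {"policy_change"} | {"capacity_built"}  # union

# Pitfalls as the NEGATION of the desirable set (complement within the universe).
pitfalls = universe - desirable

observed = {"policy_change", "stalled"}

# Intersection: did we observe at least one desirable event? One pitfall?
on_right_track = bool(observed & desirable)
hit_pitfall = bool(observed & pitfalls)

print(on_right_track, hit_pitfall)  # True True
```

The point of the sketch is that "success" is evaluated against a union of acceptable outcomes rather than a single target event.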
  4. Causal Asymmetry and Contribution Analysis
     What is a sufficient causal package? (a branch of blue nodes)
     A Principal Contributory Cause = INUS: a necessary part of the (sufficient) combination (each blue node of a given branch)
     In Set Theory terminology, a causal package is an INTERSECTION of contributory causes
     – A combination of necessary causes (necessary within that causal package)
     Set Theory provides the mathematical basis for
     1. analyzing causal contribution
     2. dealing with uncertainty (particularly Fuzzy Sets)
  6. Why are Causal Combinations important?
     Impact is contingent on the context.
     Finding ONE counterfactual is not enough.
     QCA helps find many counterfactuals through systematic cross-case comparison.

     Conf ID | Cold Chain Integrity (necessary) | Vaccine (V) (intervention) | Health System Quality (HSQ) | Success (S) (reduction in specific morbidity) | No. cases
     A       | 1 | 1 | 1 | 1   | 3
     B       | 1 | 1 | 0 | 1   | 1
     C       | 1 | 0 | 1 | 1   | 2
     D       | 1 | 0 | 0 | 0   | 2
     E       | 0 | 1 | 1 | 1/0 | 0
     F       | 0 | 1 | 0 | 1/0 | 0
     G       | 0 | 0 | 1 | 1/0 | 0
     H       | 0 | 0 | 0 | 1/0 | 0
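The observed rows of the truth table (A-D; E-H have no cases) can be encoded as data to show what systematic cross-case comparison checks. A rough sketch, not a full QCA implementation:

```python
# Observed configurations from the truth table above.
# Key: (Cold Chain Integrity, Vaccine, HSQ) -> (success, number of cases).
cases = {
    (1, 1, 1): (1, 3),  # A: all three conditions present, success
    (1, 1, 0): (1, 1),  # B: success without health system quality
    (1, 0, 1): (1, 2),  # C: success without the vaccine (equifinality)
    (1, 0, 0): (0, 2),  # D: cold chain alone is not enough
}

# Necessity check: is Cold Chain Integrity (index 0) present in EVERY
# successful configuration?
cci_necessary = all(cfg[0] == 1
                    for cfg, (success, _) in cases.items() if success == 1)

# Equifinality: more than one configuration leads to success, so a single
# counterfactual comparison could not have told the whole story.
successful_configs = [cfg for cfg, (success, _) in cases.items() if success == 1]
equifinal = len(successful_configs) > 1

print(cci_necessary, equifinal)  # True True
```

Each pairwise comparison of rows (e.g. A vs C, which differ only in the Vaccine condition) is one of the "many counterfactuals" the slide refers to.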
  7. Probability and Diagnosis: what the evidence says
     Realm of "unknown knowns"
     General problem of the strength / quality of evidence: how to assess it?
     In clinical practice, physicians use tests:
     – Specificity
       • Probability that absence of the disease will return negative evidence on that test
     – Sensitivity
       • Probability that presence of the disease will return positive evidence on that test
     – (Positive) Predictive Power
       • Probability that positive evidence signals presence of the disease
  8. When is evidence strong?
     When it is sensitive and specific
     – Sensitive: P(Evidence | Impact) high
     – Specific: P(Evidence) low
       • false positives are low
     – Predictive: P(Impact | Evidence) high
     – The latter can be calculated with the Bayes formula: P(I | E) = P(I) * P(E | I) / P(E)
     Two important principles of high-quality evidence
     – of all kinds: qualitative, quantitative, etc.
     Evidence is strong when:
     – The prior probability of observing positive evidence, P(E), is LOW (~specificity)
     – The probability of observing positive evidence IF the intervention was successful / had an impact, P(E | I), is HIGH (sensitivity)
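The Bayes formula on this slide can be worked through with illustrative (made-up) numbers, expanding P(E) by the law of total probability:

```python
# Illustrative values only: prior, sensitivity and false-positive rate.
p_impact = 0.5          # P(I): prior probability the intervention had an impact
sensitivity = 0.9       # P(E | I): evidence is likely IF there was impact
p_e_given_not_i = 0.1   # P(E | ~I): evidence despite no impact (false positive)

# P(E) by total probability: evidence can arise with or without impact.
p_evidence = sensitivity * p_impact + p_e_given_not_i * (1 - p_impact)

# Bayes formula from the slide: P(I | E) = P(I) * P(E | I) / P(E).
p_impact_given_evidence = p_impact * sensitivity / p_evidence

print(p_impact_given_evidence)  # 0.9
```

With these numbers the evidence lifts the probability of impact from 0.5 to 0.9; shrinking the false-positive rate (raising specificity) would push the posterior higher still.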
  9. [Diagram relating Evidence 'E' (of 'I') to Impact 'I', distinguishing sensitive, specific, predictive, weak and strong evidence]
  10. Seeking evidence of impact: diagnostic tests

      REALITY about success of intervention:
                              | Intervention successful (I) | Intervention unsuccessful (~I)
      Evidence positive (E)   | True Positive               | False Positive
      Evidence negative (~E)  | False Negative              | True Negative
  11. The same table, with the diagnostic measures:

                              | Intervention successful (I) | Intervention unsuccessful (~I)
      Evidence positive (E)   | True Positive               | False Positive
      Evidence negative (~E)  | False Negative              | True Negative

      Sensitivity = Σ True Positive / Σ Intervention Successful
      Specificity = Σ True Negative / Σ Intervention Unsuccessful
      Positive predictive value = Σ True Positive / Σ Evidence Positive
      Negative predictive value = Σ True Negative / Σ Evidence Negative
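The four measures follow mechanically from the four cells of the table. A short sketch with hypothetical counts:

```python
# Hypothetical counts for the four cells of the 2x2 table above.
tp, fp = 40, 5    # evidence positive: true positives, false positives
fn, tn = 10, 45   # evidence negative: false negatives, true negatives

sensitivity = tp / (tp + fn)   # Σ TP / Σ intervention successful
specificity = tn / (tn + fp)   # Σ TN / Σ intervention unsuccessful
ppv = tp / (tp + fp)           # positive predictive value: Σ TP / Σ evidence positive
npv = tn / (tn + fn)           # negative predictive value: Σ TN / Σ evidence negative

print(sensitivity, specificity, ppv, npv)
```

Note that sensitivity and specificity condition on REALITY (the columns), while the predictive values condition on the EVIDENCE (the rows), which is why a sensitive test need not be predictive when the prior is low.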
  12. When articulated ToCs explaining impact are supported by evidence, IT IS STRONG EVIDENCE OF IMPACT
      The prior probability of observing a sophisticated ToC with several components is LOW, because
      the probability of a combination (of independent components) is the product of the probabilities of the components (a very SMALL number):
      P(a, b, c, ..., n) = P(a) * P(b) * P(c) * ... * P(n)
      When ToCs with many components are confirmed, it is strong evidence of impact, because:
      – the chances of all components being observed simultaneously were LOW
        • P(E) low, specificity high
      – if the ToC explaining impact holds true, the probability of observing evidence of all components is HIGH
        • P(E | I) = sensitivity high
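The product rule on this slide can be made concrete with assumed numbers. Supposing (hypothetically) that each ToC component would be observed by chance with probability 0.3, and independently of the others:

```python
from math import prod

# Illustrative chance probabilities for four independent ToC components.
p_components_by_chance = [0.3, 0.3, 0.3, 0.3]

# P(E): prior probability of observing ALL components together by chance.
p_e = prod(p_components_by_chance)       # 0.3**4 = 0.0081, very small

# P(E | I): if the ToC holds, all components are very likely to be observed.
p_e_given_i = 0.95

# Likelihood ratio: how much more likely the evidence is under the ToC than
# by chance. A large ratio is what makes the confirmed ToC strong evidence.
likelihood_ratio = p_e_given_i / p_e

print(p_e, likelihood_ratio)  # 0.0081 and roughly 117
```

Each extra independent component multiplies P(E) down, so a richer articulated ToC, once confirmed, is stronger evidence than a sparse one.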
  13. Conclusion: let's use other branches of mathematics
      In the form of SET THEORY or PROBABILITY THEORY (used differently than in frequentist statistics),
      they provide new ways of dealing with uncertainty.
      SET THEORY helps with:
      – Defining success in a more flexible, open and inclusive way (being "on the right track")
      – Explaining success by defining and identifying contributory causes rigorously through data analysis (e.g. with QCA)
      PROBABILITY THEORY helps with:
      – Assessing the strength of evidence in terms of sensitivity, specificity and predictive value
      – Qualitative evidence CAN be strong if a number of conditions are met
      – Carefully weighing each piece of evidence, as in a court of law, using conditional and subjective probabilities
  14. References
      Befani, B. (2013) "Between Complexity and Rigour: Addressing Evaluation Challenges with QCA", Evaluation (forthcoming).
      Befani, B. (2013) "What Were the Chances? Diagnostic Tests and Bayesian Tools to Assess the Strength of Evidence in Impact Evaluation", CDI Practice Paper (forthcoming).
      Mayne, J. (2013) "Making Causal Claims" (presentation to this event).
      Marchal, B. (2013) "Conceptual Distinctions: Complexity and Systems – Making Sense of Evaluation of Complex Programmes" (presentation to this event).