Lessons learned in using process tracing for evaluation


  1. Lessons learned in using process tracing for evaluation. Emily A. Bobrow, PhD, MPH. Data for Impact Webinar, 17 October 2019.
  2. D4I works to:
     • Generate strong evidence for program and policy decision making
     • Build individual and organizational capacity
     • Enhance the use of data for global health programs and policies
  3. What this presentation will cover:
     • Background on process tracing as an innovative qualitative method for evaluations
     • Two evaluations as examples: (1) process tracing of causal mechanisms implemented in the Partnership for HIV-Free Survival (PHFS) in Uganda; (2) a Learning Agenda study of health information system (HIS) strengthening in Madagascar
     • Conclusions
  4. A word about the work:
     • The work for the two case studies of process tracing in Uganda and Madagascar was performed under the MEASURE Evaluation project, Phase IV.
     • MEASURE Evaluation is funded by the United States Agency for International Development (USAID).
  5. Process tracing:
     • Process tracing is a qualitative, case-based approach used to describe a linear causal chain with steps from a conceptual model or theory of change*
     • This qualitative method can be used to answer whether, why, and how an intervention causes a health outcome
     • Process tracing is not common in public health evaluation
     *Better Evaluation. (2016, April 28). Process tracing. Retrieved from http://betterevaluation.org/evaluation-options/processtracing
  6. The process tracing method (a method for assessing causal inference within a single-case design):
     1. Developing theory (causal mechanism)
     2. Developing testable hypotheses
     3. Identifying evidence required to test the hypotheses
     4. Collecting data
     5. Analyzing data and applying tests
  7. Generalized conceptual model or theory (diagram): Strategy #1, entity (stakeholder) → Strategy #2, entity (stakeholder) → Strategy #3, entity (stakeholder) → Outcome: improved health system
  8. Strategies/steps to test:
     • Think about causality—how each step could cause the next one
     • Only include necessary steps
     • Make sure that the steps are measurable
     • Think ahead to generalizability: use language that is potentially applicable to other situations and contexts
  9. Generalizability: By testing a theorized causal mechanism, process tracing methods allow for within-case analysis to provide more broadly generalizable results that can be applied to programs in various contexts*
     *Collier, D. (2011). Understanding process tracing. Political Science and Politics, 44(4): 823–30. Retrieved from https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1856702
  10. Process tracing method, step 2: developing testable hypotheses. For each step of the causal mechanism, develop a core hypothesis, an alternative hypothesis, and a bonus hypothesis.
  11. Two guiding questions:
     1. What would we expect to observe if the hypothesis is true? (Improvement, no relevant change, or worsening?)
     2. Which observations would be very unlikely unless the hypothesis is true? That is, which observations would practically prove the hypothesis because they are extremely unlikely under any other circumstance?
  12. Process tracing method, step 3: identifying the evidence required to test the hypotheses. [Diagram: the process tracing tests, classified by whether the evidence is something you would “like to see,” “expect to see,” or “love to see.”]
  13. Four process tracing tests:
     • Straw in the wind test
     • Hoop test
     • Smoking gun test
     • Doubly decisive test
  14. Straw in the wind test:
     • What evidence would you “like to see”?
     • Evidence for this test is weak: neither necessary nor sufficient to prove the hypothesis (low specificity and low sensitivity)
     • However, it helps move you incrementally toward greater confidence in the hypothesis when considered alongside other evidence
  15. Hoop test:
     • What evidence would you “expect to see” if the hypothesis were true?
     • This evidence is necessary to keep the hypothesis under consideration (low specificity and high sensitivity)
     • If we don’t see this evidence, the hypothesis can be discarded
  16. Smoking gun test:
     • What evidence would you “love to see”?
     • This evidence is sufficient to prove the hypothesis (high specificity and low sensitivity)
     • If we see it, we have proven the hypothesis beyond reasonable doubt
  17. Doubly decisive test:
     • Evidence for this test is rare
     • The test is passed when the evidence confirms the hypothesis and strongly supports causality (high specificity and high sensitivity)
  18. Visual of the four process tracing tests (h = hypothesis):
     • Straw in the wind: low sensitivity (evidence is not necessary for h), low specificity (evidence is insufficient for h)
     • Hoop: high sensitivity (evidence is necessary for h), low specificity (evidence is insufficient for h)
     • Smoking gun: low sensitivity (evidence is not necessary for h), high specificity (evidence is sufficient for h)
     • Doubly decisive: high sensitivity (evidence is necessary for h), high specificity (evidence is sufficient for h)
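As an editorial aside (not part of the original slides), this 2x2 taxonomy can be made concrete in a few lines of code. A minimal sketch in Python; the function name and boolean framing are our own, hypothetical choices:

```python
# Sketch of the 2x2 taxonomy above (function name is ours, hypothetical).
# sensitivity: the evidence is necessary for hypothesis h (you expect to see it if h is true)
# specificity: the evidence is sufficient for h (you are unlikely to see it unless h is true)

def classify_test(high_sensitivity: bool, high_specificity: bool) -> str:
    """Map a piece of evidence's sensitivity/specificity to its process tracing test."""
    if high_sensitivity and high_specificity:
        return "doubly decisive"    # necessary and sufficient for h
    if high_sensitivity:
        return "hoop"               # necessary but not sufficient for h
    if high_specificity:
        return "smoking gun"        # sufficient but not necessary for h
    return "straw in the wind"      # neither necessary nor sufficient for h

assert classify_test(True, False) == "hoop"
assert classify_test(False, True) == "smoking gun"
```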
  19. Process tracing tests:
     • Process tracing tests reflect the probability of observing a particular piece of evidence if the hypothesis under consideration is true
     • Researchers weigh evidence according to how much the evidence increases the probability that a hypothesis is true . . .
     • . . . or how much not finding the evidence increases the probability that the hypothesis is false
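This weighing of evidence has a natural Bayesian reading (see Befani & Stedman-Bryce in the references). The sketch below is ours, with illustrative probabilities rather than values from either evaluation; it shows how finding, or failing to find, a piece of evidence updates confidence in a hypothesis:

```python
def update_confidence(prior: float, p_e_given_h: float,
                      p_e_given_not_h: float, evidence_found: bool) -> float:
    """Bayes' rule: posterior probability of hypothesis h after searching for evidence e.

    p_e_given_h     -- probability of observing e if h is true
    p_e_given_not_h -- probability of observing e if h is false
    """
    if evidence_found:
        like_h, like_not_h = p_e_given_h, p_e_given_not_h
    else:  # not finding e also updates the posterior, in the opposite direction
        like_h, like_not_h = 1 - p_e_given_h, 1 - p_e_given_not_h
    return like_h * prior / (like_h * prior + like_not_h * (1 - prior))

# Hoop-like evidence (expected if h is true) that is NOT found nearly discards h:
print(update_confidence(0.5, p_e_given_h=0.95, p_e_given_not_h=0.50,
                        evidence_found=False))  # ~0.09
# Smoking-gun-like evidence (unlikely unless h is true) that IS found nearly proves h:
print(update_confidence(0.5, p_e_given_h=0.30, p_e_given_not_h=0.02,
                        evidence_found=True))   # ~0.94
```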
  20. The process tracing method (recap): 1. Developing theory (causal mechanism); 2. Developing testable hypotheses; 3. Identifying evidence required to test the hypotheses; 4. Collecting data; 5. Analyzing data and applying tests. A method for assessing causal inference within a single-case design.
  21. Analysis: In process tracing, the unit of analysis is a case, which consists of the following:
     • Effect under investigation (i.e., observed outcome)
     • Hypothesized cause (i.e., program component or intervention)
     • Hypothesized processes that link the hypothesized cause and the effect*
     *Punton, M., & Welle, K. (2015). Applying process tracing in five steps. Brighton, UK: Institute of Development Studies. Retrieved from https://www.semanticscholar.org/paper/Applying-Process-Tracing-in-Five-Steps/c1540ce636740524a07a02a5399a69c0011eca3b
  22. Goal of process tracing: to estimate the level of confidence that a particular intervention has caused or contributed to a particular outcome in a particular stepwise, linear fashion, as laid out in the causal mechanism.
  23. Evaluation 1: Process tracing of causal mechanisms implemented in the Partnership for HIV-Free Survival (PHFS) in Uganda.
  24. PHFS:
     • Innovative project designed to prevent and eliminate mother-to-child transmission of HIV (PMTCT and eMTCT)
     • Brought together proven practices from PMTCT, quality improvement (QI), nutrition, and community outreach to improve health outcomes for mothers living with HIV and their HIV-exposed infants
     • Supported by the United States Agency for International Development and the United States President’s Emergency Plan for AIDS Relief, PHFS was active between 2012 and 2016 in six countries in sub-Saharan Africa: Kenya, Lesotho, Mozambique, South Africa, Tanzania, and Uganda
  25. PHFS evaluations by MEASURE Evaluation:
     • PHFS legacy evaluation report: https://www.measureevaluation.org/resources/publications/tr-18-314
     • Country-specific briefs
     • Outcome evaluation of PHFS in Uganda
     • A Practical Way to Eliminate Mother-to-Child Transmission of HIV: Learning from the Partnership for HIV-Free Survival (PHFS)
  26. Background:
     • Our MEASURE Evaluation team designed an additional evaluation of PHFS in Uganda using process tracing
     • Because of institutional review board delays, we did not complete the data collection
     • We created a guide and sample protocol as a resource: https://www.measureevaluation.org/resources/publications/ms-19-179
  27. Retrospective theory of change created during the legacy evaluation of PHFS (figure).
  28. Causal mechanism focused on “mother-baby (M-B) pair clinic days” contributing to increased retention:
     • Intervention: designated “clinic” days for PMTCT mothers with HIV-exposed infants (M-B pairs) at M-B care points
     • Step 1: health facilities scheduled designated clinic days for PMTCT M-B pairs (separate from clinic days for HIV-negative mothers)
     • Step 2: PMTCT M-B pairs attended the clinics on designated PMTCT clinic days
     • Step 3: PMTCT mothers felt less stigmatized for receiving PMTCT services and formed informal support networks at PMTCT clinic days
     • Step 4: PMTCT mothers were more satisfied with their experiences at the health facilities
     • Step 5: M-B pairs returned for follow-up appointments
     • Outcome: increased retention in care for PMTCT M-B pairs, compared with combined under-5 clinic days (April 2013–August 2015)
  29. Step 1. Health facilities scheduled designated clinic days for PMTCT M-B pairs.
     Data source: focus group discussions with health facility staff (midwives and maternal and child health [MCH] nurses).
     Core hypothesis: 1.1. Health facility staff scheduled mothers for their appointments on M-B pair clinic days.
     Alternative hypothesis: 1.2. Health facility staff did not schedule mothers for appointments on M-B pair clinic days.
  30. Step 2. PMTCT M-B pairs attended the clinics on designated PMTCT clinic days.
     Data sources: (1) quantitative data from the outcome evaluation (% of M-B pairs attending appointments on scheduled M-B pair clinic days); (2) health facility staff focus groups; (3) mother focus groups.
     Core hypothesis: 2.1. M-B pairs attended designated M-B pair clinic days because their appointments were scheduled for those days.
  31. Step 2 (continued). PMTCT M-B pairs attended the clinics on designated PMTCT clinic days.
     Alternative hypothesis: 2.2. M-B pairs did not attend the M-B care points on M-B pair clinic days, because of time conflicts, personal preference, etc.
     Bonus hypothesis: 2.3. M-B pairs attended designated M-B pair clinic days for separate incentives/programs that coincided with clinic days (e.g., nutrition demonstrations, food assistance).
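The editor's notes mention that each step, its hypotheses, and its evidence sources were captured in detailed matrices. As a rough illustration of that structure only (a sketch of ours; the class and field names are hypothetical, not from the protocol), step 2 could be encoded like this:

```python
from dataclasses import dataclass, field

@dataclass
class MechanismStep:
    """One step of a causal mechanism, with its hypothesis matrix (names hypothetical)."""
    description: str
    data_sources: list[str]
    core_hypothesis: str
    alternative_hypotheses: list[str] = field(default_factory=list)
    bonus_hypotheses: list[str] = field(default_factory=list)  # optional extras

step2 = MechanismStep(
    description="PMTCT M-B pairs attended the clinics on designated PMTCT clinic days",
    data_sources=[
        "outcome evaluation (% of M-B pairs attending scheduled clinic days)",
        "health facility staff focus groups",
        "mother focus groups",
    ],
    core_hypothesis="2.1 Pairs attended because appointments were scheduled for those days",
    alternative_hypotheses=["2.2 Pairs did not attend (time conflicts, preference, etc.)"],
    bonus_hypotheses=["2.3 Pairs attended for coinciding incentives/programs"],
)
```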
  32. Retrospective theory of change created during the legacy evaluation of PHFS (figure).
  33. Causal mechanism focused on quality improvement (QI) supervision and coaching contributing to improved and sustained QI work:
     • Intervention: QI supervision and coaching to health facilities by regional and district QI coaches
     • Step 1: QI coaches made contact with assigned facility-based teams
     • Step 2: QI coaches provided initial and ongoing supervision, technical support, and motivation to the facility-based teams around key QI issues
     • Step 3: facility-based QI team members gained QI skills, felt accountable to the QI coaches, and felt motivated to do QI work
     • Step 4: facility-based teams performed QI work
     • Step 5: facility-based QI teams saw improvement in defined indicators and patient outcomes and felt motivated to continue QI work
     • Outcome: improved and sustained QI work on PMTCT over time throughout PHFS
  34. Step 4. Facility-based teams performed QI work.
     Data sources: (1) focus group with district QI coaches; (2) focus group with regional QI coaches; (3) focus group with facility-based QI team members; (4) QI journals (look for frequency of meetings, notation of change ideas and action plans, tracking of indicators/projects, overall completeness of QI journals).
     Core hypothesis: 4.1. With skills, motivation, and continued coaching, facility-based teams performed QI work at their facilities.
     Alternative hypothesis: 4.2. Despite skills and motivation to do QI work, facility-based teams did not perform QI work because of conflicting work priorities.
  35. Step 4 (continued). Facility-based teams performed QI work.
     Bonus hypotheses:
     • 4.3. QI teams performed QI work because they feared repercussions of not complying with their responsibilities
     • 4.4. QI teams were able to perform QI work because they received additional QI resources (i.e., better journals, posters, worksheets)
     • 4.5. QI teams performed QI work because they were motivated by the learning sessions (i.e., by competition or inspiration)
  36. Evaluation 2: Prospective study of health information system (HIS) strengthening in Madagascar: integrated routine and surveillance systems with a focus on malaria.
  37. Madagascar context:
     • The health ministry and the U.S. government made a commitment to reduce reporting redundancies through elimination of vertical systems and/or integration into the health management information system
     • At the same time, a malaria-specific surveillance system is required for active detection of cases
     • MEASURE Evaluation Phase IV work began on this current mandate in February 2017 with the drafting and acceptance of the Road Map for the Subcommittee of the Health Information System (HIS)
     • The Road Map has nine HIS strengthening strategies
  38. MEASURE Evaluation Learning Agenda:
     • Answers key questions about investments in HIS
     • Identifies evidence-based packages of HIS interventions
     • Builds the evidence base of what works to strengthen HIS
  39. Learning Agenda study of HIS strengthening in Madagascar. Study objectives:
     1. To document the system strengthening process, resulting interventions, and efforts to monitor and assess their implementation
     2. To understand and verify the process by which changes in key HIS interventions result in changes in HIS performance (data quality and use) and service delivery
  40. Interventions mapped to the HIS strengthening model:
     • Implement cascade trainings for central, regional, and district staff on use of the data collection platform and data quality assurance tools
     • Conduct a national HIS assessment to inform planning for HIS strengthening
     • Create a data quality assurance protocol and supervision tools
     • Support development of an HIS strengthening joint action plan
     • Support creation of a technical working group to standardize national data quality assurance practices
     • Support the transition of Access-based health information software to a web-based portal that communicates with DHIS 2*
     • Support the DLP to produce an ongoing monthly malaria bulletin
     Expected result: improved stakeholder coordination, along with the development of standardized protocols for data analysis, presentation, and review, will improve availability of high-quality data and lead to better programmatic decisions, especially in the face of disease outbreaks.
     *An electronic platform for the collection and analysis of health data
  41. Road Map:
     • Strategy 1. Institutional strengthening of HIS (governance: structure, standards and procedures, strategic documents/HIS)
     • Strategy 2. Establishment of an effective information technology (IT) platform for HIS support (availability of IT equipment, performance of IT tools/software)
     • Strategy 3. Development or updating of tools or guides for the management and use of information (management tools, management manual, training plan, supervision plan)
     • Strategy 4. Development of a data quality assurance system (monitoring and evaluation, supervision, verification, quality control, validation and transfer, and retro-information)
  42. Road Map (continued):
     • Strategy 5. Enhanced competence of officers responsible for management, use of data, and use of information at all levels
     • Strategy 6. Creation of a culture of data use for decision making
     • Strategy 7. Creation of a platform for sharing and dissemination of information (Internet, periodic bulletin, periodic reviews) with easy access by all users
     • Strategy 8. Implementation of the DHIS2 software at the central level for data warehouses, fed periodically by the various official databases
     • Strategy 9. Mobilization of resources and sustainability
  43. Data collection:
     1. Systematic tracking of HIS strengthening and integration activities related to the Road Map
     2. Periodic and ongoing focus group discussions to assess perceptions and implementation of the Road Map
     3. Qualitative data collection for the process tracing method, to describe the causal chain between the intervention activities and the relationships with the HIS performance outcomes
  44. Causal mechanism (macro-level) focused on implementation of the Road Map for the HIS Sub-committee to create an efficient, integrated HIS:
     • Intervention: implementation of the Road Map for the HIS Sub-committee
     • Step 1: design and implement procedures and mechanisms for institutional strengthening of HIS
     • Step 2: lead the process for development, updating, and launch of tools, guides, training plans, supervision plans, and the DQA system
     • Step 3: develop and implement plans to enhance competence of officers responsible for management, use of data, and use of information at all levels
     • Step 4: increasingly engage officers in data demand and use
     • Step 5: implement strategies to create a culture of data use for decision making
     • Step 6: craft and shape use of the communication platform, including availability of dashboards, bulletins, regular data sharing meetings, etc.
     • Outcome: an efficient, unique, and integrated HIS
  45. Step 1. Design and implement procedures and mechanisms for institutional strengthening of HIS.
     Data source: focus groups with implementers of the HIS Road Map.
     Core hypothesis: 1.1. The stakeholder workshop for the implementation of the HIS Sub-committee galvanized efforts to put in place procedures and mechanisms for institutional strengthening of HIS.
     Alternative hypotheses: 1.2. Establishment of mechanisms for institutional strengthening of HIS was not related to the stakeholder workshop: the government of Madagascar was already carrying out plans. 1.3. Establishment of mechanisms for institutional strengthening of HIS was not related to the stakeholder workshop: implementing partners have been the primary motivator.
  46. Step 1. Evidence and test result.
     Evidence to support the core hypothesis:
     “The workshop really was necessary! Because there were no standard or procedures, it’s as if everything was done blindly.” (FGD 2)
     “DHIS2 implementation was included in the Road Map . . .” (FGD 3)
     “The purpose of the Antsirabe workshop was to develop an operational plan to improve the HIS system.” (FGD 6)
     “There were other workshops before Antsirabe . . . in Antsirabe committees were established and we institutionalized everything.”
     Evidence against the core hypothesis:
     “. . . there were a lot of efforts made before this Road Map.” (FGD 3)
     “. . . there are still many departments that do not yet know that these standards and procedures exist.” (FGD 2)
  47. Step 1. Evidence and test result (continued):
     • Need to weigh the evidence
     • Look back at the hypotheses and the tests
     • Core hypothesis: 1.1. The stakeholder workshop for the implementation of the HIS Sub-committee galvanized efforts to put in place procedures and mechanisms for institutional strengthening of HIS
     • Tests: straw in the wind = “like to see” = evidence for this test is weak; hoop test = “expect to see” = keeps the hypothesis under consideration; smoking gun = “love to see” = sufficient to prove the hypothesis
     • Conclusion: smoking gun
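To make the verdict in the last bullet explicit, here is a small sketch of the standard test logic (our illustration, drawing on Collier in the references, not the team's analysis code):

```python
def apply_test(test: str, evidence_found: bool) -> str:
    """Standard interpretation of a process tracing test result for hypothesis h."""
    verdicts = {
        ("straw in the wind", True):  "h slightly strengthened",
        ("straw in the wind", False): "h slightly weakened, not eliminated",
        ("hoop", True):               "h stays under consideration",
        ("hoop", False):              "h is discarded",
        ("smoking gun", True):        "h is confirmed",
        ("smoking gun", False):       "h somewhat weakened, not eliminated",
        ("doubly decisive", True):    "h is confirmed and rival hypotheses eliminated",
        ("doubly decisive", False):   "h is discarded",
    }
    return verdicts[(test, evidence_found)]

# Step 1 above: "love to see" evidence appeared in the focus groups, so the
# core hypothesis passes a smoking gun test.
print(apply_test("smoking gun", evidence_found=True))  # h is confirmed
```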
  48. Conclusions about the value and challenges of process tracing.
  49. Value of process tracing:
     • Allows in-depth examination of how and why an intervention influenced the outcome
     • Can add a lot of clarity, but with extra work for the initial set-up
     • Can improve precision of existing data collection tools
     • Overall: a systematic, transparent way to analyze qualitative data, identify and rule out alternative explanations, justify conclusions, and show where data are weakest
  50. Challenges in process tracing:
     • Detailed, analytical initial thought and work required
     • Need to avoid developing theory at too “micro” a level, which makes it difficult to test hypotheses
     • Need to be careful about evidence of absence vs. absence of evidence for hypotheses
     • Important to have an iterative process
  51. Thanks to . . .
     • USAID and PEPFAR for supporting this work
     • Colleagues at MEASURE Evaluation, particularly Heidi Reynolds, for inspiring our team to use process tracing
     • Colleagues in Uganda at the Makerere School of Public Health
     • Colleagues in Madagascar, particularly those on the Research and Evaluation Technical Working Group for the DEP
     • Experts on process tracing: Ir. Cecile Kusters at the Wageningen Centre for Development Innovation; Melanie Punton at Itad; and Gavin Stedman-Bryce at Pamoja
  52. References
     • Beach, D., & Pedersen, R. (2013). Process-tracing methods: Foundations and guidelines. Ann Arbor, MI, USA: University of Michigan Press. Retrieved from https://www.researchgate.net/publication/287260232_Process-Tracing_Methods_Foundations_and_Guidelines
     • Befani, B., & Mayne, J. (2014). Process tracing and contribution analysis: A combined approach to generative causal inference for impact evaluation. IDS Bulletin, 45(6): 17–36. Retrieved from https://onlinelibrary.wiley.com/doi/abs/10.1111/1759-5436.12110
     • Befani, B., & Stedman-Bryce, G. (2016). Process tracing and Bayesian updating for impact evaluation. Evaluation, 23(1): 42–60. Retrieved from https://journals.sagepub.com/doi/abs/10.1177/1356389016654584
     • Better Evaluation. (2016, April 28). Process tracing. Retrieved from http://betterevaluation.org/evaluation-options/processtracing
     • Collier, D. (2011). Understanding process tracing. Political Science and Politics, 44(4): 823–30. Retrieved from https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1856702
     • Punton, M., & Welle, K. (2015). Applying process tracing in five steps. Brighton, UK: Institute of Development Studies. Retrieved from https://www.semanticscholar.org/paper/Applying-Process-Tracing-in-Five-Steps/c1540ce636740524a07a02a5399a69c0011eca3b
  53. This presentation was produced with the support of the United States Agency for International Development (USAID) under the terms of the Data for Impact (D4I) associate award 7200AA18LA00008, which is implemented by the Carolina Population Center at the University of North Carolina at Chapel Hill, in partnership with Palladium International, LLC; ICF Macro, Inc.; John Snow, Inc.; and Tulane University. The views expressed in this publication do not necessarily reflect the views of USAID or the United States government. www.data4impactproject.org

Editor's Notes

  • We began with a literature search on process tracing, reading and learning about the method’s history and approach. We looked for examples of where process tracing had been used in public health evaluations and found very few. We contacted experts and had phone calls with them.

    In reading about process tracing, we found that the examples were often in the field of political science. Examples also came from detective work, such as solving a murder, and even Sherlock Holmes cases. It is good to know this background as we describe some of the terminology used in the process tracing method.
  • The steps in the process tracing method are not unusual ones for public health research and evaluation. The 5 steps are on the slides (read them).

    Keep in mind that the purpose of process tracing is to assess causal inference within a single case design, and to think ahead to generalizability

    The next set of slides will walk through each step in the method.

    The first step is developing theory about how and why the intervention leads to an observed outcome. This is a specific type of theory, sometimes called a causal mechanism. This involves elaborating all the steps between A (the intervention), and B (the outcome), opening up the black box and breaking down what is inside into the smallest feasible number of STEPS or PARTS. (See next slide.)
  • Here is a general example of a conceptual model to test using process tracing.

    This gives detail on how and why an intervention leads to an observed outcome. In this causal mechanism you must elaborate all the steps between A (the intervention), and B (the outcome), opening up the black box and breaking down what is inside into the smallest feasible number of steps or parts.
  • For our work, we made very small, specific steps in our evaluation in Uganda and larger, macro steps in our evaluation in Madagascar.
  • The second step is to develop hypotheses to test each part of the mechanism. Develop three hypotheses for each step.

    We created detailed matrices and included each step, each hypothesis, and the source of the evidence needed. This level of detail was deemed necessary in order to apply the process tracing tests during analysis.
  • The third step is to identify the evidence needed to test the hypotheses.
  • This is actually quite an intuitive process—it’s something we do all the time in day-to-day life.

    Constructing tests is highly analytical and contextual: your decision about whether something is a hoop test or a straw in the wind test is based on your subjective understanding of what a piece of evidence means in a particular context.

    There are clearly risks of subjectivity here: what one researcher might see as a hoop test, someone else might see as a straw in the wind. The important thing is to be transparent about the reasoning behind particular tests, which allows others to scrutinize your reasoning.
  • The fourth step is to collect the data in order to test the hypotheses. We decided to use focus group discussions as our main method of data collection, because we thought that having groups discuss their thoughts would give us a better sense of the general probability that one step might lead to the next one.

    The fifth step is to analyze the data and apply the tests. This has taken a tremendous amount of time—possibly because this is a new method for us.
  • A legacy evaluation of PHFS conducted by a team from MEASURE Evaluation in 2018 identified compelling lessons from the different ways the project was implemented in the participating countries. These lessons are broadly applicable to countries and facilities looking to reduce mother-to-child transmission of HIV, increase antiretroviral therapy retention, support better nutrition practices, and improve patients’ health-seeking behaviors.
    Our evaluation team then created a guide to provide direct and practical input for identifying and implementing applicable activities in the local context. The guide includes descriptions of key lessons as well as an extensive checklist to help potential decision makers and implementers understand how and why to launch and sustain the critical activities of the PHFS approach.
  • Note that the guide includes a sample protocol that can be adapted to evaluate PMTCT programs in other countries using process tracing, including specific language from our protocol for evaluating PHFS.

    Natural audiences for this guide are evaluators or researchers interested in the innovative method of process tracing in public health evaluations, in partnership with other stakeholders, such as government and nongovernmental implementers of PMTCT programs. We highly recommend that investigators develop protocols in a participatory manner, involving partners at the local, national, and international levels, and in conjunction with donors.

    Next, I will walk through how we designed the evaluation of PHFS in Uganda using process tracing.
  • The evaluation identified three main categories of PHFS activities that helped improve program performance: (1) service delivery, (2) quality improvement practices, and (3) stakeholder engagement, which, in turn, has subcategories for oversight and implementation.

    In each category/subcategory, there is a set of core activities that are linked within and/or across the different categories, which ultimately lead to important outcomes.

    Taken together, the categories/subcategories and the linked activities are the basis for our PHFS retrospective theory of change.

    To be clear, a retrospective theory of change shows what was done to achieve results as opposed to the more common use of a theory of change to show what is planned. The retrospective theory of change also incorporates findings from all six countries, demonstrating the commonalities across them.

    We started with this retrospective theory of change for PHFS and homed in on specific areas to test using process tracing. We created two specific mechanisms to test.
    (1) The first was in the area of service delivery, specifically about designated “clinic days” for PMTCT mothers and babies to receive care. The outcome was increased retention in care.
  • This is an example of the information we wrote down for each step. This step is pretty straightforward. No bonus hypotheses were written for this step.
  • This is an example of the information we wrote down for each step. This step is pretty straightforward. No bonus hypotheses were written for this step.
  • Note that you can use quantitative data sources as evidence in process tracing.


  • For the alternative hypothesis, note that other reasons why mothers did not attend clinic days may be found; the process can be iterative, and we can add another hypothesis if necessary.

  • We also created a second mechanism to test based on the retrospective theory of change.
    (2) We focused on the QI component of PHFS, specifically on QI coaching and mentoring. The outcome focused on improved QI.
  • Note that you can use existing data, like QI journals, as evidence in process tracing.
  • Note that you can use existing data, like QI journals, as evidence in process tracing.
  • There can be multiple bonus hypotheses
  • The government of Madagascar’s key focus areas are economic recovery, infrastructure, education, energy, and health. As noted in the Madagascar National HIS Strengthening Strategic Plan (2013–2017), the Ministry of Health (MOH), in collaboration with key health partners, has developed and implemented interventions to build an integrated electronic health management information system (or HMIS) to strengthen the reporting of health information at regional and district levels. The goal is to reduce reporting redundancies at all levels by either eliminating vertical disease reporting systems or integrating them into the HMIS. For the immediate future, the country needs a surveillance system to ensure rapid alert of notifiable diseases, such as malaria, plague, polio, and hemorrhagic fevers.

    As Madagascar moves toward pre-elimination strategies for malaria, a malaria-specific surveillance system is required for close monitoring of the number of cases and to document detailed information on the cases. This promotes more active detection of cases.
  • The point to make is that this HIS Strengthening Model (HISSM) is the foundation for all our work. We can map the Madagascar activities to the HISSM elements, and the model informs the Learning Agenda study of how those activities will result in improved HIS performance. The HISSM:

    Articulates the project’s current understanding of health information system (HIS) strengthening
    Guides ongoing learning on how HIS in low- and middle-income countries (LMICs) are designed, developed, and implemented to support health systems and to improve health outcomes
    Focuses on HIS strengthening at the country level
    Can guide countries in assessment, planning, and improvement of HIS

    HISSM objectives:
    Promote HIS as an essential function of a health system
    Define HIS strengthening
    Measure HIS performance
    Monitor and evaluate HIS interventions


    (From HISSM slide deck):

    This PowerPoint presentation provides an overview of MEASURE Evaluation’s Health Information System Strengthening Model, or the HISS Model. The following slides describe the purpose of the model and each of the model’s areas and sub-areas.

    The main purpose of MEASURE Evaluation’s HISS model is to articulate the project’s current understanding of, and guide ongoing learning in, how health information systems (HIS) in low- and middle-income countries are designed, developed, and implemented over time to support health systems and improve health outcomes.

    MEASURE Evaluation engages in HIS strengthening primarily at the country level; this is central to the design of the model. Our intention is for the model to be useful for countries at both national and subnational levels as a guide in their assessment, planning, and improvement of health information systems.
  • The MEASURE Evaluation Madagascar team, along with colleagues, mapped their interventions to the HISSM.

    This served as our conceptual model, but so did the Road Map.
  • In February 2018
    9 FGDs total with directors and technical staff from partners working on implementing the Road Map
    3 FGDs in the morning with directors
    6 FGDs in the afternoon with technical staff

    November 2018
    2 FGDs to pilot the process tracing mechanism and data collection tool

    In July 2019
    7 FGDs with directors and technical staff from partners working on the implementation of the Road Map

  • This is an example of the information we wrote down for each step. This step is pretty straightforward. No bonus hypotheses were written for this step.
  • Extremely useful for understanding how an intervention works and where weak points might be. So ultimately, it comes down to what insights are useful for the evaluation. If there are micro-level insights about how and why an intervention works that would be useful, process tracing can help generate robust evidence that the intervention worked in a particular way.

    Perhaps the biggest advantage of process tracing is around defining very specific hypotheses to test various elements of your theory. This does take some up-front time and thought, but it can help ensure that the data collection tools are collecting the right evidence to test the theory. With quite a contained, relatively simple intervention like this one, it requires a relatively small amount of extra up-front work; that is unlikely to be the case for larger, more complex interventions!

    Structuring data collection around hypotheses, and then specifically testing them against the evidence you expect to see, would love to see, and would like to see, makes the process of analyzing qualitative data and coming to conclusions very transparent and systematic, and helps show where evidence is weaker. We can see which hypotheses are better supported, and where the gaps are. In a more iterative process, this would enable us to go back and strengthen the evidence around those gaps, and investigate new emerging hypotheses.
  • The process takes time! Do you need such an in-depth process to work out what is happening within the course of a single interaction?

    Need to be careful to frame your theory in a way that is testable and that will make sense in focus groups, interviews, and observations. It is difficult to design the guides well and to train the data collectors well. If causal mechanisms break a single interaction down into component parts, testing these parts might be difficult, because in reality people don’t think this way.

    Need to be careful not to take absence of evidence (for example something not being said in an interview) as evidence that the hypothesis is not true. Important to attempt to systematically test hypotheses as far as possible. This requires care and precision in how tools are used, particularly qualitative interview guides that are more open. It also requires well-trained local consultants who are often the data collectors.

    Finally: real challenge is that it might work best as an iterative process. Testing hypotheses, ruling some out, identifying evidence gaps and new hypotheses, then going back out to collect more evidence.
