Methodological Challenges in Evaluating Malaria Control Program Impact: How do we ever find out what worked?
 

Presented by Tom Smith, Swiss Tropical and Public Health Institute, as part of a symposium organized by MEASURE Evaluation and MEASURE DHS at the 6th MIM Pan-African Malaria Conference.

    Presentation Transcript

    • Methodological challenges in evaluating malaria control program impact: how do we ever find out what worked? Thomas Smith, Melissa Penny, Nakul Chitnis, Dept. Epidemiology & Public Health
    • Issues in monitoring and evaluation. According to the Global Fund to Fight AIDS, Tuberculosis and Malaria (GFATM), "As a general guideline, it is expected that 5 to 10 per cent of the national program budget be allocated for Monitoring and Evaluation"*. Many malaria control programs spend far less than this on M&E, and 36/45 countries have insufficient data to assess trends (Noor, Monday; WMR 2012). Sufficient data, or sufficient representative data? Alignment of data collection and health systems response? Which components are working well? Which components need improving? Which components are poor value? *GFATM Guidelines for budgeting in global fund grants (2012)
    • Malaria has decreased in many places: Fajara, Gambia, malaria admissions and deaths 1999-2007 (Ceesay, Lancet 2008).
    • What factors contribute? We have seen substantial reductions in malaria in many endemic countries. What are the contributions to this of: improved drugs (ACT)? Improved delivery systems? LLIN scale-up? Improvements in housing? Urbanisation? Climate (drought in Eastern Africa)? Evolutionary changes in parasites or vectors?
    • Limitations of analysis of routine data: in real situations multiple interventions occur at the same time. Can trends be attributed to interventions?
    • Limitations of analysis of routine data: improvements in child survival, Tanzania. Between 1999 and 2004, important improvements in Tanzania's health system included doubled public expenditure on health, and decentralisation and sector-wide basket funding, together with increased coverage of key child-survival interventions: integrated management of childhood illness, insecticide-treated nets, vitamin A supplementation, immunisation, and exclusive breastfeeding. Non-malaria interventions can also be relevant and will impact malaria mortality because of disease interactions (Masanja et al, Lancet 371, 2008).
    • Limitations of plausibility designs: with a complex system, plausibility can be a misleading criterion. [Slide diagram: enhancing and reducing influences on malaria infection, morbidity and mortality, including climate change, economic development, vector-proof housing, urbanisation, source reduction, improved case management, IRS, LLINs, access to care, larviciding, vaccines (?), and ACT treatment.]
    • Use of field trial data to estimate impact. Field trials tell us the efficacy of an intervention; coverage data are often available from surveys (e.g. DHS) and can be combined with efficacy data from field trials. Example: Cochrane review of ITN impact (Lengeler, Cochrane Review, 2009); LIST model (Eisele et al).
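As a minimal sketch of the static "efficacy times change in coverage" calculation described above, the snippet below combines survey coverage with trial efficacy to estimate deaths averted; all numbers are illustrative placeholders, not values from LiST, the Cochrane review, or this presentation.

```python
# Static "deaths averted" estimate: baseline deaths x protective efficacy x increase in coverage.
# All values are illustrative placeholders, not from the presentation, LiST, or the Cochrane review.

baseline_malaria_deaths = 50_000   # annual malaria deaths before scale-up (hypothetical)
itn_efficacy = 0.17                # protective efficacy against mortality, taken from field trials
coverage_before = 0.10             # ITN coverage in the baseline survey (e.g. DHS)
coverage_after = 0.60              # ITN coverage in the follow-up survey

deaths_averted = baseline_malaria_deaths * itn_efficacy * (coverage_after - coverage_before)
print(f"Estimated deaths averted per year: {deaths_averted:,.0f}")
```

This is exactly the kind of static estimate whose limitations the following slides discuss: it ignores transmission dynamics, loss of immunity, and interactions between interventions.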
    • Methodological and Policy Limitations of Quantifying the Saving of Lives: A Case Study of the Global Fund's Approach (McCoy et al, PLoS Med 2013).
    • Estimate of reduction in mortality from LIST (Eisele et al, Malar J. 2012; 11: 93).
    • Use of static models for estimating impact. In general, meta-analyses of clinical trials do not capture: variations between settings in the impact of interventions; long-term temporal dynamics; effects of loss of immunity; effects of reduction of the infectious reservoir; or interactions between different interventions.
    • Limitations of static models: complex systems are characterised by non-linearities that make the knock-on effects of changes unpredictable. [Same slide diagram of enhancing and reducing influences on malaria infection, morbidity and mortality as above.]
    • Simulation of impact of LLIN campaign: different models suggest different effects on subsequent clinical incidence (Briet et al, in preparation).
    • Intervention scale-up: scale-up of ACTs has occurred at the same time as scale-up of LLINs. Which accounts for the reduction in burden? [Charts of intervention scale-up and disease burden; statistics from World Malaria Report, 2011.]
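The attribution problem can be made concrete with a small sketch: if ACT and LLIN coverage follow the same trajectory (as in the made-up series below), a regression of burden on the two coverage series cannot apportion the decline between them.

```python
import numpy as np

years = np.arange(2005, 2012)
llin_coverage = np.linspace(0.05, 0.65, len(years))  # LLIN coverage scales up over the period...
act_coverage = llin_coverage.copy()                  # ...and ACT coverage follows the same trajectory
burden = 100.0 - 60.0 * llin_coverage                # suppose (unknown to the analyst) only LLINs matter

# Try to attribute the decline by regressing burden on both coverage series
X = np.column_stack([np.ones(len(years)), llin_coverage, act_coverage])
coef, residuals, rank, _ = np.linalg.lstsq(X, burden, rcond=None)
print("rank of design matrix:", rank)                      # 2, although the model has 3 parameters
print("fitted coefficients (intercept, LLIN, ACT):", np.round(coef, 1))
# Because the two coverage series move together, the data cannot distinguish
# "LLIN effect -60, ACT effect 0" from "LLIN 0, ACT -60" (or any split between them);
# least squares simply divides the credit, and routine trend data alone cannot resolve it.
```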
    • Use of static models for estimating impact. Example: WHO estimates of malaria mortality burden are adjusted for LLIN coverage, using the figure of 17% mortality reduction, but are not adjusted for ACT scale-up. Implication: evaluations of the impact of ITN scale-up based on WHO burden statistics will concur with the value 17%, while evaluations of the impact of ACT scale-up based on the same statistics will attribute minimal impact.
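A short worked illustration of the circularity described above, with made-up numbers: if the burden series is constructed by applying an assumed 17% LLIN mortality reduction to coverage data, then back-calculating the LLIN effect from that series simply returns 17%.

```python
# Step 1: a burden series is CONSTRUCTED by assuming LLINs reduce malaria mortality by 17%
assumed_llin_effect = 0.17
llin_coverage = [0.1, 0.3, 0.5, 0.7]            # LLIN coverage over four years (hypothetical)
deaths_without_llins = 100_000                  # counterfactual deaths at zero coverage (hypothetical)
estimated_deaths = [deaths_without_llins * (1 - assumed_llin_effect * c) for c in llin_coverage]

# Step 2: an "evaluation" then back-calculates the LLIN effect from the same estimates
recovered_effect = (deaths_without_llins - estimated_deaths[-1]) / (deaths_without_llins * llin_coverage[-1])
print(f"recovered 'impact' of LLINs: {recovered_effect:.2f}")   # 0.17, by construction
# ACT coverage was never used when building the series, so the same exercise
# would attribute essentially zero impact to ACT scale-up, whatever actually happened.
```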
    • Simulation modelling can be used to support decisions about which interventions are likely to be most cost-effective in given settings, and to predict what a program should be achieving. Models used to project disease burden need to consider transmission dynamics. Such modelling should be seen as a complement to, rather than a substitute for, the capture of data on coverage or access from the field. Circular reasoning should be avoided: don't use a model that assumes specific intervention impacts to estimate disease burden, and then use its outputs to estimate intervention impacts.
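As an illustration of what "considering transmission dynamics" involves, here is a minimal Ross-Macdonald-style sketch (not the simulation platform used by the presenters), in which an LLIN campaign is represented crudely as a proportional cut in the mosquito biting rate; all parameter values are illustrative.

```python
def ross_macdonald(days, net_effect=0.0, a=0.2, b=0.5, c=0.5,
                   m=0.4, r=0.01, g=0.1, x0=0.6, y0=0.375, dt=0.1):
    """Ross-Macdonald-style model: x = proportion of humans infected,
    y = proportion of mosquitoes infected. `net_effect` crudely represents
    LLINs as a proportional cut in the biting rate a. All values are illustrative."""
    a_eff = a * (1.0 - net_effect)
    x, y = x0, y0
    for _ in range(int(days / dt)):
        dx = m * a_eff * b * y * (1.0 - x) - r * x   # new human infections minus recoveries
        dy = a_eff * c * x * (1.0 - y) - g * y       # new mosquito infections minus mosquito deaths
        x += dx * dt
        y += dy * dt
    return x

# Human prevalence three years after an LLIN campaign, for two assumed levels of effect.
# The model starts at the pre-intervention equilibrium (x = 0.6, y = 0.375).
print("no nets:        ", round(ross_macdonald(days=3 * 365), 2))
print("60% biting cut: ", round(ross_macdonald(days=3 * 365, net_effect=0.6), 2))
```

Unlike a static efficacy-times-coverage calculation, a dynamic model of this kind lets prevalence respond gradually to the shrinking infectious reservoir and can, at sufficiently high effective coverage, predict interruption of transmission.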
    • Secondary and incidental explanations: changes in other diseases that interact with malaria; climate change; evolution of resistance/insensitivity (drug resistance, insecticide resistance, (vaccine insensitivity)); evolution of behavioural resistance; evolution of life-histories (Ferguson et al, 2012). In general these are highly unpredictable.
    • CDC light trap catches of Anopheles & monthly rainfall patterns (Tanga region, Tanzania): Masaika 1998-2001, Kirare 2004-2009. "Decline(s) in the density of malaria mosquito vectors ... during both study periods despite the absence of organized vector control" (Meyrowitsch et al, Malar J. 2011).
    • Conclusions so far. Collection of representative data from programs is key for assessing malariological trends, but attribution of effects to specific interventions is problematic even if comprehensive data are collected. Mathematical models can help attribute effects to specific interventions, but models should not be used uncritically: circular logic should be avoided (don't use coverage data to simultaneously estimate trends in burden and in intervention impact); models of intervention impact need to allow for temporal dynamics; and some causes of malariological trends may be inherently unpredictable and hard to model.
    • Trial design for empirical estimation of impact. Individually-randomised RCTs are feasible only for estimating drug efficacy; for testing intervention combinations in the real world, RCTs are infeasible. Instead: plausibility designs? Before-and-after studies (intervention time vs. control time)? One-against-one trials (intervention zone vs. control zone)? Comparisons of small numbers of villages/districts (intervention zones vs. control zones)?
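To make the scale of the problem concrete, the sketch below applies a standard sample-size formula for unmatched cluster-randomised trials with a binary outcome (Hayes & Bennett style); the prevalences, cluster size, and between-cluster coefficient of variation are illustrative assumptions.

```python
from math import ceil

def clusters_per_arm(pi0, pi1, m, k, z_alpha=1.96, z_beta=0.84):
    """Clusters needed per arm in an unmatched cluster-randomised trial with a binary
    outcome: pi0, pi1 are the expected prevalences in the two arms, m is the number of
    people sampled per cluster, and k is the between-cluster coefficient of variation."""
    within = (pi0 * (1 - pi0) + pi1 * (1 - pi1)) / m
    between = k ** 2 * (pi0 ** 2 + pi1 ** 2)
    return ceil(1 + (z_alpha + z_beta) ** 2 * (within + between) / (pi0 - pi1) ** 2)

# Illustrative scenario: detect a fall in parasite prevalence from 30% to 20%,
# sampling 100 people per cluster, with modest between-cluster variation (k = 0.25).
print(clusters_per_arm(pi0=0.30, pi1=0.20, m=100, k=0.25), "clusters per arm")
```

Even with this modest between-cluster variation, the formula calls for roughly a dozen clusters per arm, which is why one-against-one or two-zone comparisons are essentially uninterpretable.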
    • Factors that can account for variation between zones. [Same slide diagram of enhancing and reducing influences on malaria infection, morbidity and mortality as above.]
    • Feasibility of massive group-randomised effectiveness studies. Example: Schellenberg et al, Malar J, 2011.
    • Stepwise introduction of interventions. The inclusion of elements of randomisation in the order of introduction to different geographical areas is critical for inferring causality from such data, and the importance of this should be stressed to program managers. Example: trial of odour-based traps on Rusinga Island, Kenya (population 25,000).
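A minimal sketch of randomising the order of roll-out, in the spirit of the stepwise introduction described above; the cluster names and number of waves are invented for illustration.

```python
import random

random.seed(2013)  # fix the seed so the allocation is reproducible and auditable

clusters = [f"district_{i:02d}" for i in range(1, 13)]   # 12 hypothetical districts
n_waves = 4                                              # intervention introduced in 4 waves

random.shuffle(clusters)                                 # randomise the order of roll-out
per_wave = len(clusters) // n_waves
schedule = {w + 1: clusters[w * per_wave:(w + 1) * per_wave] for w in range(n_waves)}

for wave, assigned in schedule.items():
    print(f"wave {wave}: {', '.join(sorted(assigned))}")
```

Because the wave at which each district receives the intervention is determined by chance, comparisons of intervened and not-yet-intervened districts within each period support causal inference in a way that a convenience-ordered roll-out does not.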
    • Conclusions. Collection of representative data from programs is key for assessing malariological trends, but attribution of effects to specific interventions is problematic even if comprehensive data are collected. Mathematical models can help attribute effects to specific interventions, but models should not be used uncritically: circular logic should be avoided (don't use coverage data to simultaneously estimate trends in burden and in intervention impact), and models of intervention impact need to allow for temporal dynamics. Direct estimation of the impact of complex intervention programs is challenging but possible: large numbers of clusters need to be evaluated; stepwise roll-out should be considered; and randomisation of the order of roll-out is critical for generating interpretable data and is possible even in programme settings.