Evaluating Systems Change
Overview
Feedback from Sport England (SE) on evaluating systems change
Types of evaluation
Generic evaluation timetable
Steps for generic evaluation
What are our responsibilities toward the evaluation partner?
Questions for bidders
How do we sample?
How do we measure the impact & quality of the evaluation itself?
How should we procure?
Feedback from Sport England (SE) on evaluating systems change
• Consider uniqueness of systems change, but still need to be able to
baseline it and evaluate change over time and what's caused it
• Articulate how the programme expects to bring about change so that the
evaluation partner can evaluate whether & how it's occurring
• Define the initial outcomes & target cohorts you want to baseline &
evaluate
• Provide access to the evaluation partner to the people & organisations who
most influence change and the residents
• Ensure that the evaluation is part of the process of change and supports & informs any
changes to the programme as it develops
• Build in ways that the evaluation approach itself can be adapted to suit the
changing needs of the programme
Applying insight to test ways to help people get more active & be more self-sufficient
We will use an iterative approach so that we can move quickly from research & analysis to
testing opportunities for behaviour change with residents & staff, and to supporting residents
to self-report their behaviour digitally to track impact.
1. Develop evaluation framework, baseline & deep dive to develop hypotheses
Development of evaluation framework, tools & systems change tracking
• Develop a framework to manage risk, finance, evaluation & escalation; develop tools for measuring & tracking with the Board and the SE evaluation partner; baseline systems change indicators to co-design a maturity model for the programme; map how stakeholders influence & support each other, identify influencers and assess leverage points on which to test interventions; social network analysis of partners to demonstrate the effectiveness of the system (see the sketch below); develop a maturity model for neighbourhood-level systems change; baseline & regularly review practitioners' readiness to deliver systems change; align programme performance monitoring to the borough outcomes framework
• Outcome: Evaluation framework, systems change indicators, a maturity model for systems change and a social network analysis of partners to identify how to intervene in the system
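As an illustration of the social network analysis activity above, a minimal sketch using the Python networkx library; the organisations and relationships are hypothetical placeholders for the real stakeholder map:

```python
# A minimal sketch of partner social network analysis, assuming an edge
# list of "who influences/supports whom" drawn from stakeholder mapping.
# All organisation names and relationships here are hypothetical.
import networkx as nx

edges = [
    ("Council", "Leisure Trust"), ("Council", "NHS CCG"),
    ("NHS CCG", "GP Practices"), ("Leisure Trust", "Community Groups"),
    ("Community Groups", "Residents Panel"), ("Council", "Schools"),
    ("Schools", "Community Groups"),
]
g = nx.DiGraph(edges)

# Degree centrality: who has the most direct connections (influencers).
influence = nx.degree_centrality(g)

# Betweenness centrality: who sits on the paths between others
# (leverage points where an intervention could reach the wider system).
leverage = nx.betweenness_centrality(g)

for org in sorted(leverage, key=leverage.get, reverse=True):
    print(f"{org}: degree={influence[org]:.2f}, betweenness={leverage[org]:.2f}")
```

Re-running the same analysis at baseline and at each review point would show whether the partner network is becoming more connected, one possible indicator of systems change.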
Data analysis to develop a baseline & analyse data on cohorts
• Baseline & systematic review; map & analyse datasets; predictive modelling to identify those at risk of physical inactivity (see the sketch below); link relevant data to produce the baseline; develop a framework to enable service providers to collect, integrate & analyse data to help residents be more active
• Outcome: A baseline & analysis of cohorts at risk of inactivity & models to enable partners to analyse data at the point of contact
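A minimal sketch of the predictive modelling step, assuming a linked resident dataset already exists; the file name, column names and features are hypothetical, and logistic regression stands in for whatever model is eventually chosen:

```python
# A minimal sketch of risk modelling for physical inactivity, assuming a
# linked resident dataset. File and column names are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

df = pd.read_csv("linked_residents.csv")
features = ["age", "imd_decile", "long_term_condition", "distance_to_leisure_km"]
X, y = df[features], df["inactive"]  # inactive = 1 if under 30 active mins/week

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")

# Risk score per resident: the probability of being inactive, which
# partners could use to prioritise outreach at the point of contact.
df["inactivity_risk"] = model.predict_proba(df[features])[:, 1]
```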
Qualitative research to dig deeper from the baseline & surface hypotheses for testing interventions
• Design survey; map & analyse user journeys & touchpoints, service demand, effectiveness of service provision & cost drivers; map the factors that influence physical inactivity; model potential interventions; segment & stratify residents (see the sketch below); identify hypotheses, priority cohorts & geographic areas in which to test interventions; use the research to start testing interventions
• Outcome: A segmentation of residents & hypotheses for testing interventions, based on an understanding of residents' needs, motivations & opportunities for change, and an assessment of service demand, effectiveness of provision & cost drivers
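A minimal sketch of the segmentation & stratification step, assuming survey responses have been encoded as numeric features; the feature names and the number of segments are hypothetical:

```python
# A minimal sketch of resident segmentation using k-means clustering.
# Feature names and the choice of five segments are hypothetical.
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

survey = pd.read_csv("resident_survey.csv")
features = ["activity_mins_week", "motivation_score", "barriers_score", "age"]

X = StandardScaler().fit_transform(survey[features])
survey["segment"] = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(X)

# Profile each segment to frame hypotheses, e.g. "motivated but time-poor".
print(survey.groupby("segment")[features].mean().round(1))
```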
2. Support residents & communities to track their own behaviour
Development of peer research
• Develop criteria for identifying residents to be trained to carry out research amongst their social networks; identify & train residents; develop & provide support for residents carrying out research; trained researchers carry out research and support others in their networks to do the same
• Outcome: Residents supported to carry out research and to support others in their networks to do the same
Development & delivery of digital self-reporting
• Identify & test which methods & tools work best to enable people to self-report their activity, motivations & needs; develop the digital tool; test it with residents; use the data the tool produces to recommend how to get more active (see the sketch below)
• Outcome: A digital tool, co-designed with target audiences & service providers so that it meets their needs, that enables residents & practitioners to better understand the impact of their behaviour and to become more active as a result
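A minimal sketch of the kind of record the digital self-reporting tool could capture, with a deliberately trivial recommendation rule; the field names and thresholds are hypothetical, not a specification of the tool (the 150-minute figure follows common weekly physical activity guidance):

```python
# A minimal sketch of a self-report record plus a trivial recommendation
# rule. Field names and thresholds are hypothetical design placeholders.
from dataclasses import dataclass

@dataclass
class ActivityReport:
    resident_id: str
    week_starting: str          # ISO date, e.g. "2019-04-01"
    active_minutes: int         # self-reported moderate+ activity
    activity_types: list[str]   # e.g. ["walking", "swimming"]
    motivation: int             # self-rated motivation, 1-5

def recommend(report: ActivityReport) -> str:
    if report.active_minutes >= 150:   # meets weekly activity guidance
        return "Great week: keep it up, and consider supporting a peer."
    if report.motivation >= 4:
        return "You're motivated: try adding one short walk each day."
    return "Start small: a local group session may help build the habit."

print(recommend(ActivityReport("r42", "2019-04-01", 60, ["walking"], 4)))
```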
Survey Questions for Residents
• Demographics
• Overall needs, motivations & behaviours
• Levels & types of physical activity, and motivations to be active
• Select standardised questions from relevant national surveys (Values
Mode Analysis, Understanding Population & Active Lives)
• Test survey with Residents Panel to validate
• Peer review with other local areas and IFF
Types of evaluation
• Iterative evaluation: for complex situations or early-stage innovation, where there is a degree of uncertainty about what will work and how, and where new challenges, opportunities & activities continue to emerge.
• Formative evaluation: when the core elements of the programme are taking shape, the outcomes are becoming predictable, and the context is increasingly well known and understood.
• Summative evaluation: when the programme's activities are well established, the people involved are increasingly confident about "what works", and the interventions are significant enough for more intensive scrutiny.
Generic Evaluation Timetable
Phase: Design & Methodology
• Major output: Evaluation Specification
• Activities: Purpose, scope & objectives; context; target cohorts; questions; type of evaluation; criteria; level of rigour; data sources; sampling; data collection; data quality

Phase: Planning
• Major outputs: Terms of Reference; Evaluation Plan
• Activities: Evaluation partner selected; roles & responsibilities; design of baseline & review of Year 1; test data collection methods & tools with cohorts & partners; sampling; training

Phase: Implementation
• Major output: Evaluation Report
• Activities: Involvement of cohorts & partners; collection of data; analysis of data

Phase: Review & Follow Up
• Major output: Action Plan
• Activities: Conclusions; recommendations; learning; publication & sharing; response by programme to recommendations & action plan
Steps for Evaluation
1. Orientation
• Learn about the systems change initiative's goals and strategies. Discover the basics of the collaborative
implementing the systems change initiative. Collect basic information about a system's programmes and services.
2. Evaluation planning
• Decide on priorities for the evaluation. Identify research questions. Develop a data collection plan.
3. Develop data collection instruments
• Develop instruments for baseline data collection and for subsequent data collection at follow-up periods.
4. Collect baseline data
• Collect data on key systems change indicators (e.g. effectiveness of programmes & services, collaboration)
5. Collect follow-up data
• At each follow-up period, collect data on systems change and ask stakeholders about their perceptions of the
systems changes that have occurred between baseline and follow-up.
6. Describe change between baseline and follow-up
• Compare the follow-up and baseline data to assess the extent to which systems changes have taken place (see the
sketch after this list). Take account of stakeholder views on the extent and type of change.
7. Analyse how the programme contributed to change between baseline and follow-up
• Assess how the initiative was able to improve on the systems change indicators.
8. Develop recommendations
• Identify ways that programmes and services can continue to improve. Identify ways that the structure might be
reconfigured to facilitate systems change. Identify ways that the system might contribute to a better structure and to
improved programmes and services. Identify ways to improve structures and processes to facilitate implementation.
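A minimal sketch of the step 6 comparison, assuming each systems change indicator is scored on a simple maturity scale at baseline and follow-up; the indicators and scores are hypothetical:

```python
# A minimal sketch of describing change between baseline and follow-up,
# assuming indicators scored 1-5 on a maturity scale at both time points.
# The indicators and scores below are hypothetical.
import pandas as pd

scores = pd.DataFrame({
    "indicator": ["shared data use", "partner collaboration",
                  "resident voice", "aligned commissioning"],
    "baseline": [2, 3, 1, 2],
    "follow_up": [4, 3, 2, 2],
})
scores["change"] = scores["follow_up"] - scores["baseline"]
print(scores.sort_values("change", ascending=False))
```

Stakeholder interviews (step 5) would then be used to test whether these measured shifts match perceived change, and why.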
What are our responsibilities toward the evaluation partner?
Outline:
• How we expect to bring about systems change
• Purpose, geographical scope, context, target cohorts, governance & objectives of the programme
• Scope & objectives of evaluation
• What we want to evaluate
• Assess outcome/impact / document good practice / understand causal relations / what works and why / formation of future programme
• Evaluation/baseline questions for measuring impact on residents
• Evaluation/baseline questions for measuring the impact of the process of systems change
• Why we want to evaluate it
• How it should be organised
• How the evaluation process needs to inform improvements in the programme
• Governance of the evaluation
• Reporting requirements, information governance (including GDPR) & data quality requirements and any generic requirements for supplying to Ealing Council
• Deliverables needed, milestones, dependencies, budget
Provide access to:
• Decision makers, practitioners & target cohorts
• Relevant insights & data sources on the target cohorts that can be shared
• Pre-existing insights from ourselves and partners on the context and conditions for systems change in the local area
Questions for bidders
• What is the most effective evaluation design for the experimental context of this programme?
• What data collection & analysis tools & methods would you plan to use?
• What sampling method would you plan to use?
• How would you make sure that target cohorts & people carrying out the research have been involved in the
design of the evaluation?
• How would you assure the quality & ethics of the data collection & analysis?
• How would you plan to share & review emerging lessons learned from the evaluation so that the key
influencers of the system can learn and apply these lessons to make systems change happen?
• How would you adapt your evaluation methods to the nature & scale of the activities that emerge from the
programme?
• How would you evaluate how the system has changed over time and explain what is contributing to this
change?
• What additional support from our organisation & partners would help you deliver an effective evaluation?
• What costs & resources are required for each activity within your plan?
• How would you ensure that you develop expertise on the context of the local area in scope?
How do we sample?
• What sampling methods do we use? Or do we ask the bidders?
• 1. Randomised sample with a boost on target cohorts to enable specific analysis of those groups (see the sketch at the end of this slide)
• 2. Stratified sample to dig deeper into issues / cohorts identified in the randomised sample
• What is the group of cohorts from which we want to draw a sample?
• Target cohorts agreed in Corporate Board presentation
• How many do we need in our sample?
• Dependent on budget and how specific we make cohorts
• The more specific the cohorts, the bigger the sample needed
• How will these be selected?
• Use providers with an existing national database
• Baseline sample of residents to evaluate future impact of programme on residents’ outcomes
• Baseline sample of staff to evaluate future impact of programme on systems change
See Overview of Sampling in Annexes
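A minimal sketch of option 1 above, a randomised sample with a boost on target cohorts, assuming a sampling frame with a target-cohort flag; the file name, column names and sample sizes are hypothetical:

```python
# A minimal sketch of a randomised sample with a boost on target cohorts.
# Frame, column names and sample sizes are hypothetical.
import pandas as pd

frame = pd.read_csv("resident_sampling_frame.csv")

core = frame.sample(n=1000, random_state=0)  # core random sample

# Boost: extra random draws from each target cohort not already sampled,
# so every cohort is large enough for specific analysis.
remaining = frame.drop(core.index)
boost = (remaining[remaining["target_cohort"].notna()]
         .groupby("target_cohort", group_keys=False)
         .apply(lambda g: g.sample(n=min(len(g), 200), random_state=0)))

sample = pd.concat([core, boost])
print(sample["target_cohort"].value_counts(dropna=False))
```

Because the boost over-represents target cohorts, survey weights would be needed at the analysis stage to produce estimates for the population as a whole.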
How do we measure the impact & quality of the evaluation itself?
• Relevance of evaluation design & methods to the purpose &
objectives of the programme
• Effectiveness of delivery of the evaluation
• Credibility of the findings
• Impact of the recommendations from the evaluation on systems
change
• Understanding of the conditions of systems change in the local area
and the factors & patterns influencing it
• Understanding of the factors influencing inactivity and opportunities
for behaviour change amongst the target cohorts
How should we procure?
Expertise
• Expertise in iterative evaluation, in particular of systems change
• Understanding of dynamics of the system of the local area in scope
Types of contract
• As part of Analysis & Evaluation lot
• Retainer-free contract
• Stepwise funding (to enable extension of contract)
• Prescribe a proportion of funding for specific outcomes and allow the rest for
contingencies
Actions
• Develop specification for evaluation according to overall
commissioning timetable in Activity Plan
• Research Team to draft specification for Evaluation and for other
Analysis & Evaluation work streams, with support from Procurement,
to include evaluating (inc. baselining) residents & systems change
• Recommend randomised sample to baseline residents with boost of
target cohorts to ensure specific analysis of these groups
• To be followed by deep-dive qualitative & quantitative analysis & service
design to start testing behaviour change
• Task & Finish Group to review & agree

Editor's Notes

  • #4 Response from Sport England to enquiries on evaluating the process of systems change

We also discussed a desire from the LDP team for some guidance on how to frame an evaluation plan for whole systems change. From a national evaluation point of view, we do not want to be too prescriptive about how LDPs approach their evaluations, and learning about how each LDP has approached evaluation will be useful for SE to consider for future programmes. We can look to provide more detailed guidance on evaluating whole systems change, but our initial thinking is as follows.

Although systems are invariably complex, we can consider a system to be just like any other object of evaluation, in the sense that we want to know what it is like at different time points, describe how it has changed over time, and explain what has contributed to this change. In this way, an evaluation of systems change is not so different from standard programme evaluation. A key role for your local evaluation partner will be to undertake process evaluation. We would expect this process evaluation to explore the approach that your LDP takes to whole system change and to help you to track progress in systems change by reviewing the state of the system at regular points.

Some key things that will underpin an evaluation of system change in Southall are:

You need to be able to articulate for your evaluation partner how you expect to bring about systems change: what are the key things you think you need to do, and how will you know that systems are changing (there may be a number of ways that change will be observed)? This will enable your evaluation partner to consider and plan how best to observe whether systems change is occurring.

View the local evaluation as part of the process of system change, rather than something that happens alongside it. Use the evaluation process to support the LDP in learning, reflection and future planning. Build plenty of feedback points for emerging findings into your evaluation plan.

You need to be open with your evaluation partner and ensure they have access to key decision-makers, to the decision-making process and to the relevant elements of the system.

Systems change may occur in new and unexpected ways as the context and initiative evolve. This suggests a need to build in touchpoints to review your evaluation approach with your partner and to update them on any changes in thinking from your point of view.

Be clear in the brief that the evaluation approach may need to be adjusted over time. This implies an initial plan that begins with a scoping stage to immerse the evaluation partner in your approach. It may be useful to scope out a framework of key activities that might take place every year, e.g. community survey, qualitative work, workshops with LDP stakeholders, with a broad annual budget. However, there needs to be some flexibility in what exactly happens in each year, with scope to adjust the nature and scale of key activities to suit pilot needs.

The other side to your evaluation approach is your impact evaluation. Once you have a clear idea of what success looks like for the Southall LDP, you will be in a good position to develop a specification for work to assess whether and how that has been achieved. IFF will be developing some template documents and guidance on how to write and evaluate a research tender in the near future.
In the meantime, some useful things for you to consider may be:

The outcomes and impacts that you identify will suggest whether you are looking for qualitative or quantitative approaches to monitor them. That will then guide you to the skills you need to find in an evaluation partner, i.e. they may need capacity across telephone/online surveys, qualitative interviews/focus groups, and analysis of existing datasets. Do the partners you have been working with to date have the kind of expertise and capacity at the scale required for this evaluation?

You also mentioned an appetite for a partner able to demonstrate interesting creative approaches, so you may wish to create a spec that is more methodologically open and ask bidders for their ideas on how to answer your evaluation questions (within budget).

There was also some mention made of cost-benefit analysis or social return on investment, e.g. the idea of being able to show the wider savings to the local economy from the work that you are doing and/or an increase in physical activity. If this is required, it will need to be included in the specification, as it requires a particular type of expertise that not all potential tenderers will have in-house.

We spoke a little about baselining your outcomes measures so that change over time is detectable. One thing to consider is whether you have the capacity to do that yourselves, or whether that is one of the first tasks for the evaluator during scoping. Baselining will require a mix of approaches including secondary analysis of existing datasets, surveys of relevant populations and potentially some qualitative interviews. This could be started by yourselves and completed by the evaluation partner, depending on how procurement timeframes go.

We also briefly discussed support with research procurement. Aside from template documents and guidance which will be available to all LDPs, we have an allocation of a small number of days to spend with Ealing specifically on evaluation procurement. We would welcome any feedback from you on what form this might take.
  • #10 The chart describes the key steps in a baseline or evaluation study process: from designing the study, through planning and implementing it, to using the results.

In the design phase, we clarify the purpose of the survey, define the objectives and questions, and decide on the design and methodology. It is here that you need to consider issues like the level of rigour, data sources, sampling and data collection, and data quality. There are different evaluation models and designs.

The planning and implementation processes include discussing how stakeholders will be involved, preparing a budget and timeline, and forming the survey team. It is also here that we should think about the management process, which includes defining roles and responsibilities for the evaluation team, and preparing the terms of reference and contracts. In the inception phase, the evaluation partner would then carry out more detailed planning and test the data collection tools and other methods of measurement, involving stakeholders and residents.

During the implementation of the study, in addition to the data collection, input, cleaning, analysis and storage, writing of the evaluation report should also start.

Finally, it is important to take time and use resources to communicate and disseminate the findings, plan follow-up(s) on any recommendations, and prepare the response to the evaluation and the management action plan. This should also include ways to ensure that learning informs future project or programme planning, design and implementation.
  • #14 When we commission an evaluation we want to find out if there have been any changes (gross effect), but also if the changes were produced by the programme. This means: can the changes be attributed to the programme? If yes, how much of these changes are actually due to the programme activities (net effect)?

We need to distinguish between the evaluation of the programme and the evaluation of any interventions that are commissioned through the programme. For the purposes of commissioning the evaluation framework, we are focused on evaluating the overall programme, as we can't yet specify what interventions we want to evaluate until the insight has helped define what interventions we should do!

The decision on what evaluation design to select will depend on elements such as the available resources (some designs are more resource-intensive than others) and the implementation stage of the programme. We need to baseline before the programme activities actually start. The evaluation design needs to be based on the hypotheses we want to test and scalable enough that it could be used in other places or cohorts. This depends on the quality of the sampling, the margin of error we are dealing with and the control of potential bias. It also depends on the availability of standardised indicators/survey questions that can enable us to benchmark to other areas or cohorts. These are far more widely available when it comes to evaluating impact on residents than on systems change, the latter being much more subjective.
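To make the gross/net distinction concrete, a hypothetical worked example using a simple difference-in-differences calculation, one common way to estimate the net effect when a comparison area or group is available (all figures are illustrative):

```python
# A hypothetical worked example of gross vs net effect via a simple
# difference-in-differences. All figures are illustrative only.
# Share of residents meeting the weekly activity guideline:
programme_baseline, programme_followup = 0.40, 0.52
comparison_baseline, comparison_followup = 0.41, 0.45

gross_effect = programme_followup - programme_baseline          # 0.12
background_change = comparison_followup - comparison_baseline   # 0.04

# Net effect: the change attributable to the programme after removing
# the change that would likely have happened anyway.
net_effect = gross_effect - background_change                   # 0.08
print(f"gross effect: {gross_effect:.2f}, net effect: {net_effect:.2f}")
```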