
Evaluation methodology: How do we know what we know? (5th DHG, Helsinki)


Overview of the scoping study on research methodology used in scholarship evaluation, conducted by the Commonwealth Scholarship Commission in the UK.

Presented at the 5th Annual Forum of the Donor Harmonisation Conference, 11-13 June, 2014, Helsinki.



  1. Evaluation methodology: How do we know what we know?
  2. Aims …identify trends …identify omissions / ambiguities …catalyse dialogue
  3. 67 Documents
  4. Methodology Methods Variables Data analysis Thematic issues: • Counterfactuals • Value for money • Harmonisation
  5. Trends …ex-post evaluation …mixed-methods dominant …similar topics analysed
  6. Variables: Socio-demographics Return home rates Employment trajectory Skill use
  7. Challenges …data analysis strategy …comparisons …donor harmonisation
  8. Scholarship Evaluation Baseline Counterfactual analysis Growth analysis
  9. Dr Matt Mawer Programme Officer (Evaluation) matt.mawer@cscuk.org.uk evaluation@cscuk.org.uk 5th Conference of the DHG, Helsinki, FI, 12.06.2014

Editor's Notes

  • Update colleagues on a piece of analysis that the CSC has been conducting for the last 6 months examining the research methodologies used in evaluating international scholarship schemes for higher education.


    General notes:
    Not everyone will run the same kind of scholarship schemes as we do, so try to avoid implying that everyone in the audience is a scholarship provider that hosts full-award scholarships in the donor country.
  • Purpose of the work:

    Identify current trends in evaluation design within the sector
    Note any ambiguities, omissions or contradictions, with the intention of informing our evaluation work, and
    Offer a baseline review for wider dialogue amongst scholarship providers

    It is, essentially, an analysis of how we know what we know about scholarship outcomes
  • Involved review mainly of evaluation reports: 67 documents, mostly published reports, some internal documents, and a few conference papers and journal articles

    Looking at:
    Methodology – by which we mean research design or overarching framework –
    Methods of data collection,
    Variables examined in the report,
    Data analysis strategies,
    Three issues that we identified early on as important topics in the sector: counterfactuals, value for money, and harmonisation between scholarship providers

    Share a brief overview of some of the key findings; the report is now out, so you can read it in more detail if desired
  • Overwhelming majority of evaluation conducted is ex-post: conceived and conducted after recipients have finished their scholarships.
    Tracer study: identifies alumni, gets in contact (through a variety of means) and asks them to complete a survey instrument detailing their experiences after completing the scholarship
    Little investment into planned comparison designs
    Even when tracer studies have been conducted repeatedly (e.g. Fiji), not followed same people repeatedly: no panel studies
    Of course, the degree to which ex-post studies are successful relies largely on the effectiveness of alumni tracing within scholarship programmes, and there has been quite a bit of commentary on this in the field as it can be very difficult!

    Research methodology is actually not discussed very widely: most commentary is at the methods level, i.e. how data will be collected.

    Mixed-methods is the dominant approach
    Surveys dominant instrument: makes sense given global reach
    Particularly popular are Likert-style questions for reflecting on achievements and application of skills and knowledge.
    Trove of self-report data, but with usual health warnings
    Qualitative fieldwork common: more expensive and smaller in scale. Interviews with stakeholders, usually recipients of scholarships, but also employers, government officials and others.
    Evaluators have tried to approach employers with surveys, with typically poor response rates: interviews work much better for that. This is one of the main drivers for mixed-methods designs

    Evaluations have generally looked at very similar topics: policy objectives of most of our scholarship schemes overlap greatly
    Socio-demographics, return home rates, post-scholarship employment trajectory, application of knowledge and skills learned during the scholarship, and links maintained to hosts / donors
    CSC looks at involvement in international development, e.g. contribution to government policy in agriculture, but actually this is uncommon across the sector
    Some topics are much more complicated – return home: depending on how one defines ‘home’ and for how long one expects scholars to return. Reintegration understudied, particularly social and community reintegration as distinct from reintegration into the labour market.
  • A few challenges for us to consider as part of the discussion

    Data analysis: are we making the best use of the data we collect?
    Most evaluation includes numeric data: extensive descriptive analysis has been published: percentages of alumni in specific sectors etc.
    Inferential statistics are less common: only a handful of the 67 papers used any inferential tests of correlation or of differences between means. Generally, the level of statistical analysis we are conducting is quite basic.
    Whether it fulfils our purposes is a slightly different question, but there are opportunities for additional avenues of evaluation if we use multivariate analysis more widely
    More of a problem though, qualitative data analysis often very vague: not clear how evaluators in sector transform dialogues with interviewees into research findings.

    Lack of comparative data is a concern
    Widely discussed, but not widely done
    Ex-post studies tell persuasive stories, but without counterfactual or baseline data they cannot give insight into: 1) the extent of individual capacity growth, or 2) the relative benefit of participation versus non-participation
    Many scholarship schemes are ‘catching up’ with alumni, often because the emphasis on evaluation is more recent and so ex-post evaluation is being conducted. Obviously this prevents effective baseline data collection, but it is a legacy issue: we can revise our approach (some have done so already) to improve evaluation going forward
    Counterfactuals comparing between scholarship recipients and non-recipients seen some use in evaluation (e.g. GMS, USAID LAC), although not widespread, and there is some disagreement over how far comparisons are valid. There are two other types of counterfactual we could consider: comparison between performance of different scholarship schemes or projects with very similar aims (Erasmus Mundus got close to this), and comparison between scholarship schemes and other interventions with similar aims.

    Harmonisation is nascent, more investment in coordination and joint reporting could be beneficial
    I mean coordination between scholarship providers, both in scholarship administration generally and evaluation specifically
    Best description for harmonisation currently is ‘nascent, emerging’. Some evidence of coordination around award administration and joint evaluation strategies (e.g. in Pacific), and DHG of course, but joint provision and evaluation seems to be at a very early stage.
    Some major gains to be made in this area with more substantial cooperation.
    We can only address interference and synergy through collaborative analysis (at country level, for instance). We all run scholarships aimed at similar objectives, often in the same geographical space and / or labour market, but we do not have an overarching analysis to show us the compound effect of our work
    The Fiji example is instructive: According to AusAID, thousands of scholarship places, under dozens of schemes, funded by numerous donors. There must be other such examples in Africa too.
  • So, a few thoughts on the state of the sector and some of the challenges we face in gathering reliable and detailed evaluation data about the outcomes of our scholarships. The intention of the exercise has been to catalyse dialogue, so in addition to discussing what our evaluation has told us as part of the remaining discussion, I’d be interested to hear thoughts on how we can improve the quality and coverage of our evaluation so it can tell us more.

    Full report available at the CSC website (and maybe paper copies?): I encourage you to read it and get in touch with any feedback
