How successfully are evaluations
contributing to learning by market
development practitioners?
Douglas Pearman, MSc dissertation
October 2013
Research Contents
1 Introduction
2 Noteworthy findings
3 Framework: influences on learning in market development
4 Discussion of the findings
Introduction
1) There is a gap in the systemic market development literature on how
effectively the estimated 10-15% of project budgets spent on monitoring and
evaluation is being used.
2) Significant effort has been applied in market development to enhancing the
content of these evaluations.
3) However, it is not clear from the previous literature which of these factors
presents the greatest barrier to learning from evaluations in market
development: (a) the content of evaluations, (b) the sharing of evaluations,
(c) access to evaluations, (d) incentives to learn from evaluations, (e) time
to learn from evaluations, (f) time to meet with evaluators, (g) a lack of
participation in evaluations, or any of a number of other causes.
4) The views of 25 practitioners were captured in summer 2013 using an online
quantitative questionnaire distributed via the MaFI LinkedIn group.
Noteworthy findings
1) All participants (n = 25) considered their learning environment at work to be
average or above average (mean score: 4/5).
2) Respondents considered both the quality of project evaluations and the manner
in which the results of those evaluations are disseminated to be significant
limiters on learning.
3) Involvement in the evaluation process was considered of major importance for
learning, and practitioners generally reported having the opportunity both
to discuss the results of evaluations with their colleagues (76%) and to conduct
reviews themselves (68%).
4) Some participants lacked basic learning incentives, e.g. time allocated for
learning (23%) or learning being reviewed as part of the appraisal process (25%).
5) Practitioners considered that although evaluations are predominantly used to
satisfy donors’ administrative requirements, those donors also commonly learn
from the evaluations.
Framework: influences on learning in market development
A framework is proposed to identify where budget could be spent to encourage
learning among donor, practitioner, or beneficiary groups.
Discussion of the findings
1) Practitioners outside the evaluating organisation find results hard to use
because they want to consider the long-term impact of interventions (e.g. stable
changes in income generation) before applying the results to their work. Such
long-term impact assessment usually requires timescales that fall outside
project timelines, and is thus not completed (CGDev, 2006).
2) Evaluators should be incentivised based on the extent to which the lessons from
their evaluations have been taken on board by others.
3) Local meta-evaluation could be carried out through internal IT systems that
build an electronic feedback loop into each internally written evaluation
report, tracking how many individuals have accessed it and letting users feed
back on the quality of the written materials (a code sketch follows below).
4) Sector-wide meta-evaluation could be carried out through a regular ‘pulse’
research initiative that collates a pilot data pool of the major learning
initiatives run across the sector in the previous year and asks participants to
rate the effectiveness of each initiative they engaged with (also sketched below).
CGDev (2006). When will we ever learn? Improving lives through impact evaluation. Retrieved from
http://international.cgdev.org/sites/default/files/7973_file_WillWeEverLearn.pdf
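To make point 3 concrete, here is a minimal sketch, in Python, of the kind of electronic feedback loop an internal IT system could attach to each evaluation report: counting accesses and collecting reader ratings of the written materials. The names (EvaluationReport, record_view, add_feedback) and the 1-5 rating scale are illustrative assumptions, not part of the dissertation or any specific system.

```python
from __future__ import annotations

from dataclasses import dataclass, field


@dataclass
class Feedback:
    """One reader's rating of an evaluation report's written materials."""
    reader: str
    quality: int          # assumed scale: 1 (poor) to 5 (excellent)
    comment: str = ""


@dataclass
class EvaluationReport:
    """An internally written evaluation report with an electronic feedback loop."""
    title: str
    views: int = 0
    feedback: list[Feedback] = field(default_factory=list)

    def record_view(self) -> None:
        # Increment whenever an individual accesses the report.
        self.views += 1

    def add_feedback(self, reader: str, quality: int, comment: str = "") -> None:
        # Let readers feed back on the quality of the written materials.
        self.feedback.append(Feedback(reader, quality, comment))

    def mean_quality(self) -> float | None:
        # Average rating, or None if nobody has left feedback yet.
        if not self.feedback:
            return None
        return sum(f.quality for f in self.feedback) / len(self.feedback)


# Usage: the numbers a local meta-evaluation dashboard would surface.
report = EvaluationReport("2012 intervention review")
report.record_view()
report.record_view()
report.add_feedback("practitioner_a", 4, "Clear lessons; impact data thin.")
print(report.views, report.mean_quality())   # -> 2 4.0
```

Aggregating these per-report access counts and ratings across an organisation would give the internal meta-evaluation view described in point 3.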
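Point 4's sector-wide ‘pulse’ could similarly be prototyped as a simple aggregation over survey responses. The sketch below assumes a flat list of (initiative, effectiveness rating) responses; the initiative names and the 1-5 effectiveness scale are illustrative assumptions only.

```python
from collections import defaultdict

# Hypothetical pulse-survey responses: (learning initiative, effectiveness 1-5).
responses = [
    ("Sector learning event", 4),
    ("Sector learning event", 5),
    ("Evaluation synthesis webinar", 3),
    ("Peer exchange visits", 5),
    ("Peer exchange visits", 2),
]


def summarise(responses):
    """Per initiative: number of ratings and their mean effectiveness."""
    by_initiative = defaultdict(list)
    for initiative, score in responses:
        by_initiative[initiative].append(score)
    return {
        name: (len(scores), sum(scores) / len(scores))
        for name, scores in by_initiative.items()
    }


for name, (n, mean) in summarise(responses).items():
    print(f"{name}: n={n}, mean effectiveness={mean:.1f}")
```

Run annually, this kind of collation would build the pilot data pool that point 4 envisages for comparing learning initiatives across the sector.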
