Evaluating participative and systemic methods - Gerald Midgley - UKAIS Seminar - University of Salford

Slides from a free seminar run by the UK Academy for Information Systems (UKAIS) in conjunction with the Information Systems, Organisations and Society (ISOS) Research Group, University of Salford.

Evaluating participative and systemic methods - Wednesday 1st June 2011 - http://www.ukais.org.uk

Gerald Midgley is Professor of Systems Thinking at the University of Hull, UK. He also holds Adjunct Professorships at the University of Queensland, Australia; the University of Canterbury, New Zealand; and Victoria University of Wellington, New Zealand. From 2003-2010, he was a Senior Science Leader at the Institute of Environmental Science and Research (New Zealand), and from 1997-2003 he was Director of the Centre for Systems Studies in the Business School at the University of Hull. He has had over 300 papers on systems thinking and stakeholder engagement published in international journals, edited books and practitioner magazines, and has been involved in a wide variety of public sector, community development, technology foresight and resource management research projects. For more information please visit www2.hull.ac.uk/hubs/about/our-staff/allstaff/m/midgley-g.aspx


1. Hull University Business School
Towards a New Framework for Evaluating Systemic and Participative Methods
Gerald Midgley

2. Context
An initial framework was developed in the context of a large research program in New Zealand, which involved evaluating systemic and participative methods for community and multi-agency engagement in decision making on resource use.
My colleagues and I then put in a major bid for funding to develop the framework further. This involved 100 systems practitioners from 22 different countries. It only narrowly missed being funded.
We are seeking feedback on the framework prior to further developing the funding bid for resubmission.

3. This talk will cover:
- The need to evaluate systems approaches and the current state of the evidence that systems methods add value
- A paradigm conflict: two different approaches to evaluating methods
- A new evaluation framework helping to move us beyond the paradigm conflict
- A questionnaire for comparing methods
- Limitations of the evaluation framework and the questionnaire
- Summary and acknowledgements

4. The Need for the Evaluation of Methods
We need to learn as much as we can from our systems practice, and evaluations can support our learning.
There is renewed interest in systems thinking amongst decision makers in government and industry, and knowledge about good practice is important for people commissioning projects.
So evaluations can tell us something about what works, where systems thinking adds value, and where improvements are needed.

5. The State of the Evidence Base
Most evidence for the value of systems methods takes the form of practitioner reflections alone, but practitioner reflections can be unreliable.
The majority of evaluations going beyond practitioner reflections are based on questionnaires filled in by participants. However, when designing questionnaires, it is important to counter ‘paradigm blindness’.
Only a small minority of studies evaluate from multiple perspectives or triangulate data.
Hardly any studies compare methods used by different practitioners across multiple case studies.
6. A Paradigm Conflict (1)
There is a paradigm conflict between evaluators advocating for ‘universal’ and ‘local’ approaches.

7. Paradigm Conflict (2): ‘Universal’ Evaluations
‘Universal’ evaluations assume that:
- Criteria of relevance to all methods can be defined.
- The achievement of success can be quantified using common metrics.
- The effects of local contexts on applications can be eliminated by comparing across multiple case studies.
- Universal knowledge about methods is therefore attainable.

8. Paradigm Conflict (3): ‘Local’ Evaluations
‘Local’ evaluations assume that:
- Locally relevant perspectives should inform the development of evaluation criteria, and emergent issues should be accounted for.
- Quantification can be useful, but qualitative data is critical for locally relevant interpretation.
- The effects of local context can never be eliminated, but learning across case studies is still possible.
- Universal knowledge about methods is unattainable, but ongoing improvement to systems practice is still possible.

9. Paradigm Conflict (4): Purposes of the Two Evaluation Approaches
‘Universal’ evaluations assume that the purpose of an evaluation is to compare methods designed to achieve similar things, to find out which is best.
‘Local’ evaluations assume that the purpose of an evaluation is to enrich learning about practice in the context of a single intervention or a series of interventions.
Both purposes are useful and legitimate.

10. Can We Move Beyond the Paradigm Conflict?
To do so, we need a framework that can support the pursuit of both purposes in a reasonably integrated manner. This will need to:
- Support reflection on single case studies to produce useful local evaluations
- Yield data that is useful for both local evaluations and comparisons between methods
11. (image-only slide)
12. Context (1)
If we want to think systemically about context, we can identify some general questions that it is worth asking:
- What boundary and value judgments are being made, and are there significant processes of marginalization involved?
- What stakeholder perspectives and assumptions are relevant?
- What organizational, institutional, socio-economic and ecological systems may be facilitating or constraining people’s understandings and actions?
- What feedback processes and networks within and across social and ecological systems may be playing an enabling or constraining role?

13. Context (2): Practitioner Identity
Within our analysis of context we can ask:
- How are the researchers or practitioners seen by themselves and others, and why?
Practitioner identities carry role expectations, so the ways in which practitioners are viewed by themselves and others can critically affect the progress of an intervention, and hence how their methods are perceived.

14. Purposes (1)
There may be just one or (more often) multiple purposes being pursued by participants in an intervention.
The ‘fit’ between a method and the purposes being pursued is critical to evaluation.
Useful things to look out for include:
- Articulated purposes
- Hidden agendas
- Conflicting purposes
- Mismatches between the articulated purposes of a participant and ones attributed by others

15. Purposes (2): Practitioner Purposes
Whether or not there is a good ‘fit’ between participant and practitioner purposes may affect the trajectory of an intervention using a systemic method.
Even if practitioners believe that there is a good fit, participants may not always trust the practitioners’ expressions of purpose, and this can affect the evaluation of their methods.
16. Methods (1): Theoretical/Cultural Assumptions & Process of Application
A method may carry theoretical assumptions about (amongst other things) human relationships, knowledge and the nature of the situation.
A method may also assume cultural norms, especially about participation, dialogue and disagreement. A key question is:
- What is the fit between the cultural norms assumed by the method and those assumed by the people involved in the intervention?
The process of application also matters. Questions about process might include:
- Was exploration sufficiently systemic?
- Did the method facilitate effective participation in this context?
17. Methods (2): The Practitioner’s Skills and Preferences
It is important to ask whether the results of using a systemic method are due to:
- The method itself
- The intellectual resources, skills and competencies (or lack of them) of the practitioner
- The preferences of the practitioner for a certain process of application
- A combination of the above

18. Outcomes (1)
Questions concerning outcomes might include:
- What plans, actions or changes has the method enabled people to produce?
- Have both short-term and long-term outcomes been achieved?
- How do the outcomes relate to people’s purposes?
- Are there unanticipated outcomes, and what role did the method play in enabling these?

19. Outcomes (2)
It is important to differentiate between outcomes for:
- The participants
- Others affected
- The researcher
The fit between outcomes for participants, stakeholders and the practitioner may be significant.
20. An Evaluation Questionnaire
We have developed a questionnaire that captures data on process and short-term outcome criteria, for use in both local evaluations and longer-term comparisons between methods.
It is to be filled in by participants immediately following a workshop.
It is not the only tool needed to evaluate methods (for instance, it cannot yield information about longer-term outcomes).
The questionnaire must be used in the context of the wider framework; otherwise, successes or failures that are just as much to do with context and purpose might erroneously be attributed to the method alone.

21. The Questionnaire Contains:
- A simple quantification of usefulness (5-point scale), plus open questions about what participants liked and disliked.
- 15 questions (with 5-point scales) evaluating the method against key criteria derived from a review of the benefits of different systemic and participative methods.
- 13 questions (with 5-point scales) evaluating against criteria derived from a review of the drawbacks and potential negative side-effects of systemic and participative methods.
- Open-ended questions asking people to assess the method from their own cultural viewpoint.
- A set of questions gathering basic demographic information.
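A minimal sketch of how the quantitative items from such a questionnaire could be aggregated for a single workshop. The criterion names and scores below are invented for illustration; the real instrument's 15 benefit and 13 drawback items are not reproduced here.

```python
# Hypothetical sketch: aggregating 5-point-scale questionnaire responses.
# Criterion names and scores are invented for illustration only.
from statistics import mean

def summarise_responses(responses):
    """Average each criterion's 1-5 scores across respondents.

    `responses` is a list of dicts mapping criterion name -> score (1-5).
    Returns a dict of criterion -> mean score. These summaries sit
    alongside the open-ended answers; they do not replace them.
    """
    criteria = responses[0].keys()
    return {c: mean(r[c] for r in responses) for c in criteria}

# Three (invented) participants rating one benefit and one drawback criterion
workshop = [
    {"usefulness": 4, "enabled_dialogue": 5, "felt_marginalised": 1},
    {"usefulness": 5, "enabled_dialogue": 4, "felt_marginalised": 2},
    {"usefulness": 3, "enabled_dialogue": 4, "felt_marginalised": 1},
]
print(summarise_responses(workshop)["usefulness"])  # mean usefulness: 4
```

Per-criterion means like these support local reflection on one intervention, and, accumulated across many workshops, the cross-method comparisons the framework aims at.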
22. Two Approaches to Comparing Methods Using a Questionnaire
The majority of researchers who use a questionnaire evaluate against a small number of criteria that all participative and systemic methods aspire to do well on.
However, we evaluate against a wider range of criteria, representing what a diverse range of methods aspire to deliver.
The latter approach allows us to identify complementarities between methods instead of setting up a competition between them.

23. Process of Developing and Testing the Questionnaire
- With international collaborators, we identified a range of participative and systemic methods with different purposes
- We reviewed the benefits and drawbacks of these to derive evaluation criteria and questions
- We produced a questionnaire
- We then tested it for usability in a set of diverse interventions, refining it on an ongoing basis

24. Test Cases
- Facilitating consultation with land owners and community interest groups as part of a feasibility study for the construction of a new water storage dam.
- Working with an Australian NGO and its stakeholders in exploring policy options to address the public injecting of illicit drugs.
- Facilitating workshops with the police and other stakeholders to look at ethical issues associated with anticipated future developments of forensic DNA technologies.
- Reviewing the process used by the New Zealand Ministry of Research, Science and Technology to develop ‘roadmaps’ for long-term investments in environment, energy, biotechnology and nanotechnology research.
- Developing a new collaborative evaluation approach in partnership with a regional council facilitating community engagement in sustainability initiatives.
25. Strengths of the Overall Framework
By focusing on the context-purposes-method-outcome relationship, and by explicitly recognising the influence of the practitioner, our framework offers a relatively nuanced (but still reasonably parsimonious) set of concepts and guidelines to work with.
It incorporates a questionnaire that can support both locally meaningful evaluations and longer-term comparisons between methods.

26. A Limitation of the Overall Framework
As with most methodologies to support reflective practice, there is scope for the practitioner to interpret events defensively and avoid unwelcome conclusions.
But some methodological devices have been built in to minimize the avoidance of ‘bad news’:
- Gathering participant voices through open-ended questioning
- Encouraging the exploration of context using multiple paradigmatic lenses
- Focusing attention on the identity, purposes, skills and preferences of the practitioner, so that s/he cannot avoid some of the most uncomfortable issues

27. Limitations of the Questionnaire (1)
It will be much easier to compare standard sets of methods (e.g., those associated with discrete systems methodologies) than it will be to compare hybrid and novel approaches that have not been widely applied.
Hybrid and novel approaches can still be compared with others in robust qualitative comparisons using the framework.

28. Limitations of the Questionnaire (2)
The questionnaire has not yet been tested for validity and reliability, and this is difficult to do in the field. Plans for this are currently under discussion.
If new methods have new attributes, these may not be measured by the existing instrument. There is a need for a periodic review of the questionnaire.
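One standard form the reliability testing mentioned above could take (an assumption on my part; the slides do not name a statistic) is an internal-consistency check such as Cronbach's alpha over the 5-point items:

```python
# Hypothetical sketch of an internal-consistency (reliability) check using
# Cronbach's alpha; the slides do not specify this statistic.
from statistics import variance

def cronbach_alpha(item_scores):
    """item_scores: one list per questionnaire item, each holding that
    item's score from every respondent (all lists the same length).

    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)
    """
    k = len(item_scores)
    totals = [sum(scores) for scores in zip(*item_scores)]
    item_var = sum(variance(item) for item in item_scores)
    return k / (k - 1) * (1 - item_var / variance(totals))

# Two perfectly agreeing items across four respondents give alpha = 1.0
print(cronbach_alpha([[1, 2, 3, 4], [1, 2, 3, 4]]))
```

Field testing would still be needed, since alpha only measures how consistently the items scale together, not whether they capture what each method actually delivers.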
29. Limitations of the Questionnaire (3)
The questionnaire does not evaluate non-participative systems approaches. This is beyond the scope of our current research, but could be tackled in future.
No single research group will be able to gather sufficient data to make robust comparisons. International collaboration is required.
30. Summary
- The evidence base for the value of systemic and participative methods is inadequate.
- Evaluation can help address this problem.
- There is a paradigm conflict between those advocating ‘local’ (one-off) approaches to evaluation, and those preferring ‘universal’ (comparative) approaches.
- Our aim is to transcend this paradigm conflict by:
  - Designing an evaluation framework that focuses on the context-purposes-method-outcome relationship, and explicitly recognises the influence of the practitioner
  - Incorporating a questionnaire that can support both locally meaningful evaluations and longer-term comparisons between methods

31. An International Invitation to Collaborate
Please give me your contact details if you want to:
- Use this evaluation framework
- Be part of a new international research program to enhance the evaluation of systems thinking

32. Acknowledgements
International collaborators:
- John Brocklesby (Victoria University of Wellington, New Zealand)
- José Córdoba (University of Hull, UK)
- Amanda Gregory (University of Hull, UK)
- John Mingers (University of Kent, UK)
- Leroy White (University of Bristol, UK)
- Jennifer Wilby (University of Hull, UK)
Colleagues at Victoria University of Wellington:
- John Brocklesby
- Bob Cavana
Colleagues at ESR:
- Annabel Ahuriri-Driscoll
- Virginia Baker
- Jeff Foote
- Wendy Gregory
- Maria Hepi
- Miria Lange
- Johanna Veth
- Ann Winstanley
- David Wood
Funders:
- The Foundation for Research, Science and Technology
- The Colonial Foundation Trust
- The New Zealand Ministry for Research, Science & Technology
