
Recent Outcomes Evaluations of Legal Aid Tech Projects


In this webinar we will discuss the outcomes of several recent projects, the implications of those outcomes, and how you can adapt some of the ideas in your own projects.

Published in: Law


  1. From Investment to Impact: Recent Outcomes Evaluations of Legal Aid Tech Projects (August 17, 2016)
  2. Presenters: Claudia Johnson, Pro Bono Net (moderator); Keith Porcaro, SIMLab; Valerie Oliphant, SIMLab; Tara Saylor, Q2 Consulting
  3. How we think of evaluations: an unpleasant but necessary task; a routine task or duty; a waste of time and money (a black box); or a fun, helpful activity!
  4. Why do we do evaluations? 1) We have to! 2) They made us do it to get the funding… 3) We love to do evaluations! 4) We find them helpful in figuring out how to use limited resources: to guide future technology investments, do needs assessments, find out what we can do better, and because we are curious and love to learn how to make the most of what we have.
  5. Different types of evaluations. In this call we will go over: different approaches to evaluation, borrowed from other fields; tips on how to do resource-constrained evaluations; and evaluation as a way to find more money and save money.
  6. SIMLab M&E for Legal Aid
  7. Who we are: SIMLab is a nonprofit with a mission to realize a world where technology helps build societies that are equitable and inclusive.
  8. What are “inclusive technologies”? • They incorporate as many people as possible, including low-income and vulnerable populations. • They are communications approaches with a wide reach. • They capitalize on and integrate with existing tools and familiar communications channels wherever possible. Think outside the app.
  9. What is M&E? Monitoring refers to an ongoing, periodic process of tracking implementation, with the primary purpose of informing day-to-day project management decisions and tracking how an initiative is progressing. In some programs, monitoring includes “real-time” data and feedback from program participants that can inform immediate decisions. Evaluation is a more discrete activity: the systematic and objective assessment of an ongoing or completed project or program, looking at its design, implementation, and results. Evaluations may also aim to determine the worth of an activity, intervention, or program.
  10. M&E: What is it good for? A monitoring and evaluation (M&E) process is put in place for three main purposes: ● as a management tool to drive change ● as an accountability tool ● to provide lessons and learning. M&E may be used to: ● inform future funding decisions ● judge the performance of contractors or partners ● gather evidence to establish whether a particular approach is useful ● examine how a particular inclusive technology, or inclusive technology overall, contributes to wider programmatic goals.
  11. Challenges to M&E: ● Technology adds an additional layer of complexity to an already complex project. ● Technology projects frequently involve new operational partnerships, and each partner may have different priorities about what to measure. ● Technologist partners and implementing partners have different project cycles and working styles. ● Partners differ in what they would like to measure: the “business case” (improvements to efficiency, effectiveness, and ease of communication) contributes only indirectly to the development-focused goals that traditional M&E measures. ● Limited capacity and resources.
  12. SIMLab’s M&E Framework: ● The OECD-DAC criteria are widely used by the international development community. ● ALNAP (the Active Learning Network for Accountability and Performance) later adapted these criteria to better fit complex humanitarian settings. ● SIMLab built upon ALNAP’s work and added considerations to make the criteria more applicable to inclusive technology.
  13. SIMLab’s M&E Criteria
  14. Relevance: the extent to which the technology choice is appropriately suited to the priorities, capacities, and context of the target group or organization.
  15. Effectiveness: a measure of the extent to which an information and communication channel, technology tool, technology platform, or a combination of these attains its objectives.
  16. Efficiency: measures the outputs (qualitative and quantitative) in relation to the inputs. It is an economic term signifying that the project or program uses the least costly technology possible to achieve the desired results. This generally requires comparing alternative approaches (technological or non-technological) to achieving the same outputs, to see whether the most efficient tools, platforms, channels, and processes have been adopted.
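The comparison the efficiency criterion calls for can be sketched in a few lines of Python. The channel names and dollar figures below are invented for illustration, not drawn from any of the projects discussed:

```python
# Minimal sketch of a cost-per-output comparison between two alternative
# approaches to the same output (hypothetical numbers).
approaches = {
    "SMS reminders": {"total_cost": 4_000, "clients_reached": 1_600},
    "Mailed letters": {"total_cost": 9_000, "clients_reached": 1_200},
}

for name, a in approaches.items():
    cost_per_client = a["total_cost"] / a["clients_reached"]
    print(f"{name}: ${cost_per_client:.2f} per client reached")
```

Under these made-up numbers, SMS reminders cost $2.50 per client reached versus $7.50 for mailed letters; the efficiency question is whether the cheaper channel attains the same outputs.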
  17. Impact: the positive and negative changes produced by the introduction of, or a change in, a technology tool or platform on the overall development intervention, directly or indirectly, intended or unintended. This involves the main impacts and effects of the tool or platform on local social, economic, environmental, and other development indicators. The examination should cover both intended and unintended results, and must also include the positive and negative impact of external factors, such as changes in terms of trade, financial conditions, and digital information and communication ecosystems.
  18. Sustainability: concerned with measuring whether the benefits of a technology tool or platform are likely to continue after donor funding has been withdrawn. Projects need to be environmentally as well as financially sustainable.
  19. Coherence: relates to the broader policy context (development, market, communication networks, data standards and interoperability mandates, national and international law) within which a technology was developed and implemented.
  20. Feedback Mechanisms
  21. Technology can be distortive
  22. Use data to build a bigger picture of the world: ● Use tools that produce data ○ auditable logs ○ exportable data ● Use data you already have ○ time tracking and case management ○ email and phone records ○ qualitative data and opinions ● Use data that exists out in the world ○ court records ○ other organizations (legal aid and otherwise) ○ researchers
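As a sketch of the "use data you already have" point, the snippet below assumes a hypothetical CSV export from a case management system (a file with `opened` and `outcome` columns; the file name and column names are illustrative assumptions) and derives two simple monitoring numbers from it:

```python
# Turn an exported case list into basic monitoring data:
# case volume per month and a tally of outcomes.
import csv
from collections import Counter

def summarize(path):
    opened_by_month = Counter()
    outcomes = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            opened_by_month[row["opened"][:7]] += 1  # "YYYY-MM" prefix of an ISO date
            outcomes[row["outcome"]] += 1
    return opened_by_month, outcomes
```

No new data collection is required; the export the system already produces becomes the monitoring baseline.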
  23. Document everything: ● Document for replicability ○ setup ○ integration ○ regular use ○ deletion ● Document for errors ○ practical difficulties ○ mistaken input ○ kludges, hacks, and compromises ● Document for the public ● Tools: we like Skitch for screenshots and markup, and LICEcap for animated GIF screen captures.
  24. Everything else: ● Budget: expect to allocate 10-20% of the project cost for evaluation, especially if you’re hiring someone. ● Find an independent evaluator (we can help). Your technology provider should definitely not be evaluating the job they just did. ● A report that’s thrown in a drawer is a waste of money; use evaluations to learn things that are useful to your organization. ● SIMLab’s M&E Framework
  25. Thank you!
  26. Tara Saylor, Ph.D., Q2 Consulting
  27. Logic Model (source: Wayne State University, Center for Urban Studies)
  28. Expungement Example
  29. Logic Model (source: Wayne State University, Center for Urban Studies)
  30. Outcomes Example. Short-term: • education for SRLs • improved technologies • enhanced community partnerships. Intermediate: • SRLs use technology • efficiencies for SRLs, courts, and LASO. Long-term: • fewer criminal records impeding reintegration into society • improved job and housing outcomes for SRLs • improved education outcomes for SRLs
  31. Evaluation Design: • Who we reach. • What we do and create. • Why we did this.
  32. Know what to spend your evaluation budget on in a resource-constrained environment.
  33. Ideally, a PhD evaluator will: – design your evaluation study using a logic model – determine sampling plans – create your instruments – teach you how to collect data properly – analyze your data
  34. Reduce your evaluation budget by: – collecting the data yourself – recording the data in a format that works for your evaluator – writing the final report
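A minimal sketch of "recording the data in a format that works for your evaluator": one row per observation, consistent column names, ISO dates. The field names and file layout here are illustrative assumptions, not a required standard:

```python
# Append observations to a CSV the evaluator can load directly,
# writing the header row only when the file is new or empty.
import csv

FIELDS = ["date", "site", "participant_id", "service", "outcome"]

def append_record(path, record):
    """Append one observation (a dict keyed by FIELDS) to the log file."""
    try:
        new_file = open(path).readline() == ""
    except FileNotFoundError:
        new_file = True
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if new_file:
            writer.writeheader()
        writer.writerow(record)
```

Collecting into one tidy file as you go is cheaper than having the evaluator reconstruct the data from notes later.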
  35. Questions? Tara Saylor, PhD, 918-933-8198
  36. Thank you for attending today! Next up from PBN: Future-Proofing Your Projects: Maintenance, Continuity and Succession Planning, September 14, 2016. More information at
  37. Contact Information: Brian Rowe ( or via chat on. Don’t forget to take our feedback survey!