Webinar:
Simple Measures, Big Results: How to Collect, Analyze, and Share Program Impact Data
Tuesday, May 28
11 a.m. Pacific Time
Mention "evaluation" to a nonprofit leader and it may conjure up visions of complex measures, expensive consultants, privacy concerns, and high overhead. Yet there are lightweight, empirically sound outcome measures that can be collected, analyzed, and shared by virtually any organization. In this webinar, we will walk through selecting, collecting, and analyzing such measures and then creating dynamic dashboards in Power BI to share the results.
2. Using ReadyTalk
Chat to ask questions
All lines are muted
If you lose your Internet connection, reconnect using the link emailed to you.
You can find upcoming and past webinars on the TechSoup website: www.techsoup.org/community/events-webinars
You will receive an email with this presentation, recording, and links.
Tweet us @TechSoup and use hashtag #tswebinars
4. Acclivity
Adobe
Alpha Software
Atlas Business Solutions
Atomic Training
Autodesk
Azavea
BetterWorld
Bitdefender
Blackbaud
Bloomerang
Box
Brocade
Bytes of Learning
Caspio
CauseVox
CDI Computer Dealers
Cisco
Citrix
CitySoft
CleverReach
ClickTime
Closerware
Comodo
Connect2Give
Dell
Dharma Merchant Services
Digital Wish
Dolby
DonorPerfect
Efficient Elements
FileMaker
GoDaddy
GrantStation
Guide By Cell
Headsets.com
Horizon DataSys
HR Solutions Partners
Huddle
Idealware
InFocus
Informz
InterConnection
Intuit
JourneyEd
Litmos
Little Green Light
Mailshell
Microsoft
Mobile Beacon
NetSuite
Nielsen
NonProfitEasy
O&O Software
Quickbooks Made Easy
Reading Eggs
ReadyTalk
Red Earth Software
Sage Software
Shopify
Simple Charity Registration
Skillsoft
Smart Business Savings
Society for Nonprofit Organizations
Sparrow Mobile
Symantec
Tableau
TechBridge
Tech Impact
Teespring
Telosa
Tint
Ultralingua
Western Digital
Zoner
5. Presenters
Mike Yeaton, Chief Strategy Officer, Empire Health; Data and Innovation Consultant, TechSoup
Neetu Rohith, Digital Marketing Analyst, TechSoup
Katia Williams, Impact Data Intern, TechSoup
Sima Thakkar, Senior Manager, Content, TechSoup
Assisting with chat:
Zerreen Kazi, Marketing Associate, TechSoup
6. Simple Measures, Big Results
How to Collect, Analyze, and Share Program Impact Data
10. Traditional Evaluation
• Intended to prove that A influences B
• e.g., A = health education, B = smoking
• Control group for comparison
• Data sharing agreements, IRB review
• Many outcome / process measures
• One-time process / insights at end
• Third-party evaluators / costly
• Funder driven
12. Program Example: Rising Strong
• Helps families at risk of child removal
• Provides housing, treatment, and wraparound services
• Serves up to 30 families concurrently
https://www.seattletimes.com/education-lab/facing-opioid-and-foster-care-crisis-spokanes-rising-strong-seeks-to-keep-families-together/
13. Program Example: Rising Strong
Evaluation Plan:
• Required to allocate 20% ($600k) for evaluation
• Hired university research team
• Five-year study of ~100 families
• Matched comparison group (data sharing agreement needed)
• 35+ outcome measures
14. The Problem with Traditional Evaluation
• Cost
• Getting the data
• One-time results at the end
• Respecting historical context
• Replicability of results?
“We found that client drinking outcomes were highly predictable from the extent to which therapists had manifested empathy”
- Bill Miller, “Rediscovering Fire”
16. What is a Proxy?
• Substitutes something we can measure for something we can’t
• Proven relationship to outcomes we are seeking
• Lightweight and low cost
• Supports ongoing monitoring of changes
17. Program Example: Aging Services
Overview:
• Portfolio of programs to improve the health and well-being of seniors
• Variety of program models: health coaching, telehealth, care coordination
• Variety of community partners and participants
• What measure did we choose?
18. Patient Activation Measure
Attributes:
• Measures confidence to manage one’s own health (activation)
• Inexpensive
• Easy to administer
• Supports frequent collection (every client, every 3 months)
https://www.insigniahealth.com/products/pam-survey
Link to Outcomes:
• “Significantly more likely to perform self-management behaviors, use self-management services, and report high medication adherence.”
• “We found that higher patient activation predicted better depression outcomes.”
• “Positively associated with higher functional status, health care quality, and adherence to some health behaviors.”
• “Each point increase in PAM score correlates to a 2% decrease in hospitalization and 2% increase in medication adherence.”
• Among cancer patients, “Higher activated patients are more than 9 times more likely to feel their treatment plans reflect their values, 4.5 times more likely to cope with side effects, and almost 3.3 times more likely to initiate a healthier diet after their diagnosis”
19. Even Simpler: Single Question Surveys
“A substantial body of international research has reported the item to be significantly and independently associated with specific health problems, use of health services, changes in functional status, recovery from episodes of ill health, mortality, and sociodemographic characteristics of respondents.”
https://jech.bmj.com/content/59/5/342
https://www.cdc.gov/brfss/index.html
23. Why Do We Get it Wrong?
People have erroneous intuitions about the laws of chance. In particular, they regard a sample randomly drawn from a population as highly representative, that is, similar to the population in all essential characteristics.
- Tversky and Kahneman, “Belief in the Law of Small Numbers” (1971)
25. Concept 1: Correlation
• Correlation (r) is a measure of the strength of the relationship between two variables
• For example, the PAM survey round and the client’s score
• Considers all the data, not just the last round
• Correlation may be characterized:
• 0.5 – 1: Strong
• 0.3 – 0.5: Moderate
• Below 0.3: Weak (negative values indicate an inverse relationship)
Takeaway: Correlation (r) measures strength, and the closer to 1 the better
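As a minimal sketch of what this looks like in practice, the lines below compute r for survey round versus PAM score in Python with NumPy. The sample values are illustrative placeholders, not real program data.

    # Minimal sketch: Pearson correlation between PAM survey round and client score.
    # The arrays below are made-up placeholder values, not data from the webinar.
    import numpy as np

    rounds = np.array([1, 2, 3, 4, 1, 2, 3, 4])          # which survey round each observation came from
    scores = np.array([52, 55, 61, 63, 48, 50, 57, 60])  # the client's PAM score in that round

    r = np.corrcoef(rounds, scores)[0, 1]  # correlation coefficient (strength of the relationship)
    print(f"r = {r:.2f}")                  # closer to 1 means a stronger relationship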
26. Concept 2: Significance
• Significance (p) measures the validity of the relationship
• Based on the amount of data and the strength of the result
• Measures the likelihood our results could occur by chance
• A value of < 0.05 says there is less than a 5% chance our result is due to random chance
Takeaway: Significance (p) measures validity, and the closer to 0 the better
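A minimal sketch, assuming Python with SciPy installed: scipy.stats.pearsonr returns both r and p in a single call, which is the kind of one-line analysis mentioned in the conclusion. The data values are placeholders.

    # Minimal sketch: one call returns both strength (r) and significance (p).
    # The lists below are illustrative placeholder data.
    from scipy.stats import pearsonr

    rounds = [1, 2, 3, 4, 1, 2, 3, 4]
    scores = [52, 55, 61, 63, 48, 50, 57, 60]

    r, p = pearsonr(rounds, scores)
    print(f"r = {r:.2f}, p = {p:.3f}")  # p < 0.05 suggests the result is unlikely to be chance alone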
27. In Conclusion…
Rightsizing Evaluation:
• Choose a lightweight proxy measure with a demonstrated relationship to your objectives
• Collect data on a continuous basis
• Analyze correlation (r) and significance (p) in outcome data
• This can be done in 1 line of code
• Help is available!
• Share the results to demonstrate impact and adjust strategies
Limitations:
• This is a simplified but (I believe) useful model
• We are focused on outcomes, not 'proving' a causal relationship
• There is meaningful information about people and communities that can never be quantified
“All models are wrong, but some are useful”
- George Box
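To make the collect-analyze-share loop concrete, here is a minimal end-to-end sketch in Python. The file name pam_results.csv and its columns (round, score) are hypothetical, and writing a small summary file is just one simple way to hand results to a Power BI report.

    # Sketch of the collect, analyze, share loop under the assumptions above.
    # pam_results.csv and its column names are hypothetical placeholders.
    import pandas as pd
    from scipy.stats import pearsonr

    df = pd.read_csv("pam_results.csv")              # continuously collected survey results land here
    r, p = pearsonr(df["round"], df["score"])        # strength (r) and significance (p) of the trend

    summary = pd.DataFrame({"metric": ["r", "p"], "value": [r, p]})
    summary.to_csv("pam_summary.csv", index=False)   # a small file a Power BI dashboard can read
    print(f"r = {r:.2f}, p = {p:.3f}")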
30.
• Create simple yet powerful visualizations
• Connect to flexible data sources: files, content packs, databases
• Similar to Tableau
• View your dashboards and shared dashboards on your mobile devices: iPhones, iPads, Android phones, tablets, Windows 10 devices, etc.
• Build dashboards easily in a short period of time
• Choose who you want to share your reports with – more secure
31.
• Schedule auto-refreshes with Power BI Service
• Build reports using the free desktop version: https://powerbi.microsoft.com/en-us/downloads/
• Nonprofit Power BI Pro cost: $3 per license per month
33. Special Offer!
• The TechSoup Impact Data team is offering three free half-day consultation / dashboard prototype sessions for registered nonprofits
• Sessions will be held in June at a time to be arranged
• If interested, send an email to myeaton@techsoup.org
• Tell us about your challenge and (if possible) include a sample spreadsheet or .csv file
• Do not include any private or sensitive data!
• You do not need a Power BI license to participate (but you should have the free desktop version installed)
34. Q&A
This is your chance! Use the chat box to ask us any questions you have about this presentation.
35. Share and Learn
Chat in one thing that you learned in today’s webinar.
Please complete our post-event survey. Your feedback really helps.
Follow TechSoup on social media (FB, Instagram, Twitter).
Visit the TechSoup Blog at blog.techsoup.org.
36. Join us for our upcoming webinars.
6/4: How to Select the Right Technology for Fiscal Year-End
6/18: How Teach for America Uses Online Surveys
Archived Webinars: www.techsoup.org/community-events
38. Thank you to our webinar sponsor!
Please complete the post-event survey that will pop up once you close this window.