I’m going to talk about the “Human Side of Data for Good” – a topic I’ll be writing about on a regular basis on the Markets for Good site.
I’m defining “data for good” as data that people, nonprofits, communities, and movements use to make better decisions that lead to better outcomes and impact.
When that happens, it is powerful – and that’s why I wrote “Measuring the Networked Nonprofit.”
Let me tell you a personal story about why I’m so excited about the power of data to inspire change, leading to better outcomes.
Six months ago, I went to my doctor for my annual checkup and got some test results back. My triglycerides were high. If you know these numbers, you know that less than 150 mg/dL is the healthy range to avoid heart disease.
I had been ignoring the data on my dashboard. It wasn’t changing my behavior, and my doctor told me that I needed to watch my diet and get more exercise.
I needed better data, and I needed to collect and analyze it in a faster cycle to make better decisions. So, I got a Fitbit.
I set some goals: walking 10,000 steps a day, a daily calorie deficit based on eating healthier foods, and more active minutes.
This is data that helps inspire a change in behavior for better outcomes.
So, that’s why I am so excited about the power of data to make better decisions and achieve better outcomes – not just for individuals, but for organizations, communities, and movements.
But this doesn’t always happen …
Good data practice is not just about technical skills. There is a human side.
It includes organizational culture and its influence on decision-making – from consensus building on indicators, to agility in responding to data with action, to sense-making.
Effective nonprofit measurement and data practice requires a balance of both the human side and the technical.
It is like yin and yang: these seemingly contrary ideas and skill sets are actually complementary, interconnected, and interdependent – like data visualization and storytelling paired with spreadsheet data.
Here are some ways I’m thinking about it:
Technical                          | Human
Outcome identification and OMM     | Consensus on outcomes and OMM
Transactional metrics              | Transformational metrics
Data collection and hygiene        | Being data-informed and agile responsiveness
Spreadsheets                       | Data visualization and storytelling
Quantitative data                  | Effective use of qualitative data
Regression analysis                | Sense-making and reflection
Impact                             | Learning
I mentioned the human and technical sides of outcomes and the “one metric that matters” (OMM). The technical side is identifying them; the human side is getting consensus on them. If you don’t have these balanced, this is what happens …
How do nonprofits avoid a situation like this? They use design thinking and consensus-building facilitation techniques and take the time to identify and discuss their metrics.
Having leadership support is critical.
Here are a few stories about the human side of nonprofit measurement practice.
Edutopia, a project of the George Lucas Educational Foundation, is a website that creates and curates content distributed through mobile, social media, video, and offline channels. They also have a robust online community. The ultimate goal is to improve the quality of education. Their theory of change is about raising awareness of the issues and then inspiring, engaging, and encouraging their audiences to take action around this goal. Their dashboard already did a great job of tracking impact metrics about the reach and size of their audience, but they wanted to go deeper in tracking engagement and action-taking. With a large staff producing and marketing content, they also wanted a way to capture data for ongoing feedback to improve their content.
They created a concept map of the different themes that emerged. While technical topics such as data and measurement processes emerged, so did a lot of culture change issues.
Next, staff identified key impact metrics by creating a paper prototype of the dashboard on the wall with sticky notes. Using a sticky-dot voting process to identify the metrics most important to senior management and the board, and those most important to different staff departments, they were able to design different “views” – a high-level view for impact and a more detailed version for learning.
What emerged from the conversation was not only a plan for impact reporting, but also a process for more intentional experimentation and learning linked to key metrics.
One of the conversations that came up was a discussion around perfectionism, and how there wasn’t an organizational process or acceptance for experimentation or “satisficing” – a way to reduce the time and complexity of the delivery process.
Staff are now experimenting on a regular basis, using metrics around taking less time or those connected to the impact metrics. Perfectionism is the enemy of learning and, ultimately, of improved impact. “We live in a culture of high quality and low risk tolerance, and people want to be safe and not harmful. We have to look at perfect vs. fast. If you go slower, you get less feedback, and that won’t help you build a better product or program. You need to iterate toward perfection based on audience, stakeholder, or customer feedback.”
GivingTuesday, a philanthropic movement to promote a national day of charitable giving on the Tuesday after Thanksgiving, organized a convening of key stakeholders called “Measurepalooza.” The gathering followed on the heels of the “Best Practices Summit,” where partners and participants came together to share and learn best practices and identified the need for the movement to capture metrics beyond “dollars raised on the day.”
In particular, they were interested in looking at transformational metrics such as donor engagement, building nonprofit capacity, and global reach. As a movement, GivingTuesday needed to address and get consensus on two big measurement questions:
What metrics should the movement as a whole measure? What should partners and participants each measure for their individual campaigns?
The session started by setting context: the accomplishments of the past year’s campaign and a summary of what was learned during the Best Practices Summit. This led to a discussion about the need to capture both “transactional” and “transformational” metrics related to specific outcomes, as well as what quantitative and qualitative data to use – and how to use them effectively – for both movement-level learning and for participating partners.
Through a facilitated design thinking process, small groups of participants created a draft of the GivingTuesday movement-level and partner-level metrics. As a consensus-building process, participants used “sticky dot” voting to identify the most important metrics (green for partners; red for the movement as a whole). This allowed everyone to see visually what the group consensus was and home in on what was most important.
I just came from facilitating an innovation lab with community foundations at the GuideStar DonorEdge Learning Community. They are all striving to become “knowledge centers” for their communities – using data to frame conversations about issues and to help donors make better investment decisions.
The Community Foundation for Greater New Haven is very sophisticated in doing this, but most agree that one of their challenges is how to effectively communicate with stakeholders about the value of shared data to influence behavior that will advance community-wide goals.
We went through a human-centered design process, generating many unconventional ideas for a communications strategy. Let me share two of them.
The first idea, “Head vs. Heart,” would select one hot issue in the community and then use data (“head”) vs. no data (“heart”) to demonstrate the effectiveness of decision making and problem solving with data versus without it.
Example using the issue of homelessness:
First, use an infographic, picture, or video of homeless people begging for money from passing cars or pedestrians. Show how “heart-driven” donations may be spent on alcohol and misused. Describe how taxpayers’ money is wasted and how the homeless people remain homeless, continuing to seek shelter at bus stops. (No solution.)
Next, use an infographic, picture, or video of homeless people, with data (“head-driven”) indicating the known number of homeless. Describe how donors, the community foundation, giving days, the corporate community, and other funders can work strategically, based on data, with strong nonprofits to provide services and housing. Using data and collaboration, the homeless receive the shelter and care they need, resulting in a significantly reduced homeless population – for example, from 10,000 to 2,000.
The campaign would be promoted through community forums, social media (hashtag #datamatters), news stories, public service announcements, billboards, a print campaign, and DonorEdge profiles.
“Head vs. Heart” won for best detail: the presentation included infographics, a theory of change, and customization using the data, and the campaign was easily shareable.
The second idea, “Data Duel,” was based on the game show Jeopardy. (The Jeopardy theme song played in the background during the presentation.) The categories of “Nonprofit Jeopardy” were based on issue areas such as: “Dancing with Data” (Arts), “From A to F” (Education), “Golden Years of Data” (Aging), and “Data Check-up” (Health).
The catch phrase of the campaign was, “How much do your community leaders know about your community?” The mayor, city council members, community leaders, donors, and nonprofits would be invited to play “live” in studio. The winner would give a grant to the nonprofit of his or her choice.
The Jeopardy questions would be based on nonprofit data in the DonorEdge site. The community at large could play the game via a mobile app, digital game, and other social media channels.
The goals of the campaign were to:
Educate the community
Give visibility to issues
Showcase data tools
“Data Duel” won for best concept because its gamification made it fun. The campaign used a community-wide engagement approach, was “live” in studio, and had multiple distribution channels: DonorEdge, an app, a digital game, and social media.
The human side of data for good is a rich area – I’ll be writing about it regularly on the Markets for Good site, and it is at the center of my current work.
But whether you are using data to inform a digital strategy, build a philanthropic movement, or help donors make better investments in a community, remember that using data for good effectively begins with people – and that we always have to balance the technical skills of measurement with our humanness.
Between the Dashboard and the Chair
The Human Side of Data for Good
Author, Speaker and Trainer
June, 2014 – Gates Foundation