Assessing the Capacity of Community Coalitions to Advocate for Change

Research has shown that high-capacity coalitions are more successful in effecting community change. While a number of coalition assessment tools have been developed, documentation is scarce regarding how they are implemented, how the results are used, and whether they are predictive of coalition success in collaborative community change efforts. Developed for a health promotion initiative of a major health foundation, this tool is designed to assess coalition progress in eight key areas across twelve community coalitions over the course of a three-year initiative.

On May 21, 2013, Veena Pankaj, Kat Athanasiades, Ann Emery, and Johanna Morariu gave a presentation titled "Assessing the Capacity of Community Coalitions to Advocate for Change." The panel was hosted by the Advocacy Planning and Evaluation Program (APEP) at the Aspen Institute in Washington, DC.

The session focused on a coalition assessment tool designed by Innovation Network to assess changes in coalition capacity over time. Presenters shared lessons learned from the first year of the initiative about developing and deploying the assessment tool, as well as what such tools can, and can't, tell you about a coalition's capacity to conduct community change work. Presenters also showed how information collected through the assessment can be reported back to the coalitions using data visualization approaches that communicate the data effectively.

  • VP: Let’s connect, both during the session and afterwards! If you’re tweeting about this session, please use the hashtag #CATeval, which is short for Coalition Assessment Tool evaluation. You can also tweet to Innovation Network at @Innonet_Eval. A few of us are on Twitter too: @KatAthanasiades, @AnnKEmery, and @J_Morariu.
  • AE: Two factors were statistically significantly correlated with the overall coalition assessment scores: the length of time a member had spent in the coalition, and the number of members in each coalition. In other words, communities where members had been in the coalition longer tended to score higher, and communities with more people in their coalition tended to score higher. So there’s something special going on in coalitions with lots of people who have been involved for a long time. (Possible audience participation question if attention is lacking: What do you think is going on there? Is this what you expected to see? Or do these findings surprise you?)
  • AE: Next, we’re going to talk about the strengths and challenges of developing and administering the coalition assessment tool. The “strengths” section could easily be named “what’s awesome about the CAT” because each of us here on the panel had aspects of the project that really interested us. But before we tell you about some of our favorite parts of this tool, we’re going to share some respondent perspectives. As Kat explained, the coalition assessment tool had 7 sections. When we administered the survey on Zoomerang, we added a big text box after each section so that respondents could share additional comments. Here are a few of those open-ended comments. They add useful contextual details and helped us and the Kansas Health Foundation understand the scores a little more.
  • AE: Well, just like all surveys, you can imagine that some of the comments were like this… A coalition member wrote, “This survey is too long for busy people.”
  • AE: When asked about their coalition’s ability to learn from the community, a coalition member noted, “I hadn’t thought about us collecting feedback on whether the community is satisfied with our work because we are so busy doing the work, the learning, the data collection, and getting the word out, to think about whether THEY are satisfied. Very interesting question.”
  • AE: Several of the coalition members reminded us that this first year is meant to be a planning year, and therefore their community could not have fully accomplished many of the items outlined in the rubric. For example, a coalition member commented, “The initial mission, vision, and values are defined but under development.”
  • AE: Several of the coalition members also reminded us that their coalition has limited money, and limited time, to fully accomplish the tasks outlined in the rubric. For example, a coalition member wrote, “Our monetary resources are currently limited to the planning grant monies.”
  • AE: A coalition member wrote, “These questions… are good questions for us to have as a guide for our work.”
  • AE: A coalition member wrote, “Realistically, many of these points cannot be accomplished within a year, but it is a goal to strive towards.”
  • AE: And here are the strengths from our perspectives as the evaluators on the project.

    1. Assessing the Capacity of Community Coalitions to Advocate for Change. Veena Pankaj, Kat Athanasiades, Ann Emery, Johanna Morariu. Advocacy Evaluation Breakfast Series, hosted by the Aspen Institute, Washington, DC, May 22, 2013
    2. About Us: Veena Pankaj, Director, vpankaj@innonet.org; Kat Athanasiades, Associate, kathanasiades@innonet.org; Ann Emery, Associate, aemery@innonet.org; Johanna Morariu, Director, jmorariu@innonet.org. www.innonet.org | @InnoNet_Eval | #CATeval
    3. Let’s Connect: #CATeval, @InnoNet_Eval, @KatAthanasiades, @AnnKEmery, @J_Morariu
    4. What’s the first thing to come to mind when you hear “advocacy evaluation”?
    5. What’s the first thing to come to mind when you hear “coalition assessment”?
    6. How does this session relate to your work and interests?
    7. Agenda: 1. Background; 2. Development & Implementation; 3. Reporting to Stakeholders; 4. Strengths & Challenges; 5. Summary & Applications
    8. Agenda (section divider)
    9. Agenda (section divider)
    10. Development & Implementation (2). Timeline: Developed, Implemented, Snapshots, Report (Aug 2012, Sept 2012, Oct 2012, Jan 2013)
    11. Seven categories of the CAT: Basic Functioning and Structure; Coalition Leadership; Ability to Cultivate and Develop Champions; Ability to Develop Allies and Partnerships; Reputation and Visibility; Ability to Learn from the Community; Sustainability
    12. Vetting process: Kansas Health Foundation staff; GEO Place-Based Community of Practice; Kansas Advisory Committee; Healthy Communities Initiative Technical Assistance team; Coalition members
    13. Responses: 56 coalition members, 12 TA providers, 68 total responses
    14. Agenda (section divider)
    15. Reporting to Stakeholders (3): Memos
    16. Overall Scores by community: Community 1, 81%; Community 10, 80%; Community 5, 79%; Community 3, 77%; Community 12, 70%; Community 7, 69%; Community 11, 68%; Community 2, 66%; Community 8, 64%; Community 9, 55%; Community 4, 40%; Community 6, 29%
    17. 1-page Community Snapshots
    18. Memos
    19. Community Members’ scores by category: Basic Functioning and Structure, 84%; Allies and Partnerships, 77%; Champions, 75%; Reputation and Visibility, 72%; Learn from Community, 70%; Coalition Leadership, 69%; Sustainability, 52%; Overall, 70%
    20. TA Providers’ scores by category: Basic Functioning and Structure, 53%; Allies and Partnerships, 55%; Champions, 54%; Reputation and Visibility, 51%; Learn from Community, 38%; Coalition Leadership, 47%; Sustainability, 35%; Overall, 47%
    21. Community Members vs. TA Providers: Basic Functioning and Structure, 84% vs. 53%; Allies and Partnerships, 77% vs. 55%; Champions, 75% vs. 54%; Reputation and Visibility, 72% vs. 51%; Learn from Community, 70% vs. 38%; Coalition Leadership, 69% vs. 47%; Sustainability, 52% vs. 35%; Overall, 70% vs. 47%
    22. Report questions: Region? Time in coalition? Time in community? Engagement? Number of members? Number of new members?
    23. More time in coalition + more coalition members = higher scores
    24. Agenda (section divider)
    25. Strengths & Challenges (4). “This survey is too long for busy people.” (Coalition member)
    26. “I hadn’t thought about us collecting feedback on whether the community is satisfied with our work because we are so busy doing the work… Very interesting question.” (Coalition member)
    27. “The initial mission, vision, and values are defined but under development.” (Coalition member)
    28. “Our monetary resources are limited to the planning grant monies.” (Coalition member)
    29. “These questions… are good questions for us to have as a guide for our work.” (Coalition member)
    30. “Realistically, many of these points cannot be accomplished within a year, but it is a goal to strive towards.” (Coalition member)
    31. Strengths: a. Integrate multiple perspectives; b. Show change over time; c. Contextual data about policy changes; d. Aggregate and community-level data; e. Tool & evaluation = intervention
    32. Strengths (repeated)
    33. Strengths (repeated)
    34. Strengths (repeated)
    35. Strengths (repeated)
    36. Challenges: a. Ratings are opinions; b. Self-awareness low at first; c. Language; d. Focus changes over time
    37. Challenges (repeated)
    38. Challenges (repeated)
    39. Challenges (repeated; item d reads “Focus might change over time”)
    40. Agenda (section divider)
    41. Where We’ve Been: Materials at www.innonet.org
    42. Assessing the Capacity of Community Coalitions to Advocate for Change. Contacts: Veena Pankaj, Director, vpankaj@innonet.org; Kat Athanasiades, Associate, kathanasiades@innonet.org; Ann Emery, Associate, aemery@innonet.org; Johanna Morariu, Director, jmorariu@innonet.org
