Describe the nature of the project, personal questions prompted by reading Patton.
The mission of the University of Minnesota Extension is to make an impact on our public – to solve their problems. Impact has been and continues to be the foundation of the Land Grant Mission and its Extension programming.
Impact is also a cornerstone of our sustainability. West, Drake and Londo (2009) claim in a Journal of Extension article that Extension may be headed for a fate similar to the Pony Express. In their words: “The Pony Express and Extension are two completely dissimilar organizations linked by a common problem: survival in changing times.” Though functionally dissimilar organizations, Extension, like the Pony Express, is now trying to sustain itself in a world much different from the one for which it was created. They propose a variety of challenges and potential solutions. Among them is effective education design and, ultimately, evaluation: “The fact remains, however, that Extension is dominated by individuals with subject-matter expertise but with little or no formal training in education, communication, psychology, or other fields relevant to Extension's mission of education. The stark reality is that we have limited evidence to demonstrate Extension's effectiveness and, in this day of heightened scrutiny and expectations for governmental programs, we must improve in this arena.” [Bold added.] Note their call for evidence to demonstrate Extension’s effectiveness.

Photo from http://commons.wikimedia.org/wiki/File:Pony_express.jpg. Frank E. Webner, Pony Express rider, ca. 1861. This work is in the public domain in the United States because it is a work of the United States Federal Government under the terms of Title 17, Chapter 1, Section 105 of the US Code. See Copyright.
Demonstrating impact is inherent to our competitive advantage. This slide shows the summary statement of the Extension Hedgehog from the McGrath, Conway & Johnson (2007) article in the Journal of Extension. Building from Collins’ Good to Great concept, the authors described a thought experiment in which they tried to identify the one thing that Extension does best for its customers. I have bolded their assertion that Extension programs have impact, and can prove it through credible evaluation. Insofar as Extension staff agree with this summary of their work, they essentially concede the centrality of impact evaluation and reporting.
And, finally, the demonstration of impacts is inherent to our sustainability in Minnesota. In the January 7, 2009 issue of the Extension Enews, Dean Durgan wrote about impact in her Making a Measurable Difference in Minnesota. To survive in critical budgetary times, the University of Minnesota Extension must be diligent in demonstrating significant impact and value.
So, I suggest that impact evaluation/reporting bookends our 3-part strategic programming process – identifying and making explicit public value, program business planning, and impact evaluation/reporting. Essentially, these processes combine to ensure our programming is 1) worth public investment, 2) well-designed to achieve its purpose, and 3) actually achieves intended and important impacts.
Processing and Reflecting: Ask to summarize key points. Ask clarification questions. Workbook: Provide space to jot questions, observations, Ah Ha's.
The term ‘impact’ is variously defined (Evaluation Gap Working Group, 2006; Leeuw & Vaessen, 2009; Rossi, Lipsey, & Freeman, 2004; White, 2010) but typically refers to long-term effects produced by an intervention. White (2010) points out that evaluators typically define impact as the final level of a program (intervention) causal chain. He contrasts this with the definition more commonly used by those in the ‘impact evaluation field’: the difference in an indicator of interest with and without the intervention. The definition used here, from Leeuw & Vaessen (2009), also points out that impacts can be positive or negative, direct or indirect, and intended or unintended.
Bennett and Rockwell (1995) provided the useful TOP framework to help educators connect their planning and evaluation efforts. In this case, it can also be used to exemplify what IS and IS NOT considered a program impact. Along the planning side of the framework, we note that problems with SEE (Social, Economic, Environmental) conditions can call for changes in practices (behavior), which denote changes in KASA (Knowledge, Aspirations, Skills, Attitudes), and so forth down to the specific resources necessary to pull off an intervention. Along the evaluation side of the framework, we note that the presence of resources can be evaluated, then the types and numbers of activities, the numbers and demographics of participants, and so forth up to changes in practice and SEE outcomes. It is essentially the SEE outcomes, and potentially the changes in practice, that can be considered the long-term impacts of an intervention. KASA changes are typically considered short-term outcomes necessary for impacts. Other elements are functions of the intervention.
Impact evaluation tends to be defined in various forms as the difference in SEE conditions for populations with and without a specific intervention (Evaluation Gap Working Group, 2006; Leeuw & Vaessen, 2009; Rossi, Lipsey, & Freeman, 2004; White, 2010). Leeuw & Vaessen (2009) and White (2010) both called attention to two premises underlying impact evaluation: attribution and a counterfactual. ‘Attribution’ implies an approach to evaluation that attributes changes in conditions to the intervention (as opposed to maturation or the effects of other interventions). ‘Counterfactual’ implies an approach to evaluation that includes an attempt to describe what would have happened in the absence of the intervention. White (2010) argued that there must always be a counterfactual in an impact evaluation, even if not a comparison group. He also argued that attribution needs to be a focus of the evaluation, even when there may be multiple contributing interventions to disentangle.
There is no absolute design for impact evaluation, though RCTs and other comparison-group designs are often referred to as the ‘gold standard’ (Evaluation Gap Working Group, 2006; Leeuw & Vaessen, 2009; Rossi, Lipsey, & Freeman, 2004; White, 2010). However, it is worth giving thought to the appropriateness of our designs well before reporting (ideally when initially planning the program). Pertinent to this short review is the methodological guidance that Leeuw & Vaessen (2009) offered to those planning impact evaluations: 1) articulate the type and scope of the evaluation and agree on what is valued, 2) clarify the program theory and address attribution, and 3) use a mixed-methods approach. White (2008, 2010) explored the use of mixed methods for impact evaluation. Notably, he suggested that the size of the intervention being evaluated may be an effective criterion for judging the applicability of qualitative or quantitative approaches (White, 2010).
It is worth noting here the importance of program theory in guiding impact evaluation (and articulation of impacts). Many authors have explored the potential for program theory in helping evaluators develop appropriate foci, designs and measures (Chen, 2005; Patton, 2008; Rogers, 2008; Rossi, Lipsey, & Freeman, 2004). Germane to the focus of this presentation is the typical contention that Extension programs are too complicated, multi-faceted or intertwined with other interventions to measure impact. Insofar as this is becoming a more and more tenuous position, it is worth looking closely at how these authors can help us clarify the causal claims underlying our program theories, assess the kinds of evidence supporting these claims, and develop impact evaluations that assess critical gaps. Explain how this has been accomplished in the Best Practices for Field Days Program, Ulead and Youth Development Extension programming (Joyce Hoelting, Personal Communication).
Smith & Straughn (1983) described the challenges entailed by a then-new Extension accountability and monitoring policy. After deconstructing implications of the new policy, they pointed out four significant challenges to evaluation of the impacts of Extension programming, and they suggested solutions for each. Many programs, in their view, did not have explicit goals, objectives, or program theory. To mitigate this challenge, they suggested working with programs to make their theories explicit and/or abstaining from evaluation of programs lacking clarity. They suggested that programs may achieve relatively indistinguishable effects, potentially intertwined with other interventions. To mitigate these challenges, they suggested 1) focusing Extension programming on audiences where more significant impacts can be achieved more rapidly, and 2) evaluating groups of programs in multiple geographies to generate more convincing evidence of attributable impact. Finally, they suggested that Extension programs tended not to be evaluated for external audiences. To mitigate this challenge, they suggested developing more standardized, consistent evaluation procedures that were palatable to outside stakeholders. Nonetheless, they conceded the significance of the challenge, quoting Rossi and Freeman (1982): “We cannot overstress the difficulty of undertaking impact evaluations” (Smith & Straughn, 1983, p. 55).

To what extent are these challenges inherent in our current call for evidence of impact? Can we solve them in the same manner? [Suggest yes, if we follow the guidelines presented earlier. An early start on impact evaluation planning will improve coordination with program theory planning. Considering impact evaluation, we can give more attention to our target audiences and to operationalization of SEE conditions. We can give more attention to careful program theory to make more resources available for evaluation of the necessary claims.]
Processing and Reflecting: Ask to summarize key points. Ask clarification questions. Workbook: Provide space to jot questions, observations, Ah Ha's.
So, the big goal is to provide ‘real users’ with ‘information they can understand and apply’ for ‘real purposes’ (Fitzpatrick, Sanders, & Worthen, 2004; Patton, 2008). Seems easy enough. So, why are the results of evaluations so rarely used as we intend them (Patton, 2008; Weiss, 1980)?
Can it be so simple? Franz and McCann (2007) described a training/support process at Virginia Cooperative Extension for training staff to write and embrace straightforward impact statements. On their website (http://www.cals.vt.edu/communications/writingimpactstatements.html), they present concise definitions of program impacts, impact audiences and impact reporting. Germane to this presentation, they also present a stepwise formula for writing impact statements. In a general sense, this is the target for a good impact evaluation report, and it provides a good framework for starting our discussion. To get started, it will be helpful to sketch this skeleton report.

But, it is not really this simple to craft an effective report. As we learned in the last section of this presentation, there are elements of this report (e.g., the response, results, and who is responsible) that ultimately rely on an effective evaluation design. Moreover, we cannot assume that one report structure will work for different users or situations. In fact, one-way writing may lead to reports that are not used at all, or misused.
Scholars don’t know for sure why evaluations are seldom used as intended (or used at all). In fact, they are still really trying to understand how/why/when use happens. But, the bottom line is that use is complicated (Amara, Ouimet, & Landry, 2004; Cousins & Leithwood, 1986; Johnson, 1998; Lipton, 1992; Patton, 2008; Shulha & Cousins, 1997; Weiss, 1988). For instance, the following framework from Johnson (1998) depicts the interactions of ‘background’, ‘interactional’ and ‘utilization variables’ in the use process. There are influences inherent to the internal and external environments and contexts of the evaluation. There are interrelated feedback loops. While this is not the only model, nor necessarily the absolutely correct model of evaluation use, it is clear that our impact reports must account for and integrate within these complicated utilization processes.
Unfortunately, evaluation reports are rarely tailored well to these complicated utilization processes (Fitzpatrick, Sanders, & Worthen, 2004; Patton, 2008). I suggest, however, that we can meet the challenge by following a fairly straightforward process to be thoughtful about our reports. Essentially, the production of good technical writing, evaluation report or otherwise, demands a clear sense of the ‘real purpose’ for a ‘real audience’ that encompasses ‘real content’. Following this straightforward approach to reporting, we essentially clarify our audience (user characteristics), purpose (outcomes) and content (program characteristics). We then connect these through specific attention to our impact evaluation design and report characteristics. The following slides will examine elements of this model in more detail.
While the model is fairly straightforward, it is more practical to essentially work both ends to the middle. Thus, I suggest beginning by clarifying your potential (or even desired) outcomes, and the intended uses that will address these outcomes.
It is relatively easy to develop a long list of potential outcomes of our impact reports: budget renewal, new program investments, improved abilities to describe the importance of a program, developing supportive attitudes toward a program, supporting a position about a program, etc. Weiss (1988), however, helps us understand that evaluations actually serve a much more manageable list of key functions to address these varied outcomes. Evaluations can serve as a warning signal. They can provide users with guidance in making decisions or taking informed actions. They can change/improve users’ concepts about programs and issues. They can also be useful in mobilizing support for programs and issues. So, working backward, it is helpful to define the ultimate outcome that our users (and/or we or our leadership) intend to accomplish through our report of impact. Then, we can identify which of the key evaluation functions are likely to best address this outcome. [DISCUSS SOME PURPOSES & FUNCTIONS]
Finally, it will be necessary for users to make use of our impact reports in certain ways related to the appropriate evaluation functions. Though various taxonomies of uses differ in number and complexity (Cousins, 2004; Cousins & Leithwood, 1986; Johnson, 1998; Patton, 2008; Shulha & Cousins, 1997), it will be helpful for our purposes to discern between three common types of use: instrumental, conceptual, and symbolic.

Evaluation results can be used directly to inform decisions or actions – instrumental use. This type of use is related to the ‘warning’ and potentially ‘guidance’ functions of evaluations. It is the type of use that we probably conceive of as resulting from our evaluations. But, it is actually relatively rare (Patton, 2008; Weiss, 1982).

Evaluation results can influence how we think about a program or issue – conceptual use. This type of use is related to the ‘reconceptualization’ and potentially ‘guidance’ functions of evaluations. Weiss (1982) called this kind of use ‘enlightenment’, and described it as relatively more common in policy settings.

Evaluation results can also be used as tokens to support a position or action – symbolic use. This type of use is related to the ‘mobilize support’ function of evaluations. However, there is some disagreement as to whether this is an appropriate use of evaluation, insofar as the findings may not be used or may be misconstrued to achieve support.

Finally, the processes of involvement in an evaluation can also induce individual, organizational or other changes – process uses. This type of use is potentially related to all of the functions of evaluation. It is a relatively new area of focus in research on evaluation utilization (Patton, 2008).
It is clear at this point that the ‘real purpose’ of your report relates directly to the function of your report, and how people will probably use it. In this chart, I make some subsequent suggestions about what you may need to know and pay attention to in formatting your report. For instrumental uses, I suggest it is important to format for utility/action. And it is therefore useful to know something about the decision/action, who will make the decision/action, and how the decision/action is likely to unfold. For conceptual uses, I suggest it is important to format for conceptual change. So, it is useful to know about who to involve, their concepts, and sources of learning. For symbolic uses, I suggest formatting to mobilize support. It is therefore helpful to know about the nature of the position to be supported, the context where calls to mobilize are likely to take place, and who is likely to make these calls. In this case, I also suggest care to prevent misuse.
It is also helpful to learn as much as possible about your ‘real users’ and their evaluation-related characteristics. Patton (2008) and others have described different ways that users may intend to utilize results (more on this later) and their timelines for use. Weiss (1982) and others have discussed the importance of context in influencing use. And we know from education studies that individuals have different information processing preferences. So, once we identify ‘who cares’, we need to work with these people, our colleagues, leaders and regional directors to learn what works for them. [ASK FOR SUGGESTIONS OF QUESTIONS WE MIGHT WANT TO ANSWER ABOUT THEM.] For instance, we may ask about their prior experiences with evaluations, or their expectations for good evaluation. We might ask what they intend to do with the results of the evaluation, and when they may need the results. Some of the things we want to learn about our users are consistent across uses and reports – experience with evaluation, information processing preferences. Others are specific to our purposes and intended uses – context, timeline, learning setting.
According to Patton (2008), we need to identify our specific users. It is quite typical to identify ‘stakeholders’ – in this example ‘funder’. To develop an effective report, however, we should be asking, “what funder?” – in this example ‘Don at Initiative Foundation’. We should ask, “who exactly…?” – in this example ‘Barb’. This will help us tailor our reports specifically for our users and their uses. Targeting more abstract definitions of our users (e.g., funder) can provide us some useful direction. But, as this example demonstrates, we risk being wildly off-target until we get more specific.
Above all, your ‘real users’ are the people, the specific individuals, who care personally about the evaluation of your program. In a mid-1970s study of the utilization of 20 federal health evaluations, Patton (2008) and colleagues identified two factors that were consistently important: 1) considerations of politics, and 2) a ‘personal factor’. In Patton’s (2008, p. 66) words: “The personal factor is the presence of an identifiable individual or group of people who personally care about the evaluation and the findings it generates. Where such a person or group was present, evaluations were used; where the personal factor was absent, there was a correspondingly marked absence of evaluation impact.” The importance of the personal factor has since become well accepted within evaluation practice. So, it is important for our purposes of reporting impacts to start by asking ‘who cares?’ We need to work with our colleagues, leaders and regional directors to identify these specific people. And in the case that no one cares, we may need to 1) rethink the focus of our programming (i.e., the stool), or 2) work on cultivating the personal factor before reporting (i.e., building understanding of the relevance/importance of our programming).
Graphic adapted from Patton (2008), Stakeholder Analysis: Power versus Interest Grid (p. 80). Patton (2008) also pointed out that we might cultivate different kinds of relationships with different kinds of users. Ideally, we will look for the personal factor in people who have a high interest in our program impacts, and high power to use the impact evaluation reports that we provide them. However, we may also want to cultivate evaluation reporting relationships with other people surrounding our evaluation. Some can increase the overall diversity of our reporting process. Others may be important elements of the context of our evaluations.

[DESCRIBE examples like the Best Practices for Field Days program, Minnesota Master Naturalist. Ask participants for examples.]
In drafting our reports (and ideally when planning the evaluation design), it is important to learn as much as we can about two tests to which our users are likely to subject our results. Weiss and Bucuvalas (1980) interviewed 155 researchers in mental health fields about their use of 50 research reports. Analyzing these results, they identified two key frames of reference for interpreting reports of research: a truth test and a utility test. The truth test encompasses 1) the extent to which the research appears to adhere to canons of the scientific method (i.e., commonly controlled experiments), and 2) the extent to which it accords with the user’s previous knowledge about how the world works. NOTE: According to Weiss and Bucuvalas (1980), the more research seems to conform to users’ previous knowledge, the less important adherence to canons of scientific method generally becomes. The utility test encompasses 1) the extent to which the research provides explicit, practical directions with which users can actually do something, and 2) the extent to which the results challenge current practices or conceptions. NOTE: Weiss and Bucuvalas (1980) suggested that challenging results tend to be perceived as more useful when results are less actionable.
Armed with a deep understanding of our programs, evaluation designs and analysis, it is all too easy to write a report that users cannot easily understand (Patton, 2008; Valovirta, 2002). Lacking careful attention to the specifics associated with our impact evaluations, we risk slinging generalized concepts loose and unfettered at our users – inert ideas seemingly irrelevant or incomprehensible to their personal needs (Whitehead, 1959). In the words of Zuckerman (2001/2002): “We all fall prey to this tendency [slinging jargon] from time to time, but rarely do we consider the costs of lapsing into this kind of lingo, costs measured in a failure to communicate with the people we most need to reach, and in the mental laziness that jargon permits” (p. 34).

To guard against this “mental dryrot,” Whitehead (1959) cautioned: “we must enunciate two educational commandments, ‘Do not teach too many subjects,’ and again, ‘What you teach, teach thoroughly’” (p. 3). We can adopt a similar caution for our impact reports: choose carefully the few messages to communicate, and communicate these thoroughly.
Patton (2008) described the importance of formatting data for action (as opposed to presenting it however it happens to come out of Excel, SPSS or another graphing package). In this (admittedly simplified) case, I imagine a Master Naturalist program asking how to improve the distribution of different age classes of participants. In the first graph, the data are presented in a typical manner, requiring users to make interpretations or even additional calculations to answer the question. In the second graph, I have reorganized the categories to present the data in rank order. A little more helpful. In the final graph, I present the data in rank order as above/below the mean level of participation. This makes the answer to the users' question immediately apparent: participation in the first three groups needs to increase.
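The same 'format for action' idea can be sketched in a few lines of code. The age classes and counts below are illustrative placeholders, not real program data; the point is simply that ranking the categories and flagging them against the mean does the interpretive work for the user.

```python
# Hypothetical Master Naturalist participation counts by age class
# (illustrative numbers only, not real program data).
participation = {
    "18-25": 4, "26-35": 7, "36-45": 9,
    "46-55": 18, "56-65": 24, "66+": 15,
}

def format_for_action(counts):
    """Rank categories by participation and flag those below the mean,
    so users can see at a glance which groups need recruiting effort."""
    mean = sum(counts.values()) / len(counts)
    ranked = sorted(counts.items(), key=lambda kv: kv[1])
    return [(age, n, "below mean" if n < mean else "at/above mean")
            for age, n in ranked]

for age, n, flag in format_for_action(participation):
    print(f"{age:>6}: {n:3d}  ({flag})")
```

With these assumed numbers, the first three rows come out flagged "below mean" – exactly the groups where participation needs to increase, with no further calculation left to the reader.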
Many Extension programs address complicated or complex issues (Rogers, 2008) with inherently complex impacts. It can therefore be useful to designate ‘proxies’, or indicators, that provide rational, research-supported evidence of impact. For example, a colleague described the use of somatic cell count in cows as a proxy of their overall health. Completion of 8th grade algebra is commonly accepted as an important gateway into collegiate science; therefore, 8th grade algebra completion rates may be a useful proxy (and focus) for STEM programming and evaluation. The value of the volunteer hour can be used to monetize the impact of master volunteer programs on SEE conditions. Proxies essentially help users make better sense of our complex/complicated program theories and impacts.
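The volunteer-hour proxy above amounts to simple arithmetic, which a short sketch can make concrete. The hourly rate here is a placeholder assumption, not an official figure; in practice you would substitute the current published estimate of the value of a volunteer hour.

```python
# Sketch: monetizing volunteer impact via the 'value of a volunteer hour' proxy.
# The rate below is an assumed placeholder, not an official published figure.
VALUE_PER_VOLUNTEER_HOUR = 25.00  # assumed USD rate

def monetized_impact(volunteer_hours):
    """Convert total reported volunteer hours into a dollar-value proxy."""
    return volunteer_hours * VALUE_PER_VOLUNTEER_HOUR

# e.g., 1,200 hours reported by a hypothetical master volunteer cohort
print(f"${monetized_impact(1200):,.2f}")  # prints $30,000.00 at the assumed rate
```

The proxy does not claim the program 'created' that dollar amount; it simply translates hours into a unit that budgetary stakeholders can weigh.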
Walter, Nutley and Davies (2003), Patton (2008) and others discussed the importance of reporting at the right place and time. It is fairly useless, for example, to submit a report related to Extension budgets two weeks after county commissioners have discussed and voted on their budget. From Weiss (1982), we can gather a few questions to answer about the time and place where our report will be useful:

What are the boundaries that encompass the purpose of my report – actors, time, purpose? Who needs to see it? When and where?

Do my users have a desired end-state in mind? Are they willing to consider alternatives? How should I frame my report?

How do my users perceive my purpose as significant? How should I format my report?

Are my users going to follow a clear, sequential order in using my report to address my purpose? How should I format my report? When and where is it most important for users to have it?

NOTE that Weiss (1982) also pointed out that user decision-making processes seldom encompass all (or any) of the above characteristics. They more often tend to be diffuse, with decisions solidifying over time. Therefore, we should not be surprised if our answers to any of the above questions are fuzzy or nonexistent. Nonetheless, the process of thinking through these questions will help us better position and format our reports. And we should plan reports that work within this fuzziness.
A number of influential evaluation scholars and practitioners have also explored the potential benefit of user involvement in the evaluation (and/or reporting) process (Cousins & Earl, 1992, 1995; Cousins, Goh, & Clark, 2006; Patton, 2008; Walter, Nutley, & Davies, 2003). Cousins, Goh and Clark (2006) explored, for example, how evaluation data use in school contexts can lead to data valuing. Teachers and administrators in a group of four study schools were involved in a two-step interview process. Through analysis of the interview transcripts, Cousins, Goh and Clark (2006) identified a series of supportive and inhibiting factors that affected reliance on evaluative inquiry in their study schools. One of the more influential factors they dubbed ‘data use leads to data valuing’. In other words, it was through using data that some of the participant teachers began to recognize its value.

Cousins and Earl (1992, 1995) pointed out, however, that involving participants in evaluations does not always work out well. It is a relatively fragile marriage. So, we should proceed carefully with user involvement strategies.

Patton (2008) provided a useful taxonomy of levels of user involvement to guide our thoughtful planning of participatory strategies. Essentially, we can involve users to a lesser (informing) or greater (collaborating) extent, or even support their own abilities to deploy evaluation (empowerment). For each of these strategies, we make certain promises to our users, and different strategies are well suited to different kinds of users. In the end, it may be most typical to collaborate with some users in crafting our impact reports (e.g., regional directors, program leaders, key county extension committee members), consult others (e.g., a program specialist or county commissioner), and simply inform still others (Extension leadership, state legislators, federal reporting staff).
NOTE, however, that authors like Lipton (1992) and Patton (2008) have described the importance of deeper involvement (consulting to collaborating) in all aspects of the impact evaluation. Ideally, we involve our intended users in helping us determine the priority impacts that will focus our evaluation, the designs and data that they can trust and use, and the reporting formats that fit their needs. This harkens back to an early slide in the presentation – involvement will ideally begin early, not right before we craft our reports. But involvement even then can be important.
Cousins and Weaver (2004) elaborated another useful taxonomy for planning involvement in evaluation and reporting. They identified five important aspects of evaluations that could be conducted in more or less collaborative/inclusive manners. They subsequently suggested that a given collaborative project could be analyzed by rating it Likert-style on each of the proposed scales for “a helpful way to think about collaboration” (Cousins & Weaver, 2004, p. 37). This taxonomy can help us be thoughtful about which parts of our impact evaluation and reporting processes should involve our users, and to what extent.
Walter, Nutley and Davies (2003) compiled a meta-analysis of the research utilization literature, ultimately identifying a number of practices that support ‘research impact’, or use. In addition to concerns about the formatting, content and placement of our impact reports, these practices enable us to support our users in making use of whatever report we produce. Walter, Nutley and Davies (2003) suggest re-presenting results over and over, both in print and orally. They suggest facilitating and/or educating users to use results. The social influence of others and/or collaboration in the evaluation process can catalyze use. And feedback/reminders are important.

These supports should come as no surprise to many Extension field staff, as they are essentially hallmarks of effective adult education and behavior change practice. Walter, Nutley and Davies (2003) have, in some ways, suggested only that we practice in our evaluations that which we do in our primary education work.
Weiss (1988) provided a helpful list of channels beyond the written report that can be used to disseminate the results of our impact evaluations. Considering Walter, Nutley and Davies' (2003) call for re-presentation of results and cultivation of social influence, we can imagine how these and potentially other channels may be useful secondary outlets (or potentially primary outlets) for reporting our impacts.

[ASK for some descriptions of channels that participants have used in reporting their impacts.]
Conceding Smith & Straughn’s (1983) call for standardized, palatable impact evaluation practices, it will be helpful to consider how our impact evaluation and reporting efforts coordinate with our established reporting practices. Due to the fluid nature of the federal reporting process, it can be conceived as a central element of coordinated reporting. In this report, we are relatively free to establish our own proposed impacts, outcomes and performance targets. We can also report on changes in plan and unintended outcomes. Therefore, the impacts and outcomes that become a focus of that process can evolve from our 3-fold program planning efforts (public value, program business planning, and impact evaluation planning). Moreover, they can be informed by expectations from our various policy/budgetary stakeholders. Data collected for the federal reporting process can then be used synchronously in fulfilling other reporting requests. Ideally, we could pull data from various scales, or comparative data as necessary for effect sizes and counterfactuals. Moreover, the federal process is responsive to the use of program theory in explaining program impacts. Therefore, it is recommended that program teams explore ways to improve their synchronization of federal reporting and other impact evaluation/reporting practices.

[DESCRIBE a hypothetical example of how this may unfold. DISCUSS with participants the benefits and challenges of coordinating our efforts in this manner.]
In summary, this presentation has explored recommendations for improving our reports of the impacts of Extension programming. In a brief overview, the presentation described the necessity of reporting impacts to sustain a viable Extension. We subsequently need to clarify and prioritize our impacts and design thoughtful impact evaluation; I suggest even including this as one of the ‘three legs’ that support our program quality. In developing our reports of impacts, we need to define a ‘real purpose’, an evaluation function, and related uses. We need to get to know our users in some specific ways, looking for the personal factor and for high-interest/high-power users. And we need to develop reports that account for users’ truth/utility tests, communication styles, and decision-making contexts. Finally, we can employ different strategies to support their use. I concluded the presentation with a call for coordination of our impact evaluation and reporting practices. This can be accomplished by using the federal reporting process to build and gather our evidence.
Intro To Impact Reporting
Communicate the ‘Of What' and ‘On What' to the ‘For Whom'<br />An Introduction to Improving Extension Impact Reporting<br />Nathan J. Meyer, Extension Educator ESE<br />Draft May 2010<br />
Overview<br /><ul><li>Why report Extension program impacts?</li><li>What are impacts and how do we evaluate for them?</li><li>How can we effectively communicate the 'of what' and 'on what' to the 'for whom'?
Our mission is impact<br />Taking University research and education to the people of Minnesota ... discovering real-world solutions to real-life problems.<br />University of Minnesota Extension Mission<br />
The Extension Hedgehog<br />Extension is the highest value higher education investment available to government and non-governmental funding agencies because Extension education programs have impact, and we can prove it with credible measures. And the positive impacts of our research and education programs are multiplied by thousands of volunteers that apply new knowledge and skills in service to their communities. <br />-McGrath, Conway & Johnson (2007)<br />
Our future is impact<br />No public entity can afford to offer programs that are nice and interesting but without significant impact and value.<br />Dean Beverly Durgan (2009)<br />
What are impacts and how do we evaluate for them? (A Brief Review)<br />
Impact is…<br />The positive & negative, primary & secondary long-term effects produced by an intervention, directly or indirectly, intended or unintended.<br /><ul><li>Final level of the program causal chain
Difference of indicator of interest with and without the intervention</li></li></ul><li>What is NOT an impact?<br />[Diagram: two parallel ladders of program levels (Resources, Activities, Participation, Reactions, KASA, Practices, SEE Conditions / SEE Outcomes), one traversed in Planning and one in Evaluation.]<br />
So impact evaluation is…<br />Attribution. Effects produced by…<br />Counterfactual. What would have happened otherwise….<br />
So impact evaluation must…<br />Articulate ‘of what’, ‘on what’, and ‘for whom’.<br />Address attribution and the counterfactual.<br />Articulate and use the program theory.<br />Use a mixed-methods approach.<br />Start early.<br />
Still a challenge…?<br />No clear program theory (explicit goals).<br />Effects are typically small.<br />Effects cannot be separated from other sources.<br />Few standardized, consistent procedures for evaluating programs.<br />
So how can we effectively communicate the 'of what' and 'on what' to the 'for whom'?<br />
Provide 'real users' with 'information they can understand and apply' about our impacts for 'real purposes'.<br />
It’s simple, right?<br />Elements of an Impact Statement:<br />Describe the problem in simple terms.<br />Describe the Extension program response.<br />Describe the results of the Extension program.<br />Describe who was responsible.<br />
Use is complicated….<br />Johnson (1998)<br />
Get to know your ‘real users’.<br />What decisions are the findings expected to inform? (Primary Uses)<br />Who exactly will make these decisions? When? (Intended Users)<br />Funder<br />Don at Initiative Foundation<br />Actually… Barb will be presenting findings to Don. He trusts her opinion….<br />
Get to know your 'real users'.<br />The personal factor is the presence of an identifiable individual or group of people who personally care about the evaluation and the findings it generates.<br />Patton (2008)<br />
Then give them a report they can believe and apply<br />Truth. Extent to which research adheres to canons of scientific method, and accords to previous knowledge of how the world works.<br />Utility. Extent to which research provides actionable direction, or challenges the status quo. <br />
Communicate with your users<br />We all fall prey to this tendency [slinging jargon] from time to time, but rarely do we consider the costs of lapsing into this kind of lingo, costs measured in a failure to communicate with the people we most need to reach, and in the mental laziness that jargon permits.<br />Zuckerman (2001/2002)<br />
Format data for action<br />[Chart: Mean # of Participants]<br />
Proxies can be powerful…<br /><ul><li>Somatic cell count
Involve your users<br />Control of technical decision making – stakeholder to evaluator<br />Diversity among stakeholders selected for participation – diverse to limited<br />Power relations among participating stakeholders – conflicting to neutral<br />Manageability of evaluation implementation – unmanageable to manageable<br />Depth of participation – deep to consultative<br />
Support users in making use…<br /><ul><li>Re-presentation</li><li>Facilitation/education of users</li><li>Social influence/collaboration</li><li>Feedback/reminders</li><li>
Explore ways of coordinating your efforts</li></li></ul><li>Acknowledgments<br />Jean King, PhD and members of EDPA 8595 for their thoughtful ideas, critique and discussion. Joyce Hoelting, Renee Pardello and Mardi Harder for their willingness to describe current Extension impact reporting practices. Extension staff for sharing their impact reports.<br />