Evaluation and Assessment Demystified


I collaborated with Jeremy Poehnert of Massachusetts Campus Compact to pilot this workshop on evaluation and assessment of civic learning in higher education. This is a very nuts and bolts approach to the often overwhelming process of evaluating and assessing learning outcomes and community impact.



Usage Rights

© All Rights Reserved

  • Introduction: Jeremy – Gregory – MPA in Non-profit management from
  • 1. Planning Phase – Define Mission, Goals, Objectives, and Desired Outcomes
    2. Collect Data
    3. Analyze Data
    4. Link the assessment findings to other institutional performance indicators
    5. Identify gaps between desired and actual outcomes
    6. Report assessment data and programs/services improvement plan to stakeholders
    7. Design strategies to bridge gaps between desired and actual outcomes
    8. Implement the strategies for program/service improvement
  • Evaluation and assessment have their roots as independent branches of study in President Lyndon Johnson’s Great Society programs and the Planning-Programming-Budgeting (PPB) system. Vast sums of taxpayer dollars were invested in programs like Job Corps, Peace Corps, Teacher Corps, Upward Bound, Head Start, Medicare, Medicaid, the National Endowment for the Arts, the National Endowment for the Humanities, the Corporation for Public Broadcasting, the Department of Transportation, and at least 11 acts related to the environment.

    One part of Johnson’s Great Society programming was offering legitimacy (and financial support) to scholars who examine: the efficiency and efficacy of how public resources were allocated; the impact of social programs on individual behavior; the effectiveness of programs in attaining stated objectives; and the effects on the well-being of rich/poor, minority/majority, North/South dichotomies. Typically, people don’t like spending money on things that don’t work as advertised at the price they were quoted. Similarly, people generally don’t want to limit or discontinue programs that are effective, efficient, and equitable.

    General disenchantment with government-provided services in the 70s was summed up by President Gerald R. Ford in his first address to a joint session of Congress as President. He said: “A government big enough to supply you with everything you want is a government big enough to take away everything you have.” Specifically, he was speaking about curtailing government spending as a means of scaling back inflation, which was rampant at the time (nearly 10%, or 3x what it is today). But this statement set the tone of the conservative agenda for the foreseeable future.

    The next period of program evaluation’s history is the Reagan years. Reagan’s political mandate (the legitimacy he had for his policies by winning the presidential election by 440 electoral votes) was to shrink government, including ending funding for social services. In order to do this responsibly, proper evaluations of the programs were necessary.

    By now, you’re probably wondering why I’m starting with a political history of program evaluation. The reason is simple: evaluations and assessments do not occur in a vacuum. They are the product of the political climate, both in the microcosm of a college campus and in global society. As evaluators of programs, projects, and policies, one needs to keep this in mind at all times.

    I made this word cloud from the nearly 800-page text, “Handbook of Practical Program Evaluation, 3rd Edition”. The more frequently a word appears in the text, the larger it is. All these words are often the scariest part. Separately they seem pretty innocuous, but together they can be overwhelming. Qualitative, implementation, logic, practical, statistical, measurement, recruitment, design, model, review… In evaluation and assessment, like in any field, words have particular meanings. Knowing these idiosyncrasies will help you better understand evaluations and assessments when you read them, and make you better at doing them yourself.

    All an evaluation boils down to is using the critical thinking you apply every day in a systematic and documented way. That’s it! Simple… time for lunch. The fear that surrounds evaluation and assessment is in the words and the fear of doing it wrong. There are dozens of different research methods and theoretical bases one can use to evaluate and assess programs, partnerships, and potential. Quantitative research is couched in statistical terms. Qualitative research is sometimes evanescent and, due to its nature, often difficult to wrap one’s head around until you’re immersed in it. Fortunately, we’re all placed at college campuses where there are people who have the skills to help us, whether they know it or not.

    As I said before, evaluation is a relatively new academic discipline. To date, there are only 50 masters or doctoral programs globally that specialize in evaluation and only one PhD in Evaluation (begun in 2007). Most of these focus on educational and psychological testing and methodology. This would be the best place to search for help when doing an evaluation. Despite the existence of masters and doctoral programs on evaluation, that piece of paper isn’t wholly necessary. Every evaluation or assessment text notes that anyone can evaluate – evaluators come from every discipline, every background, every belief system (conservative, liberal, progressive). My goal for today is to provide you with enough tools and exposure to the processes of evaluation and assessment that you feel confident engaging in the process.
  • The ultimate goal of evaluation and assessment is efficiency, efficacy, and equity! Before we get into the meat of doing your own evaluation or assessment, let’s discuss what these three words mean.
  • These definitions, though rather academic, are pretty spot on. The difficulty in reaching the holy grail of program design is determining how each of these three fit together. Let’s look at an example we all have some experience with, either personally or anecdotally: financial aid for higher education. First, is it efficient? Why or why not? Effective? Why or why not? Equitable? Why or why not?

    It isn’t always easy to determine whether something is efficient, effective, and equitable. Frequently, it’s impossible to do so during the program, meaning we can only know after the fact. Consistent monitoring of program progress helps, but to get to the point of consistent monitoring, we need to have a clearer understanding of what we’re evaluating and assessing.

    Jeremy covered the first part of an evaluation very well. The first step is always asking yourself “What do I want to know?” I won’t labor over the topic again, just remind everyone that if you don’t know what you want to find out, evaluation becomes significantly more difficult – but not impossible. Some of the research methods I’ll cover in this presentation can help, but walking into an assessment or evaluation without having at least a vague idea of your goal is like maneuvering Boston streets without a map or GPS… you’ll probably get there eventually, but you won’t take the most direct route.
  • Now that we have our feet wet and we’ve discussed the holy grail of program design: efficacy, efficiency, and equity, let’s go over the agenda. (agenda) Before we get into some theory, there are a couple vital things to know:

    First – there are right and wrong ways to do evaluation. The validity of an evaluation depends on correctly applying the appropriate research method.

    Second – part of being a good evaluator is thinking critically and creatively. Sometimes this means playing devil’s advocate. More often than not, if you evaluate or assess something, you will be asked to help design the next iteration of it. For better or worse, a formal evaluation puts all the data cards in the evaluator’s hand. You need to play them wisely.

    Third – evaluation and assessment aren’t about pleasing people, or angering people for that matter; they are about gathering and reporting information in order to help a program or project better ameliorate the social problem in question. If you find yourself leaning toward saying something that isn’t backed up by the data, or other research, step back and evaluate your evaluation. Similarly, if a stakeholder -- be that your boss, a community partner, or a funder -- asks you to do something you know will jeopardize the value of your information, then it is your job as an evaluator to explain to the stakeholder why that is unethical. As long as you are backing yourself up with the data, and the data was collected in a sound manner, you are allowed to be self-righteous. It’s one of the untold perks of being an evaluator. You should be cautious of the line between defending your data and methods, and defending your ego.

    Now let’s move on to some theory. I promise it’s not scary.
  • Program theory – assumptions about how a program relates to social benefit; how it produces intended outcomes; strategy and tactics to achieve the goal
    Impact theory – nature of the change in social conditions caused by program activities
    Process theory – depicts the organizational plan and utilization plan

    There are three main branches in the theory of evaluation: methods, value, and use. The first branch is evaluation guided by methods, which deals with obtaining generalizability, or “knowledge construction” as Shadish, Cook, and Leviton (1991) call it. A key figure in this branch was Peter H. Rossi, Stuart A. Rice Professor of Sociology Emeritus at the University of Massachusetts-Amherst until his death in 2006. The value branch is based on the work of Michael Scriven (1967), who established the role of the evaluator in valuing. This branch holds that the value of data is the most essential component of an evaluator’s work. Guba and Lincoln (1989) extend the evaluator’s role to include facilitating the placing of value by others. The use branch, developed independently by Daniel Stufflebeam and Joseph Wholey, focuses on the way in which evaluation information is used and on those who will use the information.

    I bring up these three theories only because evaluators must keep these concepts in mind: the methods used, the value of the evaluation, and who will use the evaluation. Evaluators must also keep an eye out for bullshit – from stakeholders, participants, respondents, and anyone involved, even tangentially, in the evaluation process. Harry Frankfurt, professor emeritus of Philosophy at Princeton, defined a theory of nearly lying in “On Bullshit” in 2005. *define bullshit*
  • Research, Monitoring, Assessment and Evaluation are used interchangeably in everyday language. Full disclosure, they are sometimes used interchangeably by professionals too. Despite that, I think it’s important to know exactly what you’re doing, whether everyone else does or not. (go through the definitions, then ask for examples from the group)
  • Like Jeremy discussed, being able to successfully evaluate a program means you need to know the purpose of the program. What I’d like to do now is get into groups for the next 10 minutes and talk about the MACC AmeriCorps*VISTA program.
  • It would be great if every evaluation could be identical, and if there was a set system for every evaluation and assessment – but there’s not. In order to be useful, evaluations must be tailored to each individual program. What must be tailored?
    The questions the evaluation is to answer – limit the scope! No evaluation or assessment can be everything to everyone, and it shouldn’t be!
    The methods and procedures used to answer the question – the methods must be practical (not too expensive or too time consuming) and provide meaning (data for data’s sake is a waste of time).
    The nature of the evaluator-stakeholder relationship – defining roles early on, ensuring impartiality (more on ethics later).
  • Assessment
    Program Theory – answers questions about conceptualization and design
    Program Process – answers questions about operations, implementation, and service delivery
    Evaluability – negotiation and investigation undertaken by the evaluator and the evaluation sponsor, and possibly other stakeholders, to determine whether a program meets the preconditions for an evaluation and how the evaluation should be designed to ensure maximum utility
    Efficiency – answers questions about program costs in comparison to either 1) the monetary value of its benefits, or 2) effectiveness in terms of the changes brought about in the social conditions it addresses
    Needs – answers questions about the social conditions a program intends to address and the need for the program
    Impact – answers questions about program outcomes and impact on social conditions
  • Evaluation investigates programs in ways that are adapted to their political and organizational environments.
    Program – using social research methods to determine the effectiveness of programs
    Implementation (aka Process) – form of program evaluation that determines whether the program is delivered as intended to the target population
    Formative – to furnish information that will guide program improvement
    Summative – to give a summary judgment on critical aspects of program performance, i.e. to determine if goals and objectives are met
    Black Box – doesn’t have the benefit of an articulated program theory, therefore the program has no (formal) insight into what is causing the outcomes and why
    Independent – the evaluator has primary responsibility for developing the evaluation plan, conducting the evaluation, and disseminating the results
    Participatory – organized as a team project where the evaluator and stakeholders work together to develop the evaluation plan, conduct the evaluation, and disseminate the results
    Empowerment – form of participatory evaluation where the evaluator’s role includes consultation and facilitation to help stakeholders learn to conduct their own evaluations, use the evaluation effectively for advocacy and change, and influence how a program affects their lives
  • Qualitative – understand and interpret; small, non-random samples; studies the whole, not variables; subjective; identifies patterns and themes; less generalizable; theory generated from the data; explore, discover, construct; narrative reporting with direct quotations from participants
    Quantitative – test hypotheses, make predictions, examine cause & effect; large, random selection; objective; specific variables are studied; identifies statistical relationships; generalizable; describe, explain, predict; statistical reporting with correlations, comparisons of means, statistical significance
    Mixed – the best of both worlds, in that it looks to give a wide understanding of the program; the worst of both worlds, in that it requires the evaluator to have a good understanding of two established methodologies that are sometimes at odds
  • Environmental Scan – This is borrowed from the world of business. My favorite framework for an environmental scan is PESTEL (Political, Economic, Socio-cultural, Technological, Environmental, and Legal factors). As a method of business analysis, there are plenty of resources out there to help you with this process. Other popular environmental scanning methods are SWOT/SWOC/SLOT/SLOC. In choosing a method for the environmental scan, just remember the real purpose: finding out what is happening around the organization, program, or project that may have an effect on it.
    Logic Model – graphical depiction of program theory (Kellogg Foundation)
    Mapping – graphical depiction of the past, present, and future of the program or project. Programs can be as small as a five-person dual enrollment program or as large and complex as the United Nations. Typically, this includes: Mission, Objectives, Goals, Problems and Risks, Internal and External Players, Tools and Constraints, Opportunities, Realistic Goals and Solutions, Long Term Improvements, Outcomes, Accountabilities and Stakeholders, Future Areas for Evaluation, and the Vision and Legacy of an organization or program. The idea of mapping was developed by Dan H. Fenn Jr. at Harvard Business School. I have two resources available electronically if you are interested in more detailed study of mapping and how to do it.
    Survey – Interview – Focus Groups – Case Study – Statistics: http://faculty.vassar.edu/lowry/webtext.html; http://wise.cgu.edu/
  • Bias – the extent to which a portion of the target group is unequally affected
    Validity – the extent to which a measure measures what it is intended to measure
    Statistical Power – the likelihood that a false null hypothesis will be rejected, OR the answer to the question “how big does my sample size need to be in order to ensure statistical validity?”
    Honesty and Directness
    Pre-Post Test –> Testing Effect
    Hidden Agendas
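To make the sample-size side of statistical power concrete, here is a minimal sketch of my own (not from the workshop materials) using the standard normal-approximation formula for comparing two group means; the alpha, power, and effect-size values are the conventional defaults, shown only for illustration:

```python
import math
from statistics import NormalDist

def sample_size_per_group(effect_size, alpha=0.05, power=0.80):
    """Minimum n per group to detect a standardized mean difference
    (Cohen's d) with a two-sided test, via the normal approximation."""
    z = NormalDist().inv_cdf
    # z for the two-sided significance level, plus z for the desired power
    n = 2 * (z(1 - alpha / 2) + z(power)) ** 2 / effect_size ** 2
    return math.ceil(n)

# A "medium" effect (d = 0.5) at conventional alpha = .05, power = .80:
print(sample_size_per_group(0.5))   # 63 per group under this approximation
```

Note how quickly the required n grows as the effect you hope to detect shrinks: halving the effect size roughly quadruples the sample you need, which is exactly why this question belongs in the planning phase.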
  • Take surveys as an example:
    Loaded question – “Have you stopped drinking while you’re at work?” The question implies a yes/no response, but answering anything other than “I have never drunk at work” would tell the questioner that you at one time did drink at work or currently do. Loaded questions have inherent judgment and assumptions. These have no place in surveys, ever.
    Leading question – There is a drastic difference between asking someone “How did this experience affect you?” and “Did participating in this experience make you a better citizen?” One asks you to think about all the aspects of your experience and draw your own conclusions. The other leads you to discussing only how your participation affected your citizenship. While there’s a time and place for leading questions, by and large they should stay out of surveys.
    Suggestive question – “As a result of my service, I have a better understanding of the needs facing my community”
    Compound question – “As a result of my service, I have become interested in serving in a full-time AmeriCorps or other post-graduate service program after graduation.”
  • Clearly, this chart shows that 193% of the American Voting Public would vote Republican in the next election… right?
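One way a graphic ends up claiming 193% is by plotting multi-select responses as if they were shares of a single whole. A small sketch with invented data (the options and numbers are hypothetical, purely to show the arithmetic):

```python
from collections import Counter

# Hypothetical poll where each respondent may pick MORE than one option.
responses = [
    {"A", "B"},
    {"C"},
    {"A", "B", "C"},
    {"B"},
]

counts = Counter(choice for picks in responses for choice in picks)
# Dividing mentions by respondents (rather than by total mentions) gives
# per-option percentages that legitimately sum past 100; charting them as
# slices of one whole is what produces a 193%-style graphic.
pcts = {c: 100 * n / len(responses) for c, n in counts.items()}
print(pcts, "sum:", sum(pcts.values()))
```

Each percentage is honest on its own; the dishonesty (or sloppiness) comes from presenting them as if they should total 100.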
  • There are really only six steps to the nuts and bolts of an evaluation: (slide). The intimidating part is the sheer volume of expertise or data gathering necessary to do an evaluation. That being said, one can do an entirely acceptable evaluation or assessment on the quick and on the cheap: the absolute minimum requirements for an evaluation are the environmental scan, assessing program theory, measuring outcomes, and interpreting effects.
  • Assessing program theory is important because programs with weak or faulty conceptualization have little prospect for success, regardless of how well the concept is played out. In other words, if your program theory is garbage, do not waste your time evaluating the program. Remember – program theory is simply the logic that was used to create the program. More simply, it’s everything that led to the creation of the program. This essentially asks the question: What social situation are we trying to affect, and how are we planning to affect it?
    Inputs are the resources that an organization devotes to a project: employees, time, money, space, etc.
    Activities are things that an organization does to cause the change, if all goes as planned.
    Outputs are quantifiable, just like measures: typically with programs, these are simply counts of attendees.
    Outcomes are simply the effects that the program hopes to have. They are specific and measurable.
  • Are we meeting our intended outcomes? Where are we falling short, and where can we improve upon our successes? During our undergrad years we all assessed one important thing: our progress to graduation.
  • Causality matters here: observed changes are not necessarily due to the program.
  • This is where things like confidence intervals, statistical modeling, control groups, and intervention groups come into play. It is a serious undertaking and absolutely must be done under the supervision of a trained professional. Generally speaking, college campuses have more than enough trained staff, but finding someone with time might be difficult. Anyone who has completed an applied doctoral or applied master’s program that required original research is qualified to help in this section.

    Now that I have that scary bit out of the way, there is some good news about impact assessment. Randomized field experiments are only necessary when the following five conditions are met (Dennis and Boruch 1989):
    1) The present practice must need improvement.
    2) The efficacy of the proposed program must be uncertain under field conditions.
    3) There are no simpler alternatives for evaluating the program.
    4) The results must be potentially important for policy decisions.
    5) The design must be able to meet the ethical standards of both the researchers and the service providers.

    Most of the time, service learning programs do not meet these minimum qualifications. In particular, the efficacy of most service learning and civic engagement programs is certain under field observation. For example, if you have a service day where students clean an elementary school, one can count the hours, the number of rooms cleaned, and the amount of garbage removed, and, if there is a reflection afterward or a survey provided to participants, you can learn first-hand how students feel they changed as a result of the experience. Also important: if you get information for an entire population, inferential statistics aren’t necessary. If you have survey responses from everyone who participated in a program, you only need descriptive statistics, because you are describing everyone who received the intervention.
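That last point, that describing a whole population needs only descriptive statistics, fits in a few lines. A minimal sketch (the 1-5 Likert scores below are invented for illustration, not real survey data):

```python
from collections import Counter
from statistics import mean, median, stdev

# Hypothetical 1-5 Likert responses from EVERY participant in a service day.
# Because this is the whole population, describing it is enough; no
# confidence intervals or significance tests are needed.
responses = [4, 5, 3, 4, 4, 5, 2, 4, 3, 5, 4, 4]

print(f"n = {len(responses)}")
print(f"mean = {mean(responses):.2f}, median = {median(responses)}")
print(f"spread (sd) = {stdev(responses):.2f}")
print("distribution:", dict(sorted(Counter(responses).items())))
```

If these twelve people were instead a sample drawn from a larger group of participants, that is the point at which inferential statistics (and the trained help mentioned above) become necessary.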
  • 1. Planning Phase – Define Mission, Goals, Objectives, and Desired Outcomes
    2. Collect Data
    3. Analyze Data
    4. Link the assessment findings to other institutional performance indicators
    5. Identify gaps between desired and actual outcomes
    6. Report assessment data and programs/services improvement plan to stakeholders
    7. Design strategies to bridge gaps between desired and actual outcomes
    8. Implement the strategies for program/service improvement

Presentation Transcript

  • Evaluation & Assessment – Gregory D Greenman II, Jeremy Poehnert – Sunday, May 13, 12
  • Expectations • What are your expectations from this workshop? • What particular topics would you like addressed? • What are your fears about evaluation and assessment?
  • Agenda • Thinking about data • The big picture – • Taking a strategic approach to assessment.
  • Why do we evaluate and assess? What are the challenges of Evaluation and Assessment?
  • The Why • Understand the impact of our work • Demonstrate the value of our work • Engage with constituents • Identify areas for expansion and development • Gauge progress toward a determined goal • Make a case for funding and support for our work • Compare programs and approaches • Meet reporting requirements • Improve our ability to reach our goals
  • The What • Impact on Students • What have students learned from their experience? • How does it connect to the learning goals of related courses, majors, and the broader institution? • Impact on the Community • Are programs meeting the identified community needs? • Impact on the Campus • How does this work impact our institution? • This includes the institutionalization of civic engagement on campus and the impact of grants and funding, public relations, admissions and retention, and various college ranking systems. • The state of our Community Partnerships • How functional, productive, and strong are our community partnerships?
  • The Basic Assessment Cycle
  • A More Detailed Assessment Model • 1. Planning Phase • 2. Collect Data • 3. Analyze Data • 4. Linking • 5. Identify Gaps • 6. Reporting • 7. Design Strategies • 8. Implement Strategies
  • The Painful Assessment Cycle • “Oh my gosh, the dean/president/accreditors/government/funder wants us to assess program X!” • “They want it by when?” • “Let’s throw together a survey/focus group/whatever.” • “Have you written the report yet?” • “Thank goodness that’s over.” • “Whatever happened to that report we wrote last year/decade?” • “What report?” • “Oh my gosh, the dean/president/accreditors/government/funder wants us to assess program X!”
  • The Basics • What are you trying to discover through your evaluation or assessment? • What are some specific projects you are trying to evaluate or assess?
  • What we all know, but rarely do • The best time to plan an evaluation or an assessment is before we start a project. • Think through the whole process before we get started.
  • Key questions • Why are we doing this evaluation? • Who are our constituents? • Who is leading the evaluation? • Who is contributing to the evaluation? • What resources do we have to complete the evaluation? • What is our timeline? • How are we sharing the results? • What are the next steps after we complete this evaluation; how will it affect our programs? • How can we leverage this assessment to improve our work?
  • The Unspoken Questions • Will this actually serve any purpose or is it just for show? • What if we get results we don’t like?
  • The Three E’s • Efficiency • Using the appropriate amount of a resource to produce an effect. • Efficacy • Ability to produce the desired effect regardless of collateral effects. • Equity • Fair distribution or access to the desired effect.
  • Overview • A Little Light Theory • Definitions • Types of Evaluation and Assessment • Research Methods • Tools • Ethics & the Social Context of Evaluation • DIY Evaluation • Review
  • A Little Theory • Program Theory • Impact Theory • Process Theory • Branches of Evaluation Theory • Methods, Value, Use • On Bullshit • “Deceptive misrepresentation, short of lying, especially by pretentious word or deed, of somebody’s own thoughts, feelings, or attitudes.”
  • Definitions • Research • Any gathering of data, information, and facts for the advancement of knowledge • Monitoring • Maintain records on program performance and adherence • Assessment • A reflective process that is diagnostic, flexible, and absolute • Evaluation • A focused technique based on fixed criteria that compares and judges data against norms or baselines
  • Definitions • Learning Objective or Outcome • A statement that captures what knowledge/skill/attitude should be exhibited following instruction/activity. • Response Rate • A ratio of the number of respondents over the number of potential respondents • MGOMA: Mission, Goals, Outcomes, Measures, and Activities
  • MGOMA • Activities – What the program does • Measures – How one measures activities (Quantifiable) • Outcomes – Effect on the target population (Specific) • Goals – Purpose of the program • Mission – Purpose of organization (Abstract)
  • MGOMA • Activities – Trainings, Resources, Leadership Training, Capacity Building, Campus Consultation, State and National policy work, Grants and Funding, Partnerships, Regional Conferences • Measures – Number of trainings, Frequency of resource use, Amount secured in grants, Number of partnerships created, Number of students involved in service-learning • Outcomes – Understanding of role as a citizen, Understanding of community needs, Sense of ability to affect a community, Reciprocal relationships with community partners • Goals – Develop the civic skills of students, Build partnerships with the community, Integrate civic engagement with teaching and research • Mission – Dedicated to promoting community service in higher education
  • Tailoring • What needs to be tailored to an evaluation? • Evaluator – Stakeholder relationship • Evaluation criteria
  • Evaluation Criteria • Needs or wants of the target population • Stated program goals and objectives • Customary practice; norms of other programs • Legal requirements • Ethical or moral values; social justice and equity • Past performance; historical data • Targets set by program managers or program funders • Expert opinion • Pre-intervention baseline levels for the target population • Cost or relative cost
  • Assessment • Assessment of Program Theory • Assessment of Program Process • Evaluability Assessment • Efficiency Assessment • Needs Assessment • Impact Assessment
  • Evaluation • Program Evaluation • Implementation or Process Evaluation • Formative Evaluation • Summative Evaluation • Black Box Evaluation • Independent Evaluation • Participatory or Collaborative Evaluation • Empowerment Evaluation
  • Research Methods • Quantitative • “How much” or “to what degree” • Qualitative • Depth and breadth • Mixed-Methods • Combination of the two
  • Toolbox • Environmental Scan • Logic Model • Mapping • Survey • Interview • Focus Group • Case Study • Statistics
  • Ethics • Primum non nocere • Bias • Validity • Statistical Power • Honesty and Directness • Testing Effect • Hidden Agendas • Human Subjects
  • Types of Questions • Loaded Question • A question with inherent assumptions and judgment about the respondent. • Leading Question • A question that suggests the information to the respondent. • Suggestive Question • Implies a particular response. • Compound Questions • Ask two or more separate questions at the same time.
  • Resources • Evaluation: A Systematic Approach • Peter H. Rossi, Mark W. Lipsey, Howard E. Freeman • Assessing Service-Learning and Civic Engagement: Principles and Techniques • Sherril B. Gelmon, et al. • The Evaluation Cookbook • Learning Technology Dissemination Initiative at Heriot-Watt University, Edinburgh • W.K. Kellogg Foundation Evaluation Handbook • http://www.socialresearchmethods.net/ • Evaluation Center at WMU: www.wmich.edu/evalctr/ • American Evaluation Association: http://www.eval.org/
  • DIY Evaluation • Environmental Scan • Assessing Program Theory • Assessing Progress • Measuring Outcomes • Assessing Impact • Interpreting Effects
  • Environmental Scan • PESTEL – Political, Economic, Socio-cultural, Technological, Environmental, Legal • SWOT – Strengths, Weaknesses/Liabilities/Limitations, Opportunities, Threats/Challenges
  • Program Theory • Inputs – Time, Money, Space • Activities – Trainings, Tutoring • Outputs – Number of students attending the tutoring sessions • Outcomes – Initial: Students gain knowledge about a topic; Intermediate: Students perform well when tested; Long(er)-Term: Students are excited about learning
  • Progress Assessment • Service Utilization • How many people receive our services? • Is the target population receiving services? • Are the participants receiving the correct amount, type, and quality of service? • What are the collateral consequences of providing the service? • Program Operations • Are the proper resources being allocated to the program? • Does the program coordinate well with other programs? • Are participants satisfied with the service and the personnel? • Is the program well-organized? • Does the staff work well together?
  • Measuring Outcomes • Stakeholder perspectives • Participant perspectives • Collateral consequences
  • Impact Assessment • To determine which outcomes can be attributed to the program • The most effective impact assessment methods are randomized field experiments or quasi-experimental designs
  • Interpreting Effects • Moderator Variables • Mediator Variables • Statistical Power • Statistical Significance
  • Review • DIY Evaluation • Resources • Evaluation Questions • Research Types and Methods • MGOMA • Toolbox • Ethics • Honesty and Directness • Human Subjects and Informed Consent • Validity, Bias, and Hidden Agendas • Strategic Use
  • You have the Data, Now What? • “Five years ago we did an assessment which played a key role in making this program a national model.” OR • “What’s this binder with all the dust on it?”
  • Follow Up Steps • Analyze the data • Put the results in a broader context • How do these results fit within the broader environment? • Identify gaps between desired and actual outcomes • Report results to various constituents • Executive Summary for the President, PowerPoint for students/the community, in depth report for those intimately involved • Design strategies to bridge gaps between desired and actual outcomes • Implement the strategies • Start the process again
  • Scenarios • Scenario 1 – The Dean went to a conference and heard about how civic engagement promotes student learning and is also good for public relations. Now she wants you to do an assessment of all the civic engagement/SL projects your campus has ever done. She emails you about it on August 1, and would like the report by November 1. She isn’t offering any additional resources, and you’ll need to continue your normal work while you do the assessment. You feel like this request might be a little unrealistic, but want to offer a positive response to the idea of assessment. Use the worksheet to begin thinking through your response. How can you explain your concerns, and what can you offer as a more realistic counter proposal?
  • Scenarios • Scenario 2 • The Dean went to a conference and heard about how civic engagement promotes student learning and is also good for public relations. Now she wants to promote assessment of the programs already happening on campus. She meets with you about it on June 1 and asks you to start by looking at the college’s K-12 programs. She provides you with $3,000 in extra funding to support the assessment process. She’d like you to put together an assessment PLAN for your K-12 programs by August 15, and start implementing it September 1, when the new academic year starts. Use the worksheet to start thinking through the process.