Results based accountability101 (2013)
  • Introduction and the difference between population and performance accountability: We are going to talk about two different kinds of accountability: accountability for whole populations, like all children in Los Angeles, all elders in Chicago, all residents of North Carolina. This first kind of accountability is not the responsibility of any one agency or program. If we talk, for example, about “all children in your community being healthy,” who are some of the partners that have a role to play? Notice that the traditional answer is “It’s the health department.” It’s got the word health in it, and so it must be the responsibility of the health department. And yet one of the things we have learned in the last 50 years is that the health department by itself can’t possibly produce health for all children without the active participation of many other partners. And that’s the nature of this first kind of accountability. It’s not about the health department. It’s about the kind of cross-community partnerships necessary to make progress on quality of life for any population. Now the second kind of accountability, Performance Accountability, is about the health department. It’s about the programs and services we provide, and our role as managers, making sure our programs are working as well as possible. These are two profoundly different kinds of accountability. We’re going to talk about how to do each one well and then how they fit back together again.
  • These are criteria you should apply to any planning or management system you are considering. Most past efforts have been big paper exercises and wastes of time. It is possible to do this work in a way that is simple, common sense, plain language, minimum paper and, most importantly, useful. Results and performance accountability is one approach that meets these tests.
  • Common Language, Common Sense, Common Ground: Here’s another way of thinking about what we’re going to talk about today: Common Language, Common Sense and Common Ground. We’re going to start with Common Language, because the truth of the matter is that it’s a Tower of Babel out there. People are using words in so many different ways. So we’ll start with common language. Common Sense is about the way the rest of the world works. If you look at any serious successful enterprise…. Business is always held up as the way we should model our behavior…. But look at any of the…. Business, the military, the sports world, the faith community. Any successful enterprise starts with ends and works backwards to means. And Common Ground is about the political nature of this work. And all of this, from first word to last, is political in one way or another. This is not necessarily bad. Politics is how we make decisions. But look at the political system, national, state or local and what do you see? People fighting with each other. But look at what they’re fighting about… and more often than not they’re fighting about means and not ends. There’s remarkable agreement that teen pregnancy is bad for our young people. Now we fight about whether to preach abstinence or hand out condoms. But this is a means debate. The agreement about teen pregnancy is remarkably broadly based. And when you begin to articulate what it is we want for children, families, community in plain language. We want children to be born healthy, be ready for school, succeed in school, grow up to be productive, happy contributing adults. We want to live in safe communities with a clean environment. When you begin to say things in plain language like that, it turns out that these kinds of statements are not Republican vs. Democrat. They’re not state vs. local. They’re not executive branch vs. legislative branch. 
They represent a kind of common ground, where people can come together and say “Yes, those are the conditions we’d like to be able to say exist here.” Now let’s have a healthy debate about the means to get there.
  • The Language Trap: Now you’ve seen all these words before. Read the outer ring of words. And then you get these modifiers in the middle. Read some or all of the inner ring of words. This page is the Jargon Construction Kit. If you want to sound fancy about this work, just pick three or four words off this page at random and string them together. Give example: “Measurable urgent systemic indicators,” whatever the hell that means. And I guarantee you’ll get away with it too, because people will be too embarrassed to ask you what you mean. I have a new rule, that anyone who uses three or more of these words in the same sentence doesn’t know what they’re talking about. It’s very common for two people to be in the same meeting using the same word. They have two entirely different ideas of what that word means, and they’re just talking right past each other. Has this ever happened to you?
  • So what we did a few years ago is develop a set of definitions that would allow us to have a disciplined conversation about this very complex work we’re trying to do. Now the purpose of these definitions is not to impose words on people. Words like “result” or “outcome” are just labels for ideas. If you think about it for a minute, that’s what words are: labels for ideas. And the same idea can have many different labels. What’s important here are not the labels. You can pick whatever labels you like. What’s important are the ideas, and that we manage to keep three ideas separate at the beginning of this work. Read the ideas and the examples for Results and Indicators.
    Now this last category, performance measures…. Are measures of how well a program, agency or service system is working. Now there are many different ways to categorize performance measures, but I believe that all performance measures can be categorized into one of these three categories: How much did we do? How well did we do it? Is anyone better off? And this last category we sometimes call “customer results” or “customer outcomes.”
    And if you do nothing else in terms of your language convention, I would strongly encourage you…. That whenever you want to use a word like “outcome” or “result” and you’re talking about a program or agency, put a modifier in front of it. Call it “program results” or “client outcomes,” something to distinguish it from the use of the words results and outcome to mean the whole population. This is the single biggest source of language confusion in the U.S. today.
    The Language of Accountability
    From www.raguide.org
    The most common problem in this work is the problem of language. People come to the table from many different disciplines and many different walks of life. And the way in which we talk about programs, services and populations varies, literally, all over the map. This means that the usual state of affairs in planning for children, families, adults, elders and communities is a Tower of Babel, where no one really knows what the other person is saying, but everyone politely pretends that they do. As a consequence, the work is slow, frustrating and often ineffective. It is possible to exercise language discipline in this work. And the way to do this is to agree on a set of definitions that start with ideas and not words.
    Words are just labels for ideas. And the same idea can have many different labels. The following four ideas are the basis for definitions used at the beginning of this work. Alternative labels are offered:
    Results (or outcomes or goals) are conditions of well-being for children, adults, families or communities, stated in plain English (or plain Spanish, or plain Korean...). They are things that voters and taxpayers can understand. They are not about programs or agencies or government jargon. Results include: "healthy children, children ready for school, children succeeding in school, children staying out of trouble, strong families, elders living with dignity in settings they prefer, safe communities, a healthy clean environment, a prosperous economy." (An interesting alternative definition of a result is provided by Con Hogan: "A condition of well-being for people in a place - stated as a complete sentence." This suggests a type of construction for a result statement as "All ______ in ______ are _____." e.g. "All babies in Vermont are born healthy.")
    Indicators (or benchmarks) are measures which help quantify the achievement of a result. They answer the question "How would we recognize these results in measurable terms if we fell over them?" So, for example, the rate of low-birthweight babies helps quantify whether we're getting healthy births or not. Third grade reading scores help quantify whether children are succeeding in school today, and whether they were ready for school three years ago. The crime rate helps quantify whether we are living in safe communities, etc.
    Strategies are coherent collections of actions which have a reasoned chance of improving results. Strategies are made up of our best thinking about what works, and include the contributions of many partners. No single action by any one agency can create the improved results we want and need.
    Performance Measures are measures of how well public and private programs and agencies are working. The most important performance measures tell us whether the clients or customers of the service are better off. We sometimes refer to these measures as client or customer results (to distinguish them from cross-community population results for all children, adults or families). It is sometimes useful to distinguish "program performance measures," from "agency performance measures" from "service system performance measures."
    The principal distinction here is between ends and means. Results and indicators are about the ends we want for children and families. And strategies and performance measures are about the means to get there. Processes that fail to make these crucial distinctions often mix up ends and means. And such processes tend to get mired in the all-talk-no-action circles that have disillusioned countless participants in past efforts. You actually have choices about which labels to use in your work. And clarity about language at the start will help you take your work from talk to action.
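    The three-part categorization of performance measures described above can be sketched as a simple structure. This is a minimal illustration only; the example program and the specific measures listed are hypothetical, not from the source:

```python
# Sketch: sorting performance measures for a hypothetical job-training
# program into the three categories named above. All measures listed
# here are illustrative assumptions.
performance_measures = {
    "How much did we do?": [
        "Number of clients served",
        "Number of training sessions held",
    ],
    "How well did we do it?": [
        "Percent of sessions starting on time",
        "Client satisfaction rating",
    ],
    "Is anyone better off?": [  # a.k.a. customer results / customer outcomes
        "Percent of clients who got jobs",
        "Percent still employed a year later",
    ],
}

# Print the framework as a readable outline.
for question, measures in performance_measures.items():
    print(question)
    for m in measures:
        print("  -", m)
```

    The point of the structure is simply that every candidate measure should land in exactly one of the three categories, with the "Is anyone better off?" list carrying the most weight.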
    What About Mission, Vision, Values, Goals, Objectives, Problems, Issues, Inputs and Outputs?
    Many of us have grown up with these traditional words in strategic planning and budgeting. Where do they fit? 
    First, remember that words are just labels for ideas. These words have no natural standard definition that bridges across all the different ways they are used. They are terms of art which can be and are used to label many different ideas. This is why we pay so much attention to getting language discipline straight at the very beginning. It's the ideas that are important, not the words. So you can choose to label the ideas in this guide with any words you like, provided you are consistent.
    The word "mission" is usually used in relation to an organization, agency, program, initiative or effort. It is therefore mostly used in connection with agency or program performance accountability. Mission statements are usually concise statements of the purpose of an organization, sometimes also telling why and how the organization does what it does. Mission statements can be useful tools in communicating with internal and external stakeholders. It is possible to construct a mission statement from the performance measurement ideas in the upper right ("How well did we deliver service?") and lower right ("Is anyone better off?") quadrants of the performance measurement framework: For example: "Our mission is to help our clients become self sufficient ("Is anyone better off?" lower right) by providing timely, family friendly, culturally competent job training services ("How well did we deliver service?" upper right)." One mistake that is often made is that organizations spend months and sometimes years trying to craft the perfect mission statement before any other work can proceed. In the FPSI framework, mission statements are set aside, allowing the work of identifying and using performance measures to proceed quickly. Then, on a parallel track a small group can, if it is useful, use the work of the performance measurement groups to craft a workable mission statement.
    The word "vision" is often used to convey a picture of a desired future, often one that is hard but possible to attain. This is a powerful idea. And in fact one can think of the set of desired results for children and families as one way of articulating such a vision. "We want our community to be one which is safe and supportive, where all children are healthy and ready for school, where all children succeed in school, and grow up to be productive and contributing adults." This is an example of a vision statement made up of desired results or ends. It is possible to craft such a statement before or after the development of results.
    The word "values" in some ways defies definition. It is about what we hold most dear, how we view right and wrong, how we believe we should act, and how those beliefs are, in fact, reflected in our actions.  Our values underlie all of the work we do. And that is nowhere more true than in the work on the well-being of children, families and communities. Our values will guide our choice of results for children and families and the decisions we make about how we and our partners take action to improve those results.
    The word "goal" is often used interchangeably with "result" and "outcome" to label the idea of a condition of well-being for children, adults, families or communities (as in the case of Georgia, Missouri and Oregon, for example). The word goal has many other common usages as well. It often serves as an all-purpose term to describe a desired accomplishment. "My goal for this month is to fix the roof." "Our goal is to increase citizen participation in the planning process." "The primary goal of the child welfare system is to keep children safe." And so forth. The word goal (or target) is sometimes used to describe the desired future level of achievement for an indicator or performance measure. "Our goal is 95% high school graduation in 5 years." "Our goal is to improve police response time to under 3 minutes." These are widely different usages. Still another use of the word "goal" is in relation to an implementation plan. Given a strategy and action plan to improve a particular result (children ready for school, for example), it is possible to structure the action plan as a series of planned accomplishments (goals) with timetables and assigned implementation responsibility. For example, a goal in a "children ready for school" plan might be to "increase funding for child care by 25% this year and 50% next year." This is a specific action which will contribute to achieving the result. There is nothing wrong with any of these usages, provided they are clearly distinguished, used consistently and do not confuse the underlying concepts labeled results, indicators, strategies and performance measures discussed above.
    The word "objective" is often paired with the word goal to specify what amounts to a series of "subgoals" required to achieve the "higher" goal. The set of terms "mission, goal and objective" has a long history in the military to describe the strategic and tactical components of a large or small action or engagement. And some of their usage in the business sector and the public and private service sector derives from this history. In this framework, the terms goal and objective are most often used to structure the action plan and specify who will do what, how, and by when.
    The words "problem" and "issue" are used in more ways than just about any other planning term. They can be used to describe almost anything. "The problem with this computer is that the keyboard is too small." "The problem with our community is that there is not a safe place for children to play." "We must solve the issue of affordability if we are to provide child care for all who need it." These are three different uses of the words, and there are countless others. Again, there is nothing wrong with any of these usages, provided that they do not interfere with the language discipline discussed above about ends and means.
    The words "input" and "output" are commonly used categories for performance measures. There is no standard usage. The word "input" is most often used to describe the staff and financial resources which serve to generate "outputs." "Outputs" are most often units of service.  
    Change Agent vs. Industrial Models: Much of the tradition of performance measurement comes from the private sector and in particular the industrial part of the private sector. Work measurement - dating back to the time and motion studies of the late 19th and early 20th centuries - looked at how to improve production. Industrial processes turn raw materials into finished products. The raw materials are the inputs; the finished products are the outputs.
    This model does not translate very well to public or private sector enterprises which provide services. It does not make much sense to think of clients, workers and office equipment as inputs to the service sausage machine, churning out satisfied, cured or fixed clients. Instead we need to begin thinking about services in terms of the change agent model. In this model, the agency or program provides services which act upon the environment to produce demonstrable changes in the well-being of clients, families, or communities. If the input/output language is maintained, then providing service is the input, change in customers' lives is the output.
    One common situation illustrates the problems which arise when industrial model thinking is applied to services. It is the belief that the number of clients served is an output. ("We have assembled all these workers in all this office space; and we are in the business of processing unserved clients into served clients.") This misapplication of industrial performance concepts to services captures much of what is wrong with the way we measure human service performance today. "Number of clients served" is not an output. It is an input, an action which should lead to a change in client or social conditions - the real output we're looking for. ("We served 100 clients - input - and 50 of them got jobs - output - and 40 of them still had jobs a year later - even more important output.") This is a whole different frame of mind and a whole different approach to performance measurement.
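    The arithmetic in the parenthetical example above can be made concrete. The numbers are the ones used in the text; the rate names are labels chosen here for illustration:

```python
# Change-agent view: "clients served" is the effort, not the output;
# the changes in clients' lives are the real outputs.
clients_served = 100              # How much did we do?
got_jobs = 50                     # Is anyone better off? (short term)
still_employed_after_year = 40    # Is anyone better off? (longer term)

# Derived rates: share of clients placed, and share of placements retained.
job_placement_rate = got_jobs / clients_served
retention_rate = still_employed_after_year / got_jobs

print(f"Placed: {job_placement_rate:.0%} of clients served")
print(f"Retained at one year: {retention_rate:.0%} of those placed")
```

    Note that "100 clients served" appears only as the denominator: it sizes the effort, while the placement and retention rates report the change.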
    A closely related industrial model problem involves treating dollars spent as inputs, and clients served as outputs. In this distorted view, dollars are raw materials, and whatever the program happens to do with those dollars are outputs. It's easy to see why this over-simplification fails to meet the public's need for accountability. In this construct, the mere fact that the government spent all the money it received is a type of performance measurement. This is surely a form of intellectual, and perhaps literal, bankruptcy. In this perverse scheme, almost all the agency's data is purportedly about outputs. This gives the agency the appearance of being output-oriented and very progressive. It just doesn't happen to mean anything.
    Much of the confusion about performance measurement derives from the attempt to impose industrial model concepts on change agent services. The best model would be one which could span industrial and change agent applications. Some government services still involve industrial-type production (although these are often the best candidates for privatization, and a diminishing breed). In other cases, discussed below, the service itself, or components of the service, have product-like characteristics and industrial model concepts apply well. But most government and private sector human services fall into the change agent category. The approach to performance measurement described in this website can be used for either industrial or change agent applications. (Excerpt from "A Guide to Developing and Using Performance Measures," Finance Project, 1997)
     
     
     
  • Now the principal distinction here is between ends and means. Results and Indicators are about ends. And performance measures tell us whether the particular programmatic means we’ve chosen to get there are working properly. Does that make sense? What we see as we look at the work around the country is that people are typically working on all three of these things, but it’s all mixed up in a hopeless soup of language. So one minute we’re talking about a condition of well-being (result), and the next minute it’s a piece of data that measures that…. And the next minute a little program on the east side of town…. As if these were all the same thing and these distinctions really didn’t matter. And what happens when people mix up ends and means like that is that they get stuck. They start to circle and circle. The work becomes all talk and talk and talk. And we’ve all had experiences with processes that are all talk. The talk is not what’s important here. What’s important is how we get from talk to action. And everything in this presentation is about that single simple challenge: How do we get from talk to action in a disciplined way? And I think the starting point is to have a common language.
    Within performance measures, we have another ends-means distinction, like smaller Russian dolls nested inside larger dolls. Here, customer results become the ends and the services we provide become the means.
  • Rate each candidate measure high, medium or low on each criterion. Those that score highest rise to the top. Those that score H, H, L are powerful measures for which we do not now have data. These form the basis for the data development agenda.
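    A minimal sketch of how this rating-and-sorting step might work in practice. The criteria names (communication, proxy, data power) and the candidate measures are illustrative assumptions, not from the slides:

```python
# Sketch: rank candidate indicators rated High/Medium/Low on three
# criteria; H,H,L measures become the data development agenda.
SCORE = {"H": 2, "M": 1, "L": 0}

candidates = [
    # (measure, communication power, proxy power, data power) - hypothetical
    ("Third grade reading scores", "H", "H", "H"),
    ("Rate of low-birthweight babies", "H", "H", "H"),
    ("Youth civic engagement index", "H", "H", "L"),  # powerful, but no data yet
    ("Number of brochures printed", "L", "L", "H"),
]

def total(candidate):
    # Sum the H/M/L ratings; highest-scoring measures rise to the top.
    return sum(SCORE[r] for r in candidate[1:])

ranked = sorted(candidates, key=total, reverse=True)

# H, H, L: powerful measures for which we do not now have data.
data_development_agenda = [
    m for (m, comm, proxy, data) in candidates
    if comm == "H" and proxy == "H" and data == "L"
]

print([m for (m, *_) in ranked])
print("Data development agenda:", data_development_agenda)
```

    Sorting by total score surfaces the usable headline indicators, while the separate H, H, L filter keeps the promising-but-unmeasured candidates visible rather than letting them fall off the list.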
  • This list of results was developed with a colleague at the Annie E. Casey Foundation. We spread out in front of us all the lists we could find having to do with children and families and tried to find the things these lists had in common.
    Notice that there’s only one thing on this list that’s stated in negative terms: “Young people staying out of trouble.” It’s on the list because that’s the way people actually talk. But all the other results are stated in positive terms and that’s a very important characteristic of results accountability.
    Most planning processes we have used in the past start with children’s problems or with unmet needs in the community. Now, we do have to talk about problems and unmet needs, but you don’t have to start there.
    We send a powerful message out into the community in the way we talk about this. And results should always be stated in positive, not negative, terms.
  • This list illustrates the communication dimension of results accountability, as simple and direct a list as there is. Placer has done some extraordinary work linking these results at the population level to the individual case level work of programs and agencies.
  • This list comes from the Georgia Policy Council for Children and Families and is used by the network of Family Connections Councils in Georgia. Georgia has gone one step further and identified 25 indicators to tell if these conditions are being achieved. And Georgia has produced a report card at the state level and for each of the 159 counties.
    Many other places in the United States have produced such report cards, including:
    CALIFORNIA
        Contra Costa County: www.cccoe.k12.ca.us
        San Mateo County: www.pls.lib.ca.us/healthysmc/33/children.pdf
        Santa Cruz County: appliedsurveyresearch.org/cap_report.htm
        Silicon Valley Joint Venture: jointventure.org
    GEORGIA
        Georgia Policy Council for Children and Families, and The Family Connection: gpc-fc.org
    MINNESOTA
        Hennepin County: www.co.hennepin.mn.us/opd/opd.htm
    OHIO
        Montgomery County Family and Children First Council: http://www.fcfc.montco.org
    OREGON
        Oregon Progress Board: econ.state.or.us/opb
    PENNSYLVANIA
        Philadelphia Safe and Sound: Children's Report Card and Children's Budget: www.philasafesound.org
    VERMONT
        Agency for Human Services: Community Profiles: ahs.state.vt.us
    Links to the best of these sites can be found on www.raguide.org.
  • There is a growing number of report cards on child, family and community well-being being developed across the U.S. and in other countries. Here are four such report cards from Georgia, San Mateo County California, Dayton Ohio and Santa Cruz County California.
  • Once you understand that results are the true ends of the work, you begin to understand that many of the other things we have been working on all these years are MEANS to the ends of better results, not ENDS in themselves.
  • LEAKING ROOF
    1. Ask "How many people here have ever had a leaking roof?" (Most hands will go up.)
    2. How can you tell if the roof is leaking? ("Water on the floor, down the walls, etc.") So this is how you might "experience" a leaking roof.
    3. How could you measure how badly the roof is leaking? ("By how much water...") So you might put out a bucket and measure the number of inches in the bucket after each rainstorm! That's the chart at the right (CLICK): the number of inches from the last three rainstorms.
    4. Where do you think this line is headed if we don't do anything? ("It will get worse. Through the roof, you might say.") (CLICK) Draw a forecast line going up. This is the forecast of where we're headed if we don't do anything. We want to turn this curve to zero, right! (CLICK) Draw it.
    5. Now, what's the first thing you do when you have a leaking roof? ("You get up on the roof and try to find out why it's leaking.") Right! You look for the cause of the leak. And this is the story behind the baseline, the causes of why this picture looks the way it does.
    6. Who are some of the people who might help you fix the leak? (brother-in-law, neighbor, professional roofer) These are some of your potential partners.
    7. Now, what kinds of things  work to fix a leak? (Patching material, get a whole new roof, sell the house.) You have some choices about types of patching material. Some will work better than others. Tar is probably better than duct tape.
    8. So let's review. You've got a leaking roof. It's getting worse and will keep getting worse unless you do something. You actually have the data on this. You've figured out the cause of the leak and the partners who might help fix it. And you've considered some of  the possible ways to fix it. Now the important final question is what are you going to do? This is your action plan.
    9. So now you've implemented your action plan. Maybe you've hired a roofer who's gotten up on the roof and patched it. And now what's the next thing you do? ("Wait for the next rainstorm.") Right! You wait for the next rainstorm to see if it's still leaking. And what if it's still leaking, what do you do? (Draw a new point lower but not zero.) ("You get back up on the roof.") Right! You start the whole process over again. You look for causes. You think about who can help and what works. And you try something else - maybe sell the house this time. This is an iterative process. Hopefully you fix the roof in one pass. But the things we are working on are much more complicated than a leaking roof, and one iteration won't do it.
    10. So, this is the whole thinking process! It's just common sense. It's how we solve everyday problems. And communities working to improve the quality of life, or managers working to improve their program's performance can use this same process. This is the thinking process at the heart of results and performance decision making! If you understand this process, you can go home now.
    11. Notice that we identified the "inches per bucket" measure pretty easily. With a leaking roof, it's obvious what's important and what could be measured. But with programs, agencies and service systems, the choice of what's important and what to measure is much more complex. That's the process that's addressed when we choose indicators or performance measures. (See Question 3.7 for more information on choosing program, agency or service system performance measures. And see Question 2.7 for more information on the process for choosing indicators for population well-being.)
    12. Finally, notice that, in real life, we don't actually put out a bucket and measure the inches of water. We do this work based entirely on the way we experience the leak. We consider it fixed when we don't see water anymore. It is also possible to run the results decision-making processes without data, and use just experience. An action plan can be developed this way. It's a way to get started. But ultimately this is unsatisfying. In complex systems, you generally need data to see if you are making progress or not. Otherwise you are left with just stories and anecdotes. So if you don't have any data at all, you might start the process on the basis of experience. But you should give great attention to pursuing your Data Development Agenda.
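  • The iterative thinking process in the leaking roof story can be caricatured in a few lines of Python. The "fixes" and their effects below are invented purely for illustration; this is a sketch of the loop, not part of the original presentation materials.

```python
# Caricature of the turn-the-curve loop from the leaking roof story:
# measure, try a fix, wait for the next rainstorm, re-measure, repeat.
# The fixes and their effects are hypothetical.

def turn_the_curve(inches_in_bucket, fixes):
    """fixes: list of (name, reduction) tried in order until the leak stops."""
    tried = []
    for name, reduction in fixes:
        tried.append(name)
        # The next rainstorm tells us how much leak remains.
        inches_in_bucket = max(0, inches_in_bucket - reduction)
        if inches_in_bucket == 0:  # no more water: the roof is fixed
            break
    return inches_in_bucket, tried

# Duct tape helps a little; tar finishes the job on the second iteration.
remaining, tried = turn_the_curve(3, [("duct tape", 1), ("tar", 3)])
```

The point of the sketch is the iteration: each pass re-measures and, if the curve has not reached zero, tries the next idea about what works.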
  • These are three criteria that have been used to choose indicators for a result.
    From www.raguide.org:
    Given a set of candidate indicators, it is then possible to use criteria to select the best indicators to represent the result. Using the best of what's available necessarily means that this will be about approximation and compromise. If we had a thousand measures, we could still not fully capture the health and readiness of young children. We use data to approximate these conditions and to stand as proxies for them. There are three criteria which can be used to identify the best measures:
    Communication Power: Does the indicator communicate to a broad range of audiences? It is possible to think of this in terms of the public square test. If you had to stand in a public square and explain to your neighbors "what we mean, in this community, by children healthy and ready for school," what two or three pieces of data would you use? Obviously you could bring a thick report to the square and begin a long recitation, but the crowd would thin quickly. It is hard for people to listen to, absorb or understand more than a few pieces of data at a time. They must be common sense and compelling, not arcane and bureaucratic. Communication power means that the data must have clarity with diverse audiences.
    Proxy Power: Does the indicator say something of central importance about the result? (Or is it peripheral?) Can this measure stand as a proxy for the plain English statement of well-being? What pieces of data really get at the heart of the matter?
    Another simple truth about indicators is that they run in herds. If one indicator is going in the right direction, often others are as well. You do not need 20 indicators telling you the same thing. Pick the indicators which have the greatest proxy power, i.e. those which are most likely to match the direction of the other indicators in the herd.
    Data Power: Do we have quality data on a timely basis? We need data which is reliable and consistent. And we need timely data so we can see progress - or the lack thereof - on a regular and frequent basis. Problems with data availability, quality or timeliness can be addressed as part of the data development agenda.
     Identify primary and secondary indicators, and a data development agenda. When you have assessed the candidate indicators using these criteria, you will have sorted indicators into three categories:
      Primary indicators: those 3 or 4 most important measures which can be used as proxies in the public process for the result. You could use 20 or 40, but people's eyes would glaze over. We need a handful of measures to tell us how we're doing at the highest level.
      Secondary indicators: all the other data that's any good. We will use these measures in assessing the story behind the baselines, and in the behind-the-scenes planning work. We do not throw away good data. We need every bit of information we can get our hands on to do this work well.
      A data development agenda: It is essential that we include investments in new and better data as an active part of our work. This means the creation of a data development agenda - a set of priorities of where we need to get better.
  • Rate each candidate measure high, medium or low on each criterion. Those that score highest rise to the top. Those that score H, H, L are powerful measures for which we do not now have data. These form the basis for the data development agenda.
  • This sorting process will create a three part list for each result. This list will change over time as new data is developed.
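  • The sorting process described above can be sketched in a few lines of Python. The candidate measures and their H/M/L ratings below are hypothetical placeholders, not data from any real community.

```python
# Sort candidate indicators into primary indicators, secondary indicators,
# and a data development agenda, using Communication / Proxy / Data Power
# ratings (H = high, M = medium, L = low). Ratings are illustrative only.

def sort_indicators(candidates):
    """candidates: dict of name -> (communication, proxy, data) ratings."""
    primary, secondary, data_agenda = [], [], []
    for name, (comm, proxy, data) in candidates.items():
        if comm == "H" and proxy == "H" and data == "L":
            # Powerful measure for which we do not now have good data.
            data_agenda.append(name)
        elif comm == "H" and proxy == "H" and data == "H":
            primary.append(name)
        else:
            secondary.append(name)  # everything else that's any good
    return primary, secondary, data_agenda

candidates = {
    "crime rate": ("H", "H", "H"),
    "residents who feel safe at night": ("H", "H", "L"),
    "police response time": ("M", "L", "H"),
}
primary, secondary, agenda = sort_indicators(candidates)
```

The measures that score H, H, H become headline indicators; the H, H, L measures go on the data development agenda; the rest are kept as secondary indicators.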
  • Baselines have two parts: a history part that tells us where we’ve been and a forecast part that shows where we’re headed if we don’t do something different.
    Forecasting is an art, not a science, and we often show a range of forecasts: high, medium and low.
    Traditionally we define success as point-to-point improvement. This is often a setup for failure, because sometimes the best you can do is slow the rate at which things are getting worse, while you work to turn the curve in the longer run.
    The better definition of success is “turning the curve away from the baseline,” or “beating the baseline.” This is a much more sophisticated, but also a much fairer, way to gauge progress.
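  • As a rough illustration of "beating the baseline," here is a sketch that extends a history with a naive straight-line forecast and checks whether later actual values come in below it, for a measure where lower is better. All numbers are invented.

```python
# Build a simple straight-line forecast from the last two history points,
# then count "beating the baseline" as any actual value below the forecast
# (assuming lower is better, e.g. a crime or teen pregnancy rate).

def linear_forecast(history, periods):
    slope = history[-1] - history[-2]
    return [history[-1] + slope * (i + 1) for i in range(periods)]

history = [40, 44, 48]                    # a worsening trend
forecast = linear_forecast(history, 2)    # where we're headed if we do nothing
actuals = [50, 51]                        # still rising, but slower than forecast

beating = [a < f for a, f in zip(actuals, forecast)]
```

Note that in this example the actual values still get worse, yet both periods beat the baseline: slowing the rate at which things worsen counts as progress, exactly the point made above about point-to-point comparisons being a setup for failure.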
  • The cost of bad results is the price we as a society pay when things go wrong, when children are not born healthy, when they are not ready for school, when they do not stay out of trouble. The cost of bad results includes both government and non-government expenditures, and is estimated to exceed $200 billion per year in the U.S.
    The convergence of flat or declining revenue with increasing costs of bad outcomes is driving out expenditures for prevention and infrastructure.
    We must bring an investment mentality to the work of doing better. If we invest in prevention, it will lead to lower than baseline costs for remedial services in out-years.
  • Tillamook County was successful in bringing down the teen pregnancy rate, while the rest of Oregon stayed about the same.
  • Boston successfully turned the curve on juvenile homicide rates, with zero homicides between July 1995 and December 1997, two and a half years.
  • This section presents instructions and reporting formats for the two turn the curve exercises, one for population accountability and one for performance accountability, along with other exercises.
  • Introduction and the difference between population and performance accountability: We are going to talk about two different kinds of accountability: Accountability for whole populations, like all children in Los Angeles, all elders in Chicago, all residents of North Carolina. This first kind of accountability is not the responsibility of any one agency or program. If we talk for example about “all children in your community being healthy,” who are some of the partners that have a role to play? Notice that the traditional answer is “It’s the health department.” It’s got the word health in it and so it must be the responsibility of the health department. And yet one of the things we have learned in the last 50 years is that the health department by itself can’t possibly produce health for all children without the active participation of many other partners. And that’s the nature of this first kind of accountability. It’s not about the health department. It’s about the kind of cross community partnerships necessary to make progress on quality of life for any population. Now the second kind of accountability, Performance Accountability, is about the health department. It’s about the programs and services we provide, and our role as managers, making sure our programs are working as well as possible. These are two profoundly different kinds of accountability. We’re going to talk about how to do each one well and then how they fit back together again.
  • All performance measures can be derived from the cross between two sets of interlocking questions: How much did we do? And How well did we do it?
  • Vs. these two dimensions of the work itself: Effort and Effect
  • This leads to a four part or four quadrant way of describing the different types of performance measures.
  • This illustrates the different types of measures for schools.
  • Here, with a tougher, more challenging measure in the lower right quadrant.
  • Examples of measures for a typical health plan or practice.
  • Examples of measures for a typical drug and alcohol treatment program.
  • Examples of measures for a fire department.
  • Examples of measures for a private sector business, in this case the auto industry. These examples are taken from an article in USA Today from September 1998
  • So why sort the measures for your program into these categories?
    Simple. These categories are not equally important. The upper left is the least important. And yet we have some people who spend their whole careers living in this quadrant counting cases and activity. Somehow we have to push the discussion to the lower right quadrant, the one that measures whether our customers are better off.
  • This scheme accounts for all performance measures in the history of the universe and this chart is an attempt to back that claim up.
    A lot of us grew up with the terms “efficiency” and “effectiveness” as the terms of art in performance measurement. And you would think, considering their age and venerability, that they would somehow account for all performance measures. But interestingly enough they don’t. Efficiency is only one type of measure in the upper right quadrant. Effectiveness shares the stage with many other measures.
  • There are other measures, in addition to efficiency, in the upper right quadrant that answer the question “How well did we deliver services?”
  • Customer satisfaction has two different dimensions which are often mixed up together. Customer satisfaction with how well service is delivered is different from customer satisfaction with whether the service helped with the customer’s problems. The world’s simplest, yet complete, customer satisfaction survey: “Did we treat you well?” and “Did we help you with your problems?”
  • As you move from the least important measures to the most important measures, you go from having the most control to having the least control. And this is another reason why people spend their whole lives in the upper left quadrant. Fear. It can be scary to look at the data in the lower right quadrant. But ask people why they went into their profession and the answers all lie in the lower right, in the ways in which we try to make our customers’ lives better.
  • The first purpose of performance measurement is to improve performance. We lose this simple idea in all the fads that run through this field. We forget that the purpose of the work is to get better.
    For many people, their only experience with performance measurement involves punishment. We must create a healthy environment in our organizations where people can use the most important information about what they do to get better.
    There are three ways to compare performance: To ourselves, to others and to standards. The first order of business is comparing to ourselves. Using a baseline, we can try to do better than our own history.
    We can compare to others when it is a fair comparison.
    And we can compare to standards.
  • Comparing performance:
    To others, when it is a fair apples/apples comparison
    What happens when you compare different providers of the same service on a measure? You get a bunch of providers clustered in the middle (click). You get some outliers high (click). And you get some outliers low (click).
    What happens when you reward these people (click), and you punish these people (click)? Well, before you answer this question you have to know why these people are doing well and why those people are doing poorly. Maybe these (top) people have all the easy cases. And these (low) people have all the tough cases. So you reward one and punish the other, and what message do you send throughout the service system? “Skim the easy cases for yourself. Dump the tough cases on someone else.” So if you’re not careful you can actually do damage to the service system and the people you are trying to serve. We’ve got to make sure that we go behind these numbers so that we can know, “Are these people doing something exemplary or do they have an easier caseload?” “Are these people screwing up or do they have a tougher caseload?” We have to know the answers to these questions.
    For those trying to implement a results-based contracting system, I recommend a 3 year moratorium on rewards and punishments associated with the use of data. Give people time to learn how to do it right, working to improve against their own baseline. Then at the end of the period you can add rewards and if necessary punishments. You never give up your right to cancel contracts, so you always preserve that bottom line safeguard.
  • We have lots of examples of well-established standards in the upper right (How well did we do it?) quadrant, because we know what good service delivery looks like.
    But standards in the lower right (Is anyone better off?) quadrant are almost always experimental. This is partly because of the different mixes of easy and hard cases in different caseloads or workloads.
  • Note: You can use this slide here or after the discussion of standards in the Performance Accountability section.
    Here is a way to show all three comparisons on the same chart.
    Your baseline,
    A comparison baseline,
    And a goal, target or standard line, as a horizontal line – the idea being that you turn the curve and cross the goal line as soon as possible.
    Avoid publicly declaring year by year targets, if you can.
    Instead, count anything better than baseline as progress.
  • Remember the three basic categories of performance measures?
    Now let’s look at what measures fall in each of these categories in more detail.
  • This chart shows in detail the different types of measure we typically find in each quadrant, and the measures that go with the three basic categories of performance measurement:
    How much did we do?
    How well did we do it?
    Is anyone better off?
    In the upper left, How much did we do? Quadrant, we typically count customers and activities.
    In the upper right, How well did we do it? quadrant, there is a set of common measures that apply to many different programs, and a set of activity-specific measures. For each activity in the upper left, there are one or more measures that tell how well that particular activity was performed, usually having to do with timeliness or correctness.
    In the lower quadrants, Is anyone better off? We usually have # and % pairs of the same measure. And these measures usually have to do with one of these four dimensions of better-offness: Skills/knowledge, Attitude, Behavior and Circumstance.
    For each of these measures, we can use point in time measures or point to point improvement measures.
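  • The # and % pairing in the lower quadrants can be sketched in a couple of lines of Python. The program and client figures below are hypothetical, chosen only to show the arithmetic.

```python
# Compute the #/% pair for a lower-quadrant "Is anyone better off?" measure:
# the number and percent of clients who are better off on some dimension
# (e.g. job-training participants who got living wage jobs).
# Figures are illustrative only.

def number_percent_pair(better_off, total):
    pct = 100.0 * better_off / total if total else 0.0
    return better_off, round(pct, 1)

# Hypothetical program: 60 participants, 45 got living wage jobs.
num, pct = number_percent_pair(better_off=45, total=60)
```

Reporting both numbers matters: the # tells you the scale of the contribution, while the % tells you how well the program did with the caseload it had.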
  • Examples of measures for the Jim Casey Youth Opportunity Initiative
  • This section presents instructions and reporting formats for the two turn the curve exercises, one for population accountability and one for performance accountability, along with other exercises.
  • Here is the thinking process in the form of 7 plain language common sense questions.
    These questions should be asked and answered periodically (monthly, quarterly) at every intersection of supervision from the top to the bottom of the organization.
    This is the most important take-away page for performance measurement. It can be used immediately without any further training.
  • We’ve talked about two different kinds of accountability. Now let’s look at how they fit together.
  • The relationship is a “contribution” relationship, not a cause and effect relationship. What we do for our customers is our contribution to what we and our partners are trying to do across the community.
    Often the only difference between a population indicator and a lower right (Is anyone better off?) performance measure is the difference in scale between a client population and the total population.
    This allows us to think about how our work is aligned with what we are trying to accomplish across the community. It allows us to think about how the measures we use at the program level relate to those at the population level. And it allows us to avoid the trap of holding programs responsible for population level change. We can hold programs responsible for what they do for their clients. We must hold ourselves, across the community, responsible for the well-being of the population.
  • As the service population approaches the total population, the measures of client better-offness begin to play a double role. They are used as management measures for the service system and they also can be used as an indicator proxy for the well-being of the whole population.
    High school graduation rate is a good example. It is used by the school system as a measure of performance. And it is also used as an indicator by community collaboratives for the result “All Children Succeed in School.”
  • Budgets of the future will have two parts:
    Volume I will present a picture of quality of life results and indicators and what is being done by government and its partners to improve.
    Volume II will present performance measures for departments and programs.
    Both will use the Baseline, Story, What Works and Strategy format shown above.
    This is a fractal… the same pattern at every level of magnification.
  • Management, budgeting and strategic planning should be thought of as a single system. Of the three, the most important is actually management. If you use data on a day to day basis to manage your programs, then once a year you spin out the budget and once every two or five years you spin out a strategic plan. There is great power in having all three processes fully aligned and RBA provides a method for doing that.
  • The crosswalk form allows the results accountability framework to be crosswalked to any other framework. We can see how different frameworks label the ideas in the left column. This will allow us to see different approaches to the work as convergent and not divergent.
    Page 20 in the workbook is filled out to show the crosswalk to a typical Logic Model or Theory of Change. Note that logic models work up the page, while results accountability works down the page. These are complementary approaches. Logic models can be useful tools in testing the “what works” ideas. The natural question to ask is “Why do you think this would work?” and logic models, or theory of change models, force you to articulate the causality chain from actions back to results. While logic models are useful tools, they are not recommended as the overarching framework because they start in the wrong place, with means, not ends… and because many versions of logic models are very paper intensive and take a long time to complete.
  • Funders need to begin to think about their grantmaking agenda in terms of their role in a larger strategy to improve results.
    Indicators tell whether the overall strategy is working.
    Performance measures are used in two ways: first to gauge the performance of grantees and second to manage the grantmaking organization itself.
  • Community collaborative groups and programs and agencies could use this as the agenda for their meetings. The meeting would be aligned with the thinking process that produced the action plan. Each iteration of this thinking process will improve the action plan.
  • There are four kinds of progress which can be reported. The first is at the population level, and 2, 3 and 4 are at the program, agency or service system level.
  • Implementation of results and performance accountability should proceed along three parallel tracks.
  • This section presents instructions and reporting formats for the two turn the curve exercises, one for population accountability and one for performance accountability, along with other exercises.
  • Participant instructions for the population turn the curve exercise.
  • Group report out format for the population turn the curve exercise.
  • Participant instructions for the performance turn the curve exercise.
  • Group report out format for the performance turn the curve exercise.
  • Transcript

    • 1. Results Accountability The Fiscal Policy Studies Institute Santa Fe, New Mexico Websites raguide.org resultsaccountability.com Book - DVD Orders amazon.com resultsleadership.org
    • 2. SIMPLE COMMON SENSE PLAIN LANGUAGE MINIMUM PAPER USEFUL
    • 3. Results Accountability is made up of two parts: Performance Accountability about the well-being of CLIENT POPULATIONS For Programs – Agencies – and Service Systems Population Accountability about the well-being of WHOLE POPULATIONS For Communities – Cities – Counties – States - Nations
    • 4. Results Accountability COMMON LANGUAGE COMMON SENSE COMMON GROUND
    • 5. THE LANGUAGE TRAP Too many terms. Too few definitions. Too little discipline. Terms: Benchmark, Target, Indicator, Goal, Result, Objective, Outcome, Measure. Modifiers: Measurable, Core, Urgent, Qualitative, Priority, Programmatic, Targeted, Performance, Incremental, Strategic, Systemic. Lewis Carroll Center for Language Disorders: "Measurable urgent systemic indicators", "Core qualitative strategic objectives", "Your made up jargon here"
    • 6. DEFINITIONS. RESULT or OUTCOME (population): A condition of well-being for children, adults, families or communities. Examples: Children born healthy, Children ready for school, Safe communities, Clean Environment, Prosperous Economy. INDICATOR or BENCHMARK (population): A measure which helps quantify the achievement of a result. Examples: rate of low-birthweight babies, percent ready at K entry, crime rate, air quality index, unemployment rate. PERFORMANCE MEASURE: A measure of how well a program, agency or service system is working. Three types: 1. How much did we do? 2. How well did we do it? 3. Is anyone better off? (= Customer Results)
    • 7. From Ends to Means / From Talk to Action. Population: RESULT or OUTCOME, INDICATOR or BENCHMARK. Performance: PERFORMANCE MEASURE. Customer result = Ends; Service delivery = Means.
    • 8. 1. Safe Community 2. Crime Rate 3. Average Police Dept response time 4. An educated workforce 5. Adult literacy rate 6. People have living wage jobs and income 7. % of people with living wage jobs and income 8. % of participants in job training who get living wage jobs IS IT A RESULT, INDICATOR OR PERFORMANCE MEASURE? RESULT INDICATOR PERF. MEASURE RESULT INDICATOR RESULT INDICATOR PERF. MEASURE
    • 9. Results – Indicators – Performance Measures in Amharic, Cambodian, Laotian, Somali, Spanish, Tigrigna, Vietnamese
    • 10. Translation Guide/Rosetta Stone Not the Language Police Ideas 1. A condition of well-being for children, adults, families & communities 2. 3. etc. Group 1 Group 2 Group 3 etc. RESULT OUTCOME GOAL TRANSLATION Back to the Idea
    • 11. POPULATION ACCOUNTABILITY Fiscal Policy Studies Institute Santa Fe, New Mexico www.resultsaccountability.com www.raguide.org For Whole Populations in a Geographic Area
    • 12. Community Outcomes for Christchurch, NZ 1. A Safe City 2. A City of Inclusive and Diverse Communities 3. A City of People who Value and Protect the Natural Environment 4. A Well-Governed City 5. A Prosperous City 6. A Healthy City 7. A City for Recreation, Fun and Creativity 8. City of Lifelong Learning 9. An Attractive and Well-Designed City
    • 13. Results for Children, Families and Communities A Working List ● Healthy Births ● Healthy Children and Adults ● Children Ready for School ● Children Succeeding in School ● Young People Staying Out of Trouble ● Stable Families ● Families with Adequate Income ● Safe and Supportive Communities
    • 14. Every Child Matters – Children Act Outcomes for Children and Young People Being Healthy: enjoying good physical and mental health and living a healthy lifestyle. Staying Safe: being protected from harm and neglect and growing up able to look after themselves. Enjoying and Achieving: getting the most out of life and developing broad skills for adulthood. Making a Positive Contribution: to the community and to society and not engaging in anti-social or offending behaviour. Economic Well-being: overcoming socio-economic disadvantages to achieve their full potential in life.
    • 15. Georgia Policy Council for Children and Families RESULTS ● Healthy Children ● Children Ready for School ● Children Succeeding in School ● Strong Families ● Self Sufficient Families
    • 16. Georgia Lehigh Valley, PA Dayton, OH Santa Cruz, CA REPORT CARDS
    • 17. Country: New Zealand. Neighborhood: Kruidenbuurt, Tilburg, Netherlands. City: Portsmouth, UK
    • 18. Placer County, California OUTCOMES for CHILDREN SAFE HEALTHY AT HOME IN SCHOOL OUT OF TROUBLE
    • 19. MEANS not ENDS 1. COLLABORATION 2. SYSTEMS REFORM 3. SERVICE INTEGRATION 4. DEVOLUTION 5. FUNDING POOLS: means To Improving Results, not ends In Themselves
    • 20. Leaking Roof (Results thinking in everyday life) Experience: Measure: Inches of Water Story behind the baseline (causes): Partners: What Works: Action Plan: Not OK → Fixed: Turning the Curve
    • 21. Criteria for Choosing Indicators as Primary vs. Secondary Measures Communication Power Proxy Power Data Power Does the indicator communicate to a broad range of audiences? Does the indicator say something of central importance about the result? Does the indicator bring along the data HERD? Quality data available on a timely basis.
    • 22. Choosing Indicators Worksheet Outcome or Result: Safe Community. Candidate Indicators (Measure 1 through Measure 8), each rated H / M / L on Communication Power, Proxy Power and Data Power. Measures rated L on Data Power go to the Data Development Agenda.
    • 23. Three Part Indicator List for each Result Part 1: Primary Indicators ● 3 to 5 “Headline” Indicators ● What this result “means” to the community ● Meets the Public Square Test Part 2: Secondary Indicators ● Everything else that’s any good (Nothing is wasted.) ● Used later in the Story behind the Curve Part 3: Data Development Agenda ● New data ● Data in need of repair (quality, timeliness, etc.)
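The sorting rule behind these slides can be sketched in code. This is an illustrative sketch only, not part of the RBA materials: the H/M/L scale and the three criteria come from the slides, while the function name, the sample data, and the exact selection rule (high on all three criteria = headline) are assumptions made for the example.

```python
# Sketch: sort candidate indicators into the three-part list using
# H/M/L ratings on Communication Power, Proxy Power and Data Power.
# The rating scale is from the slides; everything else is illustrative.

def three_part_list(candidates):
    """candidates: dict of name -> (communication, proxy, data), each 'H'/'M'/'L'."""
    primary, secondary, data_development = [], [], []
    for name, (comm, proxy, data) in candidates.items():
        if data == 'L':
            # Good idea, but the data must be created or repaired first.
            data_development.append(name)
        elif comm == 'H' and proxy == 'H':
            # Communicates broadly AND says something central: headline material.
            primary.append(name)
        else:
            # Everything else that's any good -- nothing is wasted.
            secondary.append(name)
    return primary[:5], secondary, data_development  # 3 to 5 headline indicators

primary, secondary, dda = three_part_list({
    'Crime rate':             ('H', 'H', 'H'),
    'Average response time':  ('M', 'L', 'H'),
    'Residents feeling safe': ('H', 'H', 'L'),
})
# 'Crime rate' becomes a headline indicator; 'Residents feeling safe'
# goes to the Data Development Agenda until quality data exists.
```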
    • 24. The Matter of Baselines Baselines have two parts: history and forecast. History / Forecast. Turning the Curve vs. Point to Point. OK?
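"Turning the curve" means doing better than the forecast, not just better than last year. That distinction can be sketched numerically. Illustrative only: the straight-line extrapolation and all numbers here are invented for the example, and a real baseline forecast would be more careful than this.

```python
# Sketch: forecast the baseline by straight-line extrapolation of history,
# then ask whether later actuals beat the forecast ("turning the curve").

def forecast(history, periods):
    """Extend the history's average per-period change forward."""
    slope = (history[-1] - history[0]) / (len(history) - 1)
    return [history[-1] + slope * (i + 1) for i in range(periods)]

history = [14.5, 15.2, 16.1, 16.8]   # indicator worsening (rising)
expected = forecast(history, 3)      # approx [17.57, 18.33, 19.1]
actual = [16.5, 15.9, 15.0]

# Curve turned if each actual is better (here: lower) than the forecast,
# even though the first actual is still worse than some earlier years.
turned = all(a < e for a, e in zip(actual, expected))
```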
    • 25. The Cost of Bad Results The costs of remediating problems after they occur. Invest in prevention to reduce or avoid out-year costs. Investment Track. Cost $300 billion. Revenue. Convergence of Cost & Revenue
    • 26. 2008 The Business Case for Investment in Prevention United States 1970 to 2010
    • 27. MADD
    • 28. Rebound
    • 29. NEET: Not in Education, Employment or Training. Newcastle, UK. NEET rate (%), 1998–2007: 14.5, 14.5, 16.8, 14.5, 17.0, 15.0, 11.9, 10.6, 9.5, 9.3. Source: Connexions Tyne and Wear, UK. Revised 9 Nov 2007
    • 30. “If I include you, you will be my partner. If I exclude you, you will be my judge.” - Rosell
    • 31. Performance Accountability For Programs, Agencies and Service Systems Fiscal Policy Studies Institute Santa Fe, New Mexico www.resultsaccountability.com www.raguide.org
    • 32. Results Accountability is made up of two parts: Performance Accountability about the well-being of CLIENT POPULATIONS For Programs – Agencies – and Service Systems Population Accountability about the well-being of WHOLE POPULATIONS For Communities – Cities – Counties – States - Nations
    • 33. “All performance measures that have ever existed for any program in the history of the universe involve answering two sets of interlocking questions.”
    • 34. How Much did we do? ( # ) How Well did we do it? ( % ) Quantity Quality Performance Measures
    • 35. Effort How hard did we try? Effect Is anyone better off? Performance Measures
    • 36. Effort Effect How Much How Well Performance Measures
    • 37. Performance Measures How much service did we deliver? How well did we deliver it? How much change / effect did we produce? What quality of change / effect did we produce? Quantity Quality Effort Effect Input Output
    • 38. How much did we do? Education How well did we do it? Is anyone better off? Quantity Quality Effort Effect Number of students Student-teacher ratio Number of high school graduates Percent of high school graduates
    • 39. How much did we do? Education How well did we do it? Is anyone better off? Quantity Quality Effort Effect Number of students Student-teacher ratio Percent of 9th graders who graduate on time and enter college or employment after graduation Number of 9th graders who graduate on time and enter college or employment after graduation
    • 40. How much did we do? Pediatric Practice How well did we do it? Is anyone better off? Number of patients treated Percent of patients treated in less than 1 hour Quantity Quality Effort Effect # children fully immunized (in the practice) % children fully immunized (in the practice)
    • 41. How much did we do? Drug/Alcohol Treatment Program How well did we do it? Is anyone better off? Number of persons treated Percent of staff with training/certification Number of clients off of alcohol & drugs - at exit - 12 months after exit Percent of clients off of alcohol & drugs - at exit - 12 months after exit Quantity Quality Effort Effect
    • 42. How much did we do? Fire Department How well did we do it? Is anyone better off? Number of responses Response Time Quantity Quality Effort Effect # of fires kept to room of origin % of fires kept to room of origin
    • 43. How much did we do? General Motors How well did we do it? Is anyone better off? # of production hrs # tons of steel Employees per vehicle produced # of cars sold $ Amount of Profit $ Car value after 2 years Quantity Quality Effort Effect Source: USA Today 9/28/98 % Market share Profit per share % Car value after 2 years
    • 44. How much did we do? Not All Performance Measures Are Created Equal How well did we do it? Is anyone better off? Least Important Quantity Quality Effort Effect Most Important Least Most Also Very Important
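Since every measure's quadrant follows mechanically from the two interlocking questions (effort vs. effect, quantity vs. quality), the sorting can be sketched in a few lines. The quadrant labels and examples come from the slides; the function itself is an invented illustration.

```python
# Sketch: place a performance measure in its quadrant from two yes/no
# questions: does it measure effect (is anyone better off?), and is it
# a percentage/quality measure rather than a count/quantity?

def quadrant(is_effect, is_quality):
    row = "Lower" if is_effect else "Upper"   # effort on top, effect below
    col = "Right" if is_quality else "Left"   # quantity left, quality right
    return f"{row} {col}"

quadrant(False, False)  # '# clients served'         -> 'Upper Left'
quadrant(False, True)   # 'unit cost, staff ratios'  -> 'Upper Right'
quadrant(True,  False)  # '# who got jobs'           -> 'Lower Left'
quadrant(True,  True)   # '% who got jobs'           -> 'Lower Right' (most important)
```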
    • 45. RBA Categories Account for All Performance Measures (in the history of the universe) Quantity Quality Efficiency, Admin overhead, Unit cost Staffing ratios, Staff turnover Staff morale, Access, Waiting time, Waiting lists, Worker safety Customer Satisfaction (quality service delivery & customer benefit) Cost / Benefit ratio Return on investment Client results or client outcomes Effectiveness Value added Productivity Benefit value Product Output Impact Process Input Effort Effect Cost TQM Effectiveness Efficiency
    • 46. Quantity Quality Efficiency, Admin overhead, Unit cost Staffing ratios, Staff turnover Staff morale, Access, Waiting time, Waiting lists, Worker safety Customer Satisfaction (quality service delivery & customer benefit) Cost / Benefit ratio Return on investment Client results or client outcomes Effectiveness Value added Productivity Benefit value Process Input Effort Effect Cost TQM Product Output Impact RBA Categories Account for All Performance Measures (in the history of the universe)
    • 47. Quantity Quality Efficiency, Admin overhead, Unit cost Staffing ratios, Staff turnover Staff morale, Access, Waiting time, Waiting lists, Worker safety Customer Satisfaction (quality service delivery & customer benefit) Cost / Benefit ratio Return on investment Client results or client outcomes Effectiveness Value added Productivity Benefit value Process Input Effort Effect Cost TQM 1. Did we treat you well? 2. Did we help you with your problems? * Product Output Impact RBA Categories Account for All Performance Measures (in the history of the universe) * World’s simplest complete customer satisfaction survey
    • 48. Not All Performance Measures Are Created Equal Quantity Quality Efficiency, Admin overhead, Unit cost Staffing ratios, Staff turnover Staff morale, Access, Waiting time, Waiting lists, Worker safety Customer Satisfaction (quality service delivery & customer benefit) Cost / Benefit ratio Return on investment Client results or client outcomes Effectiveness Value added Productivity Benefit value Process Input Effort Effect Cost TQM Product Output Impact
    • 49. How much did we do? The Matter of Control How well did we do it? Is anyone better off? Quantity Quality Effort Effect Least Control PARTNERSHIPS Most Control
    • 50. The Matter of Use 1. The first purpose of performance measurement is to improve performance. 2. Avoid the performance measurement equals punishment trap. ● Create a healthy organizational environment. ● Start small. ● Build bottom-up and top-down simultaneously.
    • 51. 1. To Ourselves Can we do better than our own history? 2. To Others When it is a fair apples/apples comparison. 3. To Standards When we know what good performance is. Comparing Performance
    • 52. 2. To Others When it is a fair apples/apples comparison. 3. To Standards When we know what good performance is. Comparing Performance 1. To Ourselves First Can we do better than our own history? Using a Baseline CHART ON THE WALL
    • 53. Comparing Performance 1. To Ourselves First Can we do better than our own history? 2. To Others When it is a fair apples/apples comparison. Reward? Punish? 3. To Standards When we know what good performance is.
    • 54. 1. To Ourselves First Can we do better than our own history? 2. To Others When it is a fair apples/apples comparison. Comparing Performance 3. To Standards When we know what good performance is.
    • 55. The Matter of Standards Quantity Effort Effect 1. Quality of Effort Standards are sometimes WELL ESTABLISHED ● Child care staffing ratios ● Application processing time ● Handicap accessibility ● Child abuse response time BUT 2. Quality of Effect Standards are almost always EXPERIMENTAL ● Hospital recovery rates ● Employment placement and retention rates ● Recidivism rates AND 3. Both require a LEVEL PLAYING FIELD and an ESTABLISHED RECORD of what good performance is.
    • 56. Advanced Baseline Display Your Baseline, Comparison Baseline, Goal (line), Target or Standard ● Create targets only when they are: FAIR & USEFUL Avoid publicly declaring targets by year if possible. Instead: Count anything better than baseline as progress.
    • 57. How much did we do? Program Performance Measures How well did we do it? Is anyone better off? Quantity Quality Effort Effect # %
    • 58. All Data have two Incarnations: a Lay Definition and a Technical Definition. Lay: HS Graduation Rate. Technical (alternatives): % enrolled June 1 who graduate June 15; % enrolled Sept 30 who graduate June 15; % enrolled 9th grade who graduate in 12th grade
    • 59. Separating the Wheat from the Chaff Types of Measures Found in Each Quadrant How much did we do? # Clients/customers served # Activities (by type of activity) How well did we do it? % Common measures, e.g. client staff ratio, workload ratio, staff turnover rate, staff morale, % staff fully trained, % clients seen in their own language, worker safety, unit cost % Activity-specific measures, e.g. % timely, % clients completing activity, % correct and complete, % meeting standard Is anyone better off? # % Skills / Knowledge (e.g. parenting skills) # % Attitude / Opinion (e.g. toward drugs) # % Behavior (e.g. school attendance) # % Circumstance (e.g. working, in stable housing) Point in Time vs. Point to Point Improvement
    • 60. How much did we do? Choosing Headline Measures and the Data Development Agenda How well did we do it? Is anyone better off? Quantity Quality Effort Effect # Measure 1 ---------------------------- # Measure 2 ---------------------------- # Measure 3 ---------------------------- # Measure 4 ---------------------------- # Measure 5 ---------------------------- # Measure 6 ---------------------------- # Measure 7 ---------------------------- #1 Headline #2 Headline #3 Headline #1 DDA #2 DDA #3 DDA % Measure 8 ---------------------------- % Measure 9 ----------------------------- % Measure 10 --------------------------- % Measure 11 --------------------------- % Measure 12 --------------------------- % Measure 13 --------------------------- % Measure 14 --------------------------- # Measure 15 ---------------------------- # Measure 16 ---------------------------- # Measure 17 ---------------------------- # Measure 18 ---------------------------- # Measure 19 ---------------------------- # Measure 20 ---------------------------- # Measure 21 ---------------------------- % Measure 15 ---------------------------- % Measure 16 ---------------------------- % Measure 17 ---------------------------- % Measure 18 ---------------------------- % Measure 19 ---------------------------- % Measure 20 ---------------------------- % Measure 21 ----------------------------
    • 61. Select 3 to 5 Performance Measures at each level of the organization. Pick the 3 – 5 most important of the 9 – 15 measures or create composites. “Get over it!” Be disciplined about what’s most important. Don’t get distracted.
    • 63. How Population & Performance Accountability FIT TOGETHER
    • 64. Contribution relationship Alignment of measures Appropriate responsibility THE LINKAGE Between POPULATION and PERFORMANCE POPULATION ACCOUNTABILITY Healthy Births Rate of low birth-weight babies Stable Families Rate of child abuse and neglect Children Succeeding in School Percent graduating from high school on time CUSTOMER RESULTS # of investigations completed % completed within 24 hrs of report # repeat Abuse/Neglect % repeat Abuse/Neglect PERFORMANCE ACCOUNTABILITY Child Welfare Program POPULATION RESULTS
    • 65. Contribution relationship Alignment of measures Appropriate responsibility THE LINKAGE Between POPULATION and PERFORMANCE POPULATION ACCOUNTABILITY Healthy Births Rate of low birth-weight babies Children Ready for School Percent fully ready per K-entry assessment Self-sufficient Families Percent of parents earning a living wage CUSTOMER RESULTS # persons receiving training Unit cost per person trained # who get living wage jobs % who get living wage jobs PERFORMANCE ACCOUNTABILITY POPULATION RESULTS Job Training Program
    • 66. Every time you make a presentation, use a two-part approach Result: to which you contribute most directly. Indicators: Story: Partners: What would it take?: Your Role: as part of a larger strategy. Population Accountability Program: Performance measures: Story: Partners: Action plan to get better: Performance Accountability Your Role
    • 67. Every time you make a presentation, use a two-part format Result: to which you contribute most directly. Indicators: Story: Partners: What would it take?: Your Role: within the larger strategy. Population Accountability Program: Performance measures: Story: Partners: Action plan to get better: Performance Accountability Your Role
    • 68. Division #1 Program #1
    • 69. Framework Crosswalk Analysis Framework: __________ Population Results (For Population Well-being, Across Communities, Across Systems) 1. Population 2. Results (Outcomes, Goals) 3. Indicators (Benchmarks) Data Development Agenda Report Card 4. Baseline 5. Story behind the baseline Cost of Bad Results Research Agenda Part 1 6. Partners 7. What works Research Agenda Part 2 8. Action Plan (strategy) 9. Funding Plan (budget) Program Performance (For Programs, Agencies and Service Systems) 1. Customers (Clients) 2. Performance measures Customer results Quality of Effort Quantity of Effort Data Development Agenda 3. Baseline 4. Story behind the baseline Research Agenda Part 1 5. Partners 6. What works Agency/program actions Partner's actions Research Agenda Part 2 7. Action Plan (strategy) 8. Funding Plan Example Logic Model: Input Activity Output Outcome Goal
    • 71. Board of Directors Meeting AGENDA 1. New data 2. New story behind the curves 3. New partners 4. New information on what works. 5. New information on financing 6. Changes to action plan and budget 7. Adjourn
    • 72. Different Kinds of Progress 1. Data a. Population indicators Actual turned curves: movement for the better away from the baseline. b. Program performance measures: customer progress and better service: How much did we do? How well did we do it? Is anyone better off? 2. Accomplishments: Positive activities, not included above. 3. Stories behind the statistics that show how individuals are better off.
    • 73. What’s Next? A Basic Action Plan for Results Accountability TRACK 1: POPULATION ACCOUNTABILITY ● Establish results ● Establish indicators, baselines and charts on the wall ● Create an indicators report card ● Set tables (action groups) to turn curves TRACK 2: PERFORMANCE ACCOUNTABILITY ● Performance measures, and charts on the wall for programs, agencies and service systems ● Use 7 Questions supervisor by supervisor and program by program in management, budgeting and strategic planning
    • 74. IN CLOSING
    • 75. “If you do what you always did, you will get what you always got.” Kenneth W. Jenkins President, Yonkers NY NAACP
    • 76. THANK YOU ! WEBSITES: www.raguide.org www.resultsaccountability.com BOOK ORDERS: www.trafford.com www.amazon.com
    • 77. EXERCISES Fiscal Policy Studies Institute Santa Fe, New Mexico www.resultsaccountability.com www.raguide.org
    • 78. Turn the Curve Exercise: Population Well-being 5 min: Starting Points - timekeeper and reporter - geographic area - two hats (yours plus partner’s) 10 min: Baseline - pick a result and a curve to turn - forecast – OK or not OK? 15 min: Story behind the baseline - causes/forces at work - information & research agenda part 1 - causes 15 min: What works? (What would it take?) - what could work to do better? - each partner’s contribution - no-cost / low-cost ideas - information & research agenda part 2 – what works 10 min: Report - convert notes to one page - two pointers to action
    • 79. ONE PAGE Turn the Curve Report: Population Result: _______________ Indicator (Lay Definition) Indicator Baseline Story behind the baseline --------------------------- --------------------------- (List as many as needed) Partners --------------------------- --------------------------- (List as many as needed) Three Best Ideas – What Works 1. --------------------------- 2. --------------------------- 3. --------- No-cost / low-cost 4. --------- Off the Wall Sharp Edges
    • 80. The first step in performance accountability is to DRAW A FENCE Around something that has ORGANIZATIONAL OR FUNCTIONAL IDENTITY The Whole Organization Division A Division B Unit Division C Function Unit 1
    • 81. What Kind of PERFORMANCE MEASURE? ● # of people served → Upper Left ● % participants who got jobs → Lower Right ● staff turnover rate → Upper Right ● # participants who got jobs → Lower Left ● % of children reading at grade level → Lower Right ● cost per unit of service → Upper Right ● # applications processed → Upper Left ● % patients who fully recover → Lower Right
    • 82. Turn the Curve Exercise: Program Performance 5 min: Starting Points - timekeeper and reporter - identify a program to work on - two hats (yours plus partner’s) 10 min: Performance measure baseline - choose 1 measure to work on – from the lower right quadrant - forecast – OK or not OK? 15 min: Story behind the baseline - causes/forces at work - information & research agenda part 1 - causes 15 min: What works? (What would it take?) - what could work to do better? - each partner’s contribution - no-cost / low-cost ideas - information & research agenda part 2 – what works 10 min: Report - convert notes to one page - two pointers to action
    • 83. ONE PAGE Turn the Curve Report: Performance Program: _______________ Performance Measure (Lay definition) Performance Measure Baseline Story behind the baseline --------------------------- --------------------------- (List as many as needed) Partners --------------------------- --------------------------- (List as many as needed) Three Best Ideas – What Works 1. --------------------------- 2. --------------------------- 3. --------- No-cost / low-cost 4. --------- Off the Wall Sharp Edges
