This PPT presents reflections on undertaking cross-government agency evaluations. It highlights some common issues and offers tips on how to manage a cross-agency evaluation. The PPT was given at an ANZEA conference.
Reflections on cross government agency evaluations
1. Better together by building
strong relationships
Reflections on Cross Agency Evaluations
Evaluators
Will Bell (Senior Research Analyst, MBIE) and John Wren PhD (Principal Research Advisor, ACC)
Presentation: Our House, Our Whare, Our Fale: Building Strong Evaluation in Aotearoa
New Zealand.
Aotearoa New Zealand Evaluation Association Conference
Te Papa, 10 July, 2014
2. Rising to the challenge
• 20 years' experience
• 4 key elements
• 30 min
What else would you like to get from today?
3. SELECTING EVALUATION TEAM
OPERATIONAL DELIVERY
MAKING THE CALL: Evaluation rubrics
GOVERNANCE
Managing Cross-agency Evaluation
Traps for the unwary evaluator
4. Do you need a MOU?
Governance
• What is the evaluation being used for?
• Problem ownership
• Leadership – Seniority
• Pain points – No surprises
• Managing external and internal relationships
5. Building a Good Evaluation Team
• Types of Evaluators
• Skill sets
• Knowledge bases
• Seniority / Experience
• Relationship management
6. Making the ‘call’, and sticking to it
EVALUATION RUBRICS
• Problem clarity
• Methodological approach
• Rules for judgments
• Maintaining independence
• Transfer of findings
7. Delivering the message
Operational / Delivery
• Transparency
• Communication styles
• Managing personnel change
• Record-keeping: Forms of agreement
• Seeing the Links
• Managing relationships
8. SELECTING EVALUATION TEAM
Type of Evaluator… Pragmatic idealist?
Philosophical commitment?
Skill sets?
Knowledge bases?
Seniority – Experience?
OPERATIONAL DELIVERY
Transparency!
Communication styles!
Managing Personnel change!
Record-keeping!
MAKING THE CALL: Evaluation rubrics
Problem clarity!
Methodological approach!
Rules for judgments!
Value / Merit!
Independence of evaluation!
Transfer of findings?
GOVERNANCE
Evaluation purpose?
Problem Ownership?
Leadership – Seniority?
Pain points – No surprises!
Managing Cross-agency Evaluation
Traps for the unwary evaluator
9. Summary Reflections
1. Do you need a MOU? – Not necessarily (don't get hung up on form)
– You do need a good project plan
2. How do you build a good evaluation team?
– Identify the information needs, skill sets, experience and seniority, and appoint to cover these
3. How do you deliver the message and make it stick?
– Establish rules for making judgements
– Signal initial thoughts early – no surprises – and ensure sufficient seniority to support your case
– Think about policy and operational implications so you know, and can argue, your case in a range of settings to withstand scrutiny
Editor's Notes
Intro – Both
[John]
Good morning, thank you for making the effort to be here on this stunning Wellington day.
I’m John Wren… (and I’m Will Bell…). Today we’re going to talk about working cross-agency.
[Will]
Can I have a show of hands, how many of you are from Wellington?
How many of you are public servants?
How many of you have worked on a cross-agency project?
You’re in for a treat
[Will]
As many of you know, working on a cross-agency project can be challenging and at times you can feel like you’re stuck in a “tug of war” with your colleagues.
As public servants, and people working with the public sector, we are increasingly expected to work across the public sector to deliver “joined up” advice, so the rest of you can expect that you may be facing these challenges soon.
John and I have been working on cross-agency evaluations for over 20 years – admittedly that’s mostly John – and across a range of agencies and activities. We believe that working cross-agency really can bring about a more robust evaluation process.
Over the next half hour, we will share with you four key elements of successful cross-agency projects. These will make your life easier when working cross-agency. We’ll take a few questions at the end and we acknowledge that there’s a lot of experience in this room so we hope to have a really good discussion, but before we start we’ll make a quick note of any burning questions.
Are there any burning questions out there? Anything you’d like to get from today?
[white board them]
Briefly, what we’ll cover today boils down to four key elements:
Governance – with John
Team – with Will
Rubrics – with Will
Delivery - John
Navigating these can be difficult and there are plenty of traps for the unwary evaluator. But don’t worry, we’ll help you through!
Take it away, John…
[John]
Governance starts with understanding what the mandate for the evaluation is and, most importantly, what the evaluation question or information requirement is.
In our experience there can often be different perspectives or interpretations between agencies, and sometimes Ministers, on what the information requirement is; there have been times when agencies have gone back to Ministers and asked for direction.
Sometimes – often, I would suggest – it is simply smarter to plan to gather information that is capable of addressing a range of information requirements. In doing this, though, it is really important to understand the pain points and sensitivities that may, and often do, exist between agencies and which can become sources of tension in the evaluation.
Sometimes you may be directed not to address a particular topic. Managing such situations calls for seniority, leadership, experience, knowledge, sensitivity and relationship-management skill, and should be allowed for early on in the design and management of an evaluation.
In this regard, our experience and reflections suggest that having the right people at the decision-making table is essential. These are your “keystone persons”: people with the right range of knowledge, experience and skills to oversee the project, including recognising when other people need to be brought to the table.
This brings us to the question of documenting key decisions about the governance of the project. Establishing a MOU is often the de facto approach for some agencies; however, in our experience this approach is not always useful and can itself become a barrier. We would suggest that what is important is not the name of the governance agreement (for example MOU, Letter of Agreement, or Project Plan) but the content. The content, we suggest, should include agreements about problem definition – what information is required, who it is for, and when it is required – as well as what the role of each agency is and what each agency will contribute. Ideally, we think it should include some discussion about how decisions are made, particularly in regard to how evidence is interpreted and presented in the final documents (more on this later).
Evaluation team – Will
It’s important to have the right tool for the job, and you will need different skill sets or types of people at different phases and stages of your evaluation – in the language of Skolits et al. (2009), “Manager, Detective, Designer, Negotiator, Diplomat”, and so on.
There are different types of evaluators, so it’s important to assemble a team that not only has the skills and experience you need, but is also compatible.
In a large organisation it can be hard to know who has the skills you need, but by identifying people who are “linked in” to who’s who and what’s going on around the organisation, you’ll uncover these people pretty quickly. These skills could be data analysis, qualitative interviewing, statistical modelling, etc.
Working cross-agency works best when you acknowledge your shortcomings and work to your organisational strengths. For example, when our organisations work together we can rely on ACC’s expertise with actuarial analysis or other specialist content knowledge.
It’s important that you have the right balance of seniority and experience. You may find yourself charged with evaluating a particularly complex and controversial policy, so it’s important that you have appropriate “heavy hitters” around your table. Sometimes you’ll need a principal evaluator, other times a senior is appropriate. Not only will these people provide mentoring and coaching to other members of the team, but they will help ensure that the evaluation is robust and add credibility to the results. [We’ll touch on this more a bit later].
On the subject of capability building, it’s important when planning your evaluation to think about succession planning to make sure that the evaluation keeps momentum in the event of staff change. You can do this by ensuring adequate documentation and sharing the work to avoid silos on your project.
Relationship management is an essential skill that is often overlooked when putting together a project team. We think it deserves a special mention here as cross-agency work invariably involves building new relationships and leveraging off existing ones.
This sounds pretty easy, eh? But in practice it can break down, so you need someone with the skills and tenacity to make it work: the ability to influence, communicate, build trust, and empathise.
How do you know if you’ve got a good evaluation team?
Evaluation rubrics – Will
As John mentioned earlier, the problem should be clarified up front as part of the project governance.
In our experience, the prescriptiveness of Cabinet requirements for evaluations can vary widely. Sometimes agencies will be required to “review” a particular policy and report back with policy recommendations. Sometimes they may be asked to determine whether a policy is “working as intended”. In any case, the Cabinet minute should be your first port of call for determining your rules for judgments. These should be clear to your governance group and client as part of your “no surprises” approach.
You need to establish and agree to the rules for judgments early on. Eg. “what does success look like?”, “who and how do we make judgment calls?”, “how will we balance contradictory findings?”. It’s not all about the t-statistics or p-values. You need to be able to tell a plausible story with some causality. This will likely mean adopting a mixed-methods approach.
How do you make the call?
As an evaluator, you are going to have to make a call on whether or not the policy or intervention is working. This gets increasingly complicated when multiple agencies’ interests are involved. By maintaining the “independence” of the evaluation team, having senior evaluators with a bit of mana, and adopting a “no surprises” approach by signalling likely findings early on, you can confidently sit down with your clients and discuss all the evidence that led to your conclusions.
To add value, we evaluators often like to give “transferable findings”, “policy implications” or “the so what?”
There isn’t always an appetite for this, and some think it oversteps a boundary; we’ve often seen these findings omitted from a final report. Despite this, we encourage you to do some of this thinking up front, as it will enable you to engage in a more informed debate around your evaluation findings. It may also be useful information to provide to your clients separately or verbally.
Transparency – this is about no surprises, and about having a clear understanding of how decisions are made on the interpretation of the evidence gathered. How are judgements going to be made between different types of evidence? We suggest it is important to keep your eye on “the story” you are telling.
Communication style – this is about recognising that in cross-agency evaluations you are usually dealing with a range of audiences. We suggest adopting a communication style that the audience will hear – not necessarily the one you want to use or are trained for. We think there is an increasing need for mixed communication delivery modes: PPTs, diagrams, graphs, and business-style executive summaries with short sentences and active writing. Ask yourself: of your 100-page ideal report – the ‘evaluation report of record’ – what are the absolutely key things you want or need to say in a 2-to-5-page memo? Do you need a report written for Policy, one for Operations, and another for senior executives and Cabinet? Has the team answered the questions and met the information requirements?
In recent years, we have found it useful to “socialise” our initial thinking early – to signal initial results. This process enables you to demonstrate that the project is progressing, to test out your initial thinking, and, importantly, to manage the pain points that may be emerging.
Seeing the links – this is about anticipating and recognising the policy and operational implications of the evidence and its interpretation, and what additional research information might be available to usefully inform the evaluation. If you are able to see the links, we suggest you are better able to engage in a conversation with the end users about what the results mean, how they might be used, and why you have come to the conclusions you have.
Exploring our pyramid again, we can now put a few points under each element.
Reflection – Both
We’ve posed some big questions today and some potential answers for you. But we’re now interested in hearing what you think.
What have you found useful in your experience?
MAKE SURE WE ADDRESS THE WHITEBOARD QUESTIONS