Ann Oper Res (2016) 236:15–38
DOI 10.1007/s10479-014-1578-6
From evidence-based policy making to policy analytics
Giada De Marchi · Giulia Lucertini · Alexis Tsoukiàs
Published online: 27 March 2014
© Springer Science+Business Media New York 2014
Abstract This paper aims at addressing the problem of what
characterises decision-aiding
for public policy making problem situations. Under such a
perspective it analyses concepts
like “public policy”, “deliberation”, “legitimation”,
“accountability” and shows the need to
expand the concept of rationality which is expected to support
the acceptability of a public
policy. We then analyse the more recent attempt to construct a
rational support for policy
making, the “evidence-based policy making” approach. Despite
the innovation introduced
with this approach, we show that it basically fails to address the
deep reasons why supporting
the design, implementation and assessment of public policies is
such a hard problem. We
finally show that we need to move one step ahead, specialising
decision-aiding to meet the
policy cycle requirements: a need for policy analytics.
Keywords Public policy · Policy cycle · Evidence-based ·
Decision aiding ·
Policy analytics
1 Introduction
Policies are all around us and, directly or indirectly, they
influence many aspects of our life.
Quite often, people ask themselves how such policies have been
conceived, why politicians
have decided to implement that policy, and not another one,
why it has been implemented
in that precise way and with particular resources, how these
have been used and so on.
As decision analysts we are quite often confronted with
“clients”, be they public agencies
or stakeholders, involved in public decision processes to whom
we are expected to provide
useful knowledge for such processes. But what is useful
knowledge in such a context? Some
international agreements, laws and norms, a macroeconomic
plan, some politicians’ interests,
the national statistics, surveys and polls? This paper aims to
discuss “evidence-based policy
making”, a recent attempt to summarise such useful knowledge
as “evidence” which should
G. De Marchi · G. Lucertini · A. Tsoukiàs (B)
LAMSADE-CNRS, Université Paris Dauphine, Paris Cedex 16,
France
e-mail: [email protected]
guide policies. It seeks to provide a critical perspective on it
and then propose a new concept
for supporting policy making: policy analytics.1
We start by considering policies as shapeless objects, modelled
by politics, where a set
of interrelated actions is aimed at achieving a set of multiple
and interrelated goals within a
period of time. In a public policy context the nature of
“decision process” or “policy cycle”
(Lasswell 1956) (we will use them interchangeably) can be
characterised by some relevant
features. First, the policy cycle consists of a set of interrelated
decision processes linked
by goals, resources, areas of interest or involved stakeholders.
Second, in a public context,
once we start considering laws, rights and governance principles
it is difficult to identify the
person(s) who will have the power to decide: who is (are) the
policy maker(s). Third, issues
may be ill defined, goals may be unclear, the stakeholders are
usually many and difficult to
detect. Fourth, actions and policies are interrelated, and so are
their consequences, although
sometimes seemingly very distant and disconnected. Fifth, the
time factor must be introduced.
On one hand, public policies, in order to be effective and solve problems in a comprehensive and organic manner, need a strategic approach, establishing long
term agendas; these might
be in contrast with the short-term agendas policy makers may
have. On the other hand, the
longer the time horizon of a policy, the more we need to
consider different and unforeseeable
risks and uncertainties (Beck 1992).
In the last few years this context has become more complex:
participation and “bottom up”
actions have become frequent and are often required by law. Citizens,
if and when directly affected
by some policy, are (over)informed and become active in the
policy-making process. They
do not wait until some obscure decisions fall from the top down,
and want to know and be
informed about the government decisions and actions. They
want to receive explanations
before accepting decisions. They are not subjects but agents of
democracy and, in this sense,
participatory processes become crucial both to have a
democratic process, and to avoid opposition phenomena such as “not in my backyard”. That is why policy
makers now more than ever
are expected to be accountable and policy making to be “evidence-based” rather than resting on unsupported opinions that are difficult to defend. A transparent relation
between decision makers and
stakeholders becomes fundamental in order to attain and
preserve consensus. Consensus is
a very important resource in every public policy process. Most
actions policy makers or
other relevant stakeholders undertake are guided by consensus
seeking and most resources
committed within a policy cycle become important in relation
to their ability to be transformed into consensus (Dente 2011). Thus, policies become
instruments “to exercise power
and shape the world” (Goodin et al. 2006, p. 3).
In 1997 the concept of evidence-based policy making (EBPM)
was introduced, in a modern form, by the Blair government (Blair 1994). The idea of
creating policies on the basis of
available knowledge and research on the specific topic is not
new and is generally accepted.
However, we need to look more deeply into the concept and its peculiarities in order to better understand exactly what it means. First of all, what is evidence?
According to the Oxford English
Dictionary, evidence means an “available body of facts or
information indicating whether a
belief or proposition is true or valid”. However, this definition
is far from clear and somewhat ambiguous. Why is “evidence” once again highlighted as a
support for deciding about
policies? The idea of using some form of evidence in order to
conceive a policy is not really
new. In which sense is its contribution now different?
EBPM has been defined as the method or the approach that
“helps people make well
informed decisions about policies, programmes and projects by
putting the best available
1 For a detailed discussion about this concept the reader can see
Tsoukiàs et al. (2013). The present paper
conceptually precedes the previously mentioned paper, although it appeared after it.
evidence from research at the heart of policy development and
implementation” (Davies
1999). It is important to point out that the scope of EBPM is to
help and “inform the policy
process, rather than aiming directly to affect the eventual goals
of the policy” (Sutcliffe and
Court 2005). We could say that EBPM essentially consists of
“the integration of experience,
judgement and expertise with the best available external
evidence from systematic research”
(Davies 2004). Evidence should include all data from past
experiences, as well as information
and good practices from literature reviews. This is certainly
important and necessary, but
is it also sufficient to make a good policy? Dente (2011) claims
that in order to understand
and assess a policy, what is really important is the policy
making process which led to
such a policy, rather than the policy itself. Thus, in order to support
such a decision process, we
need information and knowledge considering the policy cycle as
a whole, able to support
accountability requirements.
Davies (1999, 2004) and Gray (1997) claim that the introduction
of EBPM produces a
shift away from opinion-based decision making towards
evidence-based decision making.
This shift is far from easy. On one hand, EBPM seems to be
viewed as an objective method
to decide, distant from political ideology. On the other, this
statement is controversial, and
must be better explained. Despite EBPM trying to base policies
on “facts” and “evidence”
rather than insisting on bureaucracies or on political ideology, it
is important to underline
to the stakeholders (technical and non-technical) that such “facts” and
“evidence” do not provide an
unambiguous guide to decision-making. In fact, we know that
data can be manipulated,
that interpretations are subjective and that good practices are
strictly linked with a specific
framework. In other words, constructing evidence does not end
the analyst’s work or the
need for the stakeholders’ critical intelligence. It is not a way to
delegate decisions, because
values, preferences and decisions should remain a political act.
The aim of this paper is to review the literature about policy
making and evidence-based
policy making (and related issues), highlight the origins,
understand the criticisms and controversies, while looking for a new perspective which we shall
call “policy analytics”. Our
main claims are:
1. The policy making process or “policy cycle” is a long term
decision process characterised
by:
– the specific nature of public policies;
– the requirements of legitimation, accountability and
deliberation;
– the existence of multiple public decision processes within the
same policy cycle.
2. Supporting the policy cycle cannot be reduced to producing
just “evidence” (in terms of
data, knowledge, expertise etc.). The analytics providing
evidence aiming at supporting
general decision making processes are necessary, but not
sufficient in the case of public
policy making. Constructing evidence should be seen as a
specific, purpose-built, type of
decision aiding process, and as such, should be
methodologically well founded.
3. We need a new and richer concept accounting for all decision
aiding activities that aim
at supporting the policy cycle: we call this term “policy
analytics” and we will briefly
introduce some of its main features in this paper.
The paper is organised as follows. We start by outlining the
meaning of some important terms
and concepts (Sect. 2). Next, we briefly review
the EBPM state of the art
(Sect. 3). After that we discuss criticisms involving both the
policy making process and
the introduction of evidence within it (Sect. 4). We will then
introduce and sketch out the
concept of “Policy Analytics” as a new term grouping the
activities and knowledge created
to support policy making throughout the whole policy cycle
(Sect. 5). A concluding section,
including future challenges, ends the paper.
2 Terms and concepts
2.1 Public policy
To start with, it is important to understand that the concept of
public policy (PP) should be
wide and abstract enough in order to adapt itself to various
applications and contexts. For
such reasons, over the past 50 years many definitions of PPs have been coined. They
have different meanings as the authors bring into focus different
aspects such as processes,
stakeholders, objects and decision levels (Anderson 1975; Dente
2011; Dunn 1981; Dye
1972; Hill 1997; Jenkins 1978; Kraft and Furlong 2007). From
this literature we can identify
six main characteristics of PPs:
• the power relations between different stakeholders,
• the different institutional levels,
• the duration over time,
• the use of public resources,
• the act of deciding (including deciding not to decide),
• the impacts of decisions.
However, according to different contexts and goals, different
types of policies may result from combining the above characteristics at different levels: long
term city waste management
is a low institutional level policy with a long time horizon
implying a moderate use of public
resources potentially involving a small number of stakeholders,
while locating a regional
landfill often results in a very conflict-ridden (many
stakeholders with strong commitments)
situation although for a short time, potentially involving several
institutional levels.
First of all we need to understand the term “public”. Intuitively, public is any issue concerning the community, something which in a direct or indirect
way affects all citizens. Then,
in our specific field we shall emphasise that every PP is a
process that implies a set of public
decisions; thus, it is a public decision process. It is developed
over a relatively long period
of time and involves different decisional levels, interacting
according to a set of determined
rules. The process and entailed interactions are developed in
order to solve a problem referring to a public issue, or rather a problem in which resources
and rationality are public. The
concept of “public issue” is not always clear: the issue that the
policy will address is an
object which conveys a meaning. Naming a public policy is the
action of defining such a
meaning and it implies the legitimation of this meaning.
However, every subject affected by
the policy (policy makers, experts, citizens, stakeholders)
makes up his or her own meaning
of the policy, legitimated by its name and definition. Practically
speaking, stakeholders interpret a policy according to their own needs and/or commitment
of resources and accordingly
exercise their legitimation. Seen from a resources point of view,
a public decision is a public
choice and it implies an allocation of public resources. Even no-action in a given field
is considered a policy, because it implies the public choice of
maintaining the same resource
allocation as before. Speaking about public resources,
governments/public agencies have to
make clear how and why they use public resources in
order to tackle specific issues.
The public decision process is required to be accountable to the citizens, despite the
complexity of the entire process. Thus, we need an operational
definition able to summarise
the characteristics introduced:
Definition 2.1 We consider a public policy as a public
agreement, allocating public resources
to a portfolio of actions aiming at achieving a number of
objectives set by the public decision
maker, considered as an organisation. Such an agreement can be
interpreted in many different
ways depending on who is concerned by the policy.
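To fix ideas, Definition 2.1 can be given a schematic form. The notation below is ours, not the authors’, and is only one possible reading of the definition:

\[ PP = \langle D, R, A, O \rangle, \qquad \text{with interpretations } I_s(PP) \text{ for each stakeholder } s, \]

where D is the public decision maker (seen as an organisation), R the public resources being allocated, A the portfolio of actions and O the set of objectives; the stakeholder-dependent interpretations I_s capture the fact that the agreement need not carry a single shared meaning.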
Through this definition of public policy we want to highlight
that a policy has a meaning
for the stakeholders affected by the policy itself and for the
citizens in general, but that
such meaning could be neither shared nor consensual. Possibly
the policy may achieve both
knowledge sharing and consensus about the resource
distribution, but this is a potential
outcome. A policy does not only pursue quantifiable objectives;
it generates a legitimation
space, thus producing inclusion and/or exclusion. A legitimation
space is an abstract space
where the stakeholders reveal (at least partially) their concerns,
preferences, values and goals,
where they commit and look for resources and where they are
able to seek and create
legitimation, namely agreement on decisions and actions,
through relations and discussions
(see Ostrom’s “action arena” Ostrom and Ostrom 2004, 1971,
and Ostanello and Tsoukiàs’
“interaction space” Ostanello and Tsoukiàs 1993). This is a
crucial difference with respect
to generic policies of the type a private business will typically
conceive.
Example 2.1 In the following section we shall introduce a
running example in order to allow
the reader to understand a number of concepts introduced in the
paper. The example is taken
from a real case study on the design of risk reduction measures
for urban and transportation
infrastructure (for more information see Mazri et al. 2012,
2014).
The case concerns the broader issue of designing risk reduction
measures around hazardous industrial plants, namely the ones considered by the so-called “Seveso directives”. In
France the law 699/2003 obliges the “préfet” (the government
representative at local level)
to produce, for each hazardous plant in the territory under his
jurisdiction, a “Technological
Risk Reduction Plan (PPRT in French)” aiming at reducing to
an acceptable level the risks
the population may have to face in case of a major accident
occurring in that plant. Such
measures concern urban planning actions as well as actions
addressing the transportation
infrastructure present within a reasonable distance from the plant.
Examples of such measures
can be establishing areas (around the plant) where houses need
to be demolished or deciding
to divert a railroad.
We consider such PPRT as public policies:
– they affect multiple stakeholders;
– they use and redistribute public resources (land, money,
authority etc.);
– their designs result in de facto, but also de jure, participatory
decision processes;
– there is at least one moment in which the plan is deliberated;
– such plans affect a “public issue”, that is citizens’ safety, a
controversial issue allowing for
multiple interpretations (different stakeholders, such as the
local politicians, the scientific
advisors, the citizens individually etc., will likely have different
interpretations of what
safety means and how risks can be reduced).
2.2 Policy making process
Since the 1950s, policy making has been interpreted as a
process, that is a sequence of interactive stages or phases. Under such a perspective, the policy
making process can be considered
as developing in time and space, merging actions and intentions,
decisions and also a lack of
decision-making, impacting on society and on the political
system itself.
The idea of modelling the policy making process (or cycle) in
terms of stages was first put
forward by Lasswell (1956). He introduced a model of the
process divided into seven stages:
intelligence, promotion, prescription, invocation, application, termination, and appraisal. This
set of stages has been contested and criticised, but the model
itself has been successful as a
framework for subsequently studying policy science and policy
analysis (Jann and Wegrich
2007). During the 1960s and 1970s, a number of different
process typologies were developed,
used to organise and systematise the growing research (for such
typologies see Anderson
1975; Brewer and deLeon 1983; Brewer 1974; Jenkins 1978; May
and Wildavsky 1978). Today
the most conventional chronology used to describe a policy making process is made
up of (Hill 1997; Jann and Wegrich 2007):
• agenda setting,
• policy formulation,
• decision making,
• implementation,
• evaluation.
This kind of policy making process—as presented by Lasswell
and others—has been
designed as a problem-solving model, according to the
rational model of decision-making
developed in organisation theory and public administration
(Jann and Wegrich 2007). Simon
(1947) has pointed out that the real world does not follow such
stages. However, this kind of
process still counts as a major reference (other models used in
public policies: the incremental
model, Dror 1964; Etzioni 1967; Lindblom 1959; the garbage can model, Kingdon 1984; March and Olsen 1976, 1989). This origin of the studies on policy
making, as sequential stages, will
be our basis for interpreting the policy making process as a
proper decision-making process.
We need to emphasise that a policy cycle goes beyond the
public organisations concerned:
it is not a process internal to them. A public decision process
involves multiple and different
organisations and/or individuals. Thus, there is no single
rationality to simply follow. This
process is characterised by several rationalities, which may
conflict, and the process generates
a legitimation space in which these rationalities interact
(possibly following what Habermas
1984 calls communicative rationality).
Example 2.2 We continue with the example about the PPRT.
Establishing such a plan implies
entering a policy cycle:
– agenda setting: typically the political authority (préfet)
establishes the issues which need
to be considered as critical as far as the citizens’ safety is concerned (areas to be analysed,
infrastructures to be considered, types of accidents to be taken
into account, budget allo-
cated and timing of the process, just to mention some of the
issues which are typically
introduced in the cycle);
– policy formulation: different stakeholders, such as field
experts, process experts, local
focus groups, representatives of other actors and social groups,
need to establish what is going to be used, how and why, in order to assess risks,
mitigation measures and their
effectiveness etc.;
– decision making: a number of meetings, discussion forums,
consultation actions, concerning both the whole set of involved stakeholders as well as
each group individually
are scheduled, aiming at addressing the issues introduced into
the agenda and formulated
as elements composing the policy (characterising different areas
through the level of the
incumbent risk, measuring the potential impact of each single
action potentially involved
in the policy etc.); the process ends when the plan is officially
deliberated by the “préfet”;
– implementation: the plan is enforced through legal actions,
specific projects are scheduled
(moving households considered under extreme risk, building
protections, implementing
risk management procedures etc.), actions communicating the
plan contents are performed
etc.;
– evaluation: at a given time the results of the plan are assessed,
possibly against benchmarks
and/or targets, besides observing unforeseen consequences.
N.B. We do not wish to suggest that these activities necessarily
occur in a linear fashion.
2.3 Public deliberation, legitimation and accountability
A feature which helps to distinguish a public decision process
from other decision processes is
“public deliberation”, namely the legislative act. The outcome
of the decision is a public issue,
and the public authority must communicate it: the citizens must
know about it. The publication
of an official document which defines and explains the policy is
the act that produces the
wanted (and unwanted) outcomes and reactions to the decision.
The intermediate and final
act explain the motivations and the causes of that policy. Public
deliberation is also expected
to establish accountability of the public decision process and of
the public authority itself
and, to some extent, their legitimation to the general public or
stakeholders.
Linked with deliberation we find two concepts, recently used in
the field of PP: “Legitimation” and “Accountability”. Both are important in order to
understand the relations established
between stakeholders. They are fundamental in the creation,
evolution and maintenance of a
conceptual social space that we call a legitimation space, where
stakeholders interact creating
relationships, goods, services, but above all develop and define
their rationality (Habermas
1984).
In relation with legitimation, we refer to authorisation and
consensus. The need for legitimisation stems from the relative dimension of power, which
makes it frail in terms of collective
acknowledgement. Political power does not get identification
and legitimation from a transcendent order; thus, the recognition of its value, and so the
collective acknowledgement,
becomes an immanent issue: legitimation is obtained through
authority and consensus. On
one hand, the legitimacy of the public action and the relative
decisions comes from the law, which gives the elected the authority to decide and
manage public resources. On
the other hand, consensus is obtained when acting pretends to be
rational: more generally
speaking when the decision process itself pretends to be
“rational” (whatever rationality
model we may consider). Actually the fact that different
rationalities co-exist within a policy cycle allows us to shift our attention to the pretensions of
rationality: policy decisions and
actions pretend to be rational and look for logical frames
allowing them to be perceived as
such.
However, before proceeding, we need to make a brief
clarification. When speaking about
rationality, in this context, we are not referring to rational
planning, a concept used in urban
and regional planning (Faludi 1973; Friedman 1987) (criticised in Alexander 2000; Hostovsky 2006). For us, a policy maker has a different role from a
“rational planner”. A rational planner
adopts a precise model of rationality. Policy makers are able to
legitimate their decisions and
actions because at any time they adopt the most suitable
rationality model. Rational planners
feel legitimate because they act “rationally” (according to some
model). Policy makers feel
rational because they act according to different legitimate
models of rationality.
How could rationality be legitimising in the field of PP? The
concept of rationality is neither straightforward nor trivial. In research, many forms of rationality have been identified (the
idea of multiple rationalities was introduced first by Weber
1922), all of them aiming at some
validity claims (Habermas 1984). We distinguish three different
approaches in establishing
such validity:
• economic rationality (Hammond 1997; Harsanyi 1955;
Robbins 1932); policies should
maximise the utility of a society seen as the aggregation of the consumers within it (see the sketch after this list);
• bounded rationality (Simon 1955); policies should satisfy
some subjectively defined decision maker’s requirements for action;
• communicative rationality (Habermas 1984); policies should
result as consensual artifacts
through four validity dimensions:
– truth;
– scientific support;
– normative rightness;
– sincerity.
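To illustrate the first of these approaches, here is a sketch (ours, not the authors’ formalisation) of economic rationality as the maximisation of an aggregate utility over feasible policies, in the spirit of Harsanyi (1955):

\[ \max_{p \in P} \; W(p) = \sum_{i=1}^{n} u_i(p), \]

where P is the set of feasible policies, u_i the utility of the i-th consumer and W the social welfare function. Note that both the individual utilities and the additive aggregation are modelling choices, themselves open to the legitimation issues discussed above.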
When talking about accountability we refer to openness and
transparency: public administrations can, should and sometimes must show and justify the reasons that support a decision, a policy or any allocation of public resources. The 2008
EVALSED guide of the
European Union (Regional Policy Inforegio 2011) defined it as:
Obligation, for the actors participating in the introduction or
implementation of a public
intervention, to provide political authorities and the general
public with information
and explanations on the expected and actual results of an
intervention, with regard to
the sound use of public resources. From a democratic
perspective, accountability is
an important dimension of evaluation. Public authorities are
progressively increasing
their requirements for transparency vis-à-vis tax payers, as to
the sound use of funds
they manage. In this spirit, evaluation should help to explain
where public money was
spent, what effects it produced and how the spending was
justified. Those benefiting
from this type of evaluation are political authorities and
ultimately citizens.
We want to highlight that the EU definition emphasises the
concept of accountability as
an unavoidable dimension of evaluation (from a democratic
point of view). This statement
leads us back to the concept of legitimation and lets us
understand that these two concepts
are complementary (see also the discussion emphasising the
difference between “new public
management” and “new public governance” in Almquist et al.
2012). The definition also
emphasises that evaluation should aim at helping the
accountability of policy makers for the
use of public resources, the effects of the implemented policies,
and the reasons for choosing a
specific alternative. In order to improve legitimation (and thus
the acceptance of policies) it is
important that the stakeholders feel some ownership over the
result (understand it, understand
the consequences, realise that different points of view have
been analysed and compared).
Ownership is associated with justified knowledge and beliefs:
stakeholders and policy makers
need to be able to explain and justify why and what they do to
their communities.
Example 2.3 We conclude the presentation of the PPRT case
by explaining the three concepts
introduced above.
1. Public Deliberation The process constructing the PPRT is de facto participatory (besides
participation being enforced by law). However, such
participation is officially recognised
through the establishment of a committee (named CLIC: Comité Local d’Information et
Concertation) which is expected to act as the place where all
issues are discussed. Each
single stakeholder, the CLIC itself, as well as the decision
maker (the préfet) perform
a number of public acts: releasing a document containing risk
analysis, publishing the
minutes of a CLIC meeting, releasing an intermediate or the
final version of the PPRT.
All such actions are public deliberations which characterise the
policy cycle and allow
the policy making process to be observed.
2. Legitimation The issue here is not just to reach an agreement
among the stakeholders about
the plan to be deliberated. First of all it is crucial that the whole
process is considered as
legitimate (relevant stakeholders have been invited, the agenda
has been discussed and
agreed, unforeseen issues raised by some of them have been
seriously considered etc.).
Controversial conclusions might be nevertheless accepted or
approved (despite opposition) if the process is considered legitimate. Then the result
itself should be legitimating
in the sense that the public resources redistribution resulting
from adopting a specific
PPRT needs to address the expectations of the involved
stakeholders (although perhaps
not exactly as these were considering them). To give a more
precise example: if the local
transportation agency was expecting to improve safety for the
users of their services
travelling through the area considered as “risky”, there should
be resources allocated somewhere taking this issue into account; perhaps not the ones the
agency was originally
considering (building a concrete protection), but a reasonable
alternative (installing a
warning system). In the PPRT case it has been shown that being
able to demonstrate
to each stakeholder that the actions included in the plan address
the concerns they have
carried within the discussions allows the stakeholders to feel
that the plan “takes care
of them”: in other terms that the use, reallocation and
distribution of public resources is
done for “public purposes” and not for satisfying opposing
private ones.
3. Accountability Accountability is related to the process
through which the involved stakeholders become able to understand, justify and explain the
policy cycle and its outcomes.
In the PPRT case it is important for the stakeholders to be able
to show how certain
information (a risk level), combined with a value structure
(fairness in cost distribution)
leads to a certain decision (covering part of a railway, paid by
the hazard plant generating
the risk). It is equally important to show that the way some
stakeholders perceive risks
has been integrated in the procedures where risk is quantified.
Last but not least, it is
important to communicate about the potential risks and their
consequences in a way that
different stakeholders understand at least one common core
message (for instance: “we
work for your safety”).
3 Evidence-based policy making: state of the art
3.1 Premises
“Evidence-based policy making” (EBPM) is a “new” topic that
pervades the last decade of
social sciences’ debates. However, there is nothing new in the
idea of using “evidence” to
support decisions. Aristotle (1990) claims that decisions should be informed by knowledge. Later, this way of thinking and acting created several
philosophical movements around
“positivism” (see Comte 1853, 1865; De Saint-Simon 1976;
Giddens 1974; Hanfling 1981; Zammito 2004; up to “constructivism”, Watzlawick et al. 1967).
The interest towards the
use of knowledge as rational and logical reasoning grew until the second half of the 20th century, when rationality was intended both as knowledge of cause and effect (Dryzek 2006; Sanderson 2002) and as the ability to rank all known available alternatives
(Ostrom and Ostrom 1971)
(see also Bouyssou et al. 2000, 2006). In the beginning, the
concept of rational decision was
central both for the economic dimension of problem solving and
the scientific management
of enterprises (Tsoukiàs 2008).
In this context it was considered possible to use the scientific method to
improve policy making.
An important figure in this field was Harold Lasswell,
committed to the idea of a “policy
science of democracy” (Lasswell 1948, 1965; Lerner and Lasswell 1951). In 1963 Buchanan and
Tullock organised a conference in which the shared interest was
the application of “economic
reasoning” (commonly considered as a good example of
rationality) to collective, political
or social decision-making. In 1967, the term “public choice”
was adopted to distinguish this
area (Ostrom and Ostrom 1971). The public choice approach is
related to the theoretical
tradition in public administration, formulated by Wilson (1887),
later criticised by Herbert
Simon. Wilson’s major thesis was that “the principles of good
administration are much the
same in any system of government” (Ostrom and Ostrom 1971,
p. 203), and “Efficiency
is attained by perfection in hierarchical ordering of a
professionally trained public service”
(Ostrom and Ostrom 1971, p. 204). Wilson gave also a strong
economic conceptualisation of
the term efficiency. He said “the utmost possible efficiency and
at the least possible cost of
either money or of energy” (Wilson 1887, p. 197 cited in
Ostrom and Ostrom 1971, p. 204, see
also Ostrom and Ostrom 1971; White 1926; Wilson 1887).
Under such a perspective “policy
problems were technical questions, resolvable by the systematic
application of technical
expertise” (Goodin et al. 2006, p. 4). However, already in the 1940s, Simon (1947, 1955,
1959, 1962, 1964, 1969, 1979) strongly criticised the theory
implicit in the traditional study
of public administration, because there are no reasons to believe
in one “omniscient and
benevolent despot”.
In spite of such criticism, as Dryzek (2006, p. 191) said:
these dreams may be long dead, and positivism long rejected
even by philosophers of
natural science, but the terms “positivist” and “post-positivist”
still animate disputes
in policy fields. And the idea that policy analysis is about
control of cause and effect
lives on in optimising techniques drawn from welfare economics
and elsewhere, and
policy evaluation that seeks only to identify the causal impact
of policies.
Dryzek’s quote suggests that “positivism” and “post-positivism” are still alive, and indeed
we claim that the promotion of EBPM has been a return to such
approaches.
3.2 Evolution
3.2.1 From medicine to social science
Evidence-based policy making was born from the roots of
evidence-based medicine (EBM,
Dowie 1996; Sackett et al. 1996) and evidence-based practice
(EBP, Melnyk and Fineout-Overholt 2005; Mitchell 1999). Indeed, it is easy to track such
roots in the EBPM logic and in its way of understanding problems and solutions. EBM and
EBP are based on the simple
concept of finding the best solution which integrates past
experience into the problem-solving
process. The practice of EBM needs to integrate individual
clinical expertise with the best
available external critical evidence from systematic research, in
consultation with the patient,
in order to understand which treatment suits the patient best. In
this sense, we can say (with
Solesbury 2001) that EBM and EBP have both an educational
and a clinical function. In other
words, this kind of evidence is based on a regular assessment
through a defined protocol of
the evidence coming from all the research. In order to respond
to this need of EBM for
systematic up-to-date review, the Cochrane Collaboration was initiated in 1993 to deal with the collection of all such information.
Subsequently, given the good results obtained in medicine using
such an approach, politicians were interested in using the same scientific method to
support public decisions and
legitimise policy making. Given the success of the Cochrane
Collaboration in the production
of a “gold standard”, the Campbell Collaboration was
established in 2000, which aims at
systematically reviewing social science in the fields of
education, crime, justice and social
welfare.
3.2.2 EBPM in the UK
In 1994, the Labour party termed itself “New Labour”,
announcing a new era: “New
Labour” was expected to be a party of ideas and ideals but not
of outdated ideology. “What
counts is what works”. The objectives were radical. The means
would be “modern” (Blair
1994). In this first announcement it was possible to recognise
the same roots and philosophy
pervading EBM and EBP. Moreover, in 1997 when the Labour
Party won the general elections
they decided to adopt a new style of policy making. In order to
organise and promote it, they
published the Modernising Government White Paper (Cabinet
Office, London 1999), in
which they argue that:
government must be willing constantly to re-evaluate what it is
doing so as to produce
policies that really deal with problems; that are forward-looking
and shaped by the evi-
dence rather than a response to short-term pressures; that tackle
causes not symptoms;
that are measured by results rather than activity; that are
flexible and innovative rather
than closed and bureaucratic; and that promote compliance
rather than avoidance or
fraud. To meet people’s rising expectations, policy making must
also be a process of
continuous learning and improvement. (p.15)
better focus on policies that will deliver long-term goals. (p.16)
Government should regard policy making as a continuous,
learning process (...) We
must make more use of pilot schemes to encourage innovations
and test whether they
work. (p.17)
encourage innovation and share good practice (p.37)
In this document they describe the goals of the new government in changing the approach to
public policy. This change implied the evolution of the
evidence-based method and logic. We
can consider the Government White Paper as the Manifesto of
United Kingdom’s EBPM,
where EBPM plays the same role as EBM, that is, to bring accountability to the field of policy.
Such accountability is promoted by two main forms of evidence
(Sanderson 2002):
• the first one refers to the goals and then to the effectiveness of
the work of the government;
• the second one refers to the actual results and,
consequently, the knowledge on how
well policy works under different circumstances.
In practice, policy processes have been viewed as learning
processes that have to be
studied, analysed and monitored in order to obtain new evidence
for building future policies.
The expectation was a shift in the goals of the policy making process:
from a short term policy
founded on ideology and non-scientific knowledge to a long term
policy founded on identified
causes of the social problems being faced. Under such a
perspective, any components of
the policy process based on non scientific arguments are
considered a deviation from the
“truth/reality” of the problems. In fact, David Blunkett, in his
speech in 2000 (Blunkett
2000), emphasised that:
This Government has given a clear commitment that we will be
guided not by dogma
but by an open-minded approach to understanding what works
and why. This is central
to our agenda for modernising government: using information
and knowledge much
more effectively and creatively at the heart of policy making
and policy delivery.
“What works and why” became the UK slogan for EBPM
promotion. The following
government put emphasis on EBPM, although with some
differences. The shift was from
policy learning to policy delivery, and thus the need to move
away from experimentation towards the awareness that what matters most is hard quantitative data.
In fact, in recent years EBPM
has evolved from giving attention to any kind of scientific analysis to a strong focus on quantitative and economic analysis (Cabinet, Performance
and Innovation Unit, London
2001; HM Treasury, London 1997).
3.2.3 Other experiences
After being developed in the UK, EBPM expanded its influence to other English-speaking countries, mainly the USA and Australia. In the USA, the most
representative event was the foundation
in 2001 of the US Coalition for Evidence Based Policy that aims
at increasing government
effectiveness through the use of rigorous evidence about what
works. Evidence is again
consciously borrowed from medicine, with the explicit goal of
replicating the effectiveness
that produced many advances in human health in the field of social policies.
Evidence-based policy making was seen as an instrument of
rationality that would let society
avoid wasting resources pursuing expensive but ineffective
social policies. Evidence is
thus a resource-rationing tool (Marston and Watts 2003) in the
sense that it indicates the right
way to address a social problem, making the country more
efficient: focussing the spending
only on satisfactory policies.
In Australia there is no formal coalition and no explicit formal
willingness to apply EBPM,
but the language of evidence has spread to many fields of public
policy: we can see examples
in health and family services, community services, education
and immigration. In these
fields we can find sentences referring to evidence that imply
that EBPM is being actively
promoted in a specific way: “research helps to depoliticise
educational reform” (Department
of Education, Training and Youth 2000, p. 190) or, as Mark
Latham, then leader of the Federal
Parliamentary Australian Labor Party, put it in his speech on
welfare reform (Latham 2001,
p. 1):
My conclusion is that we should forget about the grand theories
of sociology and the
ideologies of the old politics and pursue an evidence based
approach to welfare reform
Here, evidence is considered as a solution far away from
political ideology and thus as an
apolitical solution; Smith and Kulynych (2002, p. 163) state that
“efficiency becomes the
primary political value, replacing discussion of justice and
interest”.
Is it true? Does the use of evidence mean avoiding ideology? Is
efficiency improved using
evidence? In the next section we will discuss whether indeed
the use of “evidence” can
effectively replace “ideology” and “values”.
4 Criticism of EBPM
Within the EBPM debate, authors cast doubt on whether
introducing evidence in the policy
making process is actually innovative. If previously policy
making could be described as a
“swamp” (Schön 1979) characterised by complexity, uncertainty
and ignorance, then EBPM
aims to move onto firmer ground in which sound evidence,
rather than political ideology
or prejudice, could drive policy. The question is whether this
confidence in the power of
evidence is really a step forward, as EBPM could appear as a
return to the old-time trust in
instrumental rationality. In fact Parson (2002, p. 44) states that:
EBPM must be understood as a project focused on enhancing
the techniques of managing and controlling the policy making process as opposed to
either improving the
capacities of social science to influence the practices of
democracy.
Sanderson (2002, p. 1) argues that: “the resurgence of evidence-based policy making might
be seen as a reaffirmation of the modernist project, the enduring
legacy of the Enlightenment,
involving the improvement of the world through the application
of reason”.
Actually, in the UK experience of EBPM, the focus was on effectiveness,
efficiency and value for money.
This experience is characterised by a managerial emphasis
(Trinder 2000, p. 19). EBPM, in
its effort to implement accountability, is linked with an
instrumentalist way of managerial
reforms that have infiltrated public administration practices in
many western democracies
over the past three decades (see Almquist et al. 2012). Despite
opposite claims, these managerial reforms can be assimilated to the same technocratic
logic, concerned with procedural
competence rather than substantive output (Marston and Watts
2003).
In the following section we introduce the main issues for which
EBPM has been criticised:
the existence of multiple evidences; the multiplicity of factors
influencing policy making;
and the contingent character of evidence.
4.1 Multiple evidences
There are many typologies of evidence: the distinction most
used is between hard/objective
and soft/subjective. The first one includes primary quantitative
data collected by researchers
from experiments, secondary quantitative social and
epidemiological data collected by
government agencies, clinical trials and interview or
questionnaire-based social surveys.
Other sources of evidence, typically devalued as “soft”
(Marston and Watts 2003, p.
151), are photographs, literary texts, official files,
autobiographical material like diaries
and letters, the files of a newspaper and ethnographic and
participant observer accounts.
Davies defines a scheme (Davies 2004, p. 15) in which he
shows that there are seven
kinds of evidence that originate from scientific research: impact
evidence, implementation evidence, descriptive analytical evidence, public attitudes and understanding, statistical modelling, economic evidence, and ethical evidence.
Moreover, Davies states that
in policy making “privileging any type of research evidence or
research methodology,
is generally inappropriate” (Davies 2004, p. 11). Thus, a
balance between the different
kinds of research methodology and general competence of the
full range of research meth-
ods is required. Otherwise, Sanderson (2002) states the need for
developing just impact
evidence (as defined by Davies) in order to build policies
through long term impact
evaluation.
Due to different opinions in the debate, in order to avoid
misunderstandings in practice,
the UK Cabinet Office clarified the meaning of evidence in the
White Paper on Modernising
Government (Cabinet Office, London 1999) defining it as:
expert knowledge; published research; existing research;
stakeholder consultations;
previous policy evaluations; the Internet; outcomes from
consultations; costings of
policy options; output from economic and statistical modelling
From this definition it seems that this conception of evidence
allows for both “conventional and unconventional scientific methods”. However, a
hierarchy of these is implicitly
established in practice. This kind of practice is far from being
neutral or objective. Indeed, the
selection of the “more appropriate” evidence or the one with
“greater weight” is necessarily
a limitation of what counts as valid knowledge. The building of
a hierarchy of knowledge
means that we can consider some forms of knowledge to be
more related to reality/truth. Even if this could be considered correct in some cases, it is never neutral. The same thing happens
each time we choose a theory or a method rather than another.
Every theory is based on some
hypotheses or interpretations of the complex reality and they are
not all-encompassing. In
choosing what counts as valid knowledge for policies,
policy makers implicitly state their
interpretation of reality. For instance, observing the
recommended and adopted evidences
in the UK experience we can deduce that the interpretation of
reality could be understood
as post-positivist, in the sense that the stress is on the cause-effect relationship. This claim
is supported by the importance given to the concept of
effectiveness and efficiency. These
two concepts often became, in UK policy evaluation, the first, if not the only, qualification needed for a policy to be implemented (HM Treasury, London
2003).
If the discussion up to this point highlighted the problem that
around us there are multiple
evidences (statistics, surveys, polls etc.) which are difficult to
choose from and/or priori-
tise, we need to focus on another aspect often neglected or
underestimated when talking
about evidence. The problem is the multiple interpretations that
the same “evidence” may
carry. This is all the more important, since evidence is expected
to be used within a policy
cycle where multiple stakeholders with multiple concerns are
involved, who are naturally
going to interpret the evidence differently. To make things more
complicated, such multiple
interpretations can be influenced by how evidence is technically
produced. We present two
examples to better make our point.
Example 4.1 (Air Quality) Consider the case of Air Quality (see
Bouyssou et al. 2000).
“Evidence” about Air Quality (in France) is expected to be
provided by the ATMO index.
This index takes into account four pollutants, measured (it does
not matter here how) on a
scale from 0 to 10 and chooses the maximum among them (the
worst). This way to construct
the index reflects the approximate knowledge we have about the
impact on health of these four
pollutants: any one of them is supposed to be equally
unhealthy. However, consider three
consecutive measurements: the first one at time t1 is the current situation (at a certain location),
while t2 and t3 refer to the situation observed after two policies
have been implemented, using
the same budget. The case is summarised in Table 1.
Should we consider the ATMO index to be “evidence”, then the policy leading to situation t3 should be considered as better than the policy leading to situation t2. Indeed for
the ATMO index, the quality of the air did not improve from t1
to t2, while it did from t1
to t3. Obviously this is counter-intuitive! One could claim that
the disaggregate information
should be used as “evidence”, but then are we sure that each of
the four measures does not
suffer from the same type of problem the ATMO index presents? We
shall not discuss here what
the most appropriate way to measure Air Quality should be.
What we want to emphasise
is that the ATMO index can be used as “evidence” about
whether an observed situation is
“healthy”, but cannot be used as “evidence” about the
effectiveness of Air Quality improvement policies. In other terms, this index (like any other) allows
multiple interpretations which
are more or less suitable to the type of assessment we are
interested in performing. Such
interpretations are strongly related, among others, to how the index has been (technically) established.

Table 1  Three different measurements of Air Quality

Pollutant   CO2   SO2   O3   Dust
t1            5     5    8     8
t2            3     3    8     2
t3            7     7    7     7
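To make the aggregation rule concrete, the following minimal sketch (ours; it reduces the ATMO index to the max rule described above, a simplification of the real methodology) reproduces the comparison of Table 1 in Python:

    # Simplified ATMO-style index: each pollutant sub-index is on a
    # 0-10 scale and the aggregate is the worst (maximum) sub-index.
    def atmo(pollutants):
        return max(pollutants.values())

    # The three measurements of Table 1.
    t1 = {"CO2": 5, "SO2": 5, "O3": 8, "Dust": 8}
    t2 = {"CO2": 3, "SO2": 3, "O3": 8, "Dust": 2}
    t3 = {"CO2": 7, "SO2": 7, "O3": 7, "Dust": 7}

    print(atmo(t1), atmo(t2), atmo(t3))  # prints: 8 8 7

The max rule discards every improvement on the non-worst pollutants: t2 improves three of the four sub-indices yet keeps the same aggregate value as t1, while t3, which worsens two of them, is the only situation the index rewards.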
Example 4.2 (Statistics about Poverty) Some may claim that raw
statistics reporting “facts”
should be considered as “evidence”. But then consider the
following fact: 95% of rural
households in country XXXX do not have tap water available.
What should be inferred from
that? Perhaps that connecting rural households to fresh water
distribution is a national priority,
requiring an appropriate policy (and corresponding
investments).
Surprisingly, if we ask the household owners what they think
about that, we could discover
that this is not a priority for them. They might claim that they
do not see the problem. They
fetch water from the nearby water pools. Ok! Here is the
problem. Typically, water is fetched
by women, while the household owners are men. Certainly, men
do not see the problem.
What happens if we ask the women? Surprisingly, the women
also claim that there is no
problem! Indeed, a more thorough investigation reveals that
going for water is one of the rare
occasions they have to get out of the home! Even though fetching
water is a hard task, it pays
because it allows women to have some social life.
The story, which is a simplified version of a real one, tells us
that raw statistics do not
automatically reveal any truth. Facts need to be interpreted in
order to be used for any decision process and such interpretations are related to subjective
values, constraints, customs,
history or social norms, among others. The example tells us that
“evidence” does not exist
independently from the decision process for which it is expected
to be used. Although “facts”
exist, choosing the “facts that matter” is a subjective process
and interpreting these “facts”
is another subjective process.
Summarising both examples, we can claim that looking for
evidence while considering
how to solve a problem is certainly a sound attitude to have and
certainly preferable to a purely
intuitive approach. However, contrary to the dominant idea that
“evidence” should guide
policy making, it seems that it is the policy that should guide us
in looking for appropriate
“evidence”. Actually we should consider questions of the type:
• Who needs this evidence?
• Why is that (s)he needs this evidence (what is the problem)?
• What is the purpose (how the evidence is going to be used)?
• Who else is affected by such evidence and how?
• What resources do we commit and what do we expect?
It turns out that such questions are practically the same ones we
need to answer when trying
to model a decision aiding process; see Tsoukiàs (2007). Under
such a perspective we can
still consider that we can follow a scientific approach in aiding
policy makers involved in a
policy cycle, but without denying subjectivity, political
priorities, values or culture, placing
them, instead, at the centre of the methodology to be used.
The problem in policy making is not whether there is enough
relevant information, but
how to consider, construct and interpret it: “the danger is not
that one uses no evidence at all,
but that one uses simply the most readily available” (Perri 6 2002).
4.2 Policy making as a result of many factors
“Evidence” is not the only determinant of policy making, but it is just one of several factors that
influence policy makers in choosing and determining policies. It
is true that EBPM represents an evolution with respect to the traditional approach that largely considers power, people and politics as the only policy making factors (Parson 2002).
However, it is clear that we have
to overcome the “naïve” concept of Evidence-Based Policy
Making, where research replaces
policy, and experts/technicians replace the politicians. Policies
are complex “objects” and
the policy making process is influenced by several relevant
factors. Davies (2004) indicated
the following ones:
• Experience, Expertise and Judgement
Policy making implies several stakeholders each carrying
different types of knowledge
such as grounded experience (of local groups, citizens,
economic actors), expertise (of
technical staff, scientists, experts) and judgements (public
opinion, elected bodies, committees). Such knowledge is expected to be integrated in the
policy making process (Nutley
et al. 2003). This could play a significant role when the existing
information is imperfect
or non-existent (Grimshaw et al. 2003).
• Resources
Establishing a policy mobilises material and immaterial
resources (knowledge, authority,
capital, land, etc.) and results the allocation of resources aimed
at implementing a plan
of actions. Such resources are bounded (and scarce). The result
is a quest for efficiency
both as far as the policy making process and its outcomes are
concerned. This “economic”
aspect of the policy making process is perhaps the most studied
in terms of supporting
methodologies and practices (Cabinet Office, London 2001,
2003; HM Treasury, London
2003; ODPM, Office of the Deputy Prime Minister, London
2000).
• Values
Values are the essence of policy making. They induce
preferences, priorities, judgements
and justify actions. They have several different origins:
ideology, culture, religion, beliefs,
knowledge, discussion etc. It is unlikely that any policy making process can be legitimated without making reference to some set of values.
However, it should be noted that
values evolve over time in unexpected directions (consider the
cases of the value of the
environment in the last 50 years, the value of women’s rights in the last 150 years or the value of individual freedom in the last 250 years).
• Habit and Tradition
Political institutions have their own organisational inertia. The
policy making process is
characterised by procedures and patterns often rooted in culture
and history, but nevertheless constraining the potential outcomes. Often such
constraints appear in the form of fundamental laws (such as constitutions), but
equally likely they can appear
as socially constructed legitimation processes and outcomes.
• Lobbyists, Pressure Groups and Consultants
Any policy making process mobilises pressure groups, informal
or organised lobbyists as
well as the opinion of experts. Such stakeholders are not always
visible and have a less
systematic influence. However, they play a key role in the
participation process, allowing
specific concerns, stakes and interests to find their way into the discussion.
• Pragmatics and Contingencies
Policy making, agendas and decisions are influenced by
unanticipated contingencies and
“emergency” procedures which do not necessarily fit with rational policy making. Policies
are expected to take into account long term uncertainties as well
as the aspirations of
future generations. This can be in contradiction with a
contingent, short term view of
policy making (Phillips Inquiry, London 2001; Royal Society,
London 2002).
The above list of factors, which are pragmatically considered
when conceiving or evaluat-
ing a policy, shows that “evidence” needs to be understood in
terms of knowledge produced
within a decision aiding process and not as objective
information revealing the truth.
4.3 Evidence as contingent knowledge
Young et al. (2002) identify five models in which the relation
between research and policy
can be shaped and defined, five ways through which knowledge
inputs are managed through
the policy cycle:
• the knowledge-driven model,
• the problem-solving model,
• the interactive model,
• the political/tactical model,
• the enlightenment model.
These models are used to understand how evidence is thought to shape or inform policy,
and to explore the assumptions underlying evidence-based policy making. The first two models
are the extreme forms; they differ in the direction of influence
of the relationship: in the first
(knowledge-driven model) research leads policy in a sort of
scientific inevitability, whereas in
the second (problem-solving model) research priorities follow
policy issues. In the interactive
model there is no position of influence, and the relationship is
characterised as mutual, subtle
and complex. The political/tactical model sees research
priorities as settled by the political
agenda; studies are used to support a political position. In the
enlightenment model, however,
the benefits of research are indirect because they contribute to
the comprehension of the
context in which the policies will act.
These five models are steps in a ladder between the power of
authority and the power of
expertise. “Emphasising the role of power and authority at the
expense of knowledge and
expertise in public affairs seems cynical; emphasising the latter
at the expense of the former
seems naïve” (Solesbury 2001, p. 9). In the experience of the
United Kingdom, we cannot
recognise any of these as a dominant model (Wells 2007, p. 25).
Sanderson (2002, p. 5) defines
a set of variables that are implied in the definition of a model:
“the nature of knowledge and
evidence; the way in which social systems and policies work;
the ways in which evaluation
can provide the evidence needed; the basis upon which
evaluation is applied in improving
policy and practice”. Under such a perspective, the Labour government's announcement of
a new era in which policy would be shaped by evidence, implying that “the era of
ideologically driven politics is over” (Nutley et al. 2003, p. 3),
is controversial, to say the least. It is neither neutral nor uncontested; evidence is a
fundamentally ambiguous term.
The way through which EBPM has been perceived and practiced
reveals an idea of policy
making based on a “cause-effect” principle. To simplify, social
outcomes are seen as the result
of how certain mechanisms work within a certain social context:
once we know the mecha-
nisms and the context, we can foresee the consequences. This
approach has been criticised
by constructivists, for whom the “knowledge of the social world
is socially constructed and
culturally and historically contingent” (Sanderson 2002, p. 6).
Sanderson (2002) points out
that research does not have the role of producing objectivity, or
solutions for policy makers,
concluding that constructivism needs to be reconciled with
“practical requirements”.
5 Discussion
Let us summarise our claims.
• Policies have a twofold impact:
– they deliberate an allocation of resources aimed at pursuing
some objectives (not
always measurable though);
– they generate a legitimation space, thus producing inclusion (or exclusion), inviting
stakeholders to enter (or leave) it.
• Policy making is a long term decision making process with
specific characteristics:
– it entails participation “de facto” (due to the legitimation
associated with any policy);
– there are moments, at least one, of public deliberation;
– it is expected to be accountable, not only for the involved
stakeholders, but also for
the citizens in general;
– it is guided by the search for legitimation, both for the policy
itself and the policy
makers.
• Policy making should be viewed as a “policy cycle”, from the
perception of a problem,
to the design of policies, their legitimation, their
implementation, their monitoring, their
assessment etc. Under such a perspective, a policy cycle:
– requires knowledge aimed at supporting the processes within
it;
– produces knowledge used both within the cycle and beyond it.
• Within a policy cycle several decision problems arise, such as: Which aspects of the problem
should be considered as more important? What information is
relevant and should be used?
Who are the principal stakeholders? Who else is affected by the
situation and possible
policies? Which resources are allocated, where, how and when?
What matters in terms of
potential consequences?
Decisions (of any type) result from combining factual
information with subjective values,
opinions and likelihoods. For this reason, decisions are
synonymous with responsibility.
Under such a perspective, there is nothing like an “objective
decision”. Decisions are made
by somebody, or a group of people, and reflect his/her/their
standpoint within a decision
process. The consequence is that there will never exist an
“objectively defined policy”.
Policies will always reflect what subjectively matters for those involved in the policy cycle.
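A small worked example may make this concrete. The following sketch (our illustration; the policies, criteria, scores and weights are all invented and do not come from the paper) shows how the same factual performance table, combined with two different value systems, legitimately yields two different “best” policies:

```python
# Minimal illustration (not from the paper): the same factual
# "evidence" combined with different subjective weights yields
# different "best" policies.

# Factual performance of three hypothetical policies on two criteria
# (all names and numbers are invented for illustration).
policies = {
    "A": {"cost_saving": 0.9, "equity": 0.2},
    "B": {"cost_saving": 0.5, "equity": 0.6},
    "C": {"cost_saving": 0.1, "equity": 0.9},
}

def best_policy(weights):
    """Rank policies by a weighted sum of criteria scores."""
    score = lambda p: sum(weights[c] * v for c, v in policies[p].items())
    return max(policies, key=score)

# Two stakeholders, two value systems, one set of facts.
treasury = {"cost_saving": 0.8, "equity": 0.2}
advocacy = {"cost_saving": 0.2, "equity": 0.8}

print(best_policy(treasury))  # -> "A"
print(best_policy(advocacy))  # -> "C"
```

The facts never change between the two calls; only the weights do, which is exactly why no policy choice can claim to be “objectively” derived from evidence alone.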
• What could be considered as a legitimated source for values,
opinions and likelihoods
to be considered by the policy makers in presence of multiple
stakeholders and multiple
scenarios? The market? A referendum? A focus group? A public
debate? A poll? Whatever
we adopt we should remember that:
– it is a subjective choice to privilege any source of knowledge;
– there exist different forms and levels of participation (see
Daniell et al. 2010), and these
do not necessarily result in improving the efficiency of the decision process (sometimes
more participation may result in less efficiency);
– the validity of any legitimation claim is a social construction,
resulting from argumen-
tation about facts, norms, values, sincerity and relevance.
Our survey about the EBPM movement highlights a number of
issues:
– there has been a legitimate demand for allowing scientific
knowledge, facts, statistics and
other information sources to acquire a status within the policy
cycle;
– despite declarations to the contrary, there has been a clear trend towards considering such “evidence”
as the driving force in the process of designing policies, thus
claiming for such evidence
a status of “objective knowledge”;
– such a trend contradicts the nature of the policy cycle both
from a substantial point of view
(knowledge is not objective, it is functional to some purpose)
and from a procedural point
of view (there exist many evidences with different possible
interpretations, arbitrarily used
by the stakeholders within the policy cycle);
– arguing about policies needs legitimate knowledge; it also
produces knowledge which in
turn needs to become legitimated.
The above discussion leads us to consider the problem of how
knowledge is produced
(constructed) in order to support decision making. The
construction of knowledge (or
evidence) in order to design a business policy has already been addressed, establishing
what is today called “Business Analytics”.2 Business analytics
has been initially devel-
oped mainly for the private sector (Davenport et al. 2010;
Davenport and Harris 2007),
although applications in other areas, for example health
analytics and learning analyt-
ics are also growing (see Buckingham Shum 2012; Fitzsimmons
2010). Seen from a
very pragmatic point of view, “analytics” is an umbrella term
under which many dif-
ferent methods and approaches converge. These include
statistics, data mining, business
intelligence, knowledge engineering and extraction, decision
support systems and, to a
larger extent, operational research and decision analysis. The
key idea consists in devel-
oping methods through which it is possible to obtain useful
information and knowledge
for some purpose, this typically being conducting a decision
process in some business
application.
The distinctive feature in developing “analytics” has been to
merge different techniques
and methods in order to optimise both the learning dimension as
well as its applicability in
real decision processes. In recent years “analytics” has been
associated with the term “big
data” in order to take into account the availability of large data
bases (and knowledge bases
as well) possibly in open access (Open Data organisations are
now becoming increasingly
available). Such data come in very heterogeneous forms and a
key challenge has been to be
able to merge such different sources, in addition to solving the
hard algorithmic problems
presented by the huge volume of data available.
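As a minimal sketch of the merging challenge just described (all sources, field names and figures below are invented for illustration), consider two small open data sets about the same municipalities that use different delimiters, different key names and only partially overlapping coverage:

```python
# A minimal sketch (assumed example, not from the paper) of merging
# heterogeneous open-data sources about the same municipalities.
import csv
import io

# Two toy "open data" files with different formats and key names.
air = io.StringIO("INSEE_code;pm25\n75056;14.2\n13055;16.8\n")
health = io.StringIO("code,asthma_rate\n75056,0.043\n69123,0.038\n")

def load(src, delimiter, key):
    """Read a delimited source into a dict keyed by municipality code."""
    rows = csv.DictReader(src, delimiter=delimiter)
    return {r[key]: r for r in rows}

a = load(air, ";", "INSEE_code")
h = load(health, ",", "code")

# Merge on the shared municipality code; codes present in only one
# source illustrate the typical incompleteness of such merges.
merged = {k: {**a.get(k, {}), **h.get(k, {})} for k in a.keys() | h.keys()}
for code, rec in sorted(merged.items()):
    print(code, rec.get("pm25", "?"), rec.get("asthma_rate", "?"))
```

Even in this toy case the merged records are incomplete, hinting at the harder semantic and algorithmic problems that arise at scale.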
However, the mainstream approach developed in this domain is
based on two restrictive
hypotheses. The first is that the learning process is basically
“data driven” with little (if any)
attention paid to the values which may also drive learning. This
is with regard both to “what”
matters and also to “why” it matters, potentially incorporating
considerations of the extent to
which different stakeholder perspectives are valued or trusted.
The second is that, in order to
guarantee the efficiency of the learning process, seen from an algorithmic point of view as a
process of pattern recognition, it is necessary to use “learning
benchmarks” against which it
is possible to measure the accuracy and efficiency of the
algorithms. While this perspective
makes sense for many applications of machine learning, it is
less clear how it can be useful
in cases where learning concerns values, preferences,
likelihoods, beliefs and other subjec-
tively established information, which is potentially revisable as
constructive learning takes
place.
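The contrast between the two hypotheses can be illustrated with a short sketch (ours, on invented data): pattern recognition is scored against an external benchmark, whereas a stakeholder's value model can only be checked for coherence with the judgements elicited so far, judgements that may themselves be revised as learning proceeds:

```python
# Sketch (our illustration, on invented data): benchmark-driven
# learning has a ground truth; learning a stakeholder's value model
# has none, only judgements that may be revised during the process.
from statistics import mean

# Pattern recognition: accuracy against a labelled benchmark.
predictions = [1, 0, 1, 1]
benchmark   = [1, 0, 0, 1]
accuracy = mean(p == b for p, b in zip(predictions, benchmark))

# Value learning: fit a weight to pairwise judgements instead.
# "(cost_gap, equity_gap) -> preferred?" elicited from a stakeholder.
judgements = [((0.4, -0.1), True), ((0.1, -0.5), False)]

def consistent(w, judgements):
    """Share of judgements a weighted value model reproduces."""
    return mean(((w * c + (1 - w) * e) > 0) == pref
                for (c, e), pref in judgements)

# No external benchmark exists: we search for weights that merely
# cohere with the judgements expressed so far.
best_w = max((i / 10 for i in range(11)),
             key=lambda w: consistent(w, judgements))
print(accuracy, best_w)
```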
What does this tell us as decision analysts? We denote with
this term the profession-
als/practitioners who are invited to enter the policy cycle as
“experts” or as “technical staff”
with the explicit or implicit role of producing the decision
support knowledge within the
cycle. We can summarise the type of demand decision analysts
receive, in terms of produc-
ing information and knowledge aiming at aiding the
stakeholders, the policy makers or the
citizens to:
– better understand the stakes and issues at play;
– better understand the potential consequences of potential
actions;
– better foresee potential unexpected/unwanted outcomes;
– better justify, explain, argue about options, decisions and
strategies;
2 This paragraph draws on Tsoukiàs et al. (2013).
– design/construct/conceive new options beyond the existing
ones;
– improve participation, inclusion and, ultimately, democracy.
In doing so decision analysts need to use existing information
(facts, science, grounded
knowledge, best practices etc.), need to constructively model
values, opinions and likelihoods
for the stakeholders and need to do so in a meaningful,
operational and legitimated way. We
denote this set of skills under the term of “policy analytics”
(Tsoukiàs et al. 2013): To support
policy makers in a way that is meaningful (in a sense of being
relevant and adding value to
the process), operational (in a sense of being practically
feasible) and legitimating (in the
sense of ensuring transparency and accountability), decision
analysts need to draw on a wide
range of existing data and knowledge (including factual
information, scientific knowledge,
and expert knowledge in its many forms) and to combine this
with a constructive approach
to surfacing, modelling and understanding the opinions, values
and judgements of the range
of relevant stakeholders. We use the term “Policy Analytics” to
denote the development and
application of such skills, methodologies, methods and
technologies, which aim to support
relevant stakeholders engaged at any stage of a policy cycle,
with the aim of facilitating
meaningful and informative hindsight, insight and foresight.
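As an illustration of the hindsight/insight/foresight triad (the indicator, its numbers and the 2020 redesign are all invented; the paper prescribes no such computation), the same small series can be read in all three temporal directions:

```python
# Illustrative only (invented data): the same indicator series can
# support hindsight, insight and foresight within a policy cycle.
years  = [2018, 2019, 2020, 2021, 2022]
uptake = [0.12, 0.15, 0.14, 0.19, 0.23]   # share of eligible households

# Hindsight: what happened after the (hypothetical) 2020 redesign?
change = uptake[-1] - uptake[2]

# Insight: average year-on-year growth since the redesign.
growth = (uptake[-1] / uptake[2]) ** (1 / 2) - 1

# Foresight: a naive projection, to be confronted with stakeholder
# values and scenarios rather than taken as "objective" evidence.
forecast_2024 = uptake[-1] * (1 + growth) ** 2
print(round(change, 3), round(growth, 3), round(forecast_2024, 3))
```

The point of the sketch is that even the foresight step is only an input to deliberation, not “objective” evidence about the future.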
6 Conclusion
Designing, implementing and assessing public policies is a
major challenge for our societies.
Our paper was guided by a general underlying question: why does aiding the design,
implementation and assessment of public policies differ so much from the decision
aiding skills used when the clients
are business oriented and the problem context does not concern
public issues?
The aim of this paper was twofold. On one hand, we tried to
understand why pub-
lic policies represent a specific challenge for decision aiding.
On the other, we analysed
the so-called “evidence-based policy making” literature since it
represents the most recent
attempt, originating from the clients’ side (the policy makers),
to focus on the relation
between the policy making process and the technical, scientific,
expert, analytical support
that such a process demands. The standpoint of our analysis has
been clearly decision ana-
lytic. We do not underestimate the sociological or political
dimension of policy modelling
support. We have rather focused on the challenges policy
making offers to our discipline and
profession.
The analysis of the so-called policy cycle shows that policy
making is a decision making
process with precise characteristics: a long time horizon; the use, exchange and redistribution of
public resources; a de facto participatory nature; public deliberation; and a drive for
accountability and legitimation. This is related to the specific nature of public policies,
which besides being delib-
which besides being delib-
erations about resource allocation, create a legitimation space
for stakeholders and citizens.
If decision aiding is a process generating knowledge (possibly
in an analytic form, but not
only) to be used in a decision process, then it is clear that the
knowledge required to support
policy making processes needs to address such characteristics
(for instance addressing the
problem of legitimate knowledge or of legitimated
argumentation).
Under such a perspective our paper suggests that the evidence-
based policy making
approach, although originating from a legitimated demand, fails
to address such challenges.
The reason is that evidence does not exist independently of policies and, once identified,
does not “objectively” drive the policy cycle. However, the
demand for using analytic infor-
mation in order to support policy making remains valid, but
needs to be addressed differently.
This new frame, briefly presented in the paper, we call “policy
analytics”.
Acknowledgments This paper has gone through multiple
versions. The evolution of the paper benefitted
from the remarks of many colleagues (mostly during the special
sessions about policy analytics in the EURO
conferences in Vilnius, 2012 and Rome, 2013) among which we
would like to mention Valerie Belton and
Gilberto Montibeller, with whom we wrote a position paper about
policy analytics (originally scheduled to
appear after this one). The comments of three referees as well
as of the guest editors helped us very much in
order to improve the paper. The support of a PEPS/PSL-CNRS
2013 grant is acknowledged by the third author.
When the first version of this paper was written the first author
was with the Department of Architecture at
Alghero, University of Sassari, IT, while the second author was
with the DIMEG, University of Padova, IT.
The support of both is acknowledged.
References
Alexander, E. R. (2000). Rationality revisited: Planning paradigms in a post-postmodernist perspective. Journal of Planning Education and Research, 19, 242–256.
Almquist, R., Grossi, G., van Helden, G. J., & Reichard, C. (2013). Public sector governance and accountability. Critical Perspectives on Accounting, 24, 479–487.
Anderson, J. (1975). Public policy making. New York: Praeger Publishing. 9 editions, last in 2006 with Houghton Mifflin.
Aristotle. (1990). Nicomachean ethics. Oxford: Oxford University Press. Originally published in 350 BC; English edition by I. Bywater.
Beck, U. (1992). Risk society: Towards a new modernity. New
Delhi: Sage. Translated from the German
Risikogesellschaft, published in 1986.
Blair, T. (1994). Labour party manifesto. http://www.labour-party.org.uk/manifestos/1997/1997-labour-manifesto.shtml
Blunkett, D. (2000). Influence or irrelevance: Can social science improve government? Speech to the Economic and Social Research Council.
Bouyssou, D., Marchant, T., Pirlot, M., Perny, P., Tsoukiàs, A.,
& Vincke, P. (2000). Evaluation and decision
models: A critical perspective. Dordrecht: Kluwer.
Bouyssou, D., Marchant, T., Pirlot, M., Tsoukiàs, A., & Vincke, P. (2006). Evaluation and decision models with multiple criteria: Stepping stones for the analyst. Berlin: Springer.
Brewer, G. D. (1974). The policy sciences emerge: To nurture and structure a discipline. Policy Sciences, 5, 239–244.
Brewer, G., & de Leon, P. (1983). The foundations of policy analysis. Brooks/Cole.
Buckingham Shum, S. (2012). Learning analytics. UNESCO
Policy Brief.
Cabinet Office, London. (1999). Modernising Government
White Paper.
Cabinet Office, London. (2001). Regulatory Impact Appraisal.
Cabinet Office, London. (2003). The Magenta Book.
Cabinet, Performance and Innovation Unit, London. (2001). A
discussion paper: Better policy delivery and
design.
Comte, A. (1853). The positive philosophy of Auguste Comte. Chapman. (Reissued by Cambridge University Press, 2009).
Comte, A. (1865). A general view of positivism. Trübner and Co. (Reissued by Cambridge University Press, 2009).
Daniell, K. A., Mazri, C., & Tsoukiàs, A. (2010). Real world
decision-aiding: A case of participatory water
management. In S. French & D. Rios-Insua (Eds.), e-
Democracy: A group decision and negotiation per-
spective (pp. 125–150). Berlin: Springer.
Davenport, T. H., & Harris, J. G. (2007). Competing on analytics: The new science of winning. Boston: Harvard Business School Press.
Davenport, T. H., Harris, J. G., & Morison, R. (2010). Analytics
at work: Smarter decisions, better results.
Boston: Harvard Business Press.
Davies, P. T. (2004). Is evidence-based government possible? Jerry Lee Lecture. http://www.nationalschool.gov.uk/policyhub/downloads/JerryLeeLecture1202041
Davies, P. T. (1999). What is evidence-based education? British
Journal of Educational Studies, 47(2), 108–
121.
De Saint-Simon, C. H. (1976). Political thought of Saint-Simon.
Oxford: Oxford University Press.
Dente, B. (2011). Le decisioni di policy. Bologna: Il Mulino.
(In Italian).
Department of Education, Training ad Youth. (2000). The
impact of educational research. Technical report,
Higher Education Division.
Dowie, J. (1996). Evidence based medicine. Needs to be within
framework of decision making based on
decision analysis. British Medical Journal, 20, 170–171.
Dror, Y. (1964). Muddling through: “science” or inertia? Public
Administration Review, 24, 153–157.
Dryzek, J. S. (2006). Policy analysis as critique. In M. Moran,
M. Rein, & R. E. Goodin (Eds.), The Oxford
handbook of public policy, chapter 9 (pp. 190–203). New York:
Oxford University Press.
Dunn, W. (1981). Public policy analysis. An introduction.
Englewood Cliffs, NJ: Prentice Hall.
Dye, T. (1972). Understanding public policy. Englewood Cliffs,
NJ: Prentice Hall.
Etzioni, A. (1967). Mixed scanning: A third approach to
decision making. Public Administration Review, 27,
387–392.
Faludi, A. (1973). Planning theory. Oxford: Oxford University
Press.
Fitzsimmons, P. (2010). Rapid access to information: The key to
cutting costs in the NHS. British Journal of
Healthcare Management, 16, 448–450.
Friedman, J. (1987). Planning in the public domain: From
knowledge to actions. Princeton, NJ: Princeton
University Press.
Giddens, A. (1974). Positivism and sociology. London:
Heinemann.
Goodin, R. E., Rein, M., & Moran, M. (2006). The public and
its policies. In M. Moran, M. Rein, & R. E.
Goodin (Eds.), The Oxford handbook of public policy, chapter 1
(pp. 3–35). New York: Oxford University
Press.
Gray, J. A. M. (1997). Evidence-based healthcare: How to make
health policy and management decisions.
New York: Churchill Livingstone.
Grimshaw, J. M., Thomas, R. M., MacLennan, G., Fraser, C., & Ramsay, C. R. (2003). Effectiveness and efficiency of guideline dissemination and implementation strategies. Final report. Aberdeen: Health Services Research Unit.
Habermas, J. (1984). The theory of communicative action.
Oxford: Polity Press.
Hammond, P. J. (1997). Rationality in economics. Rivista
Internazionale di Scienze Sociali CV, 247–288.
Hanfling, O. (1981). Logical positivism. Oxford: Blackwell.
Harsanyi, J. C. (1955). Cardinal welfare, individualistic ethics, and interpersonal comparisons of utility. Journal of Political Economy, 63, 309–321.
Hill, M. (1997). The public policy process. Harlow, England:
Pearson Education Limited.
HM Treasury, London (1997). Appraisal and evaluation in central government (The Green Book).
HM Treasury, London (2003). The green book: A guide to appraisal and evaluation.
Hostovsky, C. (2006). The paradox of the rational
comprehensive model of planning. Tales from waste man-
agement planning in Ontario, Canada. Journal of Planning
Education and Research, 25, 382–395.
Jann, W., & Wegrich, K. (2007). Theories of the policy cycle.
In F. Fischer, G. J. Miller, & M. S. Sidney (Eds.),
Handbook of public policy analysis (pp. 43–62). Boca Raton,
FL: CRC Press.
Jenkins, W. (1978). Policy analysis: A political and
organizational perspective. Oxford: Martin Robertson.
Kingdon, J. W. (1984). Agendas, alternatives and public
policies. Boston, NY: Little Brown.
Kraft, M., & Furlong, S. R. (2007). Public policy. Politics,
analysis and alternatives (2nd ed.). Washington:
CQ Press.
Lasswell, H. D. (1948). Power and personality. New York:
Norton.
Lasswell, H. D. (1956). The decision process: Seven categories
of functional analysis. College Park: University
of Maryland Press.
Lasswell, H. D. (1965). World politics and personal security.
New York: Free Press.
Latham, M. (2001). The myths of the welfare state. Keynote presentation, Institute of Public Administration Australia, Qld Division, Brisbane.
Lerner, D., & Lasswell, H. D. (1951). The policy sciences.
Stanford: Stanford University Press.
Lindblom, C. E. (1959). The science of “muddling through”. Public Administration Review, 19, 78–88.
March, J. G., & Olsen, J. P. (1976). Ambiguity and choice in organizations. Bergen: Universitetsforlaget.
March, J. G., & Olsen, J. P. (1989). Rediscovering institutions: The organizational basis of politics. New York: The Free Press.
Marston, G., & Watts, R. (2003). Tampering with the evidence:
A critical appraisal of evidence-based policy-
making. The Drawing Board: An Australian Review of Public
Affairs, 3, 143–166.
May, J. V., & Wildavsky, A. B. (1978). The policy cycle.
Beverly Hills: Sage Publications.
Mazri, C., Debray, B., Merad, M., & Tsoukiàs, A. (2012). Participative design of participation structures: A general approach and some risk management case studies. Cahier du LAMSADE, 327, Université Paris Dauphine.
Mazri, C., Lucertini, G., Olivotto, A., Prod’homme, G. &
Tsoukiàs, A. (2014). Protection of transport
infrastructures against major accidents in land use planning
policies. A decision support approach. Journal
of Loss Prevention in the Process Industries, 27, 119–129.
Melnyk, B. M., & Fineout-Overholt, E. (2005). Making the case
for evidence-based practice. Philadelphia:
Lippincott Williams & Wilkins.
Mitchell, G. (1999). Evidence-based practice: Critique and
alternative view. Nursing Science Quarterly, 12,
30–35.
Nutley, S. M., Walter, I., & Davies, H. T. O. (2003). From
knowing to doing: A framework for understanding
the evidence-into-practice agenda. Evaluation, 9(2), 125–148.
ODPM, Office of the Deputy Prime Minister, London (2000).
Integrated Policy Appraisal.
Ostanello, A., & Tsoukiàs, A. (1993). An explicative model of public interorganisational interactions. European Journal of Operational Research, 70, 67–82.
Ostrom, V., & Ostrom, E. (1971). Public choice: A different
approach to the study of public administration.
Public Administration Review, 31, 203–216.
Ostrom, E., & Ostrom, V. (2004). The quest for meaning in
public choice. American Journal of Economics
and Sociology, 63(1), 105–147.
Parson, W. (2002). From muddling through to muddling up: Evidence based policy making and the modernisation of British Government. Public Policy and Administration, 17, 43–60.
Perri 6. (2002). Can policy making be evidence-based? Journal of Integrated Care, 10, 3–8.
Phillips Inquiry, London. (2001). The BSE inquiry: The inquiry
into BSE and variant CJD in the United
Kingdom.
Regional Policy Inforegio. (2011). EVALSED: The resource for
the evaluation of socio-economic development.
European Commission. [Online; Accessed 06 June 2011].
Robbins, L. (1932). An essay on the nature and significance of
economic science. London: Macmillan.
Royal Society, London (2002). Foot and Mouth Disease 2001: Lessons to be learned inquiry.
Sackett, D. L., Rosenberg, W. M., Gray, J. A., Haynes, R. B., & Richardson, W. S. (1996). Evidence based medicine: What it is and what it isn't. British Medical Journal, 312, 71–72.
Sanderson, I. (2002). Evaluation, policy learning and evidence-
based policy making. Public Administration,
80(1), 1–22.
Schön, D. A. (1979). Generative metaphor: A perspective on
problem-setting in social policy. In A. Ortony
(Ed.), Metaphor and thought (pp. 137–163). Cambridge:
Cambridge University Press.
Simon, H. A. (1947). Administrative behaviour: A study of decision making processes in administrative organizations. New York: Macmillan.
Simon, H. A. (1955). A behavioural model of rational choice.
The Quarterly Journal of Economics, 69, 99–118.
Simon, H. A. (1959). Theories of decision-making in economics
and behavioral science. American Economic
Review, 49, 253–283.
Simon, H. A. (1962). The architecture of complexity.
Proceedings of the American Philosophical Society,
106, 467–482.
Simon, H. A. (1964). Administrative behavior: A study of decision making processes in administrative organizations. New York: Macmillan.
Simon, H. A. (1969). The sciences of the artificial. Cambridge: The MIT Press.
Simon, H. A. (1979). Rational decision making in business organisations. American Economic Review, 69, 493–513.
Smith, S., & Kulynych, J. (2002). It may be social, but why is it
capital? The social construction of social
capital and the politics of language. Politics and Society, 30(1),
149–186.
Solesbury, W. (2001). Evidence based policy: Whence it came
and where it’s going. ESRC UK Centre for
Evidence Based Policy and Practice, October 2001. Working
Paper 1.
Sutcliffe, S., & Court, J. (2005). Evidence-based policy making: What is it? How does it work? What relevance for developing countries? London: Overseas Development Institute.
Trinder, L. (2000). Introduction: The context of evidence-based
practice. In L. Trinder & S. Reynolds (Eds.),
Evidence-based practice: A critical appraisal (pp. 1–16).
Oxford: Blackwell Science.
Tsoukiàs, A., Montibeller, G., Lucertini, G., & Belton, V.
(2013). Policy analytics: An agenda for research
and practice. EURO Journal on Decision Processes, 1, 115–134.
Tsoukiàs, A. (2007). On the concept of decision aiding process:
An operational perspective. Annals of Oper-
ations Research, 154, 3–27.
Tsoukiàs, A. (2008). From decision theory to decision aiding
methodology. European Journal of Operational
Research, 187, 138–161.
Watzlawick, P., Beavin, J. H., & Jackson, D. D. (1967).
Pragmatics of human communication. New York:
W.W. Norton.
Weber, M. (1922). Wirtschaft und Gesellschaft. Tübingen: Mohr.
Wells, P. (2007). New labour and evidence based policy
making: 1997–2007. People, Place and Policy, 1,
22–29.
White, L. D. (1926). Introduction to the study of public
administration. New York: The Macmillan Company.
Wilson, W. (1887). The study of administration. Political
Science Quarterly, 2, 197–222.
Young, K., Ashby, D., Boaz, A., & Grayson, L. (2002). Social science and the evidence-based policy movement. Social Policy & Society, 1, 215–224.
Zammito, J. H. (2004). A nice derangement of epistemes. Post-
positivism in the study of science from Quine
to Latour. Chicago: The University of Chicago Press.
Table 1: Policy Analysis: Key Questions

Framing Questions
• What kind of policy is it (legislative, administrative, regulatory, other)?
• What level of government or institution will implement it? How does the policy operate (e.g., is enforcement necessary? How is it funded? Who is responsible for administering the policy)?
• What is the legal landscape surrounding the policy (e.g., court rulings, constitutionality)?
• What is the historical context (e.g., has the policy been debated previously)?
• What is the value-added of the policy?
• What are the expected short-, intermediate- and long-term outcomes?
• What might be the unintended consequences of the policy?

Criteria Questions

Public Health Impact (potential for the policy to impact risk factors, quality of life, disparities, morbidity and mortality)
• How does the policy address the problem (e.g., increase access, protect from exposure)?
• What are the magnitude and burden of the problem (including impact on risk factors, quality of life, morbidity and mortality)?
  o What population(s) will benefit? How much? When?
  o What population(s) will be negatively impacted? How much? When?
• Are there gaps in the data/evidence-base?

Feasibility* (likelihood that the policy can be successfully adopted and implemented)
• Political: What are the political history, environment, and policy debate around the issue? Who are the supporters and opponents? What are their interests and values? What perceptions or perspectives are associated with the policy option (e.g., lack of knowledge, fear of change, force of habit)? How does the policy interact with other high-priority issues (e.g., sustainability, economic impact)?
• Operational: What is needed for developing, enacting, and implementing the policy? How will the policy be administered, implemented, and enforced?

Economic and budgetary impacts (comparison of the costs to enact, implement, and enforce the policy with the value of the benefits)
• Budget: What are the impacts from a budgetary perspective (e.g., for public (federal, state, local) and private entities to enact, implement, and enforce the policy)?
• Economic: What are the economic impacts (e.g., cost-savings, costs averted, ROI, cost-effectiveness, cost-benefit analysis, etc.)? How are costs and benefits distributed (e.g., for individuals, businesses, government)? What is the timeline for costs and benefits? Are there gaps in the evidence-base?

* In assessing feasibility, it is important to identify critical barriers that will prevent the policy from being developed or adopted at the current time. For such policies, it may not be worthwhile to spend much time analyzing other factors (e.g., budget and economic impact). However, by identifying these critical barriers, you can more readily identify when they shift and how to act quickly when there is a window of opportunity.
BIG DATA AND THE TRANSFORMATION OF PUBLIC
POLICY ANALYSIS
Ron S. Jarmin and Amy B. O’Hara
INTRODUCTION
Recent years have seen a growing number of uses of novel “big
data” sources to
monitor, improve, and study the delivery of public services. By
“big data,” we mean
data generated as a consequence of government, business, or
citizen activities. Big
data is often said to be characterized by four Vs: volume,
velocity, variety, and verac-
ity. Examples range from data generated by cameras and sensors that permit real-time traffic
monitoring to more novel uses such as tracking gunshots through networks of auditory
sensors. The latter allow more granular measurement of
criminal activity than
traditional police reports (Carr & Doleac, 2015) and are
suggestive of how valuable
these new data sources can be to policy analysts and applied
social scientists.
New data from sensors or apps where citizens engage directly
with government
service providers (Crawford & Walters, 2013) will undoubtedly
help transform pub-
lic policy analysis. However, many programs directly impact
individuals, house-
holds, firms, or other private sector organizations. These
programs generate their
own administrative “big data.” Indeed, the administrative
records these programs
produce often have volume, an assembly of them provides
variety, and depending on
source, they have veracity (UNECE, 2013). Improvements in
computation and ana-
lytics have streamlined processing of such data, allowing
analysts to draw actionable
intelligence from them. Said differently, compiling rich data
resources, especially
when linked and analyzed with modern “big data” tools,
provides unprecedented
opportunity to transform public policy analysis.
However, administrative data have varying privacy or
confidentiality statutes or
regulations making access difficult. To address this, Congress
recently approved
funding for the Census Bureau to fortify its platform aligning
the appropriate data,
tools, and researchers to facilitate evidence building. Relevant
program data is held
in a variety of federal and state agencies. These entities possess
differing capabilities
to support research, have data that is not prepared for analytical
use, and have a
variety of access requirements and processes. A federated
infrastructure permits
data from many sources to be curated, integrated, and
provisioned in ways that
foster credible and transparent evidence building and adhere to
the rules and policies
of the relevant data owners. The platform being expanded at the
Census Bureau
addresses the key barriers to research access while permitting
information providers
to retain control of their data. Housing the infrastructure at the
Census Bureau
makes sense as it has broad legal authority to obtain data, a
robust infrastructure
for curating and integrating data from many sources, and an
outstanding record as
a data steward. This approach is efficient, consistent with
privacy principles, and
clarifies access procedures by providing one “front door” for
policy analysts and
researchers needing access to administrative data.
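A conceptual sketch of such a federated “front door” (entirely our illustration; it does not describe the Census Bureau's actual systems, and every name in it is hypothetical) might look as follows, with each data owner supplying both its own access rule and its own query engine:

```python
# Conceptual sketch only (our invention): a federated "front door"
# where each data owner keeps control and applies its own access rules.
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class DataOwner:
    name: str
    approve: Callable[[str], bool]   # owner-specific access policy
    query: Callable[[str], object]   # runs inside the owner's environment

class FrontDoor:
    def __init__(self):
        self.owners: Dict[str, DataOwner] = {}

    def register(self, owner: DataOwner) -> None:
        self.owners[owner.name] = owner

    def request(self, owner_name: str, project: str, q: str):
        owner = self.owners[owner_name]
        if not owner.approve(project):      # the owner retains control
            raise PermissionError(f"{owner.name} rejected {project}")
        return owner.query(q)

# Usage: a hypothetical tax-data owner that only approves vetted projects.
tax = DataOwner("tax",
                approve=lambda p: p == "vetted-mobility-study",
                query=lambda q: f"aggregate result for: {q}")
door = FrontDoor()
door.register(tax)
print(door.request("tax", "vetted-mobility-study", "median income by cohort"))
```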
Improving access to administrative records and technology to
link and analyze
these “big data” will transform evidence building. As we
highlight below, this change
is already underway, with analyses that investigate the impacts
of policies over
time, across programs, and spanning generations. Moreover, the
Census Bureau
has used administrative records to measure economic activity
since the 1940s and
is currently adopting best practices from the private sector to
modernize economic
measurement (Bostic, Jarmin, & Moyer, 2016). Thus, our big
data solution for policy
analysis is a straightforward extension of these activities.
For social scientists and policy analysts, however, big data
sources can com-
plement data carefully “designed” for a given purpose. Surveys
are designed data:
offering a known universe/frame, stable and well-defined
questions and content,
and documented treatment of missing data. These have been
widely used to analyze
the impact of policy changes and longitudinal outcomes using
weighted samples.
Surveys have a lot of variety, but they fall far short on volume,
velocity and, increas-
ingly, on veracity. Surveys have quality challenges as a result
of declining respondent
cooperation and participation (Meyer, Mok, & Sullivan, 2015).
Studies show that
underreporting in surveys (Call et al., 2013; Meyer & Mittag,
2015) impacts under-
standing of welfare and Medicaid participation, using
administrative data to show
the measurement differences. Using the sources together holds
great promise to pro-
duce blended statistics. For example, blended statistics could
help income studies
beset by right tail problems such as top-coding, undercoverage,
and underreport-
ing (Burkhauser et al., 2012). Income studies have turned to
administrative data
(Auten, Gee, & Turner, 2013; Chetty, Hendren, & Katz, 2014)
despite limited infor-
mation on the left tail. Surveys have traditionally offered the
only view of individuals
over time. Longitudinal surveys such as the National
Longitudinal Survey, Panel
Study on Income Dynamics, and National Center for Education
Statistics surveys
that have been mainstays in education evaluations are giving
way to studies relying
on administrative data (Figlio, Karbownik, & Salvanes, 2015).
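The blending idea described above can be sketched in a few lines (invented records and a hypothetical top-code of $150,000; this is not an actual Census Bureau procedure): administrative values replace survey values exactly where the survey is known to be weak, here the top-coded right tail:

```python
# Sketch (invented data, not from the article): blending a top-coded
# survey income with linked administrative records for the right tail.
TOP_CODE = 150_000

survey = {  # person_id -> survey-reported income (top-coded)
    1: 42_000, 2: 150_000, 3: 88_000,
}
admin = {   # person_id -> income from administrative records
    1: 41_500, 2: 420_000,              # person 3 failed to link
}

def blended_income(pid):
    """Prefer the administrative income where the survey value is
    top-coded and a linked record exists; otherwise keep the survey."""
    s = survey[pid]
    if s >= TOP_CODE and pid in admin:
        return admin[pid]
    return s

print([blended_income(p) for p in survey])  # [42000, 420000, 88000]
```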
ILLUSTRATIVE USES OF ADMINISTRATIVE DATA FOR
POLICY ANALYSIS
In this section, we discuss several recent papers as “isolated uses” of a variety
of administrative data sets.1 These studies are typically
conducted by teams that
obtained access to confidential data through arrangements with
administrative or
statistical agencies or through collaboration with agency
researchers. We argue
that this narrowly defined data access, underdeveloped data
infrastructure, and
inadequate metadata and tools prevent all but highly expert researchers from drawing
inferences from the data. We will return to access issues in the
next section but
want to illustrate more specifically how “big data” can aid in
policy analysis.
Intergenerational mobility studies have been transformed using
longitudinal tax
data. The Joint Statistical Research Program of the Statistics of
Income Division at
the IRS enables studies that use long panels of tax returns to
observe individuals
over time. Isolated access2 for specific tax administration
research resulted in the
1 While not our focus here, the recent empirical literature
features studies using an increasing number
and variety of private sector data sets. The potential of these
data resources for public policy analysis is
only beginning to be explored. Einav and Levin (2014a, 2014b)
review recent work using private data.
Researchers affiliated with NSF-Census Research Network have
been exploring the use of Twitter data to
understand labor market transitions and personal financial
management tools to understand household
income and spending patterns (see Gelman et al., 2014).
2 See http://www.sciencemag.org/news/2014/05/how-two-economists-got-direct-access-irs-tax-records.
Equality of Opportunity project,3 deemed “big data” by lead
researcher Raj Chetty.
Through that data access, Chetty, Hendren, and Katz (2014)
analyzed geography
and high-mobility areas. Also using tax data and collaborating
with IRS analysts, a
Stanford team has studied intergenerational mobility (Mitnik et
al., 2015). Johnson,
Massey, and O’Hara (2015) discuss the potential and challenges
of using tax data,
survey data, and other administrative data to study mobility.
Chetty, Hendren, and Katz’s (2016) follow-up on the Moving to
Opportunity
(MTO) experiment assesses long-term outcomes for the voucher program that started
in 1994. Earlier studies failed to find impacts on earnings or
employment rates of
adults and older children from MTO treatments (e.g., Kling,
Liebman, & Katz, 2007;
Oreopoulos, 2003; Sanbonmatsu et al., 2006). However, using
tax returns from 1996
to 2012, Chetty, Hendren, and Katz (2016) found that moves to
lower poverty neigh-
borhoods significantly improved college attendance rates and
earnings for children
who were young (below age 13) when their families moved. In
his summary of MTO
studies, Rothwell (2015) notes, “As Chetty and his colleagues
show, even a few extra
years of data can make a large difference.” To conduct the
study, Chetty, Hendren,
and Katz (2016) had isolated access to the MTO data as well as
tax return data.
Our final example of isolated access is led by the University of
Michigan’s Institute
for Research on Science and Innovation (IRIS).4 IRIS is a
ANAThe Article Review by Jeanette Keith on Book by Stephanie McCu.docx
 
Analyzing workers social networking behavior – an invasion of priva.docx
Analyzing workers social networking behavior – an invasion of priva.docxAnalyzing workers social networking behavior – an invasion of priva.docx
Analyzing workers social networking behavior – an invasion of priva.docx
 
Analyzing and Visualizing Data Chapter 6Data Represent.docx
Analyzing and Visualizing Data Chapter 6Data Represent.docxAnalyzing and Visualizing Data Chapter 6Data Represent.docx
Analyzing and Visualizing Data Chapter 6Data Represent.docx
 
Analyzing and Visualizing Data Chapter 1The .docx
Analyzing and Visualizing Data Chapter 1The .docxAnalyzing and Visualizing Data Chapter 1The .docx
Analyzing and Visualizing Data Chapter 1The .docx
 
Analyzing a Primary Source RubricName ______________________.docx
Analyzing a Primary Source RubricName ______________________.docxAnalyzing a Primary Source RubricName ______________________.docx
Analyzing a Primary Source RubricName ______________________.docx
 

Recently uploaded

678020731-Sumas-y-Restas-Para-Colorear.pdf
678020731-Sumas-y-Restas-Para-Colorear.pdf678020731-Sumas-y-Restas-Para-Colorear.pdf
678020731-Sumas-y-Restas-Para-Colorear.pdf
CarlosHernanMontoyab2
 
How to Make a Field invisible in Odoo 17
How to Make a Field invisible in Odoo 17How to Make a Field invisible in Odoo 17
How to Make a Field invisible in Odoo 17
Celine George
 
Language Across the Curriculm LAC B.Ed.
Language Across the  Curriculm LAC B.Ed.Language Across the  Curriculm LAC B.Ed.
Language Across the Curriculm LAC B.Ed.
Atul Kumar Singh
 
CACJapan - GROUP Presentation 1- Wk 4.pdf
CACJapan - GROUP Presentation 1- Wk 4.pdfCACJapan - GROUP Presentation 1- Wk 4.pdf
CACJapan - GROUP Presentation 1- Wk 4.pdf
camakaiclarkmusic
 
TESDA TM1 REVIEWER FOR NATIONAL ASSESSMENT WRITTEN AND ORAL QUESTIONS WITH A...
TESDA TM1 REVIEWER  FOR NATIONAL ASSESSMENT WRITTEN AND ORAL QUESTIONS WITH A...TESDA TM1 REVIEWER  FOR NATIONAL ASSESSMENT WRITTEN AND ORAL QUESTIONS WITH A...
TESDA TM1 REVIEWER FOR NATIONAL ASSESSMENT WRITTEN AND ORAL QUESTIONS WITH A...
EugeneSaldivar
 
The French Revolution Class 9 Study Material pdf free download
The French Revolution Class 9 Study Material pdf free downloadThe French Revolution Class 9 Study Material pdf free download
The French Revolution Class 9 Study Material pdf free download
Vivekanand Anglo Vedic Academy
 
special B.ed 2nd year old paper_20240531.pdf
special B.ed 2nd year old paper_20240531.pdfspecial B.ed 2nd year old paper_20240531.pdf
special B.ed 2nd year old paper_20240531.pdf
Special education needs
 
Digital Tools and AI for Teaching Learning and Research
Digital Tools and AI for Teaching Learning and ResearchDigital Tools and AI for Teaching Learning and Research
Digital Tools and AI for Teaching Learning and Research
Vikramjit Singh
 
The geography of Taylor Swift - some ideas
The geography of Taylor Swift - some ideasThe geography of Taylor Swift - some ideas
The geography of Taylor Swift - some ideas
GeoBlogs
 
Welcome to TechSoup New Member Orientation and Q&A (May 2024).pdf
Welcome to TechSoup   New Member Orientation and Q&A (May 2024).pdfWelcome to TechSoup   New Member Orientation and Q&A (May 2024).pdf
Welcome to TechSoup New Member Orientation and Q&A (May 2024).pdf
TechSoup
 
How libraries can support authors with open access requirements for UKRI fund...
How libraries can support authors with open access requirements for UKRI fund...How libraries can support authors with open access requirements for UKRI fund...
How libraries can support authors with open access requirements for UKRI fund...
Jisc
 
A Strategic Approach: GenAI in Education
A Strategic Approach: GenAI in EducationA Strategic Approach: GenAI in Education
A Strategic Approach: GenAI in Education
Peter Windle
 
The Challenger.pdf DNHS Official Publication
The Challenger.pdf DNHS Official PublicationThe Challenger.pdf DNHS Official Publication
The Challenger.pdf DNHS Official Publication
Delapenabediema
 
The approach at University of Liverpool.pptx
The approach at University of Liverpool.pptxThe approach at University of Liverpool.pptx
The approach at University of Liverpool.pptx
Jisc
 
The Roman Empire A Historical Colossus.pdf
The Roman Empire A Historical Colossus.pdfThe Roman Empire A Historical Colossus.pdf
The Roman Empire A Historical Colossus.pdf
kaushalkr1407
 
June 3, 2024 Anti-Semitism Letter Sent to MIT President Kornbluth and MIT Cor...
June 3, 2024 Anti-Semitism Letter Sent to MIT President Kornbluth and MIT Cor...June 3, 2024 Anti-Semitism Letter Sent to MIT President Kornbluth and MIT Cor...
June 3, 2024 Anti-Semitism Letter Sent to MIT President Kornbluth and MIT Cor...
Levi Shapiro
 
Instructions for Submissions thorugh G- Classroom.pptx
Instructions for Submissions thorugh G- Classroom.pptxInstructions for Submissions thorugh G- Classroom.pptx
Instructions for Submissions thorugh G- Classroom.pptx
Jheel Barad
 
Introduction to AI for Nonprofits with Tapp Network
Introduction to AI for Nonprofits with Tapp NetworkIntroduction to AI for Nonprofits with Tapp Network
Introduction to AI for Nonprofits with Tapp Network
TechSoup
 
Sha'Carri Richardson Presentation 202345
Sha'Carri Richardson Presentation 202345Sha'Carri Richardson Presentation 202345
Sha'Carri Richardson Presentation 202345
beazzy04
 
2024.06.01 Introducing a competency framework for languag learning materials ...
2024.06.01 Introducing a competency framework for languag learning materials ...2024.06.01 Introducing a competency framework for languag learning materials ...
2024.06.01 Introducing a competency framework for languag learning materials ...
Sandy Millin
 

Recently uploaded (20)

678020731-Sumas-y-Restas-Para-Colorear.pdf
678020731-Sumas-y-Restas-Para-Colorear.pdf678020731-Sumas-y-Restas-Para-Colorear.pdf
678020731-Sumas-y-Restas-Para-Colorear.pdf
 
How to Make a Field invisible in Odoo 17
How to Make a Field invisible in Odoo 17How to Make a Field invisible in Odoo 17
How to Make a Field invisible in Odoo 17
 
Language Across the Curriculm LAC B.Ed.
Language Across the  Curriculm LAC B.Ed.Language Across the  Curriculm LAC B.Ed.
Language Across the Curriculm LAC B.Ed.
 
CACJapan - GROUP Presentation 1- Wk 4.pdf
CACJapan - GROUP Presentation 1- Wk 4.pdfCACJapan - GROUP Presentation 1- Wk 4.pdf
CACJapan - GROUP Presentation 1- Wk 4.pdf
 
TESDA TM1 REVIEWER FOR NATIONAL ASSESSMENT WRITTEN AND ORAL QUESTIONS WITH A...
TESDA TM1 REVIEWER  FOR NATIONAL ASSESSMENT WRITTEN AND ORAL QUESTIONS WITH A...TESDA TM1 REVIEWER  FOR NATIONAL ASSESSMENT WRITTEN AND ORAL QUESTIONS WITH A...
TESDA TM1 REVIEWER FOR NATIONAL ASSESSMENT WRITTEN AND ORAL QUESTIONS WITH A...
 
The French Revolution Class 9 Study Material pdf free download
The French Revolution Class 9 Study Material pdf free downloadThe French Revolution Class 9 Study Material pdf free download
The French Revolution Class 9 Study Material pdf free download
 
special B.ed 2nd year old paper_20240531.pdf
special B.ed 2nd year old paper_20240531.pdfspecial B.ed 2nd year old paper_20240531.pdf
special B.ed 2nd year old paper_20240531.pdf
 
Digital Tools and AI for Teaching Learning and Research
Digital Tools and AI for Teaching Learning and ResearchDigital Tools and AI for Teaching Learning and Research
Digital Tools and AI for Teaching Learning and Research
 
The geography of Taylor Swift - some ideas
The geography of Taylor Swift - some ideasThe geography of Taylor Swift - some ideas
The geography of Taylor Swift - some ideas
 
Welcome to TechSoup New Member Orientation and Q&A (May 2024).pdf
Welcome to TechSoup   New Member Orientation and Q&A (May 2024).pdfWelcome to TechSoup   New Member Orientation and Q&A (May 2024).pdf
Welcome to TechSoup New Member Orientation and Q&A (May 2024).pdf
 
How libraries can support authors with open access requirements for UKRI fund...
How libraries can support authors with open access requirements for UKRI fund...How libraries can support authors with open access requirements for UKRI fund...
How libraries can support authors with open access requirements for UKRI fund...
 
A Strategic Approach: GenAI in Education
A Strategic Approach: GenAI in EducationA Strategic Approach: GenAI in Education
A Strategic Approach: GenAI in Education
 
The Challenger.pdf DNHS Official Publication
The Challenger.pdf DNHS Official PublicationThe Challenger.pdf DNHS Official Publication
The Challenger.pdf DNHS Official Publication
 
The approach at University of Liverpool.pptx
The approach at University of Liverpool.pptxThe approach at University of Liverpool.pptx
The approach at University of Liverpool.pptx
 
The Roman Empire A Historical Colossus.pdf
The Roman Empire A Historical Colossus.pdfThe Roman Empire A Historical Colossus.pdf
The Roman Empire A Historical Colossus.pdf
 
June 3, 2024 Anti-Semitism Letter Sent to MIT President Kornbluth and MIT Cor...
June 3, 2024 Anti-Semitism Letter Sent to MIT President Kornbluth and MIT Cor...June 3, 2024 Anti-Semitism Letter Sent to MIT President Kornbluth and MIT Cor...
June 3, 2024 Anti-Semitism Letter Sent to MIT President Kornbluth and MIT Cor...
 
Instructions for Submissions thorugh G- Classroom.pptx
Instructions for Submissions thorugh G- Classroom.pptxInstructions for Submissions thorugh G- Classroom.pptx
Instructions for Submissions thorugh G- Classroom.pptx
 
Introduction to AI for Nonprofits with Tapp Network
Introduction to AI for Nonprofits with Tapp NetworkIntroduction to AI for Nonprofits with Tapp Network
Introduction to AI for Nonprofits with Tapp Network
 
Sha'Carri Richardson Presentation 202345
Sha'Carri Richardson Presentation 202345Sha'Carri Richardson Presentation 202345
Sha'Carri Richardson Presentation 202345
 
2024.06.01 Introducing a competency framework for languag learning materials ...
2024.06.01 Introducing a competency framework for languag learning materials ...2024.06.01 Introducing a competency framework for languag learning materials ...
2024.06.01 Introducing a competency framework for languag learning materials ...
 

Ann Oper Res (2016) 23615–38DOI 10.1007s10479-014-1578-6.docx

factor time must be introduced. On the one hand, public policies, in order to be effective and to solve problems in a comprehensive and organic manner, need a strategic approach establishing long-term agendas; these might be in contrast with the short-term agendas policy makers may have. On the other hand, the longer the time horizon of a policy, the more we need to consider different and unforeseeable risks and uncertainties (Beck 1992).

[Footnote 1: For a detailed discussion of this concept the reader can see Tsoukiàs et al. (2013). The present paper conceptually precedes that paper, although it appeared after it.]
In the last few years this context has become more complex: participation and "bottom-up" actions have become frequent and are often required by law. Citizens, if and when directly affected by some policy, are (over)informed and becoming active in the policy-making process. They do not wait for obscure decisions to fall from the top down; they want to know and be informed about government decisions and actions, and to receive explanations before accepting decisions. They are not subjects but agents of democracy and, in this sense, participatory processes become crucial both to sustain a democratic process and to avoid opposition phenomena such as "not in my backyard". That is why policy makers now more than ever are expected to be accountable, and policy making to be "evidence-based" rather than based on unsupported opinions that are difficult to argue for. A transparent relation between decision makers and stakeholders becomes fundamental in order to attain and preserve consensus. Consensus is a very important resource in every public policy process: most actions policy makers or other relevant stakeholders undertake are guided by consensus seeking, and most resources committed within a policy cycle become important in relation to their ability to be transformed into consensus (Dente 2011). Thus, policies become instruments "to exercise power and shape the world" (Goodin et al. 2006, p. 3).

In 1997 the concept of evidence-based policy making (EBPM) was introduced, in a modern form, by the Blair government (Blair 1994). The idea of creating policies on the basis of available knowledge and research on the specific topic is not new and is generally accepted. However, we need to look deeper into the concept and its peculiarities in order to understand exactly what it means. First of all, what is evidence? According to the Oxford English Dictionary, evidence is an "available body of facts or information indicating whether a belief or proposition is true or valid". This definition is far from clear and somewhat ambiguous. Why is "evidence" once again highlighted as a support for deciding about policies? The idea of using some form of evidence in order to conceive a policy is not really new. In which sense is its contribution now different?

EBPM has been defined as the method or approach that "helps people make well informed decisions about policies, programmes and projects by putting the best available evidence from research at the heart of policy development and implementation" (Davies 1999). It is important to point out that the scope of EBPM is to help and "inform the policy process, rather than aiming directly to affect the eventual goals of the policy" (Sutcliffe and Court 2005). We could say that EBPM essentially consists of "the integration of experience, judgement and expertise with the best available external evidence from systematic research" (Davies 2004). Evidence should include all data from past experiences, as well as information and good practices from literature reviews. This is certainly important and necessary, but is it also sufficient to make a good policy? Dente (2011) claims that, in order to understand and assess a policy, what really matters is the policy making process which led to that policy, rather than the policy itself. Thus, in order to support such a decision process, we need information and knowledge covering the policy cycle as a whole, able to support accountability requirements.

Davies (1999, 2004) and Gray (1997) claim that the introduction of EBPM produces a shift away from opinion-based decision making towards evidence-based decision making. This shift is far from easy. On the one hand, EBPM seems to be viewed as an objective method to decide, distant from political ideology. On the other, this statement is controversial and must be better explained. Although EBPM tries to base policies on "facts" and "evidence" rather than on bureaucracies or political ideology, it is important to underline to the stakeholders (technical and not) that such "facts" and "evidence" do not provide an unambiguous guide to decision making. In fact, we know that data can be manipulated, that interpretations are subjective and that good practices are strictly linked to a specific framework. In other words, constructing evidence ends neither the analyst's work nor the need for the stakeholders' critical intelligence. It is not a way to delegate decisions, because values, preferences and decisions should remain a political act.

The aim of this paper is to review the literature about policy making and evidence-based policy making (and related issues), highlight the origins, and understand the criticisms and controversies, while looking for a new perspective which we shall call "policy analytics". Our main claims are:

1. The policy making process or "policy cycle" is a long term decision process characterised by:
– the specific nature of public policies;
– the requirements of legitimation, accountability and deliberation;
– the existence of multiple public decision processes within the same policy cycle.
2. Supporting the policy cycle cannot be reduced to producing just "evidence" (in terms of data, knowledge, expertise etc.). The analytics providing evidence aiming at supporting general decision making processes are necessary, but not sufficient, in the case of public policy making. Constructing evidence should be seen as a specific, purpose-built type of decision aiding process and, as such, should be methodologically well founded.
3. We need a new and richer concept accounting for all decision aiding activities that aim at supporting the policy cycle: we call this term "policy analytics" and we will briefly introduce some of its main features in this paper.

The paper is organised as follows. We start by outlining the meaning of some important terms and concepts (Sect. 2). Next, we briefly present a review of the EBPM state of the art (Sect. 3). After that we discuss criticisms, concerning both the policy making process and the introduction of evidence within it (Sect. 4). We then introduce and sketch out the concept of "policy analytics" as a new term grouping the activities and knowledge created to support policy making throughout the whole policy cycle (Sect. 5). A concluding section, including future challenges, ends the paper.

2 Terms and concepts

2.1 Public policy

To start with, it is important to understand that the concept of public policy (PP) should be wide and abstract enough to adapt itself to various applications and contexts.
For such reasons, over the past 50 years many definitions have been coined to define PPs. They have different meanings, as the authors bring into focus different aspects such as processes, stakeholders, objects and decision levels (Anderson 1975; Dente 2011; Dunn 1981; Dye 1972; Hill 1997; Jenkins 1978; Kraft and Furlong 2007). From this literature we can identify six main characteristics of PPs:

• the power relations between different stakeholders,
• the different institutional levels,
• the duration over time,
• the use of public resources,
• the act of deciding (including deciding not to decide),
• the impacts of decisions.

However, according to different contexts and goals, different types of policies may combine the above characteristics at different levels: long term city waste management is a low institutional level policy with a long time horizon, implying a moderate use of public resources and potentially involving a small number of stakeholders, while locating a regional landfill often results in a very conflict-ridden situation (many stakeholders with strong commitments), although for a short time, potentially involving several institutional levels.

First of all we need to understand the term "public". Intuitively, public is any issue concerning the community, something which in a direct or indirect way affects all citizens. Then, in our specific field, we shall emphasise that every PP is a process that implies a set of public decisions; thus, it is a public decision process. It is developed over a relatively long period of time and involves different decisional levels, interacting according to a set of determined rules. The process and the entailed interactions are developed in order to solve a problem referring to a public issue, or rather a problem in which resources and rationality are public. The concept of "public issue" is not always clear: the issue that the policy will address is an object which conveys a meaning. Naming a public policy is the action of defining such a meaning and it implies the legitimation of this meaning. However, every subject affected by the policy (policy makers, experts, citizens, stakeholders) makes up his or her own meaning of the policy, legitimated by its name and definition. Practically speaking, stakeholders interpret a policy according to their own needs and/or commitment of resources and accordingly exercise their legitimation.

Seen from a resources point of view, a public decision is a public choice and it implies an allocation of public resources. Even non-action in a determined field is considered a policy, because it implies the public choice of maintaining the same resource allocation as before. Speaking about public resources, governments and public agencies have to make understandable how and why they use public resources in order to tackle specific issues. The public decision process is requested to be accountable to the citizens, in contrast with the complexity of the entire process. Thus, we need an operational definition able to summarise the characteristics introduced:

Definition 2.1 We consider a public policy as a public agreement, allocating public resources to a portfolio of actions aiming at achieving a number of objectives set by the public decision maker, considered as an organisation. Such agreement can be interpreted in many different ways depending on who is concerned by the policy.

Through this definition of public policy we want to highlight that a policy has a meaning for the stakeholders affected by the policy itself and for the citizens in general, but that such meaning could be neither shared nor consensual. Possibly the policy may achieve both knowledge sharing and consensus about the resource distribution, but this is only a potential outcome. A policy does not only pursue quantifiable objectives; it generates a legitimation space, thus producing inclusion and/or exclusion. A legitimation space is an abstract space where the stakeholders reveal (at least partially) their concerns, preferences, values and goals, where they commit and look for resources and where they are able to seek and create legitimation, namely agreement on decisions and actions, through relations and discussions (see Ostrom's "action arena", Ostrom and Ostrom 1971, 2004, and Ostanello and Tsoukiàs' "interaction space", Ostanello and Tsoukiàs 1993). This is a crucial difference with respect to the generic policies a private business will typically conceive. A minimal data-structure reading of Definition 2.1 is sketched below.
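For readers approaching this from a decision-support or analytics background, Definition 2.1 can be read as a data structure: a policy couples a portfolio of actions and resource allocations with objectives and with stakeholder-dependent interpretations. The sketch below is only an illustration of that reading, under our own assumptions; all class and field names are hypothetical and are not a formalism proposed by the authors.

```python
from dataclasses import dataclass, field


@dataclass
class Action:
    """One action in the policy portfolio, e.g. 'divert a railroad'."""
    name: str
    resources_allocated: float  # public resources committed (illustrative unit)


@dataclass
class PublicPolicy:
    """Illustrative reading of Definition 2.1: a public agreement allocating
    public resources to a portfolio of actions aiming at given objectives."""
    objectives: list[str]          # set by the public decision maker
    portfolio: list[Action]
    # Crucially, the *same* agreement is interpreted differently by each
    # stakeholder: interpretations are part of the policy, not noise.
    interpretations: dict[str, str] = field(default_factory=dict)

    def total_resources(self) -> float:
        return sum(a.resources_allocated for a in self.portfolio)


# Hypothetical usage, loosely inspired by the PPRT example below
pprt = PublicPolicy(
    objectives=["reduce risk around a hazardous plant to an acceptable level"],
    portfolio=[Action("demolish houses in the high-risk area", 2.0e6),
               Action("divert the railroad", 5.0e6)],
    interpretations={"local politicians": "an urban planning constraint",
                     "citizens": "a safety guarantee"},
)
print(pprt.total_resources())  # 7000000.0
```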
Example 2.1 In the following sections we shall introduce a running example in order to allow the reader to understand a number of the concepts introduced in the paper. The example is taken from a real case study on the design of risk reduction measures for urban and transportation infrastructure (for more information see Mazri et al. 2012, 2014).

The case concerns the broader issue of designing risk reduction measures around hazardous industrial plants, namely the ones considered by the so-called "Seveso directives". In France, law 699/2003 obliges the "préfet" (the government representative at local level) to produce, for each hazardous plant in the territory under his jurisdiction, a "Technological Risk Reduction Plan" (PPRT in French) aiming at reducing to an acceptable level the risks the population may have to face in case of a major accident occurring in that plant. Such measures concern urban planning actions as well as actions addressing the transportation infrastructure present within a reasonable distance of the plant. Examples of such measures are establishing areas (around the plant) where houses need to be demolished, or deciding to divert a railroad. We consider such PPRTs as public policies:

– they affect multiple stakeholders;
– they use and redistribute public resources (land, money, authority etc.);
– their design results in de facto, but also de jure, participatory decision processes;
– there is at least one moment in which the plan is deliberated;
– such plans affect a "public issue", that is citizens' safety, a controversial issue allowing for multiple interpretations (different stakeholders, such as the local politicians, the scientific advisors, the citizens individually etc., will likely have different interpretations of what safety means and how risks can be reduced).

2.2 Policy making process

Since the 1950s, policy making has been interpreted as a process, that is, a sequence of interactive stages or phases. Under such a perspective, the policy making process can be considered as developing in time and space, merging actions and intentions, decisions and also a lack of decision making, impacting on society and on the political system itself.

The idea of modelling the policy making process (or cycle) in terms of stages was first put forward by Lasswell (1956). He introduced a model of the process divided in seven stages:
intelligence, promotion, prescription, invocation, application, termination, and appraisal. This set of stages has been contested and criticised, but the model itself has been successful as a framework for subsequently studying policy science and policy analysis (Jann and Wegrich 2007). During the 1960s and 1970s, a number of different process typologies were developed and used to organise and systematise the growing research (for such typologies see Anderson 1975; Brewer and Leon 1983; Brewer 1974; Jenkins 1978; May and Wildavsky 1978). Today the most conventional way to describe the chronology of a policy making process is made up of (Hill 1997; Jann and Wegrich 2007):

• agenda setting,
• policy formulation,
• decision making,
• implementation,
• evaluation.

This kind of policy making process, as presented by Lasswell and others, has been designed like a problem-solving model, according to the rational model of decision making developed in organisation theory and public administration (Jann and Wegrich 2007). Simon (1947) has pointed out that the real world does not follow such stages. However, this kind of process still counts as a major reference (other models used in public policies: the incremental model, Dror 1964; Etzioni 1967; Lindblom 1959; the garbage can model, Kingdon 1984; March and Olsen 1976, 1989). This origin of the studies on policy making, as sequential stages, will be our basis for interpreting the policy making process as a proper decision-making process. A minimal sketch of these stages as an explicit structure follows.
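Since we treat the policy cycle as a proper decision-making process, it can help to see the five conventional stages as an explicit, iterable structure. The sketch below is a deliberately naive illustration under our own assumptions (the paper itself, and the N.B. in Example 2.2 below, warn that real cycles are not linear); the stage names come from the list above, everything else is hypothetical.

```python
from enum import Enum


class PolicyStage(Enum):
    """The five conventional stages of the policy cycle
    (Hill 1997; Jann and Wegrich 2007)."""
    AGENDA_SETTING = 1
    POLICY_FORMULATION = 2
    DECISION_MAKING = 3
    IMPLEMENTATION = 4
    EVALUATION = 5


def next_stage(stage: PolicyStage) -> PolicyStage:
    """Naive linear successor; real policy cycles loop back and overlap,
    e.g. evaluation typically feeds a new agenda-setting phase."""
    if stage is PolicyStage.EVALUATION:
        return PolicyStage.AGENDA_SETTING  # the 'cycle' closes on itself
    return PolicyStage(stage.value + 1)


stage = PolicyStage.AGENDA_SETTING
for _ in range(6):
    print(stage.name)
    stage = next_stage(stage)
```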
We need to emphasise that a policy cycle goes beyond the public organisations concerned: it is not a process internal to them. A public decision process involves multiple and different organisations and/or individuals. Thus, there is no single rationality to simply follow. This process is characterised by several rationalities, which may conflict, and the process generates a legitimation space in which these rationalities interact (possibly following what Habermas 1984 calls communicative rationality).

Example 2.2 We continue with the example about the PPRT. Establishing such a plan implies entering a policy cycle:

– agenda setting: typically the political authority (the préfet) establishes the issues which need to be considered as critical as far as the citizens' safety is concerned (areas to be analysed, infrastructures to be considered, types of accidents to be taken into account, budget allocated and timing of the process, just to mention some of the issues which are typically introduced in the cycle);
– policy formulation: different stakeholders, such as field experts, process experts, local focus groups, representatives of other actors and social groups, need to establish what, how and why is going to be used in order to assess risks, mitigation measures and their effectiveness etc.;
– decision making: a number of meetings, discussion forums and consultation actions, concerning both the whole set of involved stakeholders as well as each group individually, are scheduled, aiming at addressing the issues introduced into the agenda and formulated as elements composing the policy (characterising different areas through the level of the incumbent risk, measuring the potential impact of each single action potentially involved in the policy etc.); the process ends when the plan is officially deliberated by the préfet;
– implementation: the plan is enforced through legal actions, specific projects are scheduled (moving households considered under extreme risk, building protections, implementing risk management procedures etc.), actions communicating the plan contents are performed etc.;
  • 17. A feature which helps to distinguish a public decision process from other decision processes is “public deliberation”, namely the legislative act. The outcome of the decision is a public issue, and the public authority must communicate it, the citizens must know about it. The publication of an official document which defines and explains the policy is the act that produces the wanted (and unwanted) outcomes and reactions to the decision. The intermediate and final act explain the motivations and the causes of that policy. Public deliberation is also expected to establish accountability of the public decision process and of the public authority itself and, to some extend, their legitimation to the general public or stakeholders. Linked with deliberation we find two concepts, recently used in the field of PP: “Legitima- tion” and “Accountability”. Both are important in order to understand the relations established between stakeholders. They are fundamental in the creation, evolution and maintenance of a conceptual social space that we call a legitimation space, where stakeholders interact creating relationships, goods, services, but above all develop and define their rationality (Habermas 1984). In relation with legitimation, we refer to authorisation and consensus. The need for legit- imisation stems from the relative dimension of power, that makes it frail in terms of collective acknowledgement. Political power does not get identification and legitimation from a tran-
  • 18. scendent order; thus, the recognition of its value, and so the collective acknowledgement, becomes an immanent issue: legitimation is obtained through authority and consensus. On one hand, the legitimacy of the public action and the relative decisions, comes from the law, which gives to the elected the authority to decide and manage public resources. On the other hand consensus is obtained when acting pretends to be rational: more generally speaking when the decision process itself pretends to be “rational” (whatever rationality model we may consider). Actually the fact that different rationalities co-exist within a pol- icy cycle allows to shift our attention to the pretentions of rationality: policy decisions and actions pretend to be rational and look for logical frames allowing them to be perceived as such. However, before proceeding, we need to make a little clarification. When speaking about rationality, in this context, we are not referring to rational planning, a concept used in urban andregionalplanning(Faludi1973;Friedman1987)(criticisedinAle xander2000;Hostovsky 2006). For us, a policy maker has a different role from a “rational planner”. A rational planner adopts a precise model of rationality. Policy makers are able to legitimate their decisions and actions because at any time they adopt the most suitable rationality model. Rational planners feel legitimate because they act “rationally” (according to some model). Policy makers feel rational because they act according to different legitimate models of rationality.
How could rationality be legitimising in the field of PP? The concept of rationality is neither straightforward nor trivial. In research, many forms of rationality have been identified (the idea of multiple rationalities was first introduced by Weber 1922), all of them aiming at some validity claims (Habermas 1984). We distinguish three different approaches to establishing such validity:

• economic rationality (Hammond 1997; Harsanyi 1955; Robbins 1932): policies should maximise the utility of a society seen as the aggregation of the consumers within it;
• bounded rationality (Simon 1955): policies should satisfy some subjectively defined decision maker's requirements for action;
• communicative rationality (Habermas 1984): policies should result as consensual artifacts through four validity dimensions:
– truth;
– scientific support;
– normative rightness;
– sincerity.

A stylised contrast between the first two models is sketched below.
  • 21. emphasises that evaluation should aim at helping the accountability of policy makers for the use of public resources, the effects of the implemented policies, and the reasons for choosing a specific alternative. In order to improve legitimation (and thus the acceptance of policies) it is important that the stakeholders feel some ownership over the result (understand it, understand the consequences, realise that different points of view have been analysed and compared). Ownership is associated with justified knowledge and beliefs: stakeholders and policy makers need to be able to explain and justify why and what they do to their communities. Example 2.3 We conclude the presentation of the PPRT case explaining the three concepts introduced above. 1. Public Deliberation The process constructing the PPRT is de- facto participatory (besides participation being enforced by law). However, such participation is officially recognised through the establishment of a committee (named CLIC: Comit Local d’Information et Concertation) which is expected to act as the place where all issues are discussed. Each single stakeholder, the CLIC itself, as well as the decision maker (the préfet) perform a number of public acts: releasing a document containing risk analysis, publishing the minutes of a CLIC meeting, releasing an intermediate or the final version of the PPRT. All such actions are public deliberations which characterise the policy cycle and allow the policy making process to be observed.
  • 22. 2. Legitimation The issue here is not just to reach an agreement among the stakeholders about the plan to be deliberated. First of all it is crucial that the whole process is considered as 123 Ann Oper Res (2016) 236:15–38 23 legitimate (relevant stakeholders have been invited, the agenda has been discussed and agreed, unforeseen issues raised by some of them have been seriously considered etc.). Controversial conclusions might be nevertheless accepted or approved (despite opposi- tion) if the process is considered legitimate. Then the result itself should be legitimating in the sense that the public resources redistribution resulting from adopting a specific PPRT needs to address the expectations of the involved stakeholders (although perhaps not exactly as these were considering them). To give a more precise example: if the local transportation agency was expecting to improve safety for the users of their services travelling through the area considered as “risky”, there should be somewhere resources allocated taking this issue into account; perhaps not the ones the agency was originally considering (building a concrete protection), but a reasonable alternative (installing a warning system). In the PPRT case it has been shown that being able to demonstrate
  • 23. to each stakeholder that the actions included in the plan address the concerns they have carried within the discussions allows the stakeholders to feel that the plan “takes care of them”: in other terms that the use, reallocation and distribution of public resources is done for “public purposes” and not for satisfying opposing private ones. 3. Accountability Accountability is related to the process through which the involved stake- holders become able to understand, justify and explain the policy cycle and its outcomes. In the PPRT case it is important for the stakeholders to be able to show how certain information (a risk level), combined with a value structure (fairness in cost distribution) leads to a certain decision (covering part of a railway, paid by the hazard plant generating the risk). It is equally important to show that the way some stakeholders perceive risks has been integrated in the procedures where risk is quantified. Last, but not least it is important to communicate about the potential risks and their consequences in a way that different stakeholders understand at least one common core message (for instance: “we work for your safety”). 3 Evidence-based policy making: state of the art 3.1 Premises “Evidence-based policy making” (EBPM) is a “new” topic that pervades the last decade of social sciences’ debates. However, there is nothing new in the
  • 24. idea of using “evidence” to support decisions. Aristotle (1990) claims that decisions should been informed by knowl- edge. Later, this way of thinking and acting created several philosophical movements around “positivism” (see Comte 1853, 1865; De Saint-Simon 1976; Giddens 1974; Hanfling 1981, Zammito 2004, up to “constructivism” Watzlawick et al. 1967). The interest towards the use of knowledge as rational and logical reasoning, grew until the second half of the 20th century, when these were intended both as cause and effect (Dryzek 2006; Sanderson 2002), as well as the ability to rank all known available alternatives (Ostrom and Ostrom 1971) (see also Bouyssou et al. 2000, 2006). In the beginning, the concept of rational decision was central both for the economic dimension of problem solving and the scientific management of enterprises (Tsoukiàs 2008). In this it was considered possible to use the scientific method to improve policy making. An important figure in this field was Harold Lasswell, committed to the idea of a “policy sciencedemocracy”(Lasswell1948, 1965;LernerandLasswell1951).In1963Buchananand Tullock organised a conference in which the shared interest was the application of “economic reasoning” (commonly considered as a good example of rationality) to collective, political 123
  • 25. 24 Ann Oper Res (2016) 236:15–38 or social decision-making. In 1967, the term “public choice” was adopted to distinguish this area (Ostrom and Ostrom 1971). The public choice approach is related with the theoretical tradition in public administration, formulated by Wilson (1887), later criticised by Herbert Simon. Wilson’s major thesis was that “the principles of good administration are much the same in any system of government” (Ostrom and Ostrom 1971, p. 203), and “Efficiency is attained by perfection in hierarchical ordering of a professionally trained public service” (Ostrom and Ostrom 1971, p. 204). Wilson gave also a strong economic conceptualisation of the term efficiency. He said “the utmost possible efficiency and at the least possible cost of either money or of energy” (Wilson 1887, p. 197 cited in Ostrom and Ostrom 1971, p. 204, see also Ostrom and Ostrom 1971; White 1926; Wilson 1887). Under such a perspective “policy problems were technical questions, resolvable by the systematic application of technical expertise” (Goodin et al. 2006, p. 4). However, already since the ’40s, Simon (1947, 1955, 1959, 1962, 1964, 1969, 1979) strongly criticised the theory implicit in the traditional study of public administration, because there are no reasons to believe in one “omniscient and benevolent despot”. In spite of such criticism, as Dryzek (2006, p. 191) said: these dreams may be long dead, and positivism long rejected even by philosophers of
  • 26. natural science, but the terms “positivist” and “post-positivist” still animate disputes in policy fields. And the idea that policy analysis is about control of cause and effect lives on in optimising techniques drawn from welfare economics and elsewhere, and policy evaluation that seeks only to identify the causal impact of policies. Dryzek’s quote suggests that “positivism” and “post- positivism”, are still alive and indeed we claim that the promotion of EBPM has been a return to such approaches. 3.2 Evolution 3.2.1 From medicine to social science Evidence-based policy making was born from the roots of evidence-based medicine (EBM, Dowie 1996; Sackett et al. 1996) and evidence-based practice (EBP, Melnyk and Fineout- Overholt 2005; Mitchell 1999). Indeed, it is easy to track such roots in the EBPM logic, and the way to understand problems and solutions. EBM and EBP are based on the simple concept of finding the best solution which integrates past experience into the problem-solving process. The practice of EBM needs to integrate individual clinical expertise with the best available external critical evidence from systematic research, in consultation with the patient, in order to understand which treatment suits the patient best. In this sense, we can say (with Solesbury 2001) that EBM and EBP have both an educational and a clinical function. In other
  • 27. words, this kind of evidence is based on a regular assessment through a defined protocol of the evidence coming from all the research. In order to respond to this need of EBM for systematic up-to-date review, the Cochrane Collaboration was initiated in 1993, which deals with the collection of all such information. Subsequently, given the good results obtained in medicine using such an approach, politi- cians were interested in using the same scientific method to support public decisions and legitimise policy making. Given the success of the Cochrane Collaboration in the production of a “gold standard”, the Campbell Collaboration was established in 2000, which aims at systematically reviewing social science in the fields of education, crime, justice and social welfare. 123 Ann Oper Res (2016) 236:15–38 25 3.2.2 EBPM in the UK In 1994, the Labour party termed itself as “New Labour” announcing a new era: “New Labour” was expected to be a party of ideas and ideals but not of outdated ideology. “What counts is what works”. The objectives were radical. The means would be “modern” (Blair 1994). In this first announcement it was possible to recognise the same roots and philosophy
  • 28. pervading EBM and EBP. Moreover, in 1997 when the Labour Party won the general elections they decided to open a new style of policy making. In order to organise and promote it, they published the Modernising Government White Paper (Cabinet Office, London 1999), in which they argue that: government must be willing constantly to re-evaluate what it is doing so as to produce policies that really deal with problems; that are forward-looking and shaped by the evi- dence rather than a response to short-term pressures; that tackle causes not symptoms; that are measured by results rather than activity; that are flexible and innovative rather than closed and bureaucratic; and that promote compliance rather than avoidance or fraud. To meet people’s rising expectations, policy making must also be a process of continuous learning and improvement. (p.15) better focus on policies that will deliver long-term goals. (p.16) Government should regard policy making as a continuous, learning process(...)We must make more use of pilot schemes to encourage innovations and test whether they work.(p.17) encourage innovation and share good practice (p.37) In this document they describe the goals of the new government changing the approach to public policy. This change implied the evolution of the evidence-based method and logic. We
  • 29. can consider the Government White Paper as the Manifesto of United Kingdom’s EBPM, where EBPM covers the same role of EBM, that is to give accountability at the field of policy. Such accountability is promoted by two main forms of evidence (Sanderson 2002): • the first one refers to the goals and then to the effectiveness of the work of the government; • the second one refers to the effective results and, consequently, the knowledge on how well policy works under different circumstances. In practice, policy processes have been viewed as learning processes that have to be studied, analysed and monitored in order to obtain new evidence for building future policies. The expectation was a goal shift of the policy making process: from a short term policy founded on ideology and no-scientific knowledge to a long term policy founded on identified causes of the social problems being faced. Under such a perspective, any components of the policy process based on non scientific arguments are considered a deviation from the “truth/reality” of the problems. In fact, David Blunkett, in his speech in 2000 (Blunkett 2000), emphasised that: This Government has given a clear commitment that we will be guided not by dogma but by an open-minded approach to understanding what works and why. This is central to our agenda for modernising government: using information and knowledge much
  • 30. more effectively and creatively at the heart of policy making and policy delivery. “What works and why” became the UK slogan for EBPM promotion. The following government put emphasis on EBPM, although with some differences. The shift was from 123 26 Ann Oper Res (2016) 236:15–38 policy learning to policy delivery, and thus the need to move away from experimentation and the awareness that what matters most is hard quantitative data. In fact, in recent years EBPM has evolved from a giving attention to any kind of scientific analysis to a great attention on quantitative and economic analysis (Cabinet, Performance and Innovation Unit, London 2001; HM Treasury, London 1997). 3.2.3 Other experiences After being developed in UK, EBPM expanded its influence to other English speaking coun- tries, mainly USA and Australia. In the USA, the most representative event was the foundation in 2001 of the US Coalition for Evidence Based Policy that aims at increasing government effectiveness through the use of rigorous evidence about what works. Evidence is again consciously borrowed by medicine, with the explicit goal of replicating the effectiveness
  • 31. that produced many advances in the field of human health in the field of social policies. Evidence-based policy making was seen as an instrument of rationality that would let society avoiding to waste resources pursuing expensive but ineffective social policies. Evidence is thus a resource-rationing tool (Marston and Watts 2003) in the sense that it indicates the right way to address a social problem, making the country more efficient: focussing the spending only on satisfactory policies. In Australia there is no formal coalition and no explicit formal willingness to apply EBPM, but the language of evidence has spread to many fields of public policy: we can see examples in health and family services, community services, education and immigration. In these fields we can find sentences referring to evidence that implies that EBPM is being actively promoted in a specific way: “research helps to depoliticise educational reform” (Department of Education, Training ad Youth 2000, p. 190) or, as Mark Latham, then leader of the Federal Parliamentary Australian Labor Party, put it in his speech on welfare reform (Latham 2001, p. 1): My conclusion is that we should forget about the grand theories of sociology and the ideologies of the old politics and pursue an evidence based approach to welfare reform Here, evidence is considered as a solution far away from political ideology and thus as an apolitical solution; Smith and Kulynych (2002, p. 163) state that
  • 32. “efficiency becomes the primary political value, replacing discussion of justice and interest”. Is it true? Does the use of evidence mean avoiding ideology? Is efficiency improved using evidence? In the next section we will discuss whether indeed the use of “evidence” can effectively replace “ideology” and “values”. 4 Criticism of EBPM Within the EBPM debate, authors cast doubt on whether introducing evidence in the policy making process is actually innovative. If previously policy making could be described as a “swamp” (Schn 1979) characterised by complexity, uncertainty and ignorance, then EBPM aims to move onto firmer ground in which sound evidence, rather than political ideology or prejudice, could drive policy. The question is whether this confidence in the power of evidence is really a step forward, as EBPM could appear as a return to the old time trust in instrumental rationality. In fact Parson (2002, p. 44) states that: 123 Ann Oper Res (2016) 236:15–38 27 EBPM must be understood as a project focused on enhancing the techniques of man- aging and controlling the policy making process as opposed to either improving the
capacities of social science to influence the practices of democracy.

Sanderson (2002, p. 1) argues that: "the resurgence of evidence-based policy making might be seen as a reaffirmation of the modernist project, the enduring legacy of the Enlightenment, involving the improvement of the world through the application of reason". Actually, in the UK version of EBPM the focus was on effectiveness, efficiency and value for money. This experience is characterised by a managerial emphasis (Trinder 2000, p. 19). EBPM, in its effort to implement accountability, is linked with the instrumentalist wave of managerial reforms that have infiltrated public administration practices in many western democracies over the past three decades (see Almquist et al. 2012). Despite claims to the contrary, these managerial reforms can be assimilated to the same technocratic logic, concerned with procedural competence rather than substantive output (Marston and Watts 2003). In the following sections we introduce the main issues for which EBPM has been criticised: the existence of multiple evidences; the multiplicity of factors influencing policy making; and the contingent character of evidence.

4.1 Multiple evidences

There are many typologies of evidence: the distinction most used is between hard/objective and soft/subjective. The first one includes primary quantitative
data collected by researchers from experiments, secondary quantitative social and epidemiological data collected by government agencies, clinical trials, and interview or questionnaire-based social surveys. Other sources of evidence, typically devalued as "soft" (Marston and Watts 2003, p. 151), are photographs, literary texts, official files, autobiographical material like diaries and letters, the files of a newspaper, and ethnographic and participant observer accounts. Davies defines a scheme (Davies 2004, p. 15) in which he shows that there are seven kinds of evidence that originate from scientific research: impact evidence, implementation evidence, descriptive analytical evidence, public attitudes and understanding, statistical modelling, economic evidence, and ethical evidence. Moreover, Davies states that in policy making "privileging any type of research evidence or research methodology, is generally inappropriate" (Davies 2004, p. 11). Thus, a balance between the different kinds of research methodology, and general competence across the full range of research methods, is required. In contrast, Sanderson (2002) argues for developing impact evidence alone (as defined by Davies), in order to build policies through long-term impact evaluation. Given the different opinions in the debate, and in order to avoid misunderstandings in practice, the UK Cabinet Office clarified the meaning of evidence in the White Paper on Modernising Government (Cabinet Office, London 1999), defining it as:
expert knowledge; published research; existing research; stakeholder consultations; previous policy evaluations; the Internet; outcomes from consultations; costings of policy options; output from economic and statistical modelling.

From this definition it seems that this conception of evidence allows for both "conventional and unconventional scientific methods". However, a hierarchy of these is implicitly established in practice, and this kind of practice is far from being neutral or objective. Indeed, the selection of the "more appropriate" evidence, or of the one with "greater weight", is necessarily a limitation of what counts as valid knowledge. Building a hierarchy of knowledge means considering some forms of knowledge to be more closely related to reality/truth. Even if this could be considered correct in some cases, it is never neutral. The same thing happens each time we choose one theory or method rather than another: every theory is based on some hypotheses or interpretations of a complex reality, and none of them is all-encompassing. In choosing what counts as valid knowledge for policies, policy makers implicitly state their interpretation of reality. For instance, observing the recommended and adopted evidences
in the UK experience, we can deduce that the underlying interpretation of reality could be characterised as post-positivist, in the sense that the stress is on the cause-effect relationship. This claim is supported by the importance given to the concepts of effectiveness and efficiency. In UK policy evaluation these two concepts often became the first, if not the only, qualification a policy needed in order to be implemented (HM Treasury, London 2003).

If the discussion up to this point highlighted the problem that around us there are multiple evidences (statistics, surveys, polls etc.) which are difficult to choose from and/or prioritise, we now need to focus on another aspect, often neglected or underestimated when talking about evidence: the multiple interpretations that the same "evidence" may carry. This is all the more important since evidence is expected to be used within a policy cycle where multiple stakeholders with multiple concerns are involved, who are naturally going to interpret the evidence differently. To make things more complicated, such multiple interpretations can be influenced by how the evidence is technically produced. We present two examples to better make our point.

Example 4.1 (Air Quality) Consider the case of Air Quality (see Bouyssou et al. 2000). "Evidence" about Air Quality (in France) is expected to be provided by the ATMO index. This index takes into account four pollutants, measured (it does not matter here how) on a scale from 0 to 10, and chooses the maximum among them (the
worst). This way of constructing the index reflects the approximate knowledge we have about the impact of these four pollutants on health: any one of them is supposed to be equally unhealthy. However, consider three consecutive measurements: the first one, at time t1, is the current situation (at a certain location), while t2 and t3 refer to the situations observed after two different policies have been implemented, using the same budget. The case is summarised in Table 1.

Table 1  Three different measurements of Air Quality

Pollutant   CO2   SO2   O3   Dust
t1            5     5    8      8
t2            3     3    8      2
t3            7     7    7      7

If we consider the ATMO index to be "evidence", then the policy leading to situation t3 should be considered better than the policy leading to situation t2. Indeed, for the ATMO index the quality of the air did not improve from t1 to t2 (the maximum stays at 8), while it did from t1 to t3 (the maximum drops from 8 to 7). Obviously this is counter-intuitive! One could claim that the disaggregate information should be used as "evidence", but then are we sure that each of the four measures does not suffer from the same type of problem the ATMO index presents? We shall not discuss here what the more appropriate way to measure Air Quality should be. What we want to emphasise is that the ATMO index can be used as "evidence" about whether an observed situation is "healthy", but cannot be used as "evidence" about the effectiveness of Air Quality improvement policies. In other terms, this index (like any other) allows multiple interpretations which are more or less suitable to the type of assessment we are interested in performing. Such interpretations are strongly related, among others, to how the index has been (technically) established.
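The aggregation at stake is easy to reproduce. The following minimal Python sketch is our own illustration: the function name atmo_index and the data layout are invented for this example and are not an official specification of the index.

```python
# A minimal sketch of the max-based aggregation discussed above.
# `atmo_index` and the data layout are ours, not an official API.

def atmo_index(readings):
    """Aggregate pollutant sub-indices (0-10 scale) by keeping the worst one."""
    return max(readings.values())

measurements = {
    "t1": {"CO2": 5, "SO2": 5, "O3": 8, "Dust": 8},  # current situation
    "t2": {"CO2": 3, "SO2": 3, "O3": 8, "Dust": 2},  # after the policy leading to t2
    "t3": {"CO2": 7, "SO2": 7, "O3": 7, "Dust": 7},  # after the policy leading to t3
}

for label, readings in measurements.items():
    print(label, atmo_index(readings))
# Output: t1 -> 8, t2 -> 8, t3 -> 7. The index registers no improvement
# from t1 to t2, despite three pollutants dropping sharply, and declares
# t3 the better outcome although two pollutants got worse.
```

The counter-intuitive comparison comes from the non-compensatory nature of the max operator: any improvement on a pollutant other than the current worst one is invisible to the index.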
Example 4.2 (Statistics about poverty) Some may claim that raw statistics reporting "facts" should be considered as "evidence". But then consider the following fact: 95% of rural households in country XXXX do not have tap water available. What should be inferred from that? Perhaps that connecting rural households to fresh water distribution is a national priority, requiring an appropriate policy (and corresponding investments). Surprisingly, if we ask the household owners what they think about that, we could discover that this is not a priority for them. They might claim that they do not see the problem: they fetch water from the nearby water pools. Ok! Here is the problem. Typically, water is fetched
by women, while the household owners are men. Certainly, men do not see the problem. What happens if we ask the women? Surprisingly, the women also claim that there is no problem! Indeed, a more thorough investigation reveals that going for water is one of the rare occasions they have to get out of the home! Even though fetching water is a hard task, it pays, because it allows the women to have some social life. The story, which is a simplified version of a real one, tells us that raw statistics do not automatically reveal any truth. Facts need to be interpreted in order to be used in any decision process, and such interpretations are related to subjective values, constraints, customs, history or social norms, among others. The example tells us that "evidence" does not exist independently from the decision process in which it is expected to be used. Although "facts" exist, choosing the "facts that matter" is a subjective process, and interpreting these "facts" is another subjective process.

Summarising both examples, we can claim that looking for evidence while considering how to solve a problem is certainly a sound attitude, and certainly preferable to a purely intuitive approach. However, contrary to the dominant idea that "evidence" should guide policy making, it seems that it is the policy that should guide us in looking for appropriate "evidence". Actually we should consider questions of the type:

• Who needs this evidence?
• Why does (s)he need this evidence (what is the problem)?
• What is the purpose (how is the evidence going to be used)?
• Who else is affected by such evidence and how?
• What resources do we commit and what do we expect?

It turns out that such questions are practically the same ones we need to answer when trying to model a decision aiding process, see Tsoukiàs (2007). Under such a perspective we can still consider that we follow a scientific approach in aiding policy makers involved in a policy cycle, but without denying subjectivity, political priorities, values or culture, placing them, instead, at the center of the methodology to be used. The problem in policy making is not whether there is enough relevant information, but how to consider, construct and interpret it: "the danger is not that one uses no evidence at all, but that one uses simply the most readily available" (Perri 2002).

4.2 Policy making as a result of many factors

"Evidence" is not the only determinant of policy making: it is just one of several factors that influence policy makers in choosing and determining policies. Certainly, EBPM represents an evolution with respect to the traditional approach that largely considers power, people and politics as the only policy making factors (Parson 2002). However, it is clear that we have
to overcome the "naïve" concept of Evidence-Based Policy Making, where research replaces policy, and experts/technicians replace the politicians. Policies are complex "objects", and the policy making process is influenced by several relevant factors. Davies (2004) indicated the following ones:

• Experience, Expertise and Judgement
Policy making involves several stakeholders, each carrying different types of knowledge such as grounded experience (of local groups, citizens, economic actors), expertise (of technical staff, scientists, experts) and judgements (public opinion, elected bodies, committees). Such knowledge is expected to be integrated in the policy making process (Nutley et al. 2003). This can play a significant role when the existing information is imperfect or non-existent (Grimshaw et al. 2003).

• Resources
Establishing a policy mobilises material and immaterial resources (knowledge, authority, capital, land, etc.) and results in the allocation of resources aimed at implementing a plan of actions. Such resources are bounded (and scarce). The result is a quest for efficiency as far as both the policy making process and its outcomes are concerned. This "economic" aspect of the policy making process is perhaps the most studied in terms of supporting methodologies and practices (Cabinet Office, London 2001, 2003; HM Treasury, London 2003; ODPM, Office of the Deputy Prime Minister, London
2000).

• Values
Values are the essence of policy making. They induce preferences, priorities and judgements, and justify actions. They have several different origins: ideology, culture, religion, beliefs, knowledge, discussion etc. It is unlikely that any policy making process can be legitimated without making reference to some set of values. However, it should be noted that values evolve over time in unexpected directions (consider the value of the environment over the last 50 years, the value of women's rights over the last 150 years, or the value of individual freedom over the last 250 years).

• Habit and Tradition
Political institutions have their own organisational inertia. The policy making process is characterised by procedures and patterns often rooted in culture and history, but nevertheless constraining the potential outcomes. Often such constraints appear in the form of fundamental laws (such as constitutions), but equally likely they can appear as socially constructed legitimation processes and outcomes.

• Lobbyists, Pressure Groups and Consultants
Any policy making process mobilises pressure groups, informal or organised lobbyists, as well as the opinion of experts. Such stakeholders are not always visible and have a less systematic influence. However, they play a key role in the participation process, allowing specific concerns, stakes and interests to find their way into the
discussion.

• Pragmatics and Contingencies
Policy making, agendas and decisions are influenced by unanticipated contingencies and "emergency" procedures which do not necessarily fit with rational policy making. Policies are expected to take into account long term uncertainties as well as the aspirations of future generations. This can be in contradiction with a contingent, short term view of policy making (Phillips Inquiry, London 2001; Royal Society, London 2002).

The above list of factors, which are pragmatically considered when conceiving or evaluating a policy, shows that "evidence" needs to be understood in terms of knowledge produced within a decision aiding process, and not as objective information revealing the truth.

4.3 Evidence as contingent knowledge

Young et al. (2002) identify five models in which the relation between research and policy can be shaped and defined, five ways through which knowledge inputs are managed through the policy cycle:

• the knowledge-driven model,
• the problem-solving model,
• the interactive model,
• the political/tactical model,
• the enlightenment model.

These models are used to understand how evidence is thought to shape or inform policy, and to explore the assumptions underlying evidence-based policy making. The first two models are the extreme forms; they differ in the direction of influence of the relationship: in the first (knowledge-driven model) research leads policy in a sort of scientific inevitability, whereas in the second (problem-solving model) research priorities follow policy issues. In the interactive model there is no position of influence, and the relationship is characterised as mutual, subtle and complex. The political/tactical model sees research priorities as set by the political agenda; studies are used to support a political position. In the enlightenment model, however, the benefits of research are indirect, because they contribute to the comprehension of the context in which the policies will act. These five models are steps in a ladder between the power of authority and the power of expertise. "Emphasising the role of power and authority at the expense of knowledge and expertise in public affairs seems cynical; emphasising the latter at the expense of the former seems naïve" (Solesbury 2001, p. 9). In the experience of the United Kingdom, we cannot recognise any of these as a dominant model (Wells 2007, p. 25). Sanderson (2002, p. 5) defines a set of variables that are implied in the definition of a model:
"the nature of knowledge and evidence; the way in which social systems and policies work; the ways in which evaluation can provide the evidence needed; the basis upon which evaluation is applied in improving policy and practice".

Under such a perspective, the Labour government's announcement of a new era in which policy would be shaped by evidence, thereby implying that "the era of ideologically driven politics is over" (Nutley et al. 2003, p. 3), is controversial, to say the least. It is neither neutral nor uncontested; evidence is a fundamentally ambiguous term. The way in which EBPM has been perceived and practiced reveals an idea of policy making based on a "cause-effect" principle. To simplify, social outcomes are seen as the result of how certain mechanisms work within a certain social context: once we know the mechanisms and the context, we can foresee the consequences. This approach has been criticised by constructivists, for whom the "knowledge of the social world is socially constructed and culturally and historically contingent" (Sanderson 2002, p. 6). Sanderson (2002) points out that research does not have the role of producing objectivity, or solutions for policy makers, concluding that constructivism needs to be reconciled with "practical requirements".

5 Discussion

Let us summarise our claims.

• Policies have a twofold impact:
– they deliberate an allocation of resources aimed at pursuing some objectives (not always measurable though);
– they generate a legitimation space, thus producing inclusion (or exclusion), inviting stakeholders to enter within (or leave) it.

• Policy making is a long term decision making process with specific characteristics:
– it entails participation "de facto" (due to the legitimation associated with any policy);
– there are moments, at least one, of public deliberation;
– it is expected to be accountable, not only to the involved stakeholders, but also to the citizens in general;
– it is guided by the search for legitimation, both for the policy itself and for the policy makers.

• Policy making should be viewed as a "policy cycle", from the perception of a problem, to the design of policies, their legitimation, their implementation, their monitoring, their assessment etc. Under such a perspective, a policy cycle:
– requires knowledge aimed at supporting the processes within
it;
– produces knowledge used both within the cycle and beyond it.

• Within a policy cycle several decision problems arise, such as: Which aspects of the problem should be considered as more important? What information is relevant and should be used? Who are the principal stakeholders? Who else is affected by the situation and the possible policies? Which resources are allocated, where, how and when? What matters in terms of potential consequences? Decisions (of any type) result from combining factual information with subjective values, opinions and likelihoods. For this reason, decisions are synonymous with responsibility. Under such a perspective, there is nothing like an "objective decision". Decisions are made by somebody, or a group of people, and reflect his/her/their standpoint within a decision process. The consequence is that there will never exist an "objectively defined policy". Policies will always reflect what subjectively matters for those involved in the policy cycle.

• What could be considered a legitimated source for the values, opinions and likelihoods to be considered by the policy makers in the presence of multiple stakeholders and multiple scenarios? The market? A referendum? A focus group? A public debate? A poll? Whatever we adopt, we should remember that:
– it is a subjective choice to privilege any source of knowledge;
– there exist different forms and levels of participation (see
Daniell et al. 2010), and these do not necessarily result in improving the efficiency of the decision process (sometimes more participation may result in less efficiency);
– the validity of any legitimation claim is a social construction, resulting from argumentation about facts, norms, values, sincerity and relevance.

Our survey of the EBPM movement highlights a number of issues:
– there has been a legitimate demand for allowing scientific knowledge, facts, statistics and other information sources to acquire a status within the policy cycle;
– despite declarations to the contrary, there has been a clear trend towards considering such "evidence" as the driving force in the process of designing policies, thus claiming for such evidence a status of "objective knowledge";
– such a trend contradicts the nature of the policy cycle both from a substantial point of view (knowledge is not objective, it is functional to some purpose) and from a procedural point of view (there exist many evidences with different possible interpretations, arbitrarily used by the stakeholders within the policy cycle);
– arguing about policies needs legitimate knowledge; it also produces knowledge which in turn needs to become legitimated.

The above discussion leads us to consider the problem of how knowledge is produced (constructed) in order to support decision making. The construction of knowledge (or evidence) in order to design a business policy has already been considered, establishing what is today called "Business Analytics".2 Business analytics was initially developed mainly for the private sector (Davenport et al. 2010; Davenport and Harris 2007), although applications in other areas, for example health analytics and learning analytics, are also growing (see Buckingham Shum 2012; Fitzsimmons 2010). Seen from a very pragmatic point of view, "analytics" is an umbrella term under which many different methods and approaches converge. These include statistics, data mining, business intelligence, knowledge engineering and extraction, decision support systems and, to a larger extent, operational research and decision analysis. The key idea consists in developing methods through which it is possible to obtain useful information and knowledge for some purpose, this typically being the conduct of a decision process in some business application. The distinctive feature in developing "analytics" has been to merge different techniques
and methods in order to optimise both the learning dimension and its applicability in real decision processes. In recent years "analytics" has been associated with the term "big data", in order to take into account the availability of large databases (and knowledge bases as well), possibly in open access (Open Data organisations are now becoming increasingly available). Such data come in very heterogeneous forms, and a key challenge has been the ability to merge such different sources, in addition to solving the hard algorithmic problems presented by the huge volume of data available. However, the mainstream approach developed in this domain is based on two restrictive hypotheses. The first is that the learning process is basically "data driven", with little (if any) attention paid to the values which may also drive learning. This concerns both "what" matters and "why" it matters, potentially incorporating considerations of the extent to which different stakeholder perspectives are valued or trusted. The second is that, in order to guarantee the efficiency of the learning process, seen from an algorithmic point of view as a process of pattern recognition, it is necessary to use "learning benchmarks" against which it is possible to measure the accuracy and efficiency of the algorithms. While this perspective makes sense for many applications of machine learning, it is less clear how it can be useful in cases where learning concerns values, preferences, likelihoods, beliefs and other subjectively established information, which is potentially revisable as constructive learning takes
place.

2 This paragraph is adapted from Tsoukiàs et al. (2013).

What does this tell us as decision analysts? We denote with this term the professionals/practitioners who are invited to enter the policy cycle as "experts" or as "technical staff", with the explicit or implicit role of producing the decision support knowledge within the cycle. We can summarise the type of demand decision analysts receive, in terms of producing information and knowledge aimed at aiding the stakeholders, the policy makers or the citizens, to:
– better understand the stakes and issues at play;
– better understand the potential consequences of potential actions;
– better foresee potential unexpected/unwanted outcomes;
– better justify, explain, argue about options, decisions and strategies;
– design/construct/conceive new options beyond the existing ones;
– improve participation, inclusion and, ultimately, democracy.

In doing so, decision analysts need to use existing information (facts, science, grounded knowledge, best practices etc.), and need to constructively model
values, opinions and likelihoods for the stakeholders, and need to do so in a meaningful, operational and legitimated way. We denote this set of skills by the term "policy analytics" (Tsoukiàs et al. 2013):

To support policy makers in a way that is meaningful (in the sense of being relevant and adding value to the process), operational (in the sense of being practically feasible) and legitimating (in the sense of ensuring transparency and accountability), decision analysts need to draw on a wide range of existing data and knowledge (including factual information, scientific knowledge, and expert knowledge in its many forms) and to combine this with a constructive approach to surfacing, modelling and understanding the opinions, values and judgements of the range of relevant stakeholders. We use the term "Policy Analytics" to denote the development and application of such skills, methodologies, methods and technologies, which aim to support relevant stakeholders engaged at any stage of a policy cycle, with the aim of facilitating meaningful and informative hindsight, insight and foresight.

6 Conclusion

Designing, implementing and assessing public policies is a major challenge for our societies. Our paper was guided by a general underlying question: why is aiding the design, implementation and assessment of public policies so different from the other decision aiding skills used when the clients are business oriented and the problem context does not concern public issues?
The aim of this paper was twofold. On the one hand, we tried to understand why public policies represent a specific challenge for decision aiding. On the other, we analysed the so-called "evidence-based policy making" literature, since it represents the most recent attempt, originating from the clients' side (the policy makers), to focus on the relation between the policy making process and the technical, scientific, expert, analytical support that such a process demands. The standpoint of our analysis has clearly been decision analytic. We do not underestimate the sociological or political dimension of policy modelling support. We have rather focused on the challenges policy making offers to our discipline and profession.

The analysis of the so-called policy cycle shows that policy making is a decision making process with precise characteristics: a long time horizon; the use, exchange and redistribution of public resources; a de facto participatory nature; and a deliberative, accountability- and legitimation-driven character. This is related to the specific nature of public policies which, besides being deliberations about resource allocation, create a legitimation space for stakeholders and citizens. If decision aiding is a process generating knowledge (possibly in an analytic form, but not only) to be used in a decision process, then it is clear that the knowledge required to support policy making processes needs to address such characteristics (for instance, addressing the problem of legitimate knowledge or of legitimated argumentation).
Under such a perspective, our paper suggests that the evidence-based policy making approach, although originating from a legitimated demand, fails to address such challenges. The reason is that evidence does not exist independently from policies and, once identified, does not "objectively" drive the policy cycle. However, the demand for using analytic information in order to support policy making remains valid, but needs to be addressed differently. This new frame, briefly presented in the paper, we call "policy analytics".

Acknowledgments This paper has gone through multiple versions. The evolution of the paper benefitted from the remarks of many colleagues (mostly during the special sessions about policy analytics at the EURO conferences in Vilnius, 2012 and Rome, 2013), among which we would like to mention Valerie Belton and Gilberto Montibeller, with whom we wrote a position paper about policy analytics (originally scheduled to appear after this one). The comments of three referees as well as of the guest editors helped us very much to improve the paper. The support of a PEPS/PSL-CNRS 2013 grant is acknowledged by the third author. When the first version of this paper was written the first author was with the Department of Architecture at Alghero, University of Sassari, IT, while the second author was with the DIMEG, University of Padova, IT.
The support of both is acknowledged.

References

Alexander, E. R. (2000). Rationality revisited: Planning paradigms in a post-postmodernist perspective. Journal of Planning Education and Research, 19, 242–256.
Almquist, R., Grossi, G., van Helden, G. J., & Reichard, C. (2013). Public sector governance and accountability. Critical Perspectives on Accounting, 24, 479–487.
Anderson, J. (1975). Public policy making. New York: Praeger Publishing. 9 editions, last in 2006 with Houghton Mifflin.
Aristotle. (1990). Nicomachean ethics. Oxford: Oxford University Press. Originally published in 350 BC; English edition by I. Bywater.
Beck, U. (1992). Risk society: Towards a new modernity. New Delhi: Sage. Translated from the German Risikogesellschaft, published in 1986.
Blair, T. (1994). Labour party manifesto. http://www.labour-party.org.uk/manifestos/1997/1997-labour-manifesto.shtml
Blunkett, D. (2000). Influence or irrelevance: Can social science improve government? Speech to the Economic and Social Research Council.
Bouyssou, D., Marchant, T., Pirlot, M., Perny, P., Tsoukiàs, A., & Vincke, P. (2000). Evaluation and decision models: A critical perspective. Dordrecht: Kluwer.
Bouyssou, D., Marchant, T., Pirlot, M., Tsoukiàs, A., & Vincke, P. (2006). Evaluation and decision models with multiple criteria: Stepping stones for the analyst. Berlin: Springer.
Brewer, G. D. (1974). The policy sciences emerge: To nurture and structure a discipline. Policy Sciences, 5, 239–244.
Brewer, G., & de Leon, P. (1983). The foundations of policy analysis. Monterey, CA: Brooks/Cole.
Buckingham Shum, S. (2012). Learning analytics. UNESCO Policy Brief.
Cabinet Office, London. (1999). Modernising Government White Paper.
Cabinet Office, London. (2001). Regulatory Impact Appraisal.
Cabinet Office, London. (2003). The Magenta Book.
Cabinet, Performance and Innovation Unit, London. (2001). A discussion paper: Better policy delivery and design.
Comte, A. (1853). The positive philosophy of Auguste Comte. Chapman (reissued by Cambridge University Press, 2009).
Comte, A. (1865). A general view of positivism. Trubner and Co. (reissued by Cambridge University Press, 2009).
Daniell, K. A., Mazri, C., & Tsoukiàs, A. (2010). Real world decision-aiding: A case of participatory water management. In S. French & D. Rios-Insua (Eds.), e-Democracy: A group decision and negotiation perspective (pp. 125–150). Berlin: Springer.
Davenport, T. H., & Harris, J. H. (2007). Competing on analytics: The new science of winning. Harvard: Harvard Business School Press.
Davenport, T. H., Harris, J. G., & Morison, R. (2010). Analytics at work: Smarter decisions, better results. Boston: Harvard Business Press.
Davies, P. T. (2004). Is evidence-based government possible? Jerry Lee Lecture: http://www.nationalschool.gov.uk/policyhub/downloads/JerryLeeLecture1202041
Davies, P. T. (1999). What is evidence-based education? British Journal of Educational Studies, 47(2), 108–121.
De Saint-Simon, C. H. (1976). Political thought of Saint-Simon. Oxford: Oxford University Press.
Dente, B. (2011). Le decisioni di policy. Bologna: Il Mulino. (In Italian).
Department of Education, Training and Youth. (2000). The impact of educational research. Technical report, Higher Education Division.
Dowie, J. (1996). Evidence based medicine: Needs to be within framework of decision making based on decision analysis. British Medical Journal, 20, 170–171.
Dror, Y. (1964). Muddling through: "science" or inertia? Public Administration Review, 24, 153–157.
Dryzek, J. S. (2006). Policy analysis as critique. In M. Moran, M. Rein, & R. E. Goodin (Eds.), The Oxford handbook of public policy, chapter 9 (pp. 190–203). New York: Oxford University Press.
Dunn, W. (1981). Public policy analysis: An introduction. Englewood Cliffs, NJ: Prentice Hall.
Dye, T. (1972). Understanding public policy. Englewood Cliffs, NJ: Prentice Hall.
Etzioni, A. (1967). Mixed scanning: A third approach to decision making. Public Administration Review, 27, 387–392.
Faludi, A. (1973). Planning theory. Oxford: Oxford University Press.
Fitzsimmons, P. (2010). Rapid access to information: The key to cutting costs in the NHS. British Journal of Healthcare Management, 16, 448–450.
Friedman, J. (1987). Planning in the public domain: From knowledge to actions. Princeton, NJ: Princeton University Press.
Giddens, A. (1974). Positivism and sociology. London: Heinemann.
Goodin, R. E., Rein, M., & Moran, M. (2006). The public and its policies. In M. Moran, M. Rein, & R. E.
Goodin (Eds.), The Oxford handbook of public policy, chapter 1 (pp. 3–35). New York: Oxford University Press.
Gray, J. A. M. (1997). Evidence-based healthcare: How to make health policy and management decisions. New York: Churchill Livingstone.
Grimshaw, J. M., Thomas, R. M., MacLennan, G., Fraser, C., & Ramsay, C. R. (2003). Effectiveness and efficiency of guideline dissemination and implementation strategies. Final report. Aberdeen: Health Services Research Unit.
Habermas, J. (1984). The theory of communicative action. Oxford: Polity Press.
Hammond, P. J. (1997). Rationality in economics. Rivista Internazionale di Scienze Sociali, CV, 247–288.
Hanfling, O. (1981). Logical positivism. Oxford: Blackwell.
Harsanyi, J. C. (1955). Cardinal welfare, individualistic ethics, and interpersonal comparisons of utility. Journal of Political Economy, 63, 309–321.
Hill, M. (1997). The public policy process. Harlow, England: Pearson Education Limited.
HM Treasury, London. (1997). Appraisal and evaluation in central government. London: The Green Book.
HM Treasury, London. (2003). The green book: A guide to appraisal and evaluation.
Hostovsky, C. (2006). The paradox of the rational comprehensive model of planning: Tales from waste management planning in Ontario, Canada. Journal of Planning Education and Research, 25, 382–395.
Jann, W., & Wegrich, K. (2007). Theories of the policy cycle.
In F. Fischer, G. J. Miller, & M. S. Sidney (Eds.), Handbook of public policy analysis (pp. 43–62). Boca Raton, FL: CRC Press.
Jenkins, W. (1978). Policy analysis: A political and organizational perspective. Oxford: Martin Robertson.
Kingdon, J. W. (1984). Agendas, alternatives and public policies. Boston, NY: Little Brown.
Kraft, M., & Furlong, S. R. (2007). Public policy: Politics, analysis and alternatives (2nd ed.). Washington: CQ Press.
Lasswell, H. D. (1948). Power and personality. New York: Norton.
Lasswell, H. D. (1956). The decision process: Seven categories of functional analysis. College Park: University of Maryland Press.
Lasswell, H. D. (1965). World politics and personal security. New York: Free Press.
Latham, M. (2001). The myths of the welfare state. Keynote presentation. Brisbane: Institute of Public Administration Australia, Qld Division.
Lerner, D., & Lasswell, H. D. (1951). The policy sciences. Stanford: Stanford University Press.
Lindblom, C. E. (1959). The science of muddling through. Public Administration Review, 19, 78–88.
March, J. G., & Olsen, J. P. (1976). Ambiguity and choice in organizations. Bergen: Universitetsforlaget.
March, J. G., & Olsen, J. P. (1989). Rediscovering institutions: The organizational basis of politics. New York: The Free Press.
Marston, G., & Watts, R. (2003). Tampering with the evidence: A critical appraisal of evidence-based policy-making. The Drawing Board: An Australian Review of Public Affairs, 3, 143–166.
May, J. V., & Wildavsky, A. B. (1978). The policy cycle. Beverly Hills: Sage Publications.
Mazri, C., Debray, B., Merad, M., & Tsoukiàs, A. (2012). Participative design of participation structures: A general approach and some risk management case studies. Cahier du LAMSADE, 327, Université Paris Dauphine.
Mazri, C., Lucertini, G., Olivotto, A., Prod'homme, G., & Tsoukiàs, A. (2014). Protection of transport infrastructures against major accidents in land use planning policies: A decision support approach. Journal of Loss Prevention in the Process Industries, 27, 119–129.
  • 62. ODPM, Office of the Deputy Prime Minister, London (2000). Integrated Policy Appraisal. Ostanello,A.,&Tsoukiàs,A.(1993).Anexplicativemodelofpublicin terorganisationalinteractions.European Journal of Operational Research, 70, 67–82. Ostrom, V., & Ostrom, E. (1971). Public choice: A different approach to the study of public administration. Public Administration Review, 31, 203–216. Ostrom, E., & Ostrom, V. (2004). The quest for meaning in public choice. American Journal of Economics and Sociology, 63(1), 105–147. Parson, W. (2002). From muddling through to muddling up— evidence based policy making and the moderni- sation of British Government. Public Policy and Administration, 17, 43–60. Perri, S. (2002). Can policy making be evidence-based? Journal of Integrated Care, 10, 3–8. Phillips Inquiry, London. (2001). The BSE inquiry: The inquiry into BSE and variant CJD in the United Kingdom. Regional Policy Inforegio. (2011). EVALSED: The resource for the evaluation of socio-economic development. European Commission. [Online; Accessed 06 June 2011]. Robbins, L. (1932). An essay on the nature and significance of economic science. London: Macmillan. Royal Society, London (2002). Foot and Mouth Desease 2001: Lessons To Be Learned Inquiry. Sackett, D. L., Rosenberg, W. M., Gray, J. A., Haynes, R. B., & Richardson, W. S. (1996). Evidence based
Sanderson, I. (2002). Evaluation, policy learning and evidence-based policy making. Public Administration, 80(1), 1–22.
Schön, D. A. (1979). Generative metaphor: A perspective on problem-setting in social policy. In A. Ortony (Ed.), Metaphor and thought (pp. 137–163). Cambridge: Cambridge University Press.
Simon, H. A. (1947). Administrative behaviour: A study of decision making processes in administrative organizations. New York: Macmillan.
Simon, H. A. (1955). A behavioural model of rational choice. The Quarterly Journal of Economics, 69, 99–118.
Simon, H. A. (1959). Theories of decision-making in economics and behavioral science. American Economic Review, 49, 253–283.
Simon, H. A. (1962). The architecture of complexity. Proceedings of the American Philosophical Society, 106, 467–482.
Simon, H. A. (1964). Administrative behavior: A study of decision making process in administrative organizations. New York: Macmillan.
Simon, H. A. (1969). The sciences of the artificial. Cambridge: The MIT Press.
Simon, H. A. (1979). Rational decision making in business organisations. American Economic Review, 69, 349–513.
Smith, S., & Kulynych, J. (2002). It may be social, but why is it capital? The social construction of social capital and the politics of language. Politics and Society, 30(1), 149–186.
Solesbury, W. (2001). Evidence based policy: Whence it came and where it's going. ESRC UK Centre for Evidence Based Policy and Practice, October 2001. Working Paper 1.
Sutcliffe, S., & Court, J. (2005). Evidence-based policy making: What is it? How does it work? What relevance for developing countries? London: Overseas Development Institute.
Trinder, L. (2000). Introduction: The context of evidence-based practice. In L. Trinder & S. Reynolds (Eds.), Evidence-based practice: A critical appraisal (pp. 1–16). Oxford: Blackwell Science.
Tsoukiàs, A., Montibeller, G., Lucertini, G., & Belton, V. (2013). Policy analytics: An agenda for research and practice. EURO Journal on Decision Processes, 1, 115–134.
Tsoukiàs, A. (2007). On the concept of decision aiding process: An operational perspective. Annals of Operations Research, 154, 3–27.
Tsoukiàs, A. (2008). From decision theory to decision aiding methodology. European Journal of Operational Research, 187, 138–161.
Watzlawick, P., Beavin, J. H., & Jackson, D. D. (1967). Pragmatics of human communication. New York: W.W. Norton.
Weber, M. (1922). Wirtschaft und Gesellschaft. Tübingen: Mohr.
Wells, P. (2007). New labour and evidence based policy making: 1997–2007. People, Place and Policy, 1, 22–29.
White, L. D. (1926). Introduction to the study of public administration. New York: The Macmillan Company.
  • 66. policy making: state of the art3.1 Premises3.2 Evolution3.2.1 From medicine to social science3.2.2 EBPM in the UK3.2.3 Other experiences4 Criticism of EBPM4.1 Multiple evidences4.2 Policy making as a result of many factors4.3 Evidence as contingent knowledge5 Discussion6 ConclusionAcknowledgmentsReferences *In assessing feasibility, it is important to identify critical barriers that will prevent the policy from being developed or adopted at the current time. For such policies, it may not be worthwhile to spend much time analyzing other factors (e.g., budget and economic impact). However, by identifying these critical barriers, you can be more readily able to identify when they shift and how to act quickly when there is a window of opportunity. Table 1: Policy Analysis: Key Questions Framing Questions —is it legislative, administrative, regulatory, other? vernment or institution will implement? Will enforcement be necessary? How is it funded? Who is responsible for administering the policy?) s the legal landscape surrounding the policy (e.g., court rulings, constitutionality)?
• […] debated previously)?
• What is the value-added of the policy?
• […] long-term outcomes?
• […] consequences of the policy?

Criteria Questions

Public Health Impact: Potential for the policy to impact risk factors, quality of life, disparities, morbidity and mortality
• […] increase access, protect from exposure)?
• […] and burden (including impact on risk factor, quality of life, morbidity and mortality)?
  o What population(s) will benefit? How much? When?
  o What population(s) will be negatively impacted? How much? When?
• […] How?
• Are there gaps in the data/evidence base?

Feasibility*: Likelihood that the policy can be successfully adopted and implemented

Political
• […] history, environment, and policy debate?
• […] opponents? What are their interests and values?
• […] perspectives associated with the policy option (e.g., lack of knowledge, fear of change, force of habit)?
• […] and high priority issues (e.g., sustainability, economic impact)?

Operational
• […] developing, enacting, and implementing the policy?
• […] implemented, and enforced?

Economic and budgetary impacts: Comparison of the costs to enact, implement, and enforce the policy with the value of the benefits

Budget
• […] from a budgetary perspective?
  o e.g., for public (federal, state, local) and private entities to enact, implement, and enforce the policy?

Economic
• […] (cost-savings, costs averted, ROI, cost-effectiveness, cost-benefit analysis, etc.)?
  o How are costs and benefits distributed (e.g., for individuals, businesses, government)?
  o What is the timeline for costs and benefits?
• […] data/evidence base?

BIG DATA AND THE TRANSFORMATION OF PUBLIC POLICY ANALYSIS

Ron S. Jarmin and Amy B. O'Hara

INTRODUCTION

Recent years have seen a growing number of uses of novel "big data" sources to monitor, improve, and study the delivery of public services. By "big data," we mean data generated as a consequence of government, business, or citizen activities. Big data is often said to be characterized by four Vs: volume, velocity, variety, and veracity. Examples range from data from cameras and sensors that permit real-time traffic monitoring to more novel uses such as tracking gunshots through networks of auditory sensors. The latter allow more granular measurement of
criminal activity than traditional police reports (Carr & Doleac, 2015) and are suggestive of how valuable these new data sources can be to policy analysts and applied social scientists.

New data from sensors or apps where citizens engage directly with government service providers (Crawford & Walters, 2013) will undoubtedly help transform public policy analysis. However, many programs directly impact individuals, households, firms, or other private sector organizations. These programs generate their own administrative "big data." Indeed, the administrative records these programs produce often have volume, an assembly of them provides variety, and, depending on source, they have veracity (UNECE, 2013). Improvements in computation and analytics have streamlined processing of such data, allowing analysts to draw actionable intelligence from them. Said differently, compiling rich data resources, especially when linked and analyzed with modern "big data" tools, provides unprecedented opportunity to transform public policy analysis.

However, administrative data have varying privacy or confidentiality statutes or regulations making access difficult. To address this, Congress recently approved funding for the Census Bureau to fortify its platform aligning the appropriate data, tools, and researchers to facilitate evidence building. Relevant program data is held
in a variety of federal and state agencies. These entities possess differing capabilities to support research, have data that is not prepared for analytical use, and have a variety of access requirements and processes. A federated infrastructure permits data from many sources to be curated, integrated, and provisioned in ways that foster credible and transparent evidence building and adhere to the rules and policies of the relevant data owners. The platform being expanded at the Census Bureau addresses the key barriers to research access while permitting information providers to retain control of their data. Housing the infrastructure at the Census Bureau makes sense as it has broad legal authority to obtain data, a robust infrastructure for curating and integrating data from many sources, and an outstanding record as a data steward. This approach is efficient, consistent with privacy principles, and clarifies access procedures by providing one "front door" for policy analysts and researchers needing access to administrative data.

Improving access to administrative records and technology to
link and analyze these "big data" will transform evidence building. As we highlight below, this change is already underway, with analyses that investigate the impacts of policies over time, across programs, and spanning generations. Moreover, the Census Bureau has used administrative records to measure economic activity since the 1940s and is currently adopting best practices from the private sector to modernize economic measurement (Bostic, Jarmin, & Moyer, 2016). Thus, our big data solution for policy analysis is a straightforward extension of these activities.

For social scientists and policy analysts, however, big data sources can complement data carefully "designed" for a given purpose. Surveys are designed data: offering a known universe/frame, stable and well-defined questions and content, and documented treatment of missing data. These have been widely used to analyze the impact of policy changes and longitudinal outcomes using weighted samples. Surveys have a lot of variety, but they fall far short on volume, velocity and, increasingly, on veracity. Surveys have quality challenges as a result of declining respondent cooperation and participation (Meyer, Mok, & Sullivan, 2015). Studies show that underreporting in surveys (Call et al., 2013; Meyer & Mittag, 2015) impacts understanding of welfare and Medicaid participation, using administrative data to show the measurement differences. Using the sources together holds great promise to produce blended statistics. For example, blended statistics could help income studies beset by right tail problems such as top-coding, undercoverage, and underreporting (Burkhauser et al., 2012). Income studies have turned to administrative data (Auten, Gee, & Turner, 2013; Chetty, Hendren, & Katz, 2014) despite limited information on the left tail.
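To make the idea of "blending" concrete, the following hypothetical Python sketch shows one such step: replacing top-coded survey incomes with values from a linked administrative record. The identifiers, the threshold and both data sets are invented for illustration and do not correspond to any actual survey or tax file.

```python
# A hypothetical sketch of one "blending" step: prefer a linked
# administrative value when the survey report is top-coded.
# All names, values and the TOP_CODE threshold are invented.

TOP_CODE = 250_000  # assumed survey top-code threshold

survey = {  # person id -> income as reported in the survey
    "p1": 48_000,
    "p2": 250_000,  # top-coded: the true value is censored in the survey
    "p3": 112_000,
}
admin = {  # person id -> income from a linked administrative record
    "p1": 47_500,
    "p2": 1_340_000,
}

def blended_income(person_id):
    """Use the administrative value when the survey report hits the top-code."""
    reported = survey[person_id]
    if reported >= TOP_CODE and person_id in admin:
        return admin[person_id]
    return reported

for pid in survey:
    print(pid, blended_income(pid))
# p2's blended value recovers the right tail that the survey truncates;
# p3 keeps its survey report because no administrative record is linked.
```

Real blended estimates involve far more (linkage error, weighting, disclosure rules), but the sketch illustrates why combining the two sources addresses the right-tail problems cited above.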
  • 74. great promise to pro- duce blended statistics. For example, blended statistics could help income studies beset by right tail problems such as top-coding, undercoverage, and underreport- ing (Burkhauser et al., 2012). Income studies have turned to administrative data (Auten, Gee, & Turner, 2013; Chetty, Hendren, & Katz, 2014) despite limited infor- mation on the left tail. Surveys have traditionally offered the only view of individuals over time. Longitudinal surveys such as the National Longitudinal Survey, Panel Study on Income Dynamics, and National Center for Education Statistics surveys that have been mainstays in education evaluations are giving way to studies relying on administrative data (Figlio, Karbownik, & Salvanes, 2015). ILLUSTRATIVE USES OF ADMINISTRATIVE DATA FOR POLICY ANALYSIS In this section, we discuss several recent papers as “isolated uses” of using a variety of administrative data sets.1 These studies are typically conducted by teams that obtained access to confidential data through arrangements with administrative or statistical agencies or through collaboration with agency researchers. We argue that this narrowly defined data access, underdeveloped data infrastructure, and inadequate metadata and tools prevent all but highly expert researchers to draw inferences from the data. We will return to access issues in the next section but
  • 75. want to illustrate more specifically how “big data” can aid in policy analysis. Intergenerational mobility studies have been transformed using longitudinal tax data. The Joint Statistical Research Program of the Statistics of Income Division at the IRS enables studies that use long panels of tax returns to observe individuals over time. Isolated access2 for specific tax administration research resulted in the 1 While not our focus here, the recent empirical literature features studies using an increasing number and variety of private sector data sets. The potential of these data resources for public policy analysis is only beginning to be explored. Einav and Levin (2014a, 2014b) review recent work using private data. Researchers affiliated with NSF-Census Research Network have been exploring the use of Twitter data to understand labor market transitions and personal financial management tools to understand household income and spending patterns (see Gelman et al., 2014). 2 See http://www.sciencemag.org/news/2014/05/how-two- economists-got-direct-access-irs-tax-records. Journal of Policy Analysis and Management DOI: 10.1002/pam Published on behalf of the Association for Public Policy Analysis and Management Point/Counterpoint / 717 Equality of Opportunity project,3 deemed “big data” by lead researcher Raj Chetty.
  • 76. Through that data access, Chetty, Hendren, and Katz (2014) analyzed geography and high-mobility areas. Also using tax data and collaborating with IRS analysts, a Stanford team has studied intergenerational mobility (Mitnik et al., 2015). Johnson, Massey, and O’Hara (2015) discuss the potential and challenges of using tax data, survey data, and other administrative data to study mobility. Chetty, Hendren, and Katz’s (2016) follow-up on the Moving to Opportunity (MTO) experiment assesses long-term outcome for the voucher program that started in 1994. Earlier studies failed to find impacts on earnings or employment rates of adults and older children from MTO treatments (e.g., Kling, Liebman, & Katz, 2007; Oreopoulos, 2003; Sanbonmatsu et al., 2006). However, using tax returns from 1996 to 2012, Chetty, Hendren, and Katz (2016) found that moves to lower poverty neigh- borhoods significantly improved college attendance rates and earnings for children who were young (below age 13) when their families moved. In his summary of MTO studies, Rothwell (2015) notes, “As Chetty and his colleagues show, even a few extra years of data can make a large difference.” To conduct the study, Chetty, Hendren, and Katz (2016) had isolated access to the MTO data as well as tax return data. Our final example of isolated access is led by the University of Michigan’s Institute for Research on Science and Innovation (IRIS).4 IRIS is a