production regarding crime. It then examines how a specific understanding of crime, and acting upon such knowledge, violate
established criminal procedure rules. It concludes with a
discussion of the socio-political context
of algorithmic justice.
Keywords
Algorithm, bias, criminal justice, machine learning, sentencing
Algorithmic governance: The context
Our world runs on big data, algorithms and artificial
intelligence (AI), as social networks
suggest whom to befriend, algorithms trade our stocks, and even
romance is no longer a
statistics-free zone (Webb, 2013). In fact, automated decision-
making processes already
influence how decisions are made in banking (O’Hara and
Mason, 2012), payment sec-
tors (Gefferie, 2018) and the financial industry (McGee, 2016),
as well as in insurance
(Ambasna-Jones, 2015; Meek, 2015), education (Ekowo and
Palmer, 2016; Selingo,
2017) and employment (Cohen et al., 2015; O’Neil, 2016).
Applied to social platforms,
they have contributed to the distortion of democratic processes,
such as general
elections, with ‘political contagion’, similar to the ‘emotional
contagion’ of the infamous
Facebook experiment (Kramer et al., 2014) involving hundreds
of millions of individuals
for various political ends, as revealed by the Cambridge
Analytica whistle-blowers in
2018 (Lewis and Hilder, 2018).
This trend is a part of ‘algorithmic governmentality’ (Rouvroy
and Berns, 2013) and
the increased influence of mathematics on all spheres of our
lives (O’Neil, 2016). It is a
part of ‘solutionism’, whereby tech companies offer technical
solutions to all social prob-
lems, including crime (Morozov, 2013). Despite the strong
influence of mathematics and
statistical modelling on all spheres of life, the question of
‘what, then, do we talk about
when we talk about “governing algorithms”?’ (Barocas et al.,
2013) remains largely unan-
swered in the criminal justice domain. How does the justice
sector reflect the trend of the
‘algorithmization’ of society and what are the risks and perils of
this? The importance of
this issue has triggered an emerging new field of enquiry,
epitomized by critical algorithm
studies. They analyse algorithmic biases, filter bubbles and
other aspects of how society
is affected by algorithms. Discrimination in social service
programmes (Eubanks, 2018)
and discrimination in search engines, where they have been
called ‘algorithms of oppres-
sion’ (Noble, 2018), are but some examples of the concerns that
algorithms, big data and
machine learning trigger in the social realm. Predictive policing
and algorithmic justice
are part of the larger shift towards ‘algorithmic governance’.
Big data, coupled with algorithms and machine learning, has
become a central theme
of intelligence, security, defence, anti-terrorist and crime policy
efforts, as computers
help the military find its targets and intelligence agencies
justify carrying out massive
pre-emptive surveillance of public telecommunications
networks. Several actors in the
‘crime and security domain’ are using the new tools:1 (1)
intelligence agencies (see, for
example, the judgment of the European Court of Human Rights
in Zakharov v. Russia in
2015, No. 47143/06, or the revelations of Edward Snowden in
2013); (2) law enforce-
ment agencies, which are increasingly using crime prediction
software such as PredPol
(Santa Cruz, California), HunchLab (Philadelphia), Precobs
(Zürich, Munich) and
Maprevelation (France) (see Egbert, 2018; Ferguson, 2017;
Wilson, 2018); and (3) crim-
inal courts and probation commissions (see Harcourt, 2015a;
Kehl and Kessler, 2017).
The use of big data and algorithms for intelligence agencies’
‘dragnet’ investigations
raised considerable concern after Snowden’s revelations
regarding the National Security
Agency’s access to the content and traffic data of Internet users.
Predictive policing has
attracted an equal level of concern among scholars, who have
addressed ‘the rise of pre-
dictive policing’ (Ferguson, 2017) and the ‘algorithmic patrol’
(Wilson, 2018) as the new
predominant method of policing, which thus impacts other
methods of policing. Country-
specific studies of predictive policing exist in Germany (Egbert,
2018), France (Polloni,
2015), Switzerland (Aebi, 2015) and the UK (Stanier, 2016). A
common concern is the
allure of objectivity surrounding predictive policing, and the creative role
police still have in creating
inputs for automated calculations of future crime: ‘Their
choices, priorities, and even
omissions become the inputs algorithms use to forecast crime’
(Joh, 2017a). Scholars
have shown how public concerns are superseded by the market-
oriented motivations and
aspirations of companies that produce the new tools (Joh,
2017b). Human rights advo-
cates have raised numerous concerns regarding predictive
policing (see Robinson and
Koepke, 2016). For instance, a coalition of 17 civil rights
organizations has listed several
risks, such as a lack of transparency, ignoring community
needs, failing to monitor the
racial impact of predictive policing and the use of predictive
policing primarily to inten-
sify enforcement rather than to meet human needs (ACLU,
2016).
In Anglo-American legal systems, tools for assessing
criminality have been used in
criminal justice settings for a few decades. But now these tools
are being enhanced with
machine learning and AI (Harcourt, 2015b), and other European
countries are also looking
at these systems for several reasons, including shrinking budgets, declining legitimacy and the overload of cases. The trend to
‘algorithmize’ everything (Steiner,
2012) has thus raised the interest of policy-makers. The
European Union Fundamental
Rights Agency warns that discrimination in data-supported
decision-making is ‘a funda-
mental area particularly affected by technological development’
(European Union Agency
for Fundamental Rights, 2018). The Council of Europe’s
European Commission for the
Efficiency of Justice adopted a specific ‘European Charter on
the Use of AI in Judicial
Systems’ in 2018 to mitigate the above-mentioned risks specifically in the justice sector.
In general, courts use such systems to assess the likelihood of recidivism or flight
of those awaiting trial or offenders in bail and parole
procedures. For instance, the well-
known Arnold Foundation algorithm, which is being rolled out
in 21 jurisdictions in the
USA (Dewan, 2015), uses 1.5 million criminal cases to predict
defendants’ behaviour in
the pre-trial phase. Similarly, Florida uses machine learning
algorithms to set bail
amounts (Eckhouse, 2017). These systems are also used to
ascertain the criminogenic
needs of offenders, which could be changed through treatment,
and to monitor interven-
tions in sentencing procedures (Kehl and Kessler, 2017). Some
scholars are even dis-
cussing the possibility of using AI to address the solitary
confinement crisis in the USA
by employing smart assistants, similar to Amazon’s Alexa, as a
form of ‘confinement
companion’ for prisoners. Although at least some of the
proposed uses seem outrageous
and directly dangerous, such as inferring criminality from face
images, the successes of
other cases in the criminal justice system seem harder to dispute
or debunk. For instance,
in a study of 1.36 million pre-trial detention cases, scholars
showed that a computer
could predict whether a suspect would flee or re-offend better
than a human judge
(Kleinberg et al., 2017).
The purpose of this article is two-fold: first, it examines the
more fundamental changes in knowledge production in criminal justice settings arising from over-reliance on a new epistemological approach – knowledge supposedly generated without a priori theory. Second, it shows why automated predictive decision-
making tools are often at
variance with fundamental liberties and also with the
established legal doctrines and
concepts of criminal procedure law. Such tools discriminate
against persons in lower
income strata and less empowered sectors of the population
(Eubanks, 2018; Harcourt,
2015a; Noble, 2018; O’Neil, 2016). The following sections of
this article map the nov-
elties of automated decision-making tools – defined as tools
utilizing advances in big
data, algorithms, machine learning and ‘AI’ – in criminal justice
settings.
Existing research on algorithmic justice
Legal scholars have analysed the risks stemming from
automated decision-making and
prediction in several domains. For example, in examining
automation applied to credit
scoring, Citron and Pasquale (2014) identified the opacity of
such automation, arbitrary
assessments and disparate impacts. Computer scientists have
warned of the reductionist pitfalls of big data and ‘exascale’ computers: ‘computers
cannot replace arche-
typal forms of thinking founded on Philosophy and
Mathematics’ (Koumoutsakos,
2017). Research on the role of automation in exacerbating
discrimination in search
engines (Noble, 2018) and social security systems (Eubanks,
2018) showed how the new
tools are dividing societies along racial lines. However, the impacts of automated decision-making systems on fundamental liberties in the justice sector have not yet received sufficient critical reflection.
Legal analyses of the use of algorithms and AI in the criminal justice sector have
focused on the scant case law on the topic, especially on the
landmark decision in
Loomis v. Wisconsin (2016) in the USA, in which the Supreme
Court of Wisconsin
considered the legality of using the COMPAS (Correctional
Offender Management
Profiling for Alternative Sanctions) risk assessment software in
criminal sentencing.
The Loomis case raised two related legal issues, that is, due
process and equal protec-
tion (Kehl and Kessler, 2017). In the same year, ProPublica’s
report on ‘Machine bias’
(Angwin et al., 2016) triggered fierce debates on how
algorithms embed existing
biases and perpetuate discrimination. By analysing the
diverging assessments of the
outcomes of the COMPAS algorithm – by ProPublica, on the
one hand, and the algo-
rithm producer Northpointe, on the other – scholars have
identified a more fundamen-
tal difference in the perception of what it means for an
algorithm to be ‘successful’
(Flores et al., 2016). Probation algorithm advocates and critics
are both correct on
their respective terms, claims the mathematician Chouldechova
(2016), since they
pursue competing notions of fairness.
Along with these concerns, scholars dubbed the trend to use
automated decision-
making tools in the justice sector ‘automated justice’ (Marks et
al., 2015), which causes
discrimination and infringes the due process of law guaranteed
by constitutions and
criminal procedure codes. Similarly, Hannah-Moffat (2019)
warns that big data tech-
nologies can violate constitutional protections, produce a false
sense of security and be
exploited for commercial gain. ‘There can and will be
unintended effects, and more
research will be needed to explore unresolved questions about
privacy, data ownership,
implementation capacities, fairness, ethics, applications of
algorithms’ (Hannah-
Moffat, 2019: 466).
According to Robinson (2018), three structural challenges can
arise whenever a law
or public policy contemplates adopting predictive analytics in
criminal justice: (1) what
matters versus what the data measure; (2) current goals versus
historical patterns; and (3)
public authority versus private expertise.
Based on research on the automation of traffic fines, O’Malley
(2010) showed how a
new form of ‘simulated justice’ has emerged. By complying
with automated fining
regimes – whereby traffic regulation violations are
automatically detected and processed
and fines enforced – we buy our freedom from the disciplinary
apparatuses but we suc-
cumb to its ‘simulated’ counterpart at the same time, claims
O’Malley. The judicial-dis-
ciplinary apparatus still exists as a sector of justice, but a large
portion of justice is
– concerning the ‘volumes of governance’ – ‘simulated’: ‘the
majority of justice is deliv-
ered by machines as ‘a matter of one machine “talking” to
another’ (O’Malley, 2010).
Scholars are also critical of the purposes of the use of machine
learning in the criminal
justice domain. Barabas et al. (2017) claim that the use of
actuarial risk assessments is
not a neutral way to counteract implicit bias and to increase
the fairness of decisions
in the criminal justice system. The core ethical concern is not
the fact that the statistical
techniques underlying actuarial risk assessments might
reproduce existing patterns of
discrimination and the historical biases that the data reflect, but
rather the purpose of risk
assessments (Barabas et al., 2017).
One strand of critique of automated decision-making tools
emphasizes the limited
role of risk management in the multiple purposes of sentencing.
Namely, minimizing the
risk of recidivism is only one of several goals in
determining a sentence, along with
equally important considerations such as ensuring
accountability for past criminal behav-
iour. The role of automated risk assessment in the criminal
justice system should not be
central (Kehl and Kessler, 2017). It is not even clear whether
longer sentences for riskier
prisoners might decrease the risk of recidivism. On the contrary,
prison leads to a slight
increase in recidivism owing to the conditions in prisons, such
as the harsh emotional
climate and psychologically destructive nature of prisonization
(Gendreau et al., 1999).
Although cataloguing risks and alluding to abstract notions of
‘fairness’ and ‘trans-
parency’ by advocating an ‘ethics-compliant approach’ is an
essential first step in
addressing the new challenges of big data and algorithms, there
are more fundamental
shifts in criminal justice systems that cannot be reduced to mere
infringements of indi-
vidual rights. What do big data, algorithms and machine
learning promise to deliver in
criminal justice and how do they challenge the existing
knowledge about crime and ideas
as to the appropriate response to criminality?
New language and new knowledge production
Today, law enforcement agencies no longer operate merely in
the paradigm of the ex post
facto punishment system but also employ ex ante preventive
measures (Kerr and Earle,
2013). The new preventive measures stand in sharp contrast to
various legal concepts
that have evolved over the decades to confine executive power within legal procedures. New
concepts and tools are being invented to understand crime
(knowledge production) and
to act upon this knowledge (crime control policy). These
include, for instance, ‘meaning
extraction’, ‘sentiment analysis’, ‘opinion mining’ and
‘computational treatment of sub-
jectivity’, all of which are rearranging and blurring the
boundaries in the security and
crime control domain.
For instance, the post-9/11 German anti-terrorist legislation
invented the notion of
a ‘sleeping terrorist’, based on the presumed personal
psychological states and other
circumstances of known terrorists. Such preventive
‘algorithmic’ identification of a
target of state surveillance criminalized only remotely relevant antecedents of crime and
even just mental states. The notion presents a form of automated
intelligence work
whereby the preventive screening of Muslims was conducted
because of the ‘general
threat situation’ in the post-9/11 period in order to identify
unrecognized Muslim fun-
damentalists as ‘terrorist sleepers’. The idea of preventive
screening as a dragnet type
of investigation was that the known characteristics of terrorists
are enough to infer the
criminality of future criminals. The legislation was challenged
before the Federal
Constitutional Court, which limited the possibility of such
screening (4 April 2006) by
claiming that the general threat situation that existed after the
terrorist attacks on the
World Trade Center on 9/11 was not sufficient to justify that the
authorities start such
investigations (Decision 1BvR 518/02). The legislation blurred
the start and finish of
the criminal process. The start of a criminal
procedure is the focal point
when a person becomes a suspect, when a suspect is granted
rights, for example
Miranda rights, that are aimed at remedying the inherent
imbalances of power between
the suspect and the state. The start of the criminal procedure
became indefinite and
indistinct (see Marks, et al., 2015) and it was no longer clear
when a person ‘trans-
formed’ into a suspect with all the attendant rights.
Similarly, the notion of a ‘person of interest’ blurs the existing concept of a ‘suspect’.
In Slovenia, the police created a Twitter analysis tool during the
so-called ‘Occupy
Movement’ to track ‘persons of interest’ and for sentiment
analysis. The details of the operation remain secret because the police argued that the
activity falls under the pub-
licly inaccessible regime of ‘police tactics and methods’. With
such a reorientation, the police were chasing an imagined, indefinable future. They were
operating in a scenario
that concerned not only what the person might have committed,
but what the person
might become (see Lyon, 2014) – the focus changed from past
actions to potential future
personal states and circumstances.
The ‘smart city’ paradigm is generating similar inventions. In
the Eindhoven
Living Lab, consumers are tracked for the purpose of
monitoring and learning about
their spending patterns. The insights are then used to nudge
people into spending
more (Galič, 2018). The Living Lab works on the idea of
adjusting the settings of the
immediate sensory environment, such as changing the music or
the street lighting.
Although security is not the primary objective of the Living
Lab, it is a by-product
of totalizing consumer surveillance. The Lab treats consumers like Pavlov’s dogs, checking for ‘escalated behaviour’ to arrive at
big-data-generated
determinations of when to take action. The notion of escalated
behaviour includes
yelling, verbal or otherwise aggressive conduct or signs that a
person has lost self-
control (de Kort, 2014). Such behaviour is targeted in order for
it to be defused
(Galič, 2018). The goals of the intervention are then to induce
subtle changes, such
as ‘a decrease in the excitation level’. This case shows how the
threshold for inter-
vention in the name of security drops: interventions are made as
soon as the behav-
iour is considered ‘escalated’.
To conclude, the novel concepts described above – ‘terrorist sleeper’, ‘person of interest’, ‘escalated behaviour’ – are
producing new knowledge about crime and providing new
thresholds and justifications
to act upon this knowledge. The new mathematical language
serves security purposes
well, writes Amoore (2014). However, the new concepts, such
as ‘meaning extraction’,
‘sentiment analysis’ and ‘opinion mining’, are blurring the
boundaries in the security and
crime control domain. The concepts of a ‘suspect’, ‘accused’ or
‘convicted’ person serve
as regulators of state powers. Similarly, the existing standards of proof serve as graduated thresholds for ever greater interference with a person’s liberties. But these
new concepts and math-
ematical language no longer sufficiently confine agencies or
prevent abuses of power.
The new mathematical language is helping to tear down the
hitherto respected walls of
criminal procedure rules.
Risks and pitfalls of algorithmic justice
Domain transfer – what gets lost in translation?
What are the specific risks stemming from the use of big data,
algorithms and machine
learning in the criminal justice domain?
The statistical modelling used in criminal justice originates
from other domains, such
as earthquake prediction (for example PredPol), where ground-
truth data are more reli-
able and the acceptability of false predictions is different.
Optimization goals may differ
across domains. For instance, online marketers want to capture
users’ attention as much
as possible and would rather send more – albeit irrelevant –
products. Such ‘false-posi-
tive’ rankings in marketing are merely nuisances. In predictive
policing, optimization on
the side of ‘false positives’ has significantly more profound
consequences. Treating
innocent individuals as criminals severely interferes with
fundamental liberties.
Concerning data, first of all, criminality – by default – is never
fully reported. The
dark figure of crime is a ‘black box’ that can never be properly
encompassed by algo-
rithms. The future is then calculated from already selected facts
about facts. Predictions
can be more accurate in cases where ‘reality’ does not change
dramatically and where the
data collected reflect ‘reality’ as closely as possible. Second,
crime is a normative phe-
nomenon, that is, it depends on human values, which change
over time and place.
Algorithmic calculations can thus never be accurately calibrated
given the initial and
changing set of facts or ‘reality’.
Criminal procedure codes are a result of already realized
balancing acts as to what
should be optimized. The existing legal doctrines in
constitutional and criminal law
already embody decisions regarding the strength of the
competing values, such as effi-
ciency and fairness. For instance, in a democratic, liberal
response to crime, it is better to
let 10 criminals walk free than to sentence one innocent person – a litmus test distinguishing democratic from authoritarian political systems – and tech-savvy experts are now renegotiating these balancing acts.
Such balancing acts are fairness trade-offs: what to optimize for differs across domains. Discussion of the fairness of the COMPAS probation
algorithm shows how the
discussion unfolded on different levels. Whereas ProPublica
claimed that COMPAS was
biased towards black defendants by assigning them higher risk
scores, the developer of
the algorithm, Northpointe (now Equivant), argued that in fact
the algorithm directly
reflected past data, according to which blacks more often
committed a crime upon being
released (Spielkamp, 2017). ProPublica compared the false
positives of both groups:
blacks were rated a higher risk but re-offended less frequently,
whereas white defendants
were rated a lower risk and re-offended more frequently. In
other words, a disproportion-
ate number of black defendants were ‘false positives’: they were
classified by COMPAS
as high risk but subsequently were not charged with another
crime. These differences in
scores – the higher false-positive rates for blacks – amounted to
unjust discrimination.
On the other hand, Northpointe (now Equivant) argued that
COMPAS was equally good
at predicting whether a white or black defendant classified as
high risk would re-offend
(an example of a concept called ‘predictive parity’). The two
camps seem to perceive
fairness differently. There are a few fairness criteria that can be
used and these ‘cannot
all be simultaneously satisfied when recidivism prevalence
differs across groups’
(Chouldechova, 2016). These then are the fairness trade-offs:
‘Predictive parity, equal
false-positive error rates, and equal false-negative error rates
are all ways of being “fair”,
but are statistically impossible to reconcile if there are
differences across two groups –
such as the rates at which white and black people are being
rearrested’ (Courtland, 2018).
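To make the arithmetic of this dispute concrete, the following sketch (in Python, on invented numbers rather than the actual COMPAS data) shows how a score can satisfy ‘predictive parity’ – equal precision across two groups – while imposing very different false-positive burdens on them:

```python
# Toy illustration (invented numbers, not the COMPAS data): equal
# precision across groups can coexist with unequal false-positive
# rates when base rates differ (Chouldechova, 2016).

def rates(y_true, y_pred):
    """Return (precision, false-positive rate) for binary outcome lists."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    return tp / (tp + fp), fp / (fp + tn)

# Hypothetical group A: 12 of 20 re-arrested; 10 flagged 'high risk'.
a_true = [1] * 12 + [0] * 8
a_pred = [1] * 8 + [0] * 4 + [1] * 2 + [0] * 6
# Hypothetical group B: 4 of 20 re-arrested; 5 flagged 'high risk'.
b_true = [1] * 4 + [0] * 16
b_pred = [1] * 4 + [1] * 1 + [0] * 15

for name, t, p in (("A", a_true, a_pred), ("B", b_true, b_pred)):
    precision, fpr = rates(t, p)
    print(f"group {name}: precision {precision:.2f}, false-positive rate {fpr:.2f}")
# Both groups have precision 0.80 ('predictive parity'), yet the
# false-positive rate is 0.25 for group A and only 0.06 for group B.
```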
There are competing notions of fairness and predictive
accuracy, which means a simple
transposition of methods from one domain to another (for example, from earthquake prediction to sentencing) can lead to distorted views of what algorithms calculate. The current transfer of statistical modelling into the criminal justice domain neglects the
balance between compet-
ing values or, at best, re-negotiates competing values without
adequate social consensus.
A similar refocusing was carried out by Barabas et al. (2017),
who argued ‘that a core
ethical debate surrounding the use of regression in risk
assessments is not simply one of
bias or accuracy. Rather, it’s one of purpose.’ They are critical
of the use of machine
learning for predicting crime; instead of risk scores, they
recommend ‘risk mitigation’ to
understand the social, structural and psychological drivers of
crime.
Building databases
Databases and algorithms are human artefacts. ‘Models are
opinions embedded in math-
ematics [and] reflect goals and ideology’ (O’Neil, 2016: 21). In
creating a statistical
model, choices need to be made as to what is essential (and
what is not) because the
model cannot include all the nuances of the human condition.
There are trustworthy
models, but these are based on high-quality data that are fed
into the model continu-
ously as conditions change. Constant feedback and testing
millions of samples prevent
the self-perpetuation of the model.
Defining the goals of the problem and deciding what training
data to collect and how
to label the data, among other matters, are often done with the
best intentions but with
little awareness of their potentially harmful downstream effects.
Such awareness is even
more needed in the crime control domain, where the ‘training
data’ on human behaviour
used to train algorithms are of varying quality.
On a general level, there are at least two challenges in
algorithmic design and appli-
cation. First, compiling a database and creating algorithms for
prediction always
require decisions that are made by humans. ‘It is a human
process that includes several
stages, involving decisions by developers and managers. The
statistical method is only
part of the process for developing the final rules used for
prediction, classification or
decisions’ (European Union Agency for Fundamental Rights,
2018). Second, algo-
rithms may also take an unpredictable path in reaching their
objectives despite the
good intentions of their creators.
Part of the first question relates to the data and part to the
algorithms. In relation to
data, the question concerns how data are collected, cleaned and
prepared. There is an
abundance of data in one part of the criminal justice sector; for
example, the police col-
lect vast amounts of data of varying quality. These data depend
on victims reporting
incidents, but these reports can be more or less reliable and
sometimes they do not even
exist. For some types of crime, such as cybercrime, the denial
of victimization is often
the norm (Wall, 2007). Financial and banking crime remains underreported, still less often prosecuted or concluded with a court decision. There is a base-
rate problem in such types
of crime, which hinders good modelling. The other problem
with white-collar crime is
that it flies below the radar of predictive policing software.
Furthermore, when data are poor, feeding the data-crunching machine more of them does not help: if the data are of poor quality, then ‘garbage in, garbage out’ applies.
The process of preparing data in criminal justice is inherently
political – someone
needs to generate, protect and interpret data (Gitelman, 2013).
Building algorithms
The second part of the first question relates to building
algorithms. If ‘models are opin-
ions embedded in mathematics’ (O’Neil, 2016: 21), legal
professionals working in crimi-
nal justice systems need to be more technically literate and able
to pose relevant questions
to excavate values embedded in calculations. Although scholars
(Pasquale, 2015) and
policy-makers alike (European Union Agency for Fundamental
Rights, 2018) have been
vocal in calling for the transparency of algorithms, the goal of
enabling detection and the
possibility of rectifying discriminatory applications misses the
crucial point of machine
learning and the methods of neural networks. These methods are
black-boxed by default.
Demanding transparency without the possibility of
explainability remains a shallow
demand (Veale and Edwards, 2018). However, third-party audits of algorithms may underpin certification schemes and increase trust in the fairness of algorithms.
Moreover, building better algorithms is not the solution –
thinking about problems
only in a narrow mathematical framework is reductionist. A tool
might be good at pre-
dicting who will fail to appear in court, for example, but it
might be better to ask why
people do not appear and, perhaps, to devise interventions, such
as text reminders or
transportation assistance, that might improve appearance rates.
‘What these tools often
do is help us tinker around the edges, but what we need is
wholesale change’, claims
Vincent Southerland (Courtland, 2018). Or, as Princeton
computer scientist Narayanan
argues, ‘It’s not enough to ask if code executes correctly. We
also need to ask if it makes
society better or worse’ (Hulette, 2017).
Runaway algorithms
The second question deals with the ways algorithms take
unpredictable paths in reaching
their objectives. The functioning of an algorithm is not neutral,
but instead reflects
choices about data, connections, inferences, interpretations and
inclusion thresholds that
advance a specific purpose (Dwork and Mulligan, 2013). In
criminal justice settings,
algorithms calculate predictions via various proxies. For
instance, it is not possible to
calculate re-offending rates directly, but it is possible through
proxies such as age and
prior convictions. Probation algorithms can rely only on
measurable proxies, such as the
frequency of being arrested. In doing so, algorithms must not
include prohibited criteria,
such as race, ethnic background or sexual preference, and
actuarial instruments have
evolved away from race as an explicit predictor. However, the
analysis of existing proba-
tion algorithms shows how prohibited criteria are not directly
part of calculations but are
still encompassed through proxies (Harcourt, 2015b). But, even
in cases of such
‘runaway algorithms’, the person who is to bear the effects of a
particular decision will
often not even be aware of such discrimination. Harcourt shows
how two trends in
assessing future risk have a significant race-related impact: (1)
a general reduction in the
number of predictive factors used, and (2) an increased focus on
prior criminal history,
which forms part of the sentencing guidelines of jurisdictions in
the USA (Harcourt,
2015b). Criminal history is among the strongest predictors of
arrest and a proxy for race
(Harcourt, 2015b). In other words, heavy reliance on the
offender’s criminal history in
sentencing contributes more to racial disparities in incarceration
than dependence on
other robust risk factors less bound to race. For instance, the
weight of prior conviction
records is apparent in the case of Minnesota, which has one of
the highest black/white
incarceration ratios. As Frase (2009) explains, disparities are
substantially greater in
prison sentences imposed and prison populations than regarding
arrest and conviction.
The primary reason is the heavy weight that sentencing
guidelines place on offenders’
prior conviction records.
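The mechanism is easy to demonstrate. The following sketch, on entirely synthetic data, omits the protected attribute from the ‘model’ altogether and scores individuals only on a correlated proxy (recorded prior arrests); the scores nonetheless split along group lines:

```python
# Synthetic sketch (no real probation data): the protected attribute is
# never given to the 'model', yet a correlated proxy -- recorded prior
# arrests -- carries it back in, echoing Harcourt's (2015b) argument.
import random

random.seed(0)
population = []
for _ in range(2000):
    group = random.random() < 0.5            # protected attribute
    # Assumption for illustration only: one group is policed more heavily,
    # so its *recorded* arrest count is inflated at equal behaviour.
    priors = random.randint(0, 3) + (2 if group else 0)
    population.append((group, priors))

def risk_score(priors):
    """'Model' that sees only the proxy, never the group."""
    return min(priors / 5.0, 1.0)

for g in (False, True):
    scores = [risk_score(p) for grp, p in population if grp == g]
    print(f"group={g}: mean risk score {sum(scores) / len(scores):.2f}")
# The score was computed without the protected attribute, yet the two
# group means differ sharply -- the proxy has smuggled it back in.
```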
Blurring probability, causality and certainty
Risk assessments yield probabilities, not certainties. They
measure correlations and not
causations. Although they may be very good at finding
correlations, they cannot judge
whether or not the correlations are real or ridiculous
(Rosenberg, 2016). Correlations
thus have to be examined by a human; they have to be ascribed
a meaning. Just as biases
can slip into the design of a statistical model, they can also slip
into the interpretation –
the interpreter imposes political orientations, values and
framings when interpreting
results. Typically, data scientists have to work alongside a
domain-specific scientist to
ascribe meaning to the calculated results.
For instance, at what probability of recidivism should a prisoner
be granted parole?
Whether this threshold ought to be a 40 percent or an 80 percent
risk of recidivism is
an inherently ‘political’ decision based on the social, cultural
and economic condi-
tions of the given society. Varying social conditions are
relevant; for example,
crowded prisons may lead to a mild parole policy, whereas
governance through fear
or strong prison industry interests that influence decision-
making may lead to a
harsher policy. Decision-making based on the expected risk of
recidivism can be more
precise if certainty as to the predicted rate of offending is added
– with the introduc-
tion of the Bayesian optimization method (Marchant, 2018).
Models that are based
not solely on the expected risk rate but also on the certainty of
the predicted risk rate
can be more informative.
However, although such a decision is more precise because the expected values can be compared in the light of their certainty, a judgement must still be made as to how much uncertainty is too much to ground a probation decision on the expected risk rate. Quantifying uncertainty by analysing the
probability distribution
around the estimated risk thus enables comparisons between
different risk assessments
based on different models, but it does not eliminate the need to
draw a line, that is, to
decide how much weight should be given to the certainty of the
expected risk rate.
There is still a need to ascribe meaning to the probability and
‘translate’ it in the con-
text of the judicial setting.
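As an illustration of this point, the following sketch uses standard Beta-Binomial updating (a textbook stand-in, not the specific method cited above) to show how two identical point estimates of risk can carry very different degrees of certainty; the case counts are hypothetical:

```python
# Sketch of uncertainty quantification via Beta-Binomial updating.
# Case counts are invented for illustration.
from scipy import stats

def risk_posterior(reoffended, total):
    """Beta posterior over the recidivism rate, uniform Beta(1,1) prior."""
    posterior = stats.beta(1 + reoffended, 1 + total - reoffended)
    low, high = posterior.interval(0.95)     # 95% credible interval
    return posterior.mean(), low, high

for label, k, n in (("thin evidence base", 4, 10),
                    ("rich evidence base", 400, 1000)):
    mean, low, high = risk_posterior(k, n)
    print(f"{label}: risk {mean:.2f}, 95% interval [{low:.2f}, {high:.2f}]")
# Both point estimates are roughly 0.40, but the interval is far wider in
# the first case. Where to draw the parole line, given that residual
# uncertainty, remains a political rather than a statistical decision.
```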
Human or machine bias
Human decision-making in criminal justice settings is often flawed, and stereotypical arguments and prohibited criteria, such as race, sexual
preference or ethnic origin,
often creep into judgments. Research on biases in probation
decisions, for instance, has
demonstrated how judges are more likely to decide ‘by default’
and refuse probation
towards the end of sessions, whereas immediately after having a
meal they are more
inclined to grant parole (Danziger et al., 2011). Can algorithms
help prevent such embar-
rassingly disproportionate and often arbitrary courtroom
decisions?
The first answer can be that no bias is desirable and a computer
can offer such an
unbiased choice architecture. But algorithms are ‘fed with’ data that are not ‘clean’ of
social, cultural and economic circumstances. Even formalized
synthetic concepts, such
as averages, standard deviations, probability, identical
categories or ‘equivalences’, cor-
relation, regression, sampling, are ‘the result of a historical
gestation punctuated by hesi-
tations, retranslations, and conflicting interpretations’
(Desrosières, 2002: 2). De-biasing
would seem to be needed.
However, cleaning data of such historical and cultural baggage
and dispositions may
not be either possible or even desirable. In analysing language,
Caliskan et al. (2017)
claim that, first, natural language necessarily contains human
biases. The training of
machines on language corpora entails that AI will inevitably
absorb the existing biases in
a given society (Caliskan et al., 2017). Second, they claim that
28. de-biasing may be pos-
sible, but this would lead to ‘fairness through blindness’
because prejudice can creep
back in through proxies (Caliskan et al., 2017). Machine biases
are then an inevitable
consequence also in algorithmic decision-making systems, and
such biases have been
documented in criminal justice settings (see Angwin et al.,
2016). Moreover, big data
often suffer from other biases, such as confirmation bias,
outcome bias, blind-spot bias,
the availability heuristic, clustering illusions and bandwagon
effects.
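The mechanics of such bias measurements can be sketched as follows; the tiny three-dimensional ‘embedding’ vectors below are invented purely for illustration, whereas Caliskan et al. (2017) worked with embeddings trained on large web corpora:

```python
# Schematic version of the association test in Caliskan et al. (2017).
# The vectors are invented solely to show the mechanics of the comparison.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Hypothetical 3-dimensional 'embeddings' (illustrative values only).
vec = {
    "career": (0.9, 0.1, 0.2), "family": (0.1, 0.9, 0.2),
    "man":    (0.8, 0.2, 0.3), "woman":  (0.2, 0.8, 0.3),
}

for target in ("man", "woman"):
    gap = cosine(vec[target], vec["career"]) - cosine(vec[target], vec["family"])
    print(f"{target}: career-vs-family association {gap:+.2f}")
# A model trained on a corpus reproduces whatever associations the corpus
# carries; removing them is a value-laden choice, not a technical bug-fix.
```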
The second dilemma concerns whether de-biasing is even
desirable. If machine deci-
sion-making is still the preferred option, then engineers
building statistical models should
carry out a de-biasing procedure. However, constitutions and
criminal procedure codes
have all been adopted through a democratic legislative process
that distilled the prevail-
ing societal interests, values, and so on of the given society.
Although it is not difficult to
be critical of this process of ‘distillation’, which is itself
extremely partial or even ‘cap-
tured’ by the ‘rich and powerful’, it is still relatively open to
scrutiny in comparison with
a process of de-biasing conducted behind closed doors by
computer scientists in a labora-
tory. The point is that de-biasing entails that inherently political
decisions are to be made,
for example as to what is merely gendered and what is sexist
language that needs to be
‘cleaned’, or what is hate speech targeting minorities and which
differential treatment
should be deemed to be discriminatory. In a machine-based
utopia, such decisions would
thereby be relegated to the experts of the computer science
elite. In this sense, de-biasing
is not even desirable.
Do we as a society prefer human bias over machine bias? Do we
want to invest in
human decision-makers or in computerized substitutes?
Judges already carry out risk assessments on a daily basis, for
example when deciding
on the probability of recidivism. This process always subsumes
human experience,
culture and even biases. They may work against or in favour of
a given suspect. Human
empathy and other personal qualities that form part of flourishing human societies are in fact types of bias that reach beyond statistically measurable ‘equality’. Such decision-mak-
ing can be more tailored to the needs and expectations of the
parties to a particular judi-
cial case (Plesničar and Šugman Stubbs, 2018). Should not such
personal traits and skills
or emotional intelligence be actively supported and nurtured in
judicial settings? Bernard
Harcourt (2006) explains how contemporary American politics
has continually privi-
leged policing and punishment while marginalizing the welfare
state and its support for
the arts and the commons. In the light of such developments,
increasing the quality of
human over computerized judgement should be the desired goal.
Judicial evolution and decision-making loops
Proponents of self-learning models claim that what is needed is
the patience to make
things ‘really’ work: ‘smarter’ algorithms have to be written,
and algorithms should mon-
itor existing algorithms ad infinitum (Reynolds, 2017). The
programmes will be pro-
vided with feedback in terms of rewards and punishments, and
they will automatically
progress and navigate the space of the given problem – which is
known as ‘reinforce-
ment learning’. The logic becomes caught up in escalating and
circular decision-making
loops ad infinitum.
However, this insatiable demand reveals itself as the ideology
of big data. The argu-
ment exposes an apologetic gesture: what is needed is patience
and the motivation to
make things work. The argument claims that technology is not
yet sufficiently perfected
but that self-learning machines may remedy the current
deficiencies. On this view, the inaccuracies resulting from algorithmic calculations are mere deficiencies to be engineered away. This argument neglects the fact that social life is always more picturesque than any model, and users of the
systems will have to incorporate new parameters of criminal
cases on a regular basis.
Judicial precedents by default start out as outliers. Newly occurring circumstances
receive more weight and completely change the reasoning that
prevailed in former prec-
edents. However, if the logic becomes caught up in escalating
and circular decision-
making loops, then the new circumstances of a case will never
change the existing
reasoning. Legal evolution may thus be severely hindered.
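The self-confirming character of such loops can be illustrated with a minimal, entirely synthetic simulation: a model whose ‘training data’ are generated by its own previous outputs converges on its initial pattern and stops registering anything new:

```python
# Minimal synthetic simulation of a circular decision-making loop: the
# model's outputs generate the 'data' it is retrained on, so its initial
# pattern ratchets itself in and new circumstances never register.
weights = {"pattern_A": 0.6, "pattern_B": 0.4}   # initial estimates

for step in range(5):
    chosen = max(weights, key=weights.get)       # act on the top pattern
    # Feedback comes only from where the system already acted:
    observed = {k: (1.0 if k == chosen else 0.1) for k in weights}
    # 'Reinforcement': blend the self-generated feedback into the model.
    weights = {k: 0.7 * weights[k] + 0.3 * observed[k] for k in weights}
    print(step, {k: round(v, 2) for k, v in weights.items()})
# pattern_A's weight climbs towards 1.0 on every pass, regardless of any
# underlying change -- the loop confirms itself instead of tracking it.
```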
Subjectivity and the case-specific narrative
More than a decade ago Franko Aas (2005) identified how
biometric technology used in
criminal justice settings had been changing the focus from
‘narrative’ to supposedly
more ‘objective’ and unbiased ‘computer databases’. Franko Aas was critical of the transition
and characterized it as going ‘from narrative to database’.
Instead of the narrative, she
furthermore claimed, the body has come to be regarded as a
source of unprecedented
accuracy because ‘the body does not lie’ (Franko Aas, 2006).
With AI tools, supposedly objective, value-free ‘hard-core’ science is even further
replacing subjectivity and the case-specific narrative. Today,
criminal justice systems are
on the brink of taking yet another step in the same direction –
from the database towards
automated algorithmic-based decision-making. Whereas the
database was still a static
pool of knowledge that needed some form of intervention by the
decision-makers, for
example so as to at least include it in the context of a specific
legal procedure, automated
decision-making tools adopt a decision independently. The
narrative, typically of a risky
individual, is not to be trusted and only the body itself can
provide a reliable confession.
For example, a DNA sample is trusted to reveal the home country of a refugee, rather than their own explanation of their origin. Today, moreover, even decision-makers are not to be
believed. Harcourt (2015b) succinctly comments that, for
judicial practitioners, the use
of automated recommendation tools is acceptable despite the
curtailment of the discre-
tion of the practitioners, whose judgement is often viewed
negatively and associated with
errors. Relying on automated systems lends an aura of
objectivity to the final decision
and diffuses the burden of responsibility for their choices
(Harcourt, 2015a).
In the transition towards the complete de-subjectivation of the decision-making process, an erasure of subjectivity is at work (see Marks et
al., 2015). The participants
in legal proceedings are perceived as the problem and
technology as the solution in the
post-human quest: ‘The very essence of what it means to be
human is treated less as a
feature than a bug’ (Rushkoff, 2018). It is ‘a quest to transcend
all that is human: the
body, interdependence, compassion, vulnerability, and
complexity.’ The transhumanist
vision too easily reduces all of reality to data, concluding that
‘humans are nothing but
information-processing objects’ (Spiekermann et al., 2017). If humans are perceived in such a manner, machines can easily replace them and can then be tuned to serve the powerful
and uphold the status quo in the distribution of social power.
Psychological research into human–computer interfaces also
demonstrates that auto-
mated devices can fundamentally change how people approach
their work, which in turn
leads to new kinds of errors (Parasuraman and Riley, 1997). A
belief in the superior
judgement of automated aids leads to errors of commission –
when people follow an
automated directive despite contradictory information from
other more reliable sources
of information because they fail to either check or discount that
information (Skitka
et al., 2000). It is a new type of automation bias, where the
automated decisions are rated
more positively than neutral (Dzindolet et al., 2003). Even in
the existing stage of non-
binding semi-automated decision-making systems, the process
of arriving at a decision
changes. The perception of accountability for the final decision
changes too. The deci-
sion-makers will be inclined to tweak their own estimates of
risk to match the model’s
(see Eubanks, 2018).
Human rights implications
Scholars and policy-makers have dealt extensively with the
impacts of predictive model-
ling methods on the principle of non-discrimination and
equality (European Union
Agency for Fundamental Rights, 2018) and the implications for
personal data protection
(for example, Brkan, 2017). Although these are pertinent, the
impacts of predictive mod-
elling methods in criminal justice settings go well beyond these
two concerns. Let us first
examine the impact on the principle of equality.
Over-policing as an outcome of the use of predictive policing software – whereby the police patrol areas with more recorded crime, which in turn amplifies the perceived need to police the areas already policed – has been shown to be a prime example
of the ‘vicious circle’
effect in the crime control domain (Ferguson, 2017). However,
under-policing is even
more critical. First, the police do not scrutinize some areas as much as others, which leads to an unequal experience and perception of justice.
Second, some types of
crime are more likely to be prosecuted than others. The central
principle of legality – the
requirement that all crimes be prosecuted ex officio, as opposed
to the principle of oppor-
tunity – is thus not respected. The crimes more likely committed
by the middle or upper
classes then fly under the radar of criminal justice systems.
Such crimes often become
‘re-labelled’ as ‘political scandals’ or merely occupational
‘risk’. Greater criminality in
the banking system as a whole has been ignored (Monaghan and O’Flynn, 2013) and is increasingly regarded as a ‘normal’ part of
financialized societies. This is not to
claim that there are no options for the progressive use of big
data and AI in the ‘fight’
against ‘crimes of the powerful’. For instance, the Slovenian
Ministry of Finance’s finan-
cial administration is using machine learning to detect tax-
evasion schemes and tax fraud
(Spielkamp, 2019), which could be directed towards the most
powerful multinational
enterprises and their transfer-pricing manipulations.
However, predictive policing
software has not been used to such ends to the same extent as
for policing ‘the poor’.
Algorithmic decision-making systems may collide with several
other fundamental lib-
erties. Similar to ‘redlining’, the ‘sleeping terrorist’ concept (as
discussed above) infringes
upon the presumption of innocence. The mere probability of a
match between the attributes
of known terrorists and a ‘sleeping’ one directs the watchful eye
of the state to the indi-
vidual. Similarly, there is a collision with the principle of
legality, that is lex certa, which
requires the legislature to define a criminal offence in a
substantially specific manner.
Standards of proof are thresholds for state interventions into
individual rights.
However, the new language of mathematics, which helps define
new categories such as
a ‘person of interest’, re-directs law enforcement agency
activities towards individuals
not yet considered a ‘suspect’. The new notions being invented
contravene the estab-
lished standards of proof in criminal procedure.
In the specific context of a criminal trial, several rights pertain
to the principle of the
equality of arms in judicial proceedings and the right to a fair
trial. Derivative procedural
rights, such as the right to cross-examine witnesses, should be
interpreted so as to also
encompass the right to examine the underlying rules of the risk-
scoring methodology. In
probation procedures, this right should entail ensuring that a
convicted person has the
possibility to question the modelling applied – from the data fed
into the algorithm to the
overall model design (see the Loomis case).
Systems of (semi-)automated decision-making should also respect the set of rights pertaining to tribunals: the principle of the natural court, which requires that the criteria determining which court (or a specific judge thereof) is competent to hear the case be clearly established in advance (the rule governing the allocation of cases to a particular judge within the competent court, thus preventing forum shopping), and the right to an independent and impartial tribunal.
Conclusion
Automated analysis of judicial decisions using AI is supposed
to boost efficiency in the
climate of the neoliberal turn and political pressure ‘to achieve
more with less’. In such
a context, automated decision-making in criminal justice proves
itself to be an ‘algorith-
mic avatar’ of neoliberalism (Benbouzid, 2016; Desrosières,
2002).
There is a prevailing sentiment that AI tools will vaporize the biases and heuristics
inherent in human judgement and reasoning, which will in turn
increase the legitimacy
of criminal justice agencies and confine the infliction of
punishment to ‘pure’ scientific
method and ‘reason’. The idea of confining criminal justice to ‘pure’ scientific method can be traced to Cesare Beccaria, who claimed in his seminal booklet
On Crimes and
Punishments ([1764] 1872) that criminal justice must be
confined to reason. In the con-
text of the post-enlightenment, this meant changing the rules of
criminal procedure, such
as prohibition of torture, and the rules of substantive criminal
law, such as detaching
crime from sin (Adler et al., 1998: 51). Beccaria laid down
several crime policy princi-
ples pertaining to the criminal justice system, such as the
principle of separation of pow-
ers, according to which judges may only apply the law (following Montesquieu). His
ideas helped humanize criminal procedure and legitimized the
then newly emerging
classes and modern state, when the feudal values of the Ancien
Régime started to melt.
Similarly, today we can observe how justifications for
introducing AI into criminal jus-
tice again allude to ‘reason’ and ‘objectivity’ – as if judges are
not reliable enough to
make unbiased decisions. The introduction of AI into criminal
justice settings also rein-
forces the social power of the new emerging digital elite, which
capitalizes on the ideol-
ogy of big data and algorithmic impartiality.
However, delivering justice with new tools – big data,
algorithms and machine learn-
ing – does not lead to a de-biased digital bonanza. As the article
shows, de-biasing of
language and criminal justice data may not even be possible.
Moreover, since de-biasing
would override the democratic procedures designed to prevent
abuses of the elite, it is not even desirable.
If the decision-maker is biased, then the logical move is to
eliminate the human from
the decision-making loop. However, in reality, eliminating the
human factor proved not
to be the key to solving the problem. Automated systems can further exacerbate the very problems they were employed to remedy (see Eubanks, 2018: 80–1). This
article shows how the
trend towards de-subjectivation brought by technology may lead
to unintended conse-
quences, such as hindering legal evolution, reinforcing the
‘eternal past’, eliminating
case-specific narratives and erasing subjectivity. Moreover,
automation in the criminal justice system directly interferes with several constitutional
liberties and rights of defence
in the criminal procedure.
However, one should not entirely neglect the value of
automation in criminal justice
settings. Automation can increase knowledge about the criminal
justice system itself. If
the structure of a specific judicial procedure leads to
discriminatory and arbitrary out-
comes, then the introduction of automated tools may improve
justice. But the question of
what to improve remains open: human decision-makers or
computerized substitutes? We
should learn from the current experiments showing how
automation can be counterpro-
ductive. What advocates of the automated decision-making
systems neglect is the impor-
tance of the ability to bend the rules and reinterpret them
according to social circumstances
(Eubanks, 2018).
If criminal justice procedures are being computerized, we should ask ourselves: ‘what is being left out?’ ‘Judicial opinions are
themselves works of literature
with rich hypertextual potential’, claims Birkhold (2018). Also,
we want empathetic
judges (Plesničar and Šugman Stubbs, 2018) and not executory
‘cold-blooded’ machines.
Criminal procedure has several conflicting goals. If the end goal
of the criminal proce-
dure is to find the historical truth, then Miranda rights and
exclusionary rules are an
absolute obstacle in the truth-discovering process. If its goal is
to reconcile the parties
and solve a dispute between an accused and the state, then these
rules have an important
place in criminal procedure. Their goal is to derive meaning and other values from the process, such as respect for the rule of law.
Finally, we must acknowledge the fact that, in criminal justice
systems, some proce-
dures should not be subjected to automatization (as argued similarly in the context of policing by Oswald et al., 2018). The impact upon society and upon the human rights of individuals is simply too great for such decisions to be shaped by a reduced human agency relegated to machines.
Acknowledgements
I am grateful to the reviewers for their comments. Special
thanks are also owed to the participants
at the colloquium ‘Automated Justice: Algorithms, Big Data and
Criminal Justice Systems’, held at
the Collegium Helveticum in Zürich, 20 April 2018, for their
helpful comments and suggestions.
Funding
The author(s) disclosed receipt of the following financial
support for the research, authorship, and/
or publication of this article: The research leading to this article
has received funding from the
EURIAS Fellowship Programme (the European Commission
Marie Skłodowska-Curie Actions –
COFUND Programme – FP7) and the Slovenian Research
Agency, research project ‘Automated
Justice: Social, Ethical and Legal Implications’, no. J5-9347.
ORCID iD
Aleš Završnik https://orcid.org/0000-0002-4531-2740
Note
1. For the purposes of this article, the notions ‘new tools’ and
‘automated decision-making
tools’ are used to encapsulate the terms ‘big data’,
‘algorithms’, ‘machine learning’ and ‘arti-
ficial intelligence’. Each of these notions is highly contested
and deserves its own discussion,
but a detailed analysis of these tools is not central to identifying
the social and legal implica-
tions of their use in criminal justice systems.
References
ACLU (American Civil Liberties Union) (2016) Statement of
Concern About Predictive Policing
by ACLU and 16 Civil Rights Privacy, Racial Justice, and
Technology Organizations, 31
August. URL (accessed 4 September 2019): https://www.aclu.org/other/statement-concern-about-predictive-policing-aclu-and-16-civil-rights-privacy-racial-justice.
Adler F, Mueller GOW and Laufer WS (1998) Criminology, 3rd
edn. Boston, MA: McGraw-Hill.
Aebi C (2015) Evaluation du système de prédiction de cambriolages résidentiels PRECOBS [Evaluation of the PRECOBS residential burglary prediction system]. MA thesis, École des Sciences Criminelles, Université de Lausanne, Switzerland.
Ambasna-Jones M (2015) The smart home and a data
underclass. The Guardian, 3 August.
Amoore L (2014) Security and the incalculable. Security
Dialogue 45(5): 423–439.
Angwin J, Larson J, Mattu S and Kirchner L (2016) Machine
bias: There’s software used across
the country to predict future criminals. And it’s biased against
blacks. ProPublica, 23 May.
URL (accessed 4 September 2019): https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing.
Barabas C, Dinakar K, Ito J, Virza M and Zittrain J (2017)
Interventions over predictions: Reframing
the ethical debate for actuarial risk assessment. Cornell
University. arXiv:1712.08238 [cs,
stat].
Barocas S, Hood S and Ziewitz M (2013) Governing algorithms:
A provocation piece, 29 March.
URL (accessed 4 September 2019):
https://ssrn.com/abstract=2245322.
Beccaria CB ([1764] 1872) An Essay on Crimes and
Punishments. With a Commentary by M. de
Voltaire. A New Edition Corrected. Albany: W.C. Little & Co.
URL (accessed 4 September
2019): https://oll.libertyfund.org/titles/2193.
Benbouzid B (2016) Who benefits from the crime?
Books&Ideas, 31 October.
Birkhold MH (2018) Why do so many judges cite Jane Austen in
legal decisions? Electric
Literature, 24 April.
Brkan M (2017) Do algorithms rule the world? Algorithmic
decision-making in the framework of
the GDPR and beyond. SSRN Scholarly Paper, 1 August.
Caliskan A, Bryson JJ and Narayanan A (2017) Semantics
derived automatically from language
corpora contain human-like biases. Science 356(6334): 183–
186.
Chouldechova A (2016) Fair prediction with disparate impact: A
study of bias in recidivism pre-
diction instruments. Cornell University. arXiv:1610.07524 [cs,
stat].
Citron DK and Pasquale FA (2014) The scored society: Due process for automated predictions. Washington Law Review 89. University of Maryland Legal Studies Research Paper No. 2014-8: 1–34.
Cohen JE, Hoofnagle CJ, McGeveran W, Ohm P, Reidenberg JR, Richards NM, Thaw D and Willis LE (2015) Information privacy law scholars’ brief in Spokeo, Inc. v. Robins, 4 September 2015. URL (accessed 4 September 2019): https://ssrn.com/abstract=2656482.
Courtland R (2018) Bias detectives: The researchers striving to make algorithms fair. Nature 558(7710): 357–360.
Danziger S, Levav J and Avnaim-Pesso L (2011) Extraneous factors in judicial decisions. Proceedings of the National Academy of Sciences 108(17): 6889–6892.
De Kort Y (2014) Spotlight on aggression. Intelligent Lighting Institute, Technische Universiteit Eindhoven 1: 10–11.
Desrosières A (2002) The Politics of Large Numbers: A History of Statistical Reasoning. Cambridge, MA: Harvard University Press.
Dewan S (2015) Judges replacing conjecture with formula for bail. New York Times, 26 June.
Dwork C and Mulligan DK (2013) It’s not privacy, and it’s not fair. Stanford Law Review Online 66(35): 35–40.
Dzindolet MT, Peterson SA, Pomranky RA, Pierce LG and Beck HP (2003) The role of trust in automation reliance. International Journal of Human-Computer Studies 58(6): 697–718.
Eckhouse L (2017) Big data may be reinforcing racial bias in the criminal justice system. Washington Post, 2 October.
Egbert S (2018) About discursive storylines and techno-fixes: The political framing of the implementation of predictive policing in Germany. European Journal for Security Research 3(2): 95–114.
Ekowo M and Palmer I (2016) The Promise and Peril of Predictive Analytics in Higher Education. New America, Policy Paper, 24 October.
Eubanks V (2018) Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. New York: St Martin’s Press.
European Union Agency for Fundamental Rights (2018) #BigData: Discrimination in data-supported decision making. FRA Focus, 29 May.
Ferguson AG (2017) The Rise of Big Data Policing: Surveillance, Race, and the Future of Law Enforcement. New York: NYU Press.
Flores AW, Bechtel K and Lowenkamp CT (2016) False positives, false negatives, and false analyses: A rejoinder to ‘Machine bias: There’s software used across the country to predict future criminals. And it’s biased against blacks’. Federal Probation Journal 80(2): 9.
Franko Aas K (2005) Sentencing in the Age of Information: From Faust to Macintosh. London: Routledge-Cavendish.
Franko Aas K (2006) ‘The body does not lie’: Identity, risk and trust in technoculture. Crime, Media, Culture 2(2): 143–158.
Frase RS (2009) What explains persistent racial disproportionality in Minnesota’s prison and jail populations? Crime and Justice 38(1): 201–280.
Galič M (2018) Živeči laboratoriji in veliko podatkovje v praksi: Stratumseind 2.0 – diskusija živečega laboratorija na Nizozemskem [Living labs and big data in practice: Stratumseind 2.0 – a discussion of a living lab in the Netherlands]. In: Završnik A (ed.) Pravo v dobi velikega podatkovja [Law in the Age of Big Data]. Ljubljana: Institute of Criminology at the Faculty of Law.
Gefferie D (2018) The algorithmization of payments. Towards Data Science, 27 February. URL (accessed 4 September 2019): https://towardsdatascience.com/the-algorithmization-of-payments-how-algorithms-are-going-to-change-the-payments-industry-5dd3f266d4c3.
Gendreau P, Goggin C and Cullen FT (1999) The effects of prison sentences on recidivism. User Report 1999-3. Ottawa, Ontario: Department of the Solicitor General Canada.
Gitelman L (ed.) (2013) ‘Raw Data’ Is an Oxymoron. Cambridge, MA: MIT Press.
Hannah-Moffat K (2019) Algorithmic risk governance: Big data analytics, race and information activism in criminal justice debates. Theoretical Criminology 23(4): 453–470.
Harcourt BE (2006) Against Prediction: Profiling, Policing, and Punishing in an Actuarial Age. Reprint edition. Chicago: University of Chicago Press.
Harcourt BE (2015a) Exposed: Desire and Disobedience in the Digital Age. Cambridge, MA: Harvard University Press.
Harcourt BE (2015b) Risk as a proxy for race. Federal Sentencing Reporter 27(4): 237–243.
Hulette D (2017) Patrolling the intersection of computers and people. Princeton University, Department of Computer Science, News, 2 October. URL (accessed 4 September 2019): https://www.cs.princeton.edu/news/patrolling-intersection-computers-and-people.
Joh EE (2017a) Feeding the machine: Policing, crime data, & algorithms. William & Mary Bill of Rights Journal 26(2): 287–302.
Joh EE (2017b) The undue influence of surveillance technology companies on policing. New York University Law Review 92: 101–130.
Kehl DL and Kessler SA (2017) Algorithms in the criminal justice system: Assessing the use of risk assessments in sentencing. URL (accessed 4 September 2019): http://nrs.harvard.edu/urn-3:HUL.InstRepos:33746041.
Kerr I and Earle J (2013) Prediction, preemption, presumption: How big data threatens big picture privacy. Stanford Law Review Online 66(65): 65–72.
Kleinberg J, Lakkaraju H, Leskovec J, Ludwig J and Mullainathan S (2017) Human Decisions and Machine Predictions. Working Paper 23180, National Bureau of Economic Research, Cambridge, MA.
Koumoutsakos P (2017) Computing . Data .. Science . . . Society: On connecting the dots. Talk at the Collegium Helveticum, Zurich, 9 March. URL (accessed 4 September 2019): https://www.cse-lab.ethz.ch/computing-data-science-society-on-connecting-the-dots/.
Kramer ADI, Guillory JE and Hancock JT (2014) Experimental evidence of massive-scale emotional contagion through social networks. Proceedings of the National Academy of Sciences 111(24): 8788–8790.
Marks A, Bowling B and Keenan C (2015) Automatic justice? Technology, crime and social control. SSRN Scholarly Paper, 19 October. In: Brownsword R, Scotford E and Yeung K (eds) The Oxford Handbook of the Law and Regulation of Technology. Oxford University Press, forthcoming. Queen Mary School of Law Legal Studies Research Paper No. 211/2015; TLI Think! Paper 01/2015. URL (accessed 4 September 2019): https://ssrn.com/abstract=2676154.
Meek A (2015) Data could be the real draw of the internet of things – but for whom? The Guardian, 14 September.
Monaghan LF and O’Flynn M (2013) The Madoffization of society: A corrosive process in an age of fictitious capital. Critical Sociology 39(6): 869–887.
Morozov E (2013) To Save Everything, Click Here: Technology, Solutionism, and the Urge to Fix Problems that Don’t Exist. London: Allen Lane.
Noble SU (2018) Algorithms of Oppression: How Search Engines Reinforce Racism, 1st edn. New York: NYU Press.
O’Hara D and Mason LR (2012) How bots are taking over the world. The Guardian, 30 March.
O’Malley P (2010) Simulated justice: Risk, money and telemetric policing. British Journal of Criminology 50(5): 795–807.
O’Neil C (2016) Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. New York: Crown.
Oswald M, Grace J, Urwin S, et al. (2018) Algorithmic risk assessment policing models: Lessons from the Durham HART model and ‘experimental’ proportionality. Information & Communications Technology Law 27(2): 223–250.
Parasuraman R and Riley V (1997) Humans and automation: Use, misuse, disuse, abuse. Human Factors 39(2): 230–253.
Pasquale F (2015) The Black Box Society: The Secret Algorithms That Control Money and Information. Cambridge, MA: Harvard University Press.
Plesničar MM and Šugman Stubbs K (2018) Subjectivity, algorithms and the courtroom. In: Završnik A (ed.) Big Data, Crime and Social Control. London: Routledge, Taylor & Francis Group, 154–176.
Polloni C (2015) Police prédictive: la tentation de ‘dire quel sera le crime de demain’ [Predictive policing: the temptation to ‘say what tomorrow’s crime will be’]. Rue89, 27 May.
Reynolds M (2017) AI learns to write its own code by stealing from other programs. New Scientist, 25 February.
Robinson D and Koepke L (2016) Stuck in a pattern: Early evidence on ‘predictive policing’ and civil rights. Upturn.
Robinson DG (2018) The challenges of prediction: Lessons from criminal justice. I/S: A Journal of Law and Policy for the Information Society 14: 151. URL (accessed 4 September 2019): https://ssrn.com/abstract=3054115.
Rosenberg J (2016) Only humans, not computers, can learn or predict. TechCrunch, 5 May.
Rouvroy A and Berns T (2013) Algorithmic governmentality and prospects of emancipation. Réseaux 177(1): 163–196.
Rushkoff D (2018) Future human: Survival of the richest. OneZero, Medium, 5 July.
Selingo J (2017) How colleges use big data to target the students they want. The Atlantic, 4 November.
Skitka LJ, Mosier K and Burdick MD (2000) Accountability and automation bias. International Journal of Human-Computer Studies 52(4): 701–717.
Spiekermann S, Hampson P, Ess C, et al. (2017) The ghost of transhumanism & the sentience of existence. URL (accessed 4 September 2019): http://privacysurgeon.org/blog/wp-content/uploads/2017/07/Human-manifesto_26_short-1.pdf.
Spielkamp M (2017) Inspecting algorithms for bias. MIT Technology Review, 6 December.
Spielkamp M (ed.) (2019) Automating Society. Berlin: AW AlgorithmWatch.
Stanier I (2016) Enhancing intelligence-led policing: Law enforcement’s big data revolution. In: Bunnik A, Cawley A, Mulqueen M and Zwitter A (eds) Big Data Challenges: Society, Security, Innovation and Ethics. London: Palgrave Macmillan, 97–113.
Steiner C (2012) Automate This. New York: Penguin.
Veale M and Edwards L (2018) Clarity, surprises, and further questions in the Article 29 Working Party draft guidance on automated decision-making and profiling. Computer Law and Security Review 34(2): 398–404.
Wall DS (2007) Cybercrime: The Transformation of Crime in the Information Age. Cambridge, UK; Malden, MA: Polity Press.
Webb A (2013) Why data is the secret to successful dating. Visualised. The Guardian, 28 January.
Wilson D (2018) Algorithmic patrol: The futures of predictive policing. In: Završnik A (ed.) Big Data, Crime and Social Control. London, New York: Routledge, Taylor & Francis Group, 108–128.
Chapter 6
G…
• …component of the global economy, accounting for more than 8% of the world’s total gross domestic product (GDP).
• Spending on health activities can be divided into two categories:
• Money spent on personal health
• Money spent on public health
Figure 6-1 High-income countries spend a high percentage of their gross domestic product (GDP) on health. Data from Global Burden of Disease Health Financing Collaborator Network. Evolution and patterns of global health financing 1995–2014: Development assistance for health, and government, prepaid private, and out-of-pocket health spending in 184 countries. Lancet 2017; 389:1981–2004.
Personal and Public Health (cont’d)
• Personal health expenses relate to the health of one individual or family.
• Examples: purchasing antibiotics, paying for a midwife, buying test strips for self-monitoring of blood glucose levels
• Public health expenses relate to shared activities that protect a community, a nation, or the global population at large.
• Examples: investigating and containing outbreaks, marketing mass polio vaccination days, using insecticides in outdoor areas to kill mosquitoes, developing evidence-based guidelines for screening for chronic diseases and managing them
Personal and Public Health (cont’d)
• Worldwide, more than $9 trillion was spent on health care in 2015.
• Costs could rise to $16 trillion per year by 2030.
• The amount of money spent on healthcare services for the average resident each year is much higher in high-income countries than it is in low-income countries, even after adjusting for differences in the cost of living.
• There is a diversity of mechanisms for paying for personal health expenses.
• Most public health activities in higher-income countries are funded by taxes; public health initiatives in lower-income countries are often financed with a combination of governmental and external support.
Figure 6-2 Health spending per capita (2014). Data from Health system financing profile by country. Geneva: WHO Global Health Expenditure Database; 2017.
Figure 6-3 Total spending on health care per capita by country income level (2014). Data from Global Burden of Disease Health Financing Collaborator Network. Evolution and patterns of global health financing 1995–2014: development assistance for health, and government, prepaid private, and out-of-pocket health spending in 184 countries. Lancet 2017; 389:1981–2004.
Figure 6-4 Governments in high-income countries use tax revenue to pay for most health services; in low-income countries, a more diverse set of funders pays for health activities.
Financing
• Financing = the provision of money for a particular activity and the management of that investment.
• Financing for global health is allocated to both personal and public functions.
• Some global health funding helps lower-income countries expand the personal healthcare services that they offer to residents.
• Some global health funding supports global health governance, the development and dissemination of new health technologies, pandemic preparedness and response, and other public health functions.
• There are also expenses that blend the personal and public health categories.
6.2 Health Systems
Health System
• A health system includes all of the people, facilities, products, resources, and organizational structures that deliver health services to a population.
• WHO’s 6 core building blocks:
1. The provision of effective personal and population-based healthcare services
2. A well-trained and productive health workforce that is able to provide quality care to all populations
3. A strong health information system (HIS) that collects, analyzes, and disseminates the information about population health and health systems performance that is critical for health system decision-making
Health System (cont’d)
• WHO’s 6 core building blocks:
4. Access to essential medicines, medical devices, vaccines, and other health technologies
5. A health financing system that enables everyone to access affordable services when they are needed (and at the same time provides incentives not to overuse…
• The Sustainable Development Goals aim by 2030 to “achieve universal health coverage, including financial risk protection, access to quality essential healthcare services, and access to safe, effective, quality, and affordable medicines and vaccines for all” (SDG 3.8).
• Universal health coverage (UHC) is present when everyone in a country has access to high-quality health services (including preventive care, diagnosis, treatment, and rehabilitation) and everyone is protected from major health-associated financial shocks via a tax-based financing system or a health insurance plan.
• In places where patients and their families pay out-of-pocket for most health services, the poorest households are often excluded from accessing quality care.
• Countries that spread the cost of health services across the whole population (through tax revenue or mandatory participation in highly regulated insurance plans) enable everyone to access the services that are included in the national health plan.
Figure 6-5 Universal health coverage spreads the cost burden for health services across the whole population. Data from World health report 1999. Geneva: WHO; 1999.
Health Systems Financing
• Governments aiming to achieve UHC must make difficult decisions about which goods and services to cover under the national health plan.
• Resource limitations may mean that only part of a comprehensive strategy for improving health can be publicly funded.
• Health system strengthening requires a process of identifying priorities and resources, strategizing about the policies that will achieve key goals, transforming those ideas into operational action plans, and then implementing changes and tracking progress toward meeting targets.
Health Systems Financing (cont’d)
• Government officials must also make critical determinations about how much funding can be allocated to the health system and how much must be dedicated to maintaining other necessary services.
• Increases in spending on health often require decreases in funding for education and other social services.
• The governments of high-income countries with aging populations usually allocate more of their budget to health than to education.
• LMICs with a large proportion of children in their populations usually allocate more funding to education than to health.
• These spending priorities are reflected in surveys that ask residents about their perceptions of social services and their overall quality of life.
Figure 6-6 Satisfaction with health care quality is highest in high-income countries. Data from World health statistics 2016. Geneva: WHO; 2016.
6.3 Paying for Personal Health
• Each country has a unique mix of strategies for funding personal health, but there are some general patterns by country income level.
Figure 6-7 Total spending on health by payer and country income level. Data from Global Burden of Disease Health Financing Collaborator Network. Evolution and patterns of global health financing 1995–2014: development assistance for health, and government, prepaid private, and out-of-pocket health spending in 184 countries. Lancet 2017; 389:1981–2004.
Personal Health in High-Income Countries
• Most high-income countries have a government-sponsored healthcare system that is paid for through general tax revenue, mandatory payments into a government-run social security system, or other types of compulsory contributions.
• Health services are typically provided at government health facilities or at private facilities that receive most of their funds from the government.
• The health financing and delivery system in the United States is a notable exception to the general global trend.
Personal Health in Middle-Income Countries
• In most middle-income countries, governments pay for a portion of health costs but the remaining money spent on health is expended in the form of out-of-pocket (OOP) payments, cash disbursements made by patients and their families in order to receive health services.
• The range of services covered by governmental healthcare plans varies widely.
Figure 6-8 Sources of funding for health in featured countries. (Prepaid private spending includes private insurance and spending by nongovernmental organizations.) Data from Global Burden of Disease Health Financing Collaborator Network. Evolution and patterns of global health financing 1995–2014: development assistance for health, and government, prepaid private, and out-of-pocket health spending in 184 countries. Lancet 2017; 389:1981–2004.
Personal Health in Low-Income Countries
• In low-income countries, most healthcare services are paid for out-of-pocket on a pay-as-you-go basis.
• A mix of public, not-for-profit private, and for-profit formal and informal healthcare providers is available in urban areas, but there may be very few services in rural areas.
• Some basic clinical services that have been deemed necessary for achieving high-priority global health goals are paid for by domestic governments and international donors to ensure that these services are available to everyone who needs them.
• For other health conditions, both public (governmental) and private healthcare facilities may charge user fees and require additional payment for medicines and supplies.
• Increasing access to affordable health services for the most vulnerable populations is one of the major goals for health system strengthening in most low…
• Health insurance is intended to protect insured people from incurring overwhelming expenses if they happen to develop an expensive health condition.
Health Insurance (cont’d)
• Health insurance systems, whether private or public, are funded based on the principle of pooled risk.
• Pooled risk assumes that if many low-risk people and a few high-risk people all pay premiums to the insurance system over many years, then there will be a pot of money that can be used to pay for major illnesses and injuries when they occur.
• Only a few people will develop a very serious chronic condition or suffer a catastrophic injury, but because everyone is at risk of unexpected health crises, most people are willing to pay for protection against a lifetime of unmanageable, impoverishing debt as a result of one medical incident.
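The pooled-risk arithmetic described above can be made concrete with a toy calculation. The sketch below is purely illustrative: the membership size, premium, claim probability, and claim cost are invented numbers, not figures from this chapter.

```python
# Toy illustration of pooled risk (all numbers are hypothetical).
n_members = 1_000            # people paying into the pool
annual_premium = 1_200       # dollars each member pays per year
p_catastrophic = 0.01        # ~1% suffer a major illness or injury in a year
catastrophic_cost = 90_000   # cost of one major medical incident

pool = n_members * annual_premium                                  # $1,200,000
expected_claims = n_members * p_catastrophic * catastrophic_cost   # $900,000

print(f"Premium pool:    ${pool:,}")
print(f"Expected claims: ${expected_claims:,.0f}")
# The pool covers the expected claims with a margin for administration,
# so no single member ever faces the $90,000 bill alone.
```

The example restates the slide’s claim in miniature: many small, predictable premiums can absorb a few large, unpredictable losses.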
Health Insurance in the USA
• The country that spends the most on health each year, by far, is the United States.
• The USA has a health system that is unique among high-income countries because it is not a universal health coverage system.
• In 2015, about 91% of Americans had health insurance coverage and 9% had no health insurance.
• Of the insured individuals, about two-thirds had private health insurance and about one-third were on a government plan.
• Nearly all health services were provided at private facilities.
Figure 6-9 Total spending on health care per capita in featured countries (2014). Data from Global Burden of Disease Health Financing Collaborator Network. Evolution and patterns of global health financing 1995–2014: development assistance for health, and government, prepaid private, and out-of-pocket health spending in 184 countries. Lancet 2017; 389:1981–2004.
Health Insurance in the USA (cont’d)
• Most working-aged Americans and their children have employment-based private health insurance.
• Premium: a monthly fee paid for health insurance
• Deductible: the amount that an insured person must spend out-of-pocket on health care each year (in addition to premiums) before the insurance company begins paying for health services
• Copay: a fixed fee that is paid out-of-pocket by an insured patient when receiving routine health services
• Co-insurance: a percentage of the costs of care that is paid out-of-pocket by an insured patient
A worked example of how these terms interact appears below.
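As a rough illustration of how these cost-sharing terms combine, here is a minimal sketch. The plan parameters and the $10,000 claim year are hypothetical, and real plans add out-of-pocket maximums and per-visit copays that are omitted here.

```python
# Hypothetical US-style plan, to show how the cost-sharing terms interact.
MONTHLY_PREMIUM = 400     # premium: fixed monthly fee for coverage
DEDUCTIBLE = 1_500        # patient pays the first $1,500 of care each year
COINSURANCE_RATE = 0.20   # after the deductible, patient pays 20% of costs

def patient_share(annual_care_costs: float) -> float:
    """Out-of-pocket spending on care (premiums counted separately)."""
    if annual_care_costs <= DEDUCTIBLE:
        return annual_care_costs
    return DEDUCTIBLE + COINSURANCE_RATE * (annual_care_costs - DEDUCTIBLE)

costs = 10_000                      # a year with $10,000 in medical bills
oop = patient_share(costs)          # 1,500 + 0.20 * 8,500 = $3,200
premiums = 12 * MONTHLY_PREMIUM     # $4,800 in premiums
print(f"Patient pays ${oop:,.0f} for care plus ${premiums:,} in premiums;")
print(f"the insurer pays the remaining ${costs - oop:,.0f}.")
```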
Health Insurance in the USA (cont’d)
• The major governmental insurance plans provide healthcare coverage for older adults, low-income households, and military personnel.
• Medicare is the federal health funding system for people who are ages 65 and older and people with serious permanent disabilities.
• Medicaid is a federal program that provides funding to states to support state-sponsored health coverage for very low-income citizens.
Health Insurance in the USA (cont’d)
• While health insurance in the United States was originally designed to cover only the catastrophic expenses from serious illnesses or injuries that few individuals could afford to pay, many insurance plans now pay for preventive care and minor health problems because the insurance companies have determined that they save money when minor conditions are treated before they become major problems.
• The company may provide incentives for people with chronic diseases to participate in disease management programs that catch emerging problems early and avert the need for expensive emergency care.
Health Insurance in Other Countries
• Some other high-income countries use health insurance as part of their strategy for ensuring universal health care coverage.
• For example, in Germany every resident must belong to a highly regulated “sickness fund.”
• Employers pay half of the costs for employees, and the government covers the full expense for children and for unemployed adults.
• Inpatient care is provided at both public and private hospitals, and most outpatient care is provided at private clinics.
• The payments that providers receive for their services are identical no matter where they work.
Health Insurance in Other Countries (cont’d)
• Health insurance is also being used by a growing number of residents of middle-income countries (LMICs) so that they can access advanced care from high-quality private health providers.
• For example, a large proportion of higher-income Brazilians purchase private insurance plans and seek medical and surgical care at private facilities.
• Everyone in Brazil can access free primary and emergency health care at public facilities (an important right guaranteed under Brazil’s constitution), but the public health system offers a limited range of services and technologies.
• Health insurance allows wealthier households to access a greater range of health services, procedures, medications, and equipment from their preferred providers.
6.5 Paying for Global Health Interventions
• The money spent on global public health initiatives comes from a different set of sources than the money that pays for individual health care.
• Local and national governmental spending provides the majority of funding for public health interventions around the world.
• Global public health is funded by a combination of grants from one country to another, grants and loans from intergovernmental agencies, and gifts from private-sector foundations, businesses, and individuals.
• The best financing mechanisms for new global health initiatives are sources that are stable and sustainable over time, that are new funding lines rather than money redirected from another program, and that are managed efficiently without demanding heavy administrative costs or burdening recipient populations.
Figure 6-10 Typical pathway from global health funders to implementers.
Donor Motivations
• For the governments of high-income countries, health funding for lower-income countries is part of foreign policy strategies for building trade alliances and protecting homeland security.
• Multilateral lending groups may consider global health projects to be good financial investments.
• Philanthropic organizations may view global health as a tool for reducing poverty and promoting human flourishing.
• Large corporations may use global health work to cultivate customer loyalty in new markets, take advantage of tax incentives, and foster a shared sense of purpose among employees.
Donor Motivations (cont’d)
• Most of these rationales for funding global health yield benefits for both the recipients and the donors.
• The best global health projects achieve goals that are mutually beneficial to all involved parties.
Official Development Assistance (ODA)
• Official development assistance (ODA) = money given by the government of a high-income country to the government of a low-income country to support socioeconomic development.
• Although some aid is given simply to fight poverty, aid is often tied to the political and economic interests of the donor country.
• Most ODA is donated to low- and middle-income countries…
• …official development assistance commitments, including the commitment by many developed countries to achieve the target of 0.7% of gross national income (GNI) for ODA to developing countries and 0.15% to 0.20% of GNI to least developed countries” (SDG 17.2).
• In 2015, the five donor nations that provided the greatest amount of ODA in total dollars were the United States, the United Kingdom, Germany, Japan, and France.
Official Development Assistance (ODA) (cont’d)
• As a percentage of their gross national income (GNI), the largest donors were Sweden, Norway, Luxembourg, Denmark, the Netherlands, and the United Kingdom, which all spent at least 0.7% of their GNI on ODA.
• The United States spent 0.17% of its GNI on ODA, a rate far below the 0.7% target in the SDGs even though the United States had the world’s largest OD…
ODA from the USA
• In 2015, the United States spent about $32 billion on humanitarian and other foreign aid, which was about 0.9% of total national government spending.
• When the $17 billion spent on foreign military and security assistance (which is only a small portion of the military budget used for international humanitarian operations and other joint responses with allies) is combined with non-military/security foreign aid, the total spending on foreign assistance was about 1.3% of national governmental spending. (The arithmetic behind these two percentages is sketched below.)
• Aid may be given in the form of cash transfers, equipment and commodities (such as food and computers), training and expert advice, or infrastructure development.
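The two budget shares quoted above follow from simple division. The check below assumes total US federal outlays of roughly $3.7 trillion in 2015, a ballpark figure that is not stated in the slides.

```python
# Rough check of the foreign-aid budget shares quoted above.
total_outlays = 3_700   # assumed total US federal outlays in 2015, $ billions
nonmilitary_aid = 32    # humanitarian and other foreign aid, $ billions
military_aid = 17       # foreign military/security assistance, $ billions

print(f"Non-military aid: {nonmilitary_aid / total_outlays:.1%}")                  # ~0.9%
print(f"All foreign aid:  {(nonmilitary_aid + military_aid) / total_outlays:.1%}")  # ~1.3%
```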
ODA from the USA (cont’d)
• Most military aid flows through the Department of Defense (DOD).
• The U.S. government considers foreign aid to be a critical contributor to national security because aid supports economic growth, promotes stability, and combats illegal activities.
• The top recipients of non-military/security ODA from the United States in 2015 were Afghanistan, Jordan, Pakistan, Kenya, Ethiopia, South Sudan, Syria, and the DRC.
• All of these countries were engaged in civil conflicts or located adjacent to conflict areas and housing large refugee populations.
Figure 6-11 Foreign aid expenditures by the United States in 2015 by spending category, with and without military/security assistance. Data from Tarnoff C, Lawson ML. Foreign aid: An introduction to U.S. programs and policy. Washington: Congressional Research Service (CRS); 2016.
Development Assistance for Health (DAH)
• Development assistance for health (DAH) = donor aid for health = ODA designated for health activities.
• DAH is an important component of the health budget in low-income countries.
• Globally, more than $20 billion of ODA was spent on global health in 2015.
Development Assistance for Health (DAH) (cont’d)
• The United States allocated nearly $10 billion of its foreign aid budget to global health activities in 2015, making it the largest contributor of DAH both in terms of the total budget for DAH and the percentage of its foreign aid budget assigned to DAH.
• About 70% of those funds were dedicated to HIV/AIDS, tuberculosis, and malaria programs.
• Other supported activities were in the areas of neglected tropical diseases, reproductive health, child health, nutrition, water and sanitation, and global health security.
Figure 6-12 Development assistance for health (DAH) is an important component of total spending on health in low-income countries. Data from Global Burden of Disease Health Financing Collaborator Network. Evolution and patterns of global health financing 1995–2014: Development assistance for health, and government, prepaid private, and out-of-pocket health spending in 184 countries. Lancet 2017; 389:1981–2004.
Figure 6-13 The United States is a large donor of development assistance for health (DAH). Data from Financing global health 2015: Development assistance steady on the path to new Global Goals. Seattle: Institute for Health Metrics and Evaluation (IHME); 2016.
FDI and Remittances
• The SDGs emphasize that ODA is only part of the plan for funding development activities.
• The SDGs call for action to “strengthen domestic resource mobilization, including through international support to developing countries, to improve domestic capacity for tax and other revenue collection” (SDG 17.1) and to “mobilize additional financial resources for developing countries from multiple sources,” including foreign direct investments and remittances (SDG 17.3).
• Foreign direct investment (FDI) = business investments made by corporations and individuals in other countries.
• Remittances = funds transferred by international workers back to family members in their home communities.
FDI and Remittances (cont’d)
• The total amount of ODA globally in 2015 neared $150 billion (about 0.3% of GNI in DAC countries); about $765 billion in FDI was invested in LMICs, and about $430 billion in remittances were sent to LM…
• …ODA, bilateral aid and multilateral aid.
• Bilateral aid = money given directly from one country (usually a high-income country) to another country (usually a lower-income country).
• Multilateral aid = funding pooled from many donor countries.
• The largest multilateral organizations include the United Nations, the World Bank and other development banks, and the European Union.
Multilateral Aid (cont’d)
• Multilateral organizations receive two types of funds from member nations:
• Assessed contributions are mandatory dues calculated from each country’s economic and population statistics.
• Voluntary contributions are extra funds a country opts to donate.
• Mandatory funds go to the general budget of the multilateral organizations.
• Voluntary contributions can be designated as core (unrestricted) or non-core (restricted) funding.
• Core funding can be used by the multilateral organization on any projects they deem to be priorities.
• Non-core funding is given for a specific purpose by the donor and must be spent on that particular activity. (A small sketch of these spending rules follows this list.)
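Read as spending rules, these distinctions are simple enough to model. The sketch below uses invented donors and amounts; the only logic it encodes is from the slide: earmarked (non-core) funds are restricted, everything else is spendable at the organization's discretion.

```python
# Toy model of a multilateral agency's funds (donors and amounts invented).
contributions = [
    {"donor": "Country A", "kind": "assessed",  "amount": 50, "earmark": None},
    {"donor": "Country B", "kind": "voluntary", "amount": 30, "earmark": None},  # core
    {"donor": "Country C", "kind": "voluntary", "amount": 20,
     "earmark": "polio eradication"},                                            # non-core
]

# Assessed dues and core voluntary gifts are unrestricted.
unrestricted = sum(c["amount"] for c in contributions if c["earmark"] is None)
# Non-core gifts must be spent on the donor-designated activity.
earmarked = {c["earmark"]: c["amount"] for c in contributions if c["earmark"]}

print(f"Spendable on any priority: ${unrestricted}m")
print(f"Restricted to donor-named activities: {earmarked}")
```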
Multilateral Aid (cont’d)
• In 2013, about 59% of ODA was bilateral ODA distributed by bilateral agencies, about 28% was core multilateral ODA from assessed and voluntary contributions, and about 13% was earmarked non-core bilateral aid distributed through multilateral organizations to designated recipient countries.
The World Bank and IMF
• Two multilateral institutions have played a unique role in financing economic development projects because they offer both loans (borrowed money that must be repaid with interest) and grants (money that does not have to be repaid): the World Bank and the International Monetary Fund.
• Both institutions were founded in 1944 during a summit held at Bretton Woods, New Hampshire.
• Both are headquartered in Washington, DC.
• Both are owned by their nearly 180 member nations.
• Both may require recipient countries to implement economic policy reforms as a condition of receiving loans.
• But the two institutions have distinct functions and modes of operating. (A worked loan-versus-grant comparison appears below.)
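Because the loan/grant distinction is ultimately about repayment, a small annuity calculation makes it concrete. The principal, interest rate, and term below are hypothetical, not actual World Bank or IMF lending terms.

```python
# Hypothetical $100m development loan at 2% annual interest over 10 years.
principal = 100.0   # millions of dollars
rate = 0.02         # annual interest rate
years = 10

# Standard annuity formula for a fixed annual payment.
payment = principal * rate / (1 - (1 + rate) ** -years)
total_repaid = payment * years

print(f"Annual payment: ${payment:.1f}m; total repaid: ${total_repaid:.1f}m")
# ~$11.1m per year, ~$111.3m in total. A grant of the same $100m requires
# no repayment at all, which is why the loan/grant mix matters to recipients.
```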
The World Bank
• The World Bank…