3/22/2020  Prediction, persuasion, and the jurisprudence of behaviourism: UC MegaSearch
eds.a.ebscohost.com/eds/detail/detail?vid=1&sid=80a53932-b932-4bf6-926e-093727bceef6%40sessionmgr4007&bdata=JkF1dGhUeXBlPXNoaWIm… 1/14

Title: Prediction, persuasion, and the jurisprudence of behaviourism
By: Frank Pasquale, Glyn Cashwell
17101174, Vol. 68, Issue 1
Database: ProjectMUSE

Prediction, persuasion, and the jurisprudence of behaviourism
There is a growing literature critiquing the unreflective application of big data, predictive analytics, artificial intelligence, and machine-learning techniques to social problems. Such methods may reflect biases rather than reasoned decision making. They may also leave those affected by automated sorting and categorizing unable to understand the basis of the decisions affecting them. Despite these problems, machine-learning experts are feeding judicial opinions to algorithms to predict how future cases will be decided. We call the use of such predictive analytics in judicial contexts a jurisprudence of behaviourism, as it rests on a fundamentally Skinnerian model of cognition as a black-boxed transformation of inputs into outputs. In this model, persuasion is passé; what matters is prediction. After describing and critiquing a recent study that has advanced this jurisprudence of behaviourism, we question the value of such research. Widespread deployment of prediction models not based on the meaning of important precedents and facts may endanger core rule-of-law values.
artificial intelligence; cyber law; machine learning; jurisprudence; predictive analysis
I Introduction
A growing chorus of critics is challenging the use of opaque (or merely complex) predictive analytics programs to monitor, influence, and assess individuals’ behaviour. The rise of a ‘black box society’ portends profound threats to individual autonomy; when critical data and algorithms cannot be a matter of public understanding or debate, both consumers and citizens are unable to comprehend how they are being sorted, categorized, and influenced.[2]
A predictable counter-argument has arisen, discounting the comparative competence of human decision makers. Defending opaque sentencing algorithms, for instance, Christine Remington (a Wisconsin assistant attorney general) has stated: ‘We don’t know what’s going on in a judge’s head; it’s a black box, too.’[3] Of course, a judge must (upon issuing an important decision) explain why the decision was made; so too are agencies covered by the Administrative Procedure Act obliged to offer a ‘concise statement of basis and purpose’ for rule making.[4] But there is a long tradition of realist commentators dismissing the legal justifications adopted by judges as unconvincing fig leaves for the ‘real’ (non-legal) bases of their decisions.
In the first half of the twentieth century, the realist disdain for stated rationales for decisions led in at least two directions: toward more rigorous and open discussions of policy considerations motivating judgments and toward frank recognition of judges as political actors, reflecting certain ideologies, values, and interests. In the twenty-first century, a new response is beginning to emerge: a deployment of natural language processing and machine-learning (ML) techniques to predict whether judges will hear a case and, if so, how they will decide it. ML experts are busily feeding algorithms with the opinions of the Supreme Court of the United States, the European Court of Human Rights, and other judicial bodies, as well as with metadata on justices’ ideological commitments, past
voting record, and myriad other variables. By processing data related to cases, and the text of opinions, these systems purport to predict how judges will decide cases, how individual judges will vote, and how to optimize submissions and arguments before them.

This form of prediction is analogous to forecasters using big data (rather than understanding underlying atmospheric dynamics) to predict the movement of storms. An algorithmic analysis of a database of, say, 10,000 past cumulonimbi sweeping over Lake Ontario may prove to be a better predictor of the next cumulonimbus’s track than a trained meteorologist without access to such a data trove. From the perspective of many predictive analytics approaches, judges are just like any other feature of the natural world – an entity that transforms certain inputs (such as briefs and advocacy documents) into outputs (decisions for or against a litigant). Just as forecasters predict whether a cloud will veer southwest or southeast, the user of an ML system might use machine-readable case characteristics to predict whether a rainmaker will prevail in the courtroom.
We call the use of algorithmic predictive analytics in judicial contexts an emerging jurisprudence of behaviourism, since it rests on a fundamentally Skinnerian model of mental processes as a black-boxed transformation of inputs into outputs.[5] In this model, persuasion is passé; what matters is prediction.[6] After describing and critiquing a recent study typical of this jurisprudence of behaviourism, we question the value of the research program it is advancing. Billed as a method of enhancing the legitimacy and efficiency of the legal system, such modelling is all too likely to become one more tool deployed by richer litigants to gain advantages over poorer ones.[7] Moreover, it should raise suspicions if it is used as a triage tool to determine the priority of cases.

Such predictive analytics are only as good as the training data on which they depend, and there is good reason to doubt such data could ever generate in social analysis the types of ground truths characteristic of scientific methods applied to the natural world. While fundamental physical laws rarely if ever change, human behaviour can change dramatically in a short period of time. Therefore, one should always be cautious when applying automated methods in the human context, where factors as basic as free will and political change make the behaviour of both decision makers, and those they impact, impossible to predict with certainty.[8]
Nor are predictive analytics immune from bias. Just as judges bring biases into the courtroom, algorithm developers are prone to incorporate their own prejudices and priors into their machinery.[9] In addition, biases are no easier to address in software than in decisions justified by natural language. Such judicial opinions (or even oral statements) are generally much less opaque than ML algorithms. Unlike many proprietary or hopelessly opaque computational processes proposed to replace them, judges and clerks can be questioned and rebuked for discriminatory behaviour.[10] There is a growing literature critiquing the unreflective application of ML techniques to social problems.[11] Predictive analytics may reflect biases rather than reasoned decision making.[12] They may also leave those affected by automated sorting and categorizing unable to understand the basis of the decisions affecting them, especially when the output from the models in any way affects one’s life, liberty, or property rights and when litigants are not given the basis of the model’s predictions.[13]
This article questions the social utility of prediction models as applied to the judicial system, arguing that their deployment may endanger core rule-of-law values. In full bloom, predictive analytics would not simply be a camera trained on the judicial system, reporting on it, but it would also be an engine of influence, shaping it. Attorneys may decide whether to pursue cases based on such systems; courts swamped by appeals or applications may be tempted to use ML models to triage or prioritize cases. In work published to widespread acclaim in 2016, Nikolaos Aletras, Dimitrios Tsarapatsanis, Daniel Preoţiuc-Pietro, and Vasileios Lampos made bold claims about the place of natural language processing (NLP) in the legal system in their article Predicting Judicial Decisions of the European Court of Human Rights: A Natural Language Processing Perspective.[14] They claim that ‘advances in Natural Language Processing (NLP) and Machine Learning (ML) provide us with the tools to automatically analyse legal materials, so as to build successful predictive models of judicial outcomes.’[15] Presumably, they are referring to their own work as part of these advances. However, close analysis of their ‘systematic study on predicting the outcome of cases tried by the European Court of Human Rights based solely on textual content’ reveals that their soi-disant ‘success’ merits closer scrutiny on both positive and normative grounds.
The first question to be asked about a study like Predicting Judicial Decisions is: what are its uses and purposes? Aletras and colleagues suggest at least three uses. First, they present their work as a first step toward the development of ML and NLP software that can predict how judges and other authorities will decide legal disputes. Second, Aletras has clearly stated to media that artificial intelligence ‘could also be a valuable tool for highlighting which cases are most likely to be violations of the European Convention of Human Rights’ – in other words, that it could help courts triage which cases they should hear.[16] Third, they purport to intervene in a classic jurisprudential debate – whether facts or law matter more in judicial determinations.[17] Each of these aims and claims should be rigorously interrogated, given shortcomings of the study that the authors acknowledge. Beyond these acknowledged problems, there are even more faults in their approach which cast doubt on whether the research program of NLP-based prediction of judicial outcomes, even if pursued in a more realistic manner, has anything significant to contribute to our understanding of the legal system.
Although Aletras and colleagues have used cutting-edge ML and NLP methods in their study, their approach metaphorically stacks the deck in favour of their software and algorithms in so many ways that it is hard to see its relevance to either practising lawyers or scholars. Nor is it plausible to state that a method this crude, and disconnected from actual legal meaning and reasoning, provides empirical data relevant to jurisprudential debates over legal formalism and realism. As more advanced thinking on artificial intelligence and intelligence augmentation has already demonstrated, there is an inevitable interface of human meaning that is necessary to make sense of social institutions like law.
II Stacking the deck: ‘predicting’ the contemporaneous
The European Court of Human Rights (ECtHR) hears cases in which parties allege that their rights under the articles of the European Convention of Human Rights were violated and not remedied by their country’s courts.[18] The researchers claim that the textual model has an accuracy of ‘79% on average.’[19] Given sweepingly futuristic headlines generated by the study (including ‘Could AI [Artificial Intelligence] Replace Judges and Lawyers?’), a casual reader of reports on the study might assume that this finding means that, using the method of the researchers, those who have some aggregation of data and text about case filings can use that data to predict how the ECtHR will decide a case, with 79 per cent accuracy.[20] However, that would not be accurate. Instead, the researchers used the ‘circumstances’ subsection in the cases they claimed to ‘predict,’ which had ‘been formulated by the Court itself.’[21] In other words, they claimed to be ‘predicting’ an event (a decision) based on materials released simultaneously with the decision. This is a bit like claiming to ‘predict’ whether a judge had cereal for breakfast yesterday based on a report of the nutritional composition of the materials on the judge’s plate at the exact time she or he consumed the breakfast.[22] Readers can (and should) balk at using the term ‘prediction’ to describe correlations between past events (like decisions of a court) and contemporaneously generated, past data (like the circumstances subsection of a case). Sadly, though, few journalists breathlessly reporting the study by Aletras and colleagues did so.
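The inflation that comes from ‘predicting’ with outcome-derived text can be shown with a toy sketch (Python with scikit-learn assumed; the case summaries, words, and labels below are invented for illustration, not drawn from the ECtHR corpus). When the ‘facts’ are written by someone who already knows the outcome, a classifier trained on them achieves near-perfect ‘accuracy’ without predicting anything:

```python
# Toy illustration (not the authors' pipeline): text written with knowledge
# of the outcome carries the outcome in its framing, so "predicting" from it
# is circular.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

# Hypothetical summaries written AFTER the decision: the framing of the facts
# already reflects the result.
violation = ["applicant suffered grave ill treatment in detention"] * 20
no_violation = ["applicant complaint unsubstantiated no interference shown"] * 20
texts = violation + no_violation
labels = [1] * 20 + [0] * 20

X = CountVectorizer().fit_transform(texts)
acc = cross_val_score(LinearSVC(), X, labels, cv=5).mean()
print(round(acc, 2))  # near-perfect "accuracy" from outcome-laden text
```

A genuinely predictive test would use only documents that existed before the judgment, where no such leakage is possible.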
To their credit, though, Aletras and colleagues repeatedly emphasize how much they have effectively stacked the deck by using ECtHR-generated documents themselves to help the ML/NLP software they are using in the study ‘predict’ the outcomes of the cases associated with those documents. A truly predictive system would use the filings of the parties, or data outside the filings, that was in existence before the judgment itself. Aletras and colleagues grudgingly acknowledge that the circumstances subsection ‘should not always be understood as a neutral mirroring of the factual background of the case,’ but they defend their method by stating that the ‘summaries of facts found in the “Circumstances” section have to be at least framed in as neutral and impartial a way as possible.’[23] However, they give readers no clear guide as to when the circumstances subsection is actually a neutral mirroring of factual background or how closely it relates to records in existence before a judgment that would actually be useful to those aspiring to develop a predictive system.

Instead, their ‘premise is that published judgments can be used to test the possibility of a text-based analysis for ex ante predictions of outcomes on the assumption that there is enough similarity between (at least) certain chunks of the text of published judgments and applications lodged with the Court and/or briefs submitted by parties with respect to pending cases.’[24] But they give us few compelling reasons to accept this assumption, since almost any court writing an opinion to justify a judgment is going to develop a facts section in ways that reflect its outcome. The authors state that the ECtHR has ‘limited fact finding powers,’ but they give no
sense of how much that mitigates the problem of cherry-picked facts or statements about the facts. Nor should we be comforted by the fact that ‘the Court cannot openly acknowledge any kind of bias on its part.’ Indeed, this suggests a need for the Court to avoid the types of transparency in published justification that could help researchers artificially limited to NLP better understand it.[25] The authors also state that in the ‘vast majority of cases,’ the ‘parties do not seem to dispute the facts themselves, as contained in the “Circumstances” subsection, but only their legal significance.’ However, the critical issues here are, first, the facts themselves and, second, how the parties characterized the facts before the circumstances section was written. Again, the fundamental problem of mischaracterization – of ‘prediction’ instead of mere correlation or relationship – crops up to undermine the value of the study.
Even in its most academic mode – as an ostensibly empirical analysis of the prevalence of legal realism – the study by Aletras and colleagues stacks the deck in its favour in important ways. Indeed, it might be seen as assuming at the outset a version of the very hypothesis it ostensibly supports. This hypothesis is that something other than legal reasoning itself drives judicial decisions. Of course, that is true in a trivial sense – there is no case if there are no facts – and perhaps the authors intend to make that trivial point.[26] However, their language suggests a larger aim, designed to meld NLP and jurisprudence. Given the critical role of meaning in the latter discipline, and their NLP methods’ indifference to it, one might expect an unhappy coupling here. And that is indeed what we find.

In the study by Aletras and colleagues, the corpus used for the predictive algorithm was a body of the ECtHR’s ‘published judgments.’ Within these judgments, the factual background of the case was summarized (by the Court) in the circumstances section, but the pleadings themselves were not included as inputs.[27] The law section, which ‘considers the merits of the case, through the use of legal argument,’ was also input into the model to determine how well that section alone could ‘predict’ the case outcome.[28]
Aletras and colleagues were selective in the corpus they fed to their algorithms. The only judgments included in the corpus were those that passed both a ‘prejudicial stage’ and a second review.[29] In both stages, applications were denied if they did not meet ‘admissibility criteria,’ which were largely procedural in nature.[30] To the extent that such procedural barriers were deemed ‘legal,’ we might immediately have identified a bias problem in the corpus – that is, the types of cases where the law entirely determined the outcome (no matter how compelling the facts may have been) were removed from a data set that was ostensibly fairly representative of the universe of cases generally. This is not a small problem either; the overwhelming majority of applications were deemed inadmissible or struck out and were not reportable.[31]

But let us assume, for now, that the model only aspired to offer data about the realist/formalist divide in those cases that did meet the admissibility criteria. There were other biases in the data set. Only cases that were in English, approximately 33 per cent of the total ECtHR decisions, were included.[32] This is a strange omission since the NLP approach employed here had no semantic content – that is, the meaning of the words did not matter to it. Presumably, this omission arose out of concerns for making data coding and processing easier. There was also a subject matter restriction that further limited the scope of the sample. Only cases addressing issues in Articles 3, 6, and 8 of the ECHR were included in training and in verifying the model. And there is yet another limitation: the researchers then threw cases out randomly (so that the data set contained an equal number of violation/no violation cases) before using them as training data.[33]
III Problematic characteristics of the ECtHR textual ‘predictive’ model
The algorithm used in the case depended on an atomization of case language into words grouped together in sets of one-, two-, three-, and four-word groupings, called n-grams.[34] Then, the 2,000 most frequent n-grams, not taking into consideration ‘grammar, syntax and word order,’ were placed in feature matrices for each section of decisions and for the entire case by using the vectors from each decision.[35] Topics, which are created by ‘clustering together n-grams,’ were also created.[36] Both topics and n-grams were used ‘to train Support Vector Machine (SVM) classifiers.’ As the authors explain, an ‘SVM is a machine learning algorithm that has shown particularly good results in text classification, especially using small data sets.’[37] Model training data from these opinions were ‘n-gram features,’ which consist of groups of words that ‘appear in similar contexts.’[38] Matrix mathematics, which
are manipulations on two-dimensional tables, and vector space models, which are based on a single column within a table, were used to determine clusters of words that should be similar to one another based on textual context.[39] These clusters of words are called topics. The model prevented a word group from showing up in more than one topic. Thirty topics, or sets of similar word groupings, were created for entire court opinions; topics were similarly created for entire opinions for each article.[40] Since the court opinions all follow a standard format, the opinions could be easily dissected into different identifiable sections.[41]

Note that these sorting methods are legally meaningless. N-grams and topics are not sorted the way a treatise writer might try to organize cases or a judge might try to parse divergent lines of precedent. Rather, they simply serve as potential independent variables to predict a dependent variable (was there a violation, or was there not a violation, of the Convention).
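The general shape of such a setup — frequent n-grams of length one to four fed to a linear SVM — can be sketched as follows. This is a minimal illustration assuming scikit-learn, not the authors’ code: the training fragments and labels are invented, and the real study used 2,000 n-gram features extracted from full judgment sections.

```python
# Sketch of a bag-of-n-grams + SVM text classifier, in the spirit of the
# setup the study describes (1- to 4-grams, most frequent features, SVM).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

pipeline = make_pipeline(
    # n-grams of length 1-4, capped at the most frequent features; grammar,
    # syntax, and word order are ignored, as in any bag-of-words model.
    CountVectorizer(ngram_range=(1, 4), max_features=2000),
    SVC(kernel="linear"),  # a linear SVM classifier
)

# Invented text fragments standing in for case-section text.
train_texts = [
    "applicant detained without review",
    "applicant detained and mistreated",
    "no interference with family life shown",
    "complaint manifestly ill founded",
]
train_labels = [1, 1, 0, 0]  # 1 = violation, 0 = no violation

pipeline.fit(train_texts, train_labels)
print(pipeline.predict(["applicant detained arbitrarily"]))
```

The point the surrounding text makes survives the sketch: the classifier sees only which word groupings co-occur with which label; it has no representation of what ‘detained’ means.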
Before going further into the technical details of the study, it is useful to compare it to prior successes of ML in facial or number recognition. When a facial recognition program successfully identifies a given picture as an image of a given person, it does not achieve that machine vision in the way a human being’s eye and brain would do so. Rather, an initial training set of images (or perhaps even a single image) of the person is processed, perhaps on a 1,000-by-1,000-pixel grid. Each box in the grid can be identified as either skin or not skin, smooth or not smooth, along hundreds or even thousands of binaries, many of which would never be noticed by a human being. Moreover, such parameters can be related to one another; so, for example, regions hued as ‘lips’ or ‘eyes’ might have a certain maximum length, width, or ratio to one another (such that a person’s facial ‘signature’ reliably has eyes that are 1.35 times as long as they are wide). Add up enough of these ratios for easily recognized features (ears, eyebrows, foreheads, and so on), and software can quickly find a set of mathematical parameters unique to a given person – or at least unique enough that an algorithm can predict that a given picture is, or is not, a picture of a given person, with a high degree of accuracy. The technology found early commercial success with banks, which needed a way to recognize numbers on cheques (given the wide variety of human handwriting). With enough examples of written numbers (properly reduced to data via dark or filled spaces on a grid), and computational power, this recognition can become nearly perfect.
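The ratio-based ‘signature’ idea can be reduced to a toy sketch (plain Python; the landmark measurements, the three ratios, and the 0.1 tolerance are all invented for illustration — real systems learn thousands of parameters rather than hand-picking ratios):

```python
# Toy sketch: reduce a face to a few length/width ratios and match by
# comparing signatures. Ratios are scale-invariant, so a rescaled photo of
# the same person still matches.

def signature(landmarks):
    """landmarks: dict of feature name -> (width, height) in pixels."""
    eye_w, eye_h = landmarks["eye"]
    lip_w, lip_h = landmarks["lips"]
    return (eye_w / eye_h, lip_w / lip_h, eye_w / lip_w)

# Reference photo: eyes 1.35 times as long as they are wide, as in the text.
known = {"alice": signature({"eye": (27, 20), "lips": (50, 18)})}

def identify(landmarks, known, tolerance=0.1):
    sig = signature(landmarks)
    for name, ref in known.items():
        if all(abs(a - b) < tolerance for a, b in zip(sig, ref)):
            return name
    return None

# A new photo of the same person at double scale: ratios are preserved.
print(identify({"eye": (54, 40), "lips": (100, 36)}, known))  # -> alice
```

Note how well the pragmatic ‘whatever works’ ethic fits here: nothing in the code knows what an eye is, and nothing needs to.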
Before assenting too quickly to the application of such methods to words in cases (as we see them applied to features of faces), we should note that there are no professions of ‘face recognizers’ or ‘number recognizers’ among human beings. So while Facebook’s face recognition algorithm, or TD Bank’s cheque sorter, does not obviously challenge our intuitions about how we recognize faces or numbers, applying ML to legal cases should be marked as a jarring imperialism of ML methods into domains associated with a rich history of meaning (and, to use a classic term from the philosophy of social sciences, Verstehen). In the realm of face recognizing, ‘whatever works’ as a pragmatic ethic of effectiveness underwrites some societies’ acceptance of width/length ratios and other methods to assure algorithmic recognition and classification of individuals.[42] The application of ML approaches devoid of apprehension of meaning in the legal context is more troubling.
For example, Aletras and colleagues acknowledge that there are cases where the model predicts the incorrect outcome because of the similarity in words between cases that have opposite results. In such a case, even if information regarding the specific words that triggered the SVM classifier were output, users might not be able to easily determine that the case was likely misclassified.[43] Even with confidence interval outputs, this type of problem does not appear to have an easy solution. This is particularly troubling for due process if such an algorithm, in error, incorrectly classified someone’s case because it contained language similarities to another very different case.[44] When cases are obviously misclassified in this way, models like this would likely ‘surreptitiously embed biases, mistakes and discrimination, and worse yet, even reiterate and reinforce them on the new cases processed.’[45] So, too, might a batch of training data representing a certain time period when a certain class of cases was dominant help ensure the dominance of such cases in the future. For example, the ‘most predictive topic’ for Article 8 decisions prominently included the words ‘son, body, result, Russian.’ If the system were used in the future to triage cases, ceteris paribus, it might prioritize cases involving sons over daughters, or Russians over Poles.[46] But if those future cases do not share the characteristics of the cases in the training set that led to the ‘predictiveness’ of ‘son’ status or ‘Russian’ status, their prioritization would be a clear legal mistake.
Troublingly, the entire ‘predictive’ project here may be riddled with spurious correlations. As any student of statistics knows, if one tests enough data sets against one another, spurious correlations will emerge. For example, Tyler Vigen has shown a very tight correlation between the divorce rate in Maine and the per capita consumption of margarine between 2000 and 2009.[47] It is unlikely that one variable there is driving the other. Nor is it likely that some intervening variable is affecting both margarine
consumption and divorce rates in a similar way, so as to ensure a similar correlation in the future. Rather, this is just the type of random association one might expect to emerge once one has thrown enough computing power at enough data sets.
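The effect is easy to reproduce in miniature (Python with NumPy assumed; every series below is pure noise, so any correlation found is spurious by construction): test enough candidate series against a target and a ‘striking’ correlation appears by chance alone.

```python
# With enough candidate series, a strong correlation with any target emerges
# by chance -- the margarine-and-divorce effect in miniature.
import numpy as np

rng = np.random.default_rng(0)
target = rng.standard_normal(10)              # ten annual observations
candidates = rng.standard_normal((1000, 10))  # 1,000 unrelated series

corrs = [abs(np.corrcoef(target, c)[0, 1]) for c in candidates]
print(round(max(corrs), 2))  # the best of 1,000 coin flips looks impressive
```

None of the 1,000 series has any relationship to the target; the ‘winner’ is simply the luckiest draw, and there is no reason to expect it to correlate with the target in the future.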
It is hard not to draw similar conclusions with respect to Aletras and colleagues’ ‘predictive’ project. Draw enough variations from the ‘bag of words,’ and some relationships will emerge. Given that the algorithm only had to predict ‘violation’ or ‘no violation,’ even a random guessing program would be expected to have a 50 per cent accuracy rate. A thought experiment easily deflates the meaning of their trumpeted 79 per cent ‘accuracy.’ Imagine that the authors had continual real-time surveillance of every aspect of the judges’ lives before they wrote their opinions: the size of the buttons on their shirts and blouses, calories consumed at breakfast, average speed of commute, height and weight, and so forth. Given a near infinite number of parameters of evaluation, it is altogether possible that they could find that a cluster of data around breakfast type, or button size, or some similarly irrelevant characteristic, also added an increment of roughly 29 per cent accuracy to the baseline 50 per cent accuracy achieved via randomness (or always guessing violation). Should scholars celebrate the ‘artificial intelligence’ behind such a finding? No. Ideally, they would chuckle at it, as readers of Vigen’s website find amusement at random relationships between, say, the number of letters in winning words at the National Spelling Bee and the number of people killed by venomous spiders (which enjoy an 80.57 per cent correlation).
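The button-size thought experiment can be made concrete (Python with NumPy and scikit-learn assumed; the ‘cases’, features, and labels below are all random noise): give a model far more irrelevant parameters than cases, and it will ‘explain’ the outcomes almost perfectly in-sample.

```python
# With 1,000 noise features (button sizes, breakfast calories, ...) and only
# 20 cases, a linear SVM fits coin-flip outcomes nearly perfectly in-sample.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.standard_normal((20, 1000))   # 20 "cases", 1,000 irrelevant features
y = rng.permutation([0, 1] * 10)      # balanced, arbitrary "violation" labels

model = SVC(kernel="linear", C=100).fit(X, y)
print(model.score(X, y))  # near-perfect fit to pure noise
```

The fit is an artefact of dimensionality, not discovery: on fresh noise, such a model would fall back to roughly the 50 per cent baseline.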
This may seem unfair to Aletras and colleagues since they are using so much more advanced math than Vigen is. However, their models do not factor in meaning, which is of paramount importance in rights determinations. To be sure, words like ‘burial,’ ‘attack,’ and ‘died’ do appear properly predictive, to some extent, in Article 8 decisions and cause no surprise when they are predictive of violations.[48] But what are we to make of inclusion of words …
Year  Quarter  Location  CarClass  Revenue     NumCars
2017  Q1       Downtown  Economy   $976,803      6,137
2017  Q1       Airport   Economy   $1,047,031    5,773
2015  Q3       Downtown  Economy   $804,931      5,564
2016  Q4       Airport   Economy   $958,989      5,370
2016  Q1       Downtown  Economy   $750,562      5,048
2015  Q3       Airport   Economy   $733,215      4,917
2016  Q4       Downtown  Economy   $735,993      4,751
2016  Q3       Downtown  Economy   $712,136      4,703
2016  Q2       Downtown  Economy   $670,068      4,459
2015  Q4       Airport   Economy   $639,838      4,256
2015  Q4       Airport   Premium   $663,293      4,137
2016  Q3       Airport   Premium   $688,190      4,081
2015  Q4       Downtown  Premium   $623,279      4,072
2017  Q1       Airport   Premium   $709,705      4,024
2017  Q2       Airport   Premium   $721,899      4,008
2016  Q2       Airport   Premium   $626,117      3,773
2017  Q2       Downtown  Economy   $600,403      3,748
2016  Q3       Airport   Economy   $620,543      3,665
2016  Q1       Airport   Premium   $590,987      3,621
2015  Q3       Downtown  Premium   $540,136      3,584
2015  Q4       Downtown  Economy   $531,619      3,582
2015  Q2       Airport   Economy   $501,606      3,470
2016  Q1       Airport   Economy   $521,223      3,406
2015  Q1       Airport   Economy   $469,217      3,387
2016  Q2       Downtown  Premium   $522,789      3,283
2017  Q2       Airport   Economy   $621,746      3,282
2015  Q2       Downtown  Premium   $487,304      3,274
2016  Q4       Airport   Premium   $564,853      3,260
2015  Q3       Airport   Premium   $504,843      3,194
2016  Q3       Downtown  Premium   $517,084      3,185
2016  Q1       Downtown  Premium   $444,067      2,840
2015  Q2       Downtown  Economy   $396,037      2,839
2015  Q1       Downtown  Economy   $374,342      2,817
2016  Q4       Downtown  Premium   $450,598      2,748
2017  Q1       Downtown  Premium   $451,848      2,695
2015  Q1       Downtown  Premium   $370,169      2,537
2015  Q1       Airport   Premium   $375,634      2,507
2016  Q2       Airport   Economy   $384,966      2,277
2015  Q2       Airport   Premium   $316,848      2,057
2017  Q2       Downtown  Premium   $344,292      2,008
Excel Project 3 – MS Excel
Summer 2018

Use the project description HERE to complete this activity. For a review of the complete rubric used in grading this exercise, click on the Assignments tab, then on the title Excel Project #3. Click on Show Rubrics if the rubric is not already displayed.

Summary
Create a Microsoft Excel file with four worksheets that provides extensive use of Excel capabilities for charting. The charts will be copied into a Microsoft PowerPoint file, and the student will develop appropriate findings and recommendations based on analysis of the data.

A large rental car company has two metropolitan locations, one at the airport and another centrally located downtown. It has been operating since 2015, and each location summarizes its car rental revenue quarterly. Both locations rent two classes of cars: economy and premium. Rental revenue is maintained separately for the two classes of rental vehicles.

The data for this case resides in the file summer2018rentalcars.txt and can be downloaded by clicking on the Assignments tab, then on the data file name. It is a text file (with the file type .txt). Do not create your own data; you must use the data provided, and only the data provided.

Default formatting: all labels, text, and numbers will be Arial 10. There will be $, comma, and decimal point variations for numeric data, but Arial 10 will be the default font and font size.
Step Requirement
Points
Allocated
Comments
1
Open Excel and save a blank workbook with the following
name:
a. “Student’s First InitialLast Name Excel Project 3”
Example: JSmith Excel Project 3
b. Set Page Layout Orientation to Landscape
0.2
Use Print Preview to review
how the first worksheet
would print.
2 Change the name of the worksheet to Analysis by. 0.1
3
In the Analysis by worksheet:
a. Beginning in Row 1, enter the four labels in column
A (one label per row) in the following order:
Name:, Class/Section:, Project:, Date Due:
b. Place a blank row between each label. Please note
the colon : after each label.
c. Align the labels to the right side in the cells
It may be necessary to adjust the column width so the four
labels are clearly visible.
0.3
Format for text in column
A:
• Arial 10 point
• Normal font
• Right-align all four
labels in the cells
4
In the Analysis by worksheet with all entries in column C:
a. Enter the appropriate values for your Name, Class
and Section, Project, Date Due across from the
appropriate label in column A.
0.2
Format for text in column
C:
• Arial 10 point
b. Use the formatting in the Comments column (to the
right).
• Bold
• Left-align all four
values in the cells
5
a. Create three new worksheets: Data, Slide 2, Slide 3.
Upon completion, there must be the Analysis by
worksheet as well as the three newly created
worksheets.
b. Delete any other worksheets.
0.2
6
After clicking on the blank cell A1 (to select it) in the Data
worksheet:
a. Import the text file summer2018rentalcars.txt into
the Data worksheet.
b. Adjust all column widths so there is no data or column
header truncation.
Though the intent is to import the text file into the Data
worksheet, sometimes when text data is imported into a
worksheet, a new worksheet is created. If this happens,
delete the blank Data worksheet, and then rename the new
worksheet which HAS the recently imported data as “Data.”
It may be necessary to change Revenue data to Currency
format (leading $ and thousands separators) with NO
decimal points and to change NumCars data to Number
format with NO decimal points, but with the comma
(thousands separator) because of the import operation.
This may or may not occur, but in case it does it needs to be
corrected. Adjust all column widths so there is no data or
column header truncation.
0.3
Format for all data (field
names, data text, and data
numbers)
• Arial 10 point.
• Normal font
The field names must be in
the top row of the
worksheet with the data
directly under it in rows.
This action may not be
necessary as this is part of
the Excel table creation
process. The data must
begin in Column A.
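For students who want to sanity-check the import outside Excel, here is a rough pandas equivalent of step 6's text import and number formatting. The column names and sample rows below are made up for illustration; the real summer2018rentalcars.txt may use different field names or delimiters.

```python
import io
import pandas as pd

# Hypothetical sample standing in for summer2018rentalcars.txt;
# the real file's columns and delimiter may differ.
sample = io.StringIO(
    "Year\tQuarter\tLocation\tClass\tRevenue\tNumCars\n"
    "2016\tQ4\tDowntown\tPremium\t450598\t2748\n"
    "2017\tQ1\tDowntown\tPremium\t451848\t2695\n"
)

# Equivalent of Excel's text import: parse the tab-delimited file
# with the field names in the top row.
df = pd.read_csv(sample, sep="\t")

# Mimic the required display formats: leading $ with thousands
# separators for Revenue, thousands separator for NumCars, no decimals.
df["Revenue_fmt"] = df["Revenue"].map("${:,.0f}".format)
df["NumCars_fmt"] = df["NumCars"].map("{:,.0f}".format)
print(df[["Year", "Revenue_fmt", "NumCars_fmt"]])
```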
7
In the Data worksheet:
a. Create an Excel table with the recently imported
data.
b. Pick a style within the Styles group to format the table
(choose a style that shows banded rows, i.e., rows
that alternate between 2 colors).
c. The style must highlight the field names in the first
row. These are your table headers.
d. Ensure NO blank cells are part of the specified data
range.
e. Ensure that Header Row and Banded Rows are
selected in the Table Style Options Group Box. Do
NOT check the Total Row.
0.5
Some adjustment may be
necessary to column
widths to ensure all field
names and all data are
readable (not truncated or
obscured).
8
In the Data worksheet,
a. Sort the entire table by Year (Ascending).
b. Delete rows that contain 2015 data as well as 2017
data.
The resulting table must consist of Row 1 labels followed by
2016 data, with NO empty cells or rows within the table.
0.2
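The sort-and-filter in step 8 can be mirrored in pandas as a quick check. The rows below are hypothetical; the imported table will have more columns.

```python
import pandas as pd

# Hypothetical rows standing in for the imported rental-car table.
df = pd.DataFrame({
    "Year": [2015, 2016, 2017, 2016],
    "Quarter": ["Q1", "Q4", "Q1", "Q2"],
    "NumCars": [2537, 2748, 2695, 2277],
})

# Step 8 equivalent: sort ascending by Year, then keep only the 2016
# rows (dropping 2015 and 2017) and reset the index so there are no gaps.
df = df.sort_values("Year")
df_2016 = df[df["Year"] == 2016].reset_index(drop=True)
print(df_2016)
```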
9
In the Data worksheet:
0.4
a. Select the entire table (data and headers) using a
mouse.
b. Copy the table to both the Slide 2 and Slide 3 worksheets.
c. The upper left-hand corner of the header/data must be in cell A1 on Slide 2 and Slide 3.
Adjust column widths if necessary to ensure all data and field names are readable.
10
In the Slide 2 worksheet, based solely on the 2016 data:
a. Create a Pivot Table that displays the total number of car
rentals for each car class and the total number of car
rentals for each of the four quarters of 2016. A grand
total for the total number of rentals must also be
displayed. The column labels must be the two car
classes and the row labels must be the four quarters.
b. Place the pivot table two rows below the data beginning
at the left border of column A. Ensure that the formatting
is as listed in the Comments column.
c. Create a Pivot Table that displays the total number of car
rentals for each location and the total number of car
rentals for each of the four quarters of 2016. A grand
total for the total number of rentals must also be
displayed. The column labels must be the two locations
and the row labels must be the four quarters.
d. Place this pivot table two rows below the above pivot
table beginning at the left border of column A. Ensure
that the formatting is as listed in the Comments column.
Adjust the column widths as necessary to preclude data and
title and label truncation.
2.0
Format (for both pivot
tables):
• Number format with
comma separators (for
thousands)
• No decimal places
• Arial 10 point
• Normal
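Conceptually, each pivot table in step 10 is a sum of NumCars grouped by quarter and by car class (or location), with grand totals. A pandas sketch with made-up numbers shows the same aggregation:

```python
import pandas as pd

# Hypothetical 2016 rental rows (real values come from the imported file).
df = pd.DataFrame({
    "Quarter":  ["Q1", "Q1", "Q2", "Q2", "Q3", "Q4"],
    "Location": ["Airport", "Downtown", "Airport", "Downtown", "Airport", "Downtown"],
    "Class":    ["Economy", "Premium", "Economy", "Premium", "Premium", "Economy"],
    "NumCars":  [100, 200, 150, 250, 120, 180],
})

# Pivot 1 (step 10a layout): quarters as rows, car classes as columns,
# summed rental counts, with grand totals.
by_class = pd.pivot_table(df, values="NumCars", index="Quarter",
                          columns="Class", aggfunc="sum",
                          fill_value=0, margins=True, margins_name="Grand Total")

# Pivot 2 (step 10c layout): quarters as rows, locations as columns.
by_location = pd.pivot_table(df, values="NumCars", index="Quarter",
                             columns="Location", aggfunc="sum",
                             fill_value=0, margins=True, margins_name="Grand Total")
print(by_class)
print(by_location)
```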
11
In the Slide 2 worksheet, based solely on the 2016 data:
a. Using the pivot table created in Step 10 a, create a bar or
column chart that displays the number of car rentals by
car class for the four 2016 quarters. Both car types and
quarters must be clearly visible.
b. Add a title that reflects the information presented by the
chart.
c. Position the top of the chart in row 1 and two or three
columns to the right of the data table. Use this same
type of bar or column chart for the remaining three charts
to be created.
d. Using the pivot table created in 10 c, create a bar or
column chart that displays the number of car rentals by
location for the four 2016 quarters. Both locations and
quarters must be clearly visible.
e. Add a title that reflects the information presented by the
chart.
f. Left-align this chart with the left side of the first chart and
below it. The same type of bar or column chart must be
used throughout this project.
1.8
The charts must allow a viewer to determine the approximate number of car rentals by car class (first chart) and the number of car rentals by location (second chart).
The charts must have no
more than eight bars or
columns.
ALL FOUR charts must be
the same “format.”
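The clustered column chart in step 11 can also be reproduced outside Excel. This matplotlib sketch (with hypothetical counts) renders four quarter groups with two bars each, i.e., the eight bars the Comments column allows:

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen; no display needed
import matplotlib.pyplot as plt
import pandas as pd

# Hypothetical quarter-by-class rental counts (the step 10a pivot shape).
pivot = pd.DataFrame(
    {"Economy": [430, 520, 480, 510], "Premium": [390, 450, 470, 500]},
    index=["Q1", "Q2", "Q3", "Q4"],
)

# Clustered column chart: one group per quarter, one bar per car class
# (eight bars total), with a descriptive title as step 11 requires.
ax = pivot.plot.bar(rot=0)
ax.set_title("Number of Cars Rented by Class, 2016")
ax.set_xlabel("Quarter")
ax.set_ylabel("Cars rented")
plt.tight_layout()
plt.savefig("cars_by_class_2016.png")
```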
12
In the Slide 3 worksheet, based solely on the 2016 data:
a. Create a Pivot Table that displays the total revenue for
each car class and the total revenue for each of the four
quarters of 2016. A grand total for the total revenue must
also be displayed. The column labels must be the two
car classes and the row labels must be the four quarters.
b. Place the pivot table two rows below the data beginning
at the left border of column A.
c. Create a Pivot Table that displays the total revenue
for each location and the total revenue for each of the
four quarters of 2016. A grand total for the total revenue
must also be displayed. The column labels must be the
two locations and the row labels must be the four
quarters.
d. Place this pivot table two rows below the above pivot
table beginning at the left border of column A.
Adjust the column widths as necessary to preclude data and
title and label truncation.
2.0
Format (for both pivot
tables):
• Currency ($) with
comma separators (for
thousands)
• No decimal places
• Arial 10 point
Normal
13
In the Slide 3 worksheet, based solely on the 2016 data:
a. Using the pivot table created in Step 12 a, create a bar
or column chart that displays the revenue from car
rentals by car class for the four 2016 quarters. Ensure
both car types and quarters are clearly visible.
b. Add a title that reflects the information presented by the
chart.
c. Position the top of the chart in row 1 and two or three
columns to the right of the data table. The same type of
bar chart must be used throughout this project.
d. Using the pivot table created in Step 12 c, create a bar or
column chart that displays the revenue from car rentals
by location for the four 2016 quarters. Ensure both
locations and quarters are clearly visible.
e. Add a title that reflects the information presented by the
chart.
f. Left-align this chart with the left side of the first chart and
below it. The same type of bar chart must be used
throughout this project.
1.8
The charts must allow a viewer to determine the approximate car rental revenue by car class (first chart) and by location (second chart).
The charts must have no
more than eight bars or
columns.
ALL FOUR charts must be
the same “format.”
14
a. Open a new, blank Power Point presentation file.
b. Save the Presentation using the following name:
“Student’s First Initial Last Name Presentation”
Example: JSmith Presentation
0.1
15
Slides are NOT Microsoft Word documents viewed
horizontally. Be brief. Full sentences are not needed. Blank
space in a slide enhances the viewer experience and
contributes to readability.
Slide 1:
a. Select an appropriate Design to maintain a
consistent look and feel for all slides in the
presentation. Blank slides with text are not
acceptable.
b. This is your Title Slide.
c. Select an appropriate title and subtitle layout that
clearly conveys the purpose of your presentation.
d. Name, Class/Section, and Date Due must be
displayed.
0.8
No speaker notes required.
Remember, the title on
your slide must convey
what the presentation is
about. Your Name,
Class/Section, and Date
Due can be used in the
subtitle area.
16
Slide 2:
a. Title this slide "Number of Cars Rented in 2016"
b. Add two charts created in the Slide 2 worksheet of
the Excel file
c. The charts must be the same type and equal size and
be symmetrically placed on the slide.
d. A bullet or two of explanation of the charts may be included, but is not required if the charts are self-explanatory.
e. Use the speaker notes feature to help you discuss the
bullet points and the charts (four complete sentences
minimum).
1.1
Ensure that there are no
grammar or spelling errors
on your chart and in your
speaker notes.
17
Slide 3:
a. Title this slide "Car Rental Revenue in 2016"
b. Add two charts, created in the Slide 3 worksheet of
the Excel file.
c. The charts must be the same type and equal size and
be symmetrically placed on the slide.
d. A bullet or two of explanation of the charts may be included, but is not required if the charts are self-explanatory.
e. Use the speaker notes feature to help you discuss the
bullet points and the charts (four complete sentences
minimum).
1.1
Ensure that there are no
grammar or spelling errors
on your chart and in your
speaker notes.
18
Slide 4:
a. Title this slide "And in Conclusion….."
b. Write and add two major bullets, one for findings and
one for recommendations.
c. There must be a minimum of one finding based on
slide 2 and one finding based on slide 3. Findings are
facts that can be deduced by analyzing the charts.
What happened? Trends? Observations?
d. There must be a minimum of one recommendation
based on slide 2 and one recommendation based on
slide 3. Recommendations are strategies or
suggestions to improve or enhance the business
based on the findings above.
1.1
Ensure that there are no
grammar or spelling errors
on your chart and in your
speaker notes.
e. Use the speaker notes feature to help you discuss the
findings and recommendations (four complete
sentences minimum).
19
Add a relevant graphic that enhances the recommendations
and conclusions on slide 4. If a photo is used, be sure to cite
the source. The source citation must be no larger than Font
size of 6, so it does not distract from the content of the slide.
0.2
20
Create a footer using "Courtesy of Your Name" so that it shows on all slides including the Title Slide. The text in this
footer must be on the left side of the slides IF the theme
selected allows. Otherwise let the theme determine the
position of this text.
0.2
Replace the words "Your
Name" with your actual
name.
21
Create a footer for automated Slide Numbers that appears
on all slides except the Title Slide. The page number must
be on the right side of the slides IF the theme selected
allows. Otherwise let the theme determine the position of the
page number.
Ensure that your name does appear on every slide, but the
page numbers start on slide #2. This will involve slightly
different steps to accomplish both.
0.2
Depending upon the theme
you have chosen, the page
number or your name may
not appear in the lower
portion of the slide. That is
ok, as long as both appear
somewhere on the slides.
22 Apply a transition scheme to all slides. 0.1
One transition scheme
may be used OR different
schemes for different
slides
23
Apply an animation on at least one slide. The animation may
be applied to text or a graphic.
0.1
TOTAL 15.0
Be sure you submit BOTH the Excel file and the PowerPoint
file in the appropriate Assignment
folder (Excel Project #3).
3/22/2020 Big Data Analytics for Rapid, Impactful, Sustained,
and Efficient (RISE) Hu...: UC MegaSearch
eds.a.ebscohost.com/eds/detail/detail?vid=2&sid=2956bf6a-
6493-4df7-9888-
5624b87bfb48%40sessionmgr4007&bdata=JkF1dGhUeXBlPXN
oaWImc… 1/9
Title:
Database:
Big Data Analytics for Rapid, Impactful, Sustained, and
Efficient (RISE)
Humanitarian Operations. By: Swaminathan, Jayashankar M.,
Production &
Operations Management, 10591478, Sep2018, Vol. 27, Issue 9
Business Source Premier
Big Data Analytics for Rapid, Impactful, Sustained,
and Efficient (RISE) Humanitarian Operations
There has been a significant increase in the scale and scope of
humanitarian efforts over the last decade.
Humanitarian operations need to be rapid, impactful, sustained, and efficient (RISE). Big data offers many
opportunities to enable RISE humanitarian operations. In this
study, we introduce the role of big data in
humanitarian settings and discuss data streams which could be
utilized to develop descriptive, prescriptive, and
predictive models to significantly impact the lives of people in
need.
big data; humanitarian operations; analytics
Introduction
Humanitarian efforts are increasing daily in both scale and scope. This past year has been terrible in terms of devastation and losses from hurricanes and earthquakes in North America. Hurricanes Harvey and Irma are expected to lead to losses of more than US$150 billion due to damage and lost productivity
(Dillow [ 8] ). In addition, more than 200 lives have been lost
and millions of people have suffered from power
outages and shortage of basic necessities for an extended period
of time in the United States and the Caribbean. In
the same year, a magnitude 7.1 earthquake rattled Mexico City, killing
more than 150 people and leaving thousands struggling
to get their lives back to normalcy (Buchanan et al. [ 2] ). Based on the Intergovernmental Panel on Climate Change, NASA predicts that global warming could lead to an increase in natural calamities such as droughts and more intense storms, hurricanes, monsoons, and mid-latitude storms in the upcoming years. Simultaneously, the
geo‐political, social, and economic tensions have increased the
need for humanitarian operations globally; such
impacts have been experienced due to the crisis in Middle East,
refugees in Europe, the systemic needs related to
drought, hunger, disease, and poverty in the developing world,
and the increased frequency of random acts of
terrorism. According to the Global Humanitarian Assistance
Report, 164.2 million people across 46 countries
needed some form of humanitarian assistance in 2016 and 65.6
million people were displaced from their homes,
the highest number witnessed thus far. At the same time, international humanitarian aid increased to an all-time high of US$27.3 billion, from US$16.1 billion in 2012. Despite that increase, the common belief is that
funding is not sufficient to meet the growing humanitarian
needs. Therefore, humanitarian organizations will
continue to operate under capacity constraints and will need to
innovate their operations to make them more
efficient and responsive.
There are many areas in which humanitarian operations can
improve. Humanitarian operations are often blamed
for being slow or unresponsive. For example, the most recent
relief efforts for Puerto Rico have been criticized for
slow response. These organizations also face challenges in
being able to sustain a policy or best practice for an
extended period of time because of constant turnover in
personnel. They are often blamed for being inefficient in
how they utilize resources (Vanrooyen [ 29] ). Some of the reasons that contribute to their inefficiency include the operating environment (such as infrastructure deficiencies in the last mile), socio-political tensions, uncertainty in funding, the randomness of events, and the presence of multiple agencies and stakeholders. However, it is critical that humanitarian operations show a high level of performance so that every dollar routed into these activities is utilized to have the maximum impact on the people in need.
Twenty‐one donor governments and 16 agencies have
pledged at the World Humanitarian Summit in 2016 to find at
least one billion USD in savings by working more
efficiently over the next 5 years (Rowling [ 24] ).
We believe the best-performing humanitarian operations need to have the following characteristics: they need to be Rapid, they have to be Impactful in terms of saving human lives, they should be effective in terms of providing Sustained benefits, and they should be highly Efficient. We coin RISE as an acronym that succinctly describes the characteristics of successful humanitarian operations; it stands for Rapid, Impactful, Sustained, and Efficient.
One of the major opportunities for improving humanitarian operations lies in how data and information are leveraged to develop the above competencies. Traditionally, humanitarian operations have suffered from a lack of consistent data and information (Starr and Van Wassenhove [ 26] ). In these settings, information comes from a
diverse set of stakeholders and a common information
technology is not readily deployable in remote parts of the
world. However, the Big Data wave that is sweeping through all
business environments is starting to have an
impact in humanitarian operations as well. For example, after
the 2010 Haiti Earthquake, population displacement
was studied for a period of 341 days using data from mobile
phone and SIM card tracking using FlowMinder. The
data analysis allowed researchers to predict refugee locations 3
months out with 85% accuracy. This analysis
facilitated the identification of cholera outbreak areas (Lu et al.
[ 18] ). Similarly, during the Typhoon Pablo in
2012, the first official crisis map was created using social media
data that gave situation reports on housing,
infrastructure, crop damage, and population displacement using
metadata from Twitter. The map became
influential in guiding both UN and Philippines government
agencies (Meier [ 21] ).
Big Data is defined as large volumes of structured and unstructured data. The three V's of Big Data are Volume, Variety, and Velocity (McAfee and Brynjolfsson [ 19] ). Big
Data Analytics examines large amounts of data to
uncover hidden patterns and correlations which can then be
utilized to develop intelligence around the operating
environment to make better decisions. Our goal in this article is
to lay out a framework and present examples
around how Big Data Analytics could enable RISE humanitarian
operations.
Humanitarian Operations—Planning and Execution
Planning and Execution are critical aspects of humanitarian
operations that deal with emergencies (like
hurricanes) and systemic needs (hunger). All humanitarian
operations have activities during preparedness phase
(before) and disaster phase (during). Emergencies also need
additional focus on the recovery phase (after).
Planning and execution decisions revolve around Where, When,
How, and What. We will take the UNICEF RUTF
supply chain for the Horn of Africa (Kenya, Ethiopia, and
Somalia) as an example. RUTF (ready to use
therapeutic food) also called Plumpy’ Nut is a packaged protein
supplement that can be given to malnourished
children under the age of 5 years. The supplement was found to
be very effective; therefore, the demand for RUTF
skyrocketed, and UNICEF supply chain became over stretched
(Swaminathan [ 27] ). UNICEF supply chain
showed many inefficiencies due to long lead times, high
transportation costs, product shortages, funding
uncertainties, severe production capacity constraints, and
government regulations (So and Swaminathan [ 25] ).
Our analysis using forecasted demand data from the region
found that it was important to determine where
inventory should be prepositioned (in Kenya or in Dubai). The
decision greatly influenced the speed and efficiency
of distribution of RUTF. The amount of prepositioned inventory
also needed to be appropriately computed and
operationalized (Swaminathan et al. [ 28] ). Given that the
amount of funding and timing showed a lot of
uncertainty, when funding was obtained, and how inventory was
procured and allocated, dramatically influenced
the overall performance (Natarajan and Swaminathan [ 22] , [
23] ). Finally, understanding the major roadblocks to
execution and addressing those for a sustained solution had a
great impact on the overall performance. In the
UNICEF example, solving the production bottleneck in France
was critical. UNICEF was able to successfully
diversify its global supply base and bring in more local
suppliers into the network. Along with the other changes that were incorporated, the UNICEF RUTF supply chain came closer to being a RISE humanitarian operation, and it is estimated that an additional one million malnourished children were fed RUTF over the next 5 years (Komrska
et al. [ 15] ). There are a number of other studies that have
developed robust optimization models and analyzed
humanitarian settings along many dimensions. While not an
exhaustive list, these areas include humanitarian
transportation planning (Gralla et al. [ 13] ), vehicle
procurement and allocation (Eftekar et al. [ 9] ), equity and
fairness in delivery (McCoy and Lee [ 20] ), funding processes
and stock‐outs (Gallien et al. [ 12] ), post‐disaster
debris operation (Lorca et al. [ 17] ), capacity planning (Deo et
al. [ 6] ), efficiency drivers in global health
(Berenguer et al. [ 1] ), and decentralized decision‐making (Deo
and Sohoni [ 5] ). In a humanitarian setting, the
following types of questions need to be answered.
Where
a. Where is the affected population? Where did it originate? Where is it moving to?
b. Where is the supply going to be stored? Where is the supply coming from? Where will the distribution points be located?
c. Where is the source of the disruption (e.g., hurricane) located? Where is it coming from? Where is it moving to?
d. Where is the debris concentrated after the event?
When
a. When is the landfall or damage likely to occur?
b. When is the right time to alert the affected population to minimize damages as well as unwanted stress?
c. When should delivery vehicles be dispatched to the affected area?
d. When should supply be reordered to avoid stock-outs or long delays?
e. When should debris collection start?
How
a. How should critical resources be allocated to the affected population?
b. How much of the resources should be prepositioned?
c. How many suppliers or providers should be in the network?
d. How should much-needed supplies and personnel be transported into the affected areas?
e. How should the affected population be routed?
What
a. What types of calamities are likely to happen in the future?
b. What policies and procedures could help in planning and execution?
c. What are the needs of the affected population? What are the reasons for the distress or movement?
d. What needs are most urgent? What additional resources are needed?
Big Data Analytics
Big Data Analytics can help organizations obtain better answers to the above types of questions and, in the process, enable them to make sound real-time decisions during and after an event as well as help them plan and prepare before it (see Figure). Descriptive analytics (which describes the situation) could be used for describing the current crisis state, identifying needs and key drivers, and advocating policies. Prescriptive analytics (which prescribes solutions) can be utilized in alert and dispatch, prepositioning of supplies, routing,
supplier selection, scheduling, allocation, and capacity management. Predictive analytics (which predicts the future state) could be utilized for developing forecasts around societal needs, surge capacity needs in an emergency, supply planning, and financial needs. Four types of data streams that could be utilized to develop such models are social media data, SMS data, weather data, and enterprise data.
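As a concrete illustration of prescriptive analytics, the classic newsvendor model chooses a prepositioned stock level from a demand forecast and the relative costs of overstocking and shortage. The sketch below is illustrative only; the costs and the normal demand forecast are hypothetical, not drawn from the studies cited here.

```python
import math
from statistics import NormalDist

# Illustrative newsvendor sketch of prescriptive prepositioning:
# choose a stock level that balances the cost of unused kits against
# the penalty of unmet need. All numbers are hypothetical.
mean_demand, sd_demand = 10_000.0, 2_000.0   # forecast kits needed
overage_cost = 5.0     # cost per kit prepositioned but unused
underage_cost = 45.0   # penalty per kit short during the emergency

# Critical ratio and the corresponding quantile of the demand forecast.
critical_ratio = underage_cost / (underage_cost + overage_cost)  # 0.9
q = NormalDist(mean_demand, sd_demand).inv_cdf(critical_ratio)

print(f"Preposition about {math.ceil(q):,} kits "
      f"(service level {critical_ratio:.0%})")
```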
Social Media Data
The availability of data from social media such as Twitter has
opened up several opportunities to improve
humanitarian emergency response. Descriptive analytics on the data feed during an emergency could help create the emergency crisis map rapidly and give information about areas of acute need as well as the movement of the distressed population. This could help with rapid response in the areas that need the most help. Furthermore, such a data feed
could also be used to predict the future movement of the
affected population as well as surges in demand for
certain types of products or services. A detailed analysis of
these data after the event could inform humanitarian
operations about the quality of response during the disaster as
well as better ways to prepare for future events of a
similar type. This could be in terms of deciding where to stock
inventory, when and how many supply vehicles
should be dispatched and also make a case for funding needs
with the donors. Simulation using social media data
could provide solid underpinning for a request for increased
funding. Analysis of information diffusion in the
social network could present new insights on the speed and
efficacy of messages relayed in the social network
(Yoo et al. [ 30] ). Furthermore, analyzing the population
movement data in any given region of interest could
provide valuable input for ground operations related to supply
planning, positioning, and vehicle routing. Finally,
social media data comes directly from the public and sometimes may contain random or useless information even during an emergency. There is an opportunity to develop advanced filtering models so that social media data can be leveraged in real-time decision-making.
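A minimal sketch of such a filtering model, using hand-picked keywords rather than the trained classifiers a real system would employ (all messages and keywords below are made up):

```python
# Crude relevance filter: keep only messages that mention at least one
# need-related term and at least one known location. Keywords are
# illustrative; production systems would use trained classifiers.
NEED_TERMS = {"trapped", "flood", "water", "medical", "shelter", "food"}

def is_actionable(message: str, known_locations: set) -> bool:
    """True if the message names both a need and a known location."""
    tokens = {t.strip(".,!?").lower() for t in message.split()}
    return bool(tokens & NEED_TERMS) and bool(tokens & known_locations)

locations = {"riverside", "midtown"}
feed = [
    "Family trapped on roof in Riverside, need water!",
    "Great concert tonight!!!",
    "Midtown clinic out of medical supplies",
]
actionable = [m for m in feed if is_actionable(m, locations)]
print(actionable)  # keeps the two emergency messages
```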
SMS Data
Big Data Analytics can also be adapted successfully for
SMS‐based mobile communications. For example, a
number of areas in the United States have started using cell
phone SMS to text subscribers about warnings and
alerts. Timely and accurate alerts can save lives particularly
during emergencies. Predictive analytics models can
be developed to determine when, where, and to whom these alerts should be broadcast in order to maximize the efficacy of the alerts. The usage of mobile alerts is gaining
momentum in the case of sustained humanitarian
response as well. For example, frequent reporting of inventory
at the warehouse for food and drugs can reduce
shortages. Analytics on these data could provide more nuances
on the demand patterns which in turn could be
used to plan for the correct amount and location of supplies.
Mobile phone alerts have also shown to improve
antiretroviral treatment adherence in patients. In such
situations, there is a great opportunity to analyze what kinds
of alerts and what levels of granularity lead to the best response
from the patient.
Weather Data
Most regions have highly sophisticated systems to track weather
patterns. This type of real‐time data is useful in
improving the speed of response, so that the affected population
can be alerted early and evacuations can be
planned better. It also has a lot of information for designing
humanitarian efforts for the future. For example, by
analyzing the data related to the weather changes along with
population movement, one could develop robust
prescriptive models around how shelter capacity should be
planned as well as how the affected population should
be routed to these locations. So, rather than trying to reach a
shelter on their own, an affected person can be
assigned a shelter and directed to go there. Prepositioning of
inventory at the right locations based on weather data
could improve response dramatically as reflected by the actions
of firms such as Wal‐Mart and Home Depot that
have made it a routine process after successful implementation
during Hurricane Katrina. Finally, the weather
pattern data could be utilized to develop predictive models
around the needs of the population in the medium to
long term. For example, the drought cycles in certain regions of
Africa follow a typical time pattern. A predictive
model around the chances of famine in those regions could then
inform the needs and funding requirements for
food supplements.
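A toy version of such a periodic predictive model forecasts the next lean season from prior years in the same phase of an assumed drought cycle. The series, the two-year cycle, and the function below are all hypothetical illustrations, not real drought data:

```python
from statistics import mean

# Hypothetical metric tons of food supplement needed per lean season.
history = {2013: 820, 2014: 1450, 2015: 860, 2016: 1510, 2017: 900}

def seasonal_forecast(history, cycle=2, horizon=1):
    """Average demand over prior years in the same phase of the cycle."""
    years = sorted(history)
    target = years[-1] + horizon
    same_phase = [history[y] for y in years if (target - y) % cycle == 0]
    return mean(same_phase)

# 2018 falls in the same phase as 2014 and 2016 under the assumed
# 2-year drought cycle, so the forecast averages those two years.
print(seasonal_forecast(history))
```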
Enterprise Data
Most large humanitarian organizations such as UNICEF have
information systems that collect a large amount of
data about their operations. Analytics on such data can be useful
to develop robust policies and guide the
operational decisions well. For example, for both systemic and emergent humanitarian needs, analyzing the demand and prepositioning inventory accordingly has been shown to improve operational performance. Furthermore, the analysis
of long‐term data could provide guidelines for surge capacity
needed under different environments as well as
predict long‐term patterns for social needs across the globe due
to changing demographics and socioeconomic
conditions.
As Big Data Analytics models and techniques develop further, there
will be greater opportunities to leverage these data streams
effectively, particularly because data from different sources may not
have the same level of fidelity in a humanitarian setting. While data
are abundant in the developed world, there are still geographical
areas around the globe where even cell phone service is limited, let
alone social media data. In those situations, models that handle
incomplete or missing data need to be developed. In addition, the
presence of multiple decentralized organizations with varied degrees
of information technology competence and differing objectives limits
their ability to effectively synthesize the different data streams and
coordinate decision‐making.
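The simplest treatment of missing field data is imputation before any model is fit. A minimal sketch, with invented daily case counts standing in for sparse field reports:

```python
# Illustrative sketch only: fill gaps in a sparse report series with
# the mean of the observed values before further modeling. Real
# pipelines would use more careful methods (interpolation, multiple
# imputation), but the mechanics are the same.

def impute_missing(series):
    """Replace None entries with the mean of the observed values."""
    observed = [x for x in series if x is not None]
    mean = sum(observed) / len(observed)
    return [mean if x is None else x for x in series]

reports = [40, None, 55, 60, None, 45]  # hypothetical daily case counts
print(impute_missing(reports))  # [40, 50.0, 55, 60, 50.0, 45]
```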
Concluding Remarks
Big data has enabled new opportunities in the value creation process,
including product design and innovation (Lee [16]), manufacturing and
supply chain (Feng and Shanthikumar [10]), service operations (Cohen
[3]), and retailing (Fisher and Raman [11]). It is also likely to
impact sustainability (Corbett [4]), agriculture (Devalkar et al.
[7]), and healthcare (Guha and Kumar [14]). In our opinion,
humanitarian organizations are also well positioned to benefit from
this phenomenon. Operations Management researchers will have the
opportunity to study newer topics and develop robust models and
insights that could guide humanitarian operations and make them more
Rapid, Impactful, Sustained, and Efficient.
Acknowledgments
The author wishes to thank Gemma Berenguer, Anand Bhatia, Mahyar
Eftekhar, and Jarrod Goentzel for their comments on an earlier version
of this study.
References
3/22/2020 Big Data Analytics for Rapid, Impactful, Sustained,
and Efficient (RISE) Hu...: UC MegaSearch
eds.a.ebscohost.com/eds/detail/detail?vid=2&sid=2956bf6a-
6493-4df7-9888-
5624b87bfb48%40sessionmgr4007&bdata=JkF1dGhUeXBlPXN
oaWImc… 7/9
1 Berenguer, G., A. V. Iyer, P. Yadav. 2016. Disentangling the
efficiency drivers in country‐level global health
programs: An empirical study. J. Oper. Manag. 45: 30–43.
2 Buchanan, L., J. C. Lee, S. Peçanha, K. K. R. Lai. 2017. Mexico City
before and after the earthquake. New York Times, September 23, 2017.
3 Cohen, M. C. 2018. Big data and service operations. Prod.
Oper. Manag. 27(9): 1709–1723.
http://doi.org/10.1111/poms.12832.
4 Corbett, C. J. 2018. How sustainable is big data? Prod. Oper.
Manag. 27(9): 1685–1695.
http://doi.org/10.1111/poms.12837.
5 Deo, S., M. Sohoni. 2015. Optimal decentralization of early
infant diagnosis of HIV in resource‐limited settings.
Manuf. Serv. Oper. Manag. 17(2): 191–207.
6 Deo, S., S. Iravani, T. Jiang, K. Smilowitz, S. Samuelson.
2013. Improving health outcomes through capacity
allocation in a community based chronic care model. Oper. Res.
61(6): 1277–1294.
7 Devalkar, S. K., S. Seshadri, C. Ghosh, A. Mathias. 2018. Data
science applications in Indian agriculture. Prod. Oper. Manag. 27(9):
1701–1708. http://doi.org/10.1111/poms.12834.
8 Dillow, C. 2017. The hidden costs of hurricanes, Fortune,
September 22, 2017.
9 Eftekhar, M., A. Masini, A. Robotis, L. Van Wassenhove. 2014.
Vehicle procurement policy for humanitarian development programs.
Prod. Oper. Manag. 23(6): 951–964.
10 Feng, Q., J. G. Shanthikumar. 2018. How research in
production and operations management may evolve in
the era of big data. Prod. Oper. Manag. 27(9): 1670–1684.
http://doi.org/10.1111/poms.12836.
11 Fisher, M., A. Raman. 2018. Using data and big data in
retailing. Prod. Oper. Manag. 27(9): 1665–1669.
http://doi.org/10.1111/poms.12846.
12 Gallien, J., I. Rashkova, R. Atun, P. Yadav. 2017. National drug
stockout risks and global fund disbursement process for procurement.
Prod. Oper. Manag. 26(6): 997–1014.
13 Gralla, E., J. Goentzel, C. Fine. 2016. Problem formulation and
solution mechanisms: A behavioral study of humanitarian transportation
planning. Prod. Oper. Manag. 25(1): 22–35.
14 Guha, S., S. Kumar. 2018. Emergence of big data research in
operations management, information systems and healthcare: Past
contributions and future roadmap. Prod. Oper. Manag. 27(9): 1724–1735.
http://doi.org/10.1111/poms.12833.
15 Komrska, J., L. Kopczak, J. M. Swaminathan. 2013. When
supply chains save lives. Supply Chain Manage. Rev.
January–February: 42–49.
16 Lee, H. L. 2018. Big data and the innovation cycle. Prod.
Oper. Manag. 27(9): 1642–1646.
http://doi.org/10.1111/poms.12845.
17 Lorca, A., M. Celik, O. Ergun, P. Keskinocak. 2017. An
optimization-based decision support tool for post‐disaster debris
operations. Prod. Oper. Manag. 26(6): 1076–1091.
18 Lu, X., L. Bengtsson, P. Holme. 2012. Predictability of population
displacement after the 2010 Haiti earthquake. Proc. Natl Acad. Sci.
109(29): 11576–11581.
19 McAfee, A., E. Brynjolfsson. 2012. Big data: The management
revolution. Harvard Business Review, October 2012: 1–9.
20 McCoy, J., H. L. Lee. 2014. Using fairness models to
improve equity in health delivery fleet management.
Prod. Oper. Manag. 23(6): 965–977.
21 Meier, P. 2012. How UN used social media in response to Typhoon
Pablo. Available at http://www.irevolutions.org (accessed December 12,
2012).
22 Natarajan, K., J. M. Swaminathan. 2014. Inventory
management in humanitarian operations: Impact of
amount, schedule, and uncertainty in funding. Manuf. Serv.
Oper. Manag. 16(4): 595–603.
23 Natarajan, K., J. M. Swaminathan. 2017. Multi‐treatment inventory
allocation in humanitarian health settings under funding constraints.
Prod. Oper. Manag. 26(6): 1015–1034.
24 Rowling, M. 2016. Aid efficiency bargain could save $1
billion per year. Reuters, May 23, 2016.
25 So, A., J. M. Swaminathan. 2009. The nutrition articulation
project: A supply chain analysis of ready‐to‐use
therapeutic foods to the horn of Africa. UNICEF Technical
Report.
26 Starr, M., L. Van Wassenhove. 2014. Introduction to the special
issue on humanitarian operations and crisis management. Prod. Oper.
Manag. 23(6): 925–937.
27 Swaminathan, J. M. 2010. Case study: Getting food to
disaster victims. Financial Times, October 13, 2010.
28 Swaminathan, J. M., W. Gilland, V. Mani, C. M. Vickery, A.
So. 2012. UNICEF employs prepositioning strategy
to improve treatment of severely malnourished children.
Working paper, Kenan‐Flagler Business School,
University of North Carolina, Chapel Hill.
29 VanRooyen, M. 2013. Effective aid. Harvard International Review,
September 30, 2013.
30 Yoo, E., W. Rand, M. Eftekhar, E. Rabinovich. 2016. Evaluating
information diffusion speed and its determinants in social networks
during humanitarian crises. J. Oper. Manag. 45: 123–133.
PHOTO (COLOR): Big Data Analytics and Rapid, Impactful,
Sustained, and Efficient Humanitarian Operations
~~~~~~~~
By Jayashankar M. Swaminathan
Copyright of Production & Operations Management is the
property of Wiley-Blackwell and its content may not be
copied or emailed to multiple sites or posted to a listserv
without the copyright holder's express written permission.
However, users may print, download, or email articles for
individual use.
Running Head: ANNOTATED BIBLIOGRAPHY
Annotated Bibliography for Research Paper
Name
Professor:
University of the Cumberlands
Date
Pasquale, F., & Cashwell, G. (2018). Prediction, persuasion,
and the jurisprudence of behaviorism. University of Toronto
Law Journal, 68(supplement 1), 63-81.
http://eds.a.ebscohost.com/eds/detail/detail?vid=1&sid=80a5393
2-b932-4bf6-926e-
093727bceef6%40sessionmgr4007&bdata=JkF1dGhUeXBlPXNo
aWImc2l0ZT1lZHMtbGl2ZQ%3d%3d#db=edspmu&AN=edspmu
.S1710117418000033
In the above article, Pasquale and Cashwell show how big data and
artificial intelligence are used in the judicial system for the
prediction and persuasion of behavior. The authors point out how
decision-makers are using data algorithms to help predict whether
judges will take cases and, if so, the merits they will use to reach a
decision. The use of natural language processing and machine-learning
(ML) techniques has become a trend in the twenty-first century. The
authors also note that big data can be used to predict natural
phenomena such as the weather. The use of algorithmic predictions is
an emerging jurisprudence of behaviorism in the context of judicial
law. The article examines the issues associated with analytic data
predictions as used by judges. It also tries to answer questions about
the use and purpose of predictive software, whether artificial
intelligence is a valuable tool for highlighting violations of human
rights in court cases, and whether bias is possible when using ML
techniques. The article analyzes the use of predictive data technology
on specific aspects of ordinary life and asks whether such artificial
intelligence and trends in data analytics benefit society more than
they set it back. The authors focus their argument on the judicial
system.
Predictive analytics is essential in the business world. Most
businesses that use software to predict and persuade consumer behavior
succeed in terms of sales, revenue, and consumer loyalty. Unlike
traditional approaches to data, behavioral predictive and persuasive
analytics help determine how a customer might behave in a future
situation and how they may react to certain offers a business shares
with them. Such predictive analytics can discover patterns and
identify opportunities or problems in a market. Predictive analytics
also allows companies to plan, thus avoiding certain uncertainties
concerning their consumers. A good example is a clothing store that
gathers consumer behavior data. The store can use the data to predict
what a consumer will buy in the future and thus be well stocked with
what consumers need beforehand.
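The clothing-store example can be illustrated with a minimal sketch. The categories and purchase history below are invented, and the frequency heuristic is the simplest possible predictor, not a method from the cited article.

```python
# Hypothetical sketch of the clothing-store example: predict what a
# customer is most likely to buy next from their purchase history,
# using simple category frequencies.
from collections import Counter

def predict_next_purchase(history):
    """Return the customer's most frequently bought category."""
    counts = Counter(history)
    return counts.most_common(1)[0][0]

history = ["jeans", "t-shirt", "jeans", "jacket", "jeans", "t-shirt"]
print(predict_next_purchase(history))  # jeans
```

A store running even this naive prediction over many customers could stock the predicted categories ahead of demand, which is exactly the planning benefit described above.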
Guha, S., & Kumar, S. (2018). Emergence of big data research
in operations management, information systems, and healthcare:
Past contributions and future roadmap. Production and
Operations Management, 27(9), 1724-1735.
http://eds.a.ebscohost.com/eds/detail/detail?vid=2&sid=5f27297
7-4f85-413f-a32c-6b3dd97f38a8%40sdc-v-
sessmgr02&bdata=JkF1dGhUeXBlPXNoaWImc2l0ZT1lZHMtbG
l2ZQ%3d%3d#AN=131754554&db=buh
According to Guha and Kumar, in their article 'Emergence of big data
research in operations management…', there are various changing trends
in data collection and management. The authors observe that, in the
new century, data are generated whenever we use the Internet and that,
aside from the information we create ourselves, interconnected devices
on the Internet of Things also collect data. This information reveals
a considerable amount about present-day environmental conditions and
underscores the need for extensive data analysis, in technical terms
as well as in the individual use of data. In the article, the authors
discuss the contributions of big data to various domains such as
healthcare, information systems and operations, and supply management.
The article also touches on the sub-areas of those domains and the
ways in which big data techniques lead to improvements. The authors
further discuss cloud computing, the Internet of Things (IoT), smart
health, and predictive manufacturing, and how each area has potential
for growth and exploration.
Big data is important to a business and can be used in various ways.
It can be used for social listening: the availability of vast waves of
data makes it possible for businesses to determine what is being said
in society about the company. Business owners also use big data for
comparative and market analysis. They can compare their products and
services with the competition through analysis of user behavior. Big
data also allows for real-time monitoring of consumer engagement in
the business sector. Information from marketing analytics helps
promote products and reach new audiences for new products in the
market. Big data thus helps businesses utilize outside intelligence in
decision making, improve customer care, create operational efficiency,
and identify risks in the products and services a company offers. An
excellent example of the benefits of big data is when a business uses
information about consumer purchasing behavior to target tailored
advertisements to that market segment.
Akl, S. G., & Salay, N. (2019). Artificial intelligence: A promising
future? Queen's Quarterly, 126(1), 6-20.
https://go.gale.com/ps/retrieve.do?tabID=T001&resultListType=
RESULT_LIST&searchResultsType=SingleTab&searchType=A
dvancedSearchForm&currentPosition=1&docId=GALE%7CA58
2622399&docType=Essay&sort=Pub+Date+Reverse+Chron&co
ntentSegment=ZLRC-
MOD1&prodId=LitRC&contentSet=GALE%7CA582622399&se
archId=R2&userGroupName=cumberlandcol&inPS=true
Akl and Salay discuss artificial intelligence in their article
'Artificial Intelligence: A Promising Future?' In their research, they
view AI as having a bright future and as shaping the way human beings
carry out their day-to-day activities. The authors describe how
artificial intelligence has developed over the years, citing examples
such as Deep Blue, Watson, Project Debater, and AlphaGo, among many
others. The article discusses how artificial intelligence as a science
has become a social phenomenon. The authors point out that artificial
intelligence and machine learning serve a great purpose in the
modern-day world, with deep-learning algorithms used to extract
features from data. AI's future is bright, and the authors feel that
such a positive trend will see the use of the technology hugely
benefit human life.
In business, artificial intelligence is used to automate tasks that
would otherwise be manual and time-consuming. The technology can be
used by companies to create a competitive advantage and to increase
efficiency. AI also ensures that tasks are done efficiently, with
fewer errors than human effort alone. Artificial intelligence can also
be used to detect fraud, improve data security, and support marketing
and security screening.
8-10 slide Powerpoint The example company is Tesla.Instructions.docx
 
8Network Security April 2020FEATUREAre your IT staf.docx
8Network Security  April 2020FEATUREAre your IT staf.docx8Network Security  April 2020FEATUREAre your IT staf.docx
8Network Security April 2020FEATUREAre your IT staf.docx
 

Recently uploaded

CELL CYCLE Division Science 8 quarter IV.pptx
CELL CYCLE Division Science 8 quarter IV.pptxCELL CYCLE Division Science 8 quarter IV.pptx
CELL CYCLE Division Science 8 quarter IV.pptxJiesonDelaCerna
 
Full Stack Web Development Course for Beginners
Full Stack Web Development Course  for BeginnersFull Stack Web Development Course  for Beginners
Full Stack Web Development Course for BeginnersSabitha Banu
 
Proudly South Africa powerpoint Thorisha.pptx
Proudly South Africa powerpoint Thorisha.pptxProudly South Africa powerpoint Thorisha.pptx
Proudly South Africa powerpoint Thorisha.pptxthorishapillay1
 
Introduction to AI in Higher Education_draft.pptx
Introduction to AI in Higher Education_draft.pptxIntroduction to AI in Higher Education_draft.pptx
Introduction to AI in Higher Education_draft.pptxpboyjonauth
 
Difference Between Search & Browse Methods in Odoo 17
Difference Between Search & Browse Methods in Odoo 17Difference Between Search & Browse Methods in Odoo 17
Difference Between Search & Browse Methods in Odoo 17Celine George
 
Organic Name Reactions for the students and aspirants of Chemistry12th.pptx
Organic Name Reactions  for the students and aspirants of Chemistry12th.pptxOrganic Name Reactions  for the students and aspirants of Chemistry12th.pptx
Organic Name Reactions for the students and aspirants of Chemistry12th.pptxVS Mahajan Coaching Centre
 
POINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptx
POINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptxPOINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptx
POINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptxSayali Powar
 
MARGINALIZATION (Different learners in Marginalized Group
MARGINALIZATION (Different learners in Marginalized GroupMARGINALIZATION (Different learners in Marginalized Group
MARGINALIZATION (Different learners in Marginalized GroupJonathanParaisoCruz
 
Like-prefer-love -hate+verb+ing & silent letters & citizenship text.pdf
Like-prefer-love -hate+verb+ing & silent letters & citizenship text.pdfLike-prefer-love -hate+verb+ing & silent letters & citizenship text.pdf
Like-prefer-love -hate+verb+ing & silent letters & citizenship text.pdfMr Bounab Samir
 
Blooming Together_ Growing a Community Garden Worksheet.docx
Blooming Together_ Growing a Community Garden Worksheet.docxBlooming Together_ Growing a Community Garden Worksheet.docx
Blooming Together_ Growing a Community Garden Worksheet.docxUnboundStockton
 
call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️
call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️
call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️9953056974 Low Rate Call Girls In Saket, Delhi NCR
 
Pharmacognosy Flower 3. Compositae 2023.pdf
Pharmacognosy Flower 3. Compositae 2023.pdfPharmacognosy Flower 3. Compositae 2023.pdf
Pharmacognosy Flower 3. Compositae 2023.pdfMahmoud M. Sallam
 
Presiding Officer Training module 2024 lok sabha elections
Presiding Officer Training module 2024 lok sabha electionsPresiding Officer Training module 2024 lok sabha elections
Presiding Officer Training module 2024 lok sabha electionsanshu789521
 
What is Model Inheritance in Odoo 17 ERP
What is Model Inheritance in Odoo 17 ERPWhat is Model Inheritance in Odoo 17 ERP
What is Model Inheritance in Odoo 17 ERPCeline George
 
Roles & Responsibilities in Pharmacovigilance
Roles & Responsibilities in PharmacovigilanceRoles & Responsibilities in Pharmacovigilance
Roles & Responsibilities in PharmacovigilanceSamikshaHamane
 
AmericanHighSchoolsprezentacijaoskolama.
AmericanHighSchoolsprezentacijaoskolama.AmericanHighSchoolsprezentacijaoskolama.
AmericanHighSchoolsprezentacijaoskolama.arsicmarija21
 

Recently uploaded (20)

CELL CYCLE Division Science 8 quarter IV.pptx
CELL CYCLE Division Science 8 quarter IV.pptxCELL CYCLE Division Science 8 quarter IV.pptx
CELL CYCLE Division Science 8 quarter IV.pptx
 
Full Stack Web Development Course for Beginners
Full Stack Web Development Course  for BeginnersFull Stack Web Development Course  for Beginners
Full Stack Web Development Course for Beginners
 
TataKelola dan KamSiber Kecerdasan Buatan v022.pdf
TataKelola dan KamSiber Kecerdasan Buatan v022.pdfTataKelola dan KamSiber Kecerdasan Buatan v022.pdf
TataKelola dan KamSiber Kecerdasan Buatan v022.pdf
 
Proudly South Africa powerpoint Thorisha.pptx
Proudly South Africa powerpoint Thorisha.pptxProudly South Africa powerpoint Thorisha.pptx
Proudly South Africa powerpoint Thorisha.pptx
 
ESSENTIAL of (CS/IT/IS) class 06 (database)
ESSENTIAL of (CS/IT/IS) class 06 (database)ESSENTIAL of (CS/IT/IS) class 06 (database)
ESSENTIAL of (CS/IT/IS) class 06 (database)
 
Introduction to AI in Higher Education_draft.pptx
Introduction to AI in Higher Education_draft.pptxIntroduction to AI in Higher Education_draft.pptx
Introduction to AI in Higher Education_draft.pptx
 
Difference Between Search & Browse Methods in Odoo 17
Difference Between Search & Browse Methods in Odoo 17Difference Between Search & Browse Methods in Odoo 17
Difference Between Search & Browse Methods in Odoo 17
 
9953330565 Low Rate Call Girls In Rohini Delhi NCR
9953330565 Low Rate Call Girls In Rohini  Delhi NCR9953330565 Low Rate Call Girls In Rohini  Delhi NCR
9953330565 Low Rate Call Girls In Rohini Delhi NCR
 
Organic Name Reactions for the students and aspirants of Chemistry12th.pptx
Organic Name Reactions  for the students and aspirants of Chemistry12th.pptxOrganic Name Reactions  for the students and aspirants of Chemistry12th.pptx
Organic Name Reactions for the students and aspirants of Chemistry12th.pptx
 
POINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptx
POINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptxPOINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptx
POINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptx
 
MARGINALIZATION (Different learners in Marginalized Group
MARGINALIZATION (Different learners in Marginalized GroupMARGINALIZATION (Different learners in Marginalized Group
MARGINALIZATION (Different learners in Marginalized Group
 
Like-prefer-love -hate+verb+ing & silent letters & citizenship text.pdf
Like-prefer-love -hate+verb+ing & silent letters & citizenship text.pdfLike-prefer-love -hate+verb+ing & silent letters & citizenship text.pdf
Like-prefer-love -hate+verb+ing & silent letters & citizenship text.pdf
 
Blooming Together_ Growing a Community Garden Worksheet.docx
Blooming Together_ Growing a Community Garden Worksheet.docxBlooming Together_ Growing a Community Garden Worksheet.docx
Blooming Together_ Growing a Community Garden Worksheet.docx
 
call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️
call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️
call girls in Kamla Market (DELHI) 🔝 >༒9953330565🔝 genuine Escort Service 🔝✔️✔️
 
Pharmacognosy Flower 3. Compositae 2023.pdf
Pharmacognosy Flower 3. Compositae 2023.pdfPharmacognosy Flower 3. Compositae 2023.pdf
Pharmacognosy Flower 3. Compositae 2023.pdf
 
Presiding Officer Training module 2024 lok sabha elections
Presiding Officer Training module 2024 lok sabha electionsPresiding Officer Training module 2024 lok sabha elections
Presiding Officer Training module 2024 lok sabha elections
 
What is Model Inheritance in Odoo 17 ERP
What is Model Inheritance in Odoo 17 ERPWhat is Model Inheritance in Odoo 17 ERP
What is Model Inheritance in Odoo 17 ERP
 
Model Call Girl in Bikash Puri Delhi reach out to us at 🔝9953056974🔝
Model Call Girl in Bikash Puri  Delhi reach out to us at 🔝9953056974🔝Model Call Girl in Bikash Puri  Delhi reach out to us at 🔝9953056974🔝
Model Call Girl in Bikash Puri Delhi reach out to us at 🔝9953056974🔝
 
Roles & Responsibilities in Pharmacovigilance
Roles & Responsibilities in PharmacovigilanceRoles & Responsibilities in Pharmacovigilance
Roles & Responsibilities in Pharmacovigilance
 
AmericanHighSchoolsprezentacijaoskolama.
AmericanHighSchoolsprezentacijaoskolama.AmericanHighSchoolsprezentacijaoskolama.
AmericanHighSchoolsprezentacijaoskolama.
 

3222020 Prediction, persuasion, and the jurisprudence of beh.docx

to comprehend how they are being sorted, categorized, and influenced.[2] A predictable counter-argument has arisen, discounting the comparative competence of human decision makers. Defending opaque sentencing algorithms, for instance, Christine Remington (a Wisconsin assistant attorney general) has stated: 'We don't know what's going on in a judge's head; it's a black box, too.'[3] Of course, a judge must (upon issuing an important decision) explain why the decision was made; so too are agencies covered by the Administrative Procedure Act obliged to offer a 'concise statement of basis and purpose' for rule making.[4] But there is a long tradition of realist commentators dismissing the legal justifications adopted by judges as unconvincing fig leaves for the 'real' (non-legal) bases of their decisions. In the first half of the twentieth century, the realist disdain for stated rationales for decisions led in at least two directions: toward more rigorous and open discussions of policy considerations motivating judgments and toward frank recognition of judges as political actors, reflecting certain ideologies, values, and interests.

In the twenty-first century, a new response is beginning to emerge: a deployment of natural language processing and machine-learning (ML) techniques to predict whether judges will hear a case and, if so, how they will decide it. ML experts are busily feeding algorithms with the opinions of the Supreme Court of the United States, the European Court of Human Rights, and other judicial bodies as well as with metadata on justices' ideological commitments, past voting record, and myriad other variables. By processing data related to cases, and the text of opinions, these systems purport to predict how judges will decide cases, how individual judges will vote, and how to optimize submissions and arguments before them.

This form of prediction is analogous to forecasters using big data (rather than understanding underlying atmospheric dynamics) to predict the movement of storms. An algorithmic analysis of a database of, say, 10,000 past cumulonimbi sweeping over Lake Ontario may prove to be a better predictor of the next cumulonimbus's track than a trained meteorologist without access to such a data trove. From the perspective of many predictive analytics approaches, judges are just like any other feature of the natural world – an entity that transforms certain inputs (such as briefs and advocacy documents) into outputs (decisions for or against a litigant). Just as forecasters predict whether a cloud will veer southwest or southeast, the user of an ML system might use machine-readable case characteristics to predict whether a rainmaker will prevail in the courtroom.

We call the use of algorithmic predictive analytics in judicial contexts an emerging jurisprudence of behaviourism, since it rests on a fundamentally Skinnerian model of mental processes as a black-boxed transformation of inputs into outputs.[5] In this model, persuasion is passé; what matters is prediction.[6] After describing and critiquing a recent study typical of this jurisprudence of behaviourism, we question the value of the research program it is advancing. Billed as a method of enhancing the legitimacy and efficiency of the legal system, such modelling is all too likely to become one more tool deployed by richer litigants to gain advantages over poorer ones.[7] Moreover, it should raise suspicions if it is used as a triage tool to determine the priority of cases. Such predictive analytics are only as good as the training data on which they depend, and there is good reason to doubt such data could ever generate in social analysis the types of ground truths characteristic of scientific methods applied to the natural world. While fundamental physical laws rarely if ever change, human behaviour can change dramatically in a short period of time. Therefore, one should always be cautious when applying automated methods in the human context, where factors as basic as free will and political change make the behaviour of both decision makers, and those they impact, impossible to predict with certainty.[8]

Nor are predictive analytics immune from bias. Just as judges bring biases into the courtroom, algorithm developers are prone to incorporate their own prejudices and priors into their machinery.[9] In addition, biases are no easier to address in software than in decisions justified by natural language. Such judicial opinions (or even oral statements) are generally much less opaque than ML algorithms. Unlike many proprietary or hopelessly opaque computational processes proposed to replace them, judges and clerks can be questioned and rebuked for discriminatory behaviour.[10]

There is a growing literature critiquing the unreflective application of ML techniques to social problems.[11] Predictive analytics may reflect biases rather than reasoned decision making.[12] They may also leave those affected by automated sorting and categorizing unable to understand the basis of the decisions affecting them, especially when the output from the models in any way affects one's life, liberty, or property rights and when litigants are not given the basis of the model's predictions.[13] This article questions the social utility of prediction models as applied to the judicial system, arguing that their deployment may endanger core rule-of-law values. In full bloom, predictive analytics would not simply be a camera trained on the judicial system, reporting on it, but it would also be an engine of influence, shaping it. Attorneys may decide whether to pursue cases based on such systems; courts swamped by appeals or applications may be tempted to use ML models to triage or prioritize cases.

In work published to widespread acclaim in 2016, Nikolaos Aletras, Dimitrios Tsarapatsanis, Daniel Preoţiuc-Pietro, and Vasileios Lampos made bold claims about the place of natural language processing (NLP) in the legal system in their article Predicting Judicial Decisions of the European Court of Human Rights: A Natural Language Processing Perspective.[14] They claim that 'advances in Natural Language Processing (NLP) and Machine Learning (ML) provide us with the tools to automatically analyse legal materials, so as to build successful predictive models of judicial outcomes.'[15] Presumably, they are referring to their own work as part of these advances. However, close analysis of their 'systematic study on predicting the outcome of cases tried by the European Court of Human Rights based solely on textual content' reveals that their soi-disant 'success' merits closer scrutiny on both positive and normative grounds.

The first question to be asked about a study like Predicting Judicial Decisions is: what are its uses and purposes? Aletras and colleagues suggest at least three uses. First, they present their work as a first step toward the development of ML and NLP software that can predict how judges and other authorities will decide legal disputes. Second, Aletras has clearly stated to media that artificial intelligence 'could also be a valuable tool for highlighting which cases are most likely to be violations of the European Convention of Human Rights' – in other words, that it could help courts triage which cases they should hear.[16] Third, they purport to intervene in a classic jurisprudential debate – whether facts or law matter more in judicial determinations.[17] Each of these aims and claims should be rigorously interrogated, given shortcomings of the study that the authors acknowledge. Beyond these acknowledged problems, there are even more faults in their approach which cast doubt on whether the research program of NLP-based prediction of judicial outcomes, even if pursued in a more realistic manner, has anything significant to contribute to our understanding of the legal system. Although Aletras and colleagues have used cutting-edge ML and NLP methods in their study, their approach metaphorically stacks the deck in favour of their software and algorithms in so many ways that it is hard to see its relevance to either practising lawyers or scholars. Nor is it plausible to state that a method this crude, and disconnected from actual legal meaning and reasoning, provides empirical data relevant to jurisprudential debates over legal formalism and realism. As more advanced thinking on artificial intelligence and intelligence augmentation has already demonstrated, there is an inevitable interface of human meaning that is necessary to make sense of social institutions like law.

II Stacking the deck: 'predicting' the contemporaneous

The European Court of Human Rights (ECtHR) hears cases in which parties allege that their rights under the articles of the European Convention of Human Rights were violated and not remedied by their country's courts.[18] The researchers claim that the textual model has an accuracy of '79% on average.'[19] Given sweepingly futuristic headlines generated by the study (including 'Could AI [Artificial Intelligence] Replace Judges and Lawyers?'), a casual reader of reports on the study might assume that this finding means that, using the method of the researchers, those who have some aggregation of data and text about case filings can use that data to predict how the ECtHR will decide a case, with 79 per cent accuracy.[20] However, that would not be accurate. Instead, the researchers used the 'circumstances' subsection in the cases they claimed to 'predict,' which had 'been formulated by the Court itself.'[21] In other words, they claimed to be 'predicting' an event (a decision) based on materials released simultaneously with the decision. This is a bit like claiming to 'predict' whether a judge had cereal for breakfast yesterday based on a report of the nutritional composition of the materials on the judge's plate at the exact time she or he consumed the breakfast.[22]

Readers can (and should) balk at using the term 'prediction' to describe correlations between past events (like decisions of a court) and contemporaneously generated, past data (like the circumstances subsection of a case). Sadly, though, few journalists breathlessly reporting the study by Aletras and colleagues did so. To their credit, though, Aletras and colleagues repeatedly emphasize how much they have effectively stacked the deck by using ECtHR-generated documents themselves to help the ML/NLP software they are using in the study 'predict' the outcomes of the cases associated with those documents. A truly predictive system would use the filings of the parties, or data outside the filings, that was in existence before the judgment itself. Aletras and colleagues grudgingly acknowledge that the circumstances subsection 'should not always be understood as a neutral mirroring of the factual background of the case,' but they defend their method by stating that the 'summaries of facts found in the "Circumstances" section have to be at least framed in as neutral and impartial a way as possible.'[23] However, they give readers no clear guide as to when the circumstances subsection is actually a neutral mirroring of factual background or how closely it relates to records in existence before a judgment that would actually be useful to those aspiring to develop a predictive system.
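The distinction at stake here can be made concrete with a toy illustration (every case text, label, and cue word below is invented for the example, not drawn from the actual ECtHR corpus): a trivial classifier keyed to outcome-shaped summaries can score perfectly on those summaries while faring far worse on the kind of party filings that actually exist before judgment.

```python
# Toy illustration of the leakage critique: "predicting" an outcome from
# text written at the same time as, and shaped by, that outcome.
# All case texts and labels here are invented for illustration.

def predict_from_text(text: str) -> str:
    """A trivial 'model': flag a violation if the summary dwells on harm."""
    violation_cues = {"ill-treatment", "unfair", "interference"}
    return "violation" if any(cue in text for cue in violation_cues) else "no violation"

# Court-written "Circumstances" sections, drafted with the outcome in hand,
# tend to foreground the facts that support the result.
court_summaries = [
    ("The applicant suffered ill-treatment in custody.", "violation"),
    ("The proceedings were unfair and excessively long.", "violation"),
    ("The search was a lawful, proportionate measure.", "no violation"),
]

# Pre-judgment filings for the same (hypothetical) cases are framed by the
# parties, not the court, so the outcome cues need not appear at all.
party_filings = [
    ("The applicant alleges mistreatment by officers.", "violation"),
    ("The applicant complains about the length of trial.", "violation"),
    ("The applicant objects to the search of his home.", "no violation"),
]

acc_summaries = sum(predict_from_text(t) == y for t, y in court_summaries) / len(court_summaries)
acc_filings = sum(predict_from_text(t) == y for t, y in party_filings) / len(party_filings)
print(acc_summaries, acc_filings)  # high on outcome-shaped text, lower on filings
```

The inflated first number is an artifact of evaluating on text generated alongside the outcome, which is the core of the objection to calling such correlations 'prediction.'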
  • 11. Instead, their ‘premise is that published judgments can be used to test the possibility of a text-based analysis for ex ante predictions of outcomes on the assumption that there is enough similarity between (at least) certain chunks of the text of published judgments and applications lodged with the Court and/or briefs submitted by parties with respect to pending cases.’[ 24] But they give us few compelling reasons to accept this assumption since almost any court writing an opinion to justify a judgment is going to develop a facts section in ways that reflect its outcome. The authors state that the ECtHR has ‘limited fact finding powers,’ but they give no 3/22/2020 Prediction, persuasion, and the jurisprudence of behaviourism: UC MegaSearch eds.a.ebscohost.com/eds/detail/detail?vid=1&sid=80a53932- b932-4bf6-926e- 093727bceef6%40sessionmgr4007&bdata=JkF1dGhUeXBlPXNo aWIm… 4/14 sense of how much that mitigates the cherry-picking of facts or statements about the facts problem. Nor should we be comforted by the fact that ‘the Court cannot openly acknowledge any kind of bias on its part.’ Indeed, this suggests a need for the Court to avoid the types of transparency in published justification that could help researchers artificially limited to NLP better understand it.[ 25] The authors also state that in the ‘vast majority of cases,’ the
  • 12. ‘parties do not seem to dispute the facts themselves, as contained in the “Circumstances�� subsection, but only their legal significance.’ However, the critical issues here are, first, the facts themselves and, second, how the parties characterized the facts before the circumstances section was written. Again, the fundamental problem of mischaracterization – of ‘prediction’ instead of mere correlation or relationship – crops up to undermine the value of the study. Even in its most academic mode – as an ostensibly empirical analysis of the prevalence of legal realism – the study by Aletras and colleagues stacks the deck in its favour in important ways. Indeed, it might be seen as assuming at the outset a version of the very hypothesis it ostensibly supports. This hypothesis is that something other than legal reasoning itself drives judicial decisions. Of course, that is true in a trivial sense – there is no case if there are no facts – and perhaps the authors intend to make that trivial point.[ 26] However, their language suggests a larger aim, designed to meld NLP and jurisprudence. Given the critical role of meaning in the latter discipline, and their NLP methods’ indifference to it, one might expect an unhappy coupling here. And that is indeed what we find. In the study by Aletras and colleagues, the corpus used for the predictive algorithm was a body of ECtHR’s ‘published judgments.’ Within these judgments, a summary of the factual background of the case was summarized (by the Court) in the circumstances
  • 13. section of the judgments, but the pleadings themselves were not included as inputs.[ 27] The law section, which ‘considers the merits of the case, through the use of legal argument,’ was also input into the model to determine how well that section alone could ‘predict’ the case outcome.[ 28] Aletras and colleagues were selective in the corpus they fed to their algorithms. The only judgments that were included in the corpus were those that passed both a ‘prejudicial stage’ and a second review.[ 29] In both stages, applications were denied if they did not meet ‘admissibility criteria,’ which were largely procedural in nature.[ 30] To the extent that such procedural barriers were deemed ‘legal,’ we might immediately have identified a bias problem in the corpus – that is, the types of cases where the law entirely determined the outcome (no matter how compelling the facts may have been) were removed from a data set that was ostensibly fairly representative of the universe of cases generally. This is not a small problem either; the overwhelming majority of applications were deemed inadmissible or struck out and were not reportable.[ 31] But let us assume, for now, that the model only aspired to offer data about the realist/formalist divide in those cases that did meet the admissibility criteria. There were other biases in the data set. Only cases that were in English, approximately 33 per cent of the total ECtHR decisions, were included.[ 32] This is a strange omission since the NLP approach employed here had no semantic content –
  • 14. that is, the meaning of the words did not matter to it. Presumably, this omission arose out of concerns for making data coding and processing easier. There was also a subject matter restriction that further limited the scope of the sample. Only cases addressing issues in Articles 3, 6, and 8 of the ECHR were included in training and in verifying the model. And there is yet another limitation: the researchers then threw cases out randomly (so that the data set contained an equal number of violation/no violation cases) before using them as training data.[ 33] III Problematic characteristics of the ECtHR textual ‘predictive’ model The algorithm used in the case depended on an atomization of case language into words grouped together in sets of one-, two-, three-, and four-word groupings, called n-grams.[ 34] Then, 2,000 of the most frequent n-grams, not taking into consideration ‘grammar, syntax and word order,’ were placed in feature matrices for each section of decisions and for the entire case by using the vectors from each decision.[ 35] Topics, which are created by ‘clustering together n-grams,’ were also created.[ 36] Both topics and n- grams were used to ‘to train Support Vector Machine (SVM) classifiers.’ As the authors explain, an ‘SVM is a machine learning algorithm that has shown particularly good results in text classification, especially using small data sets.’[ 37] Model training data from these opinions were ‘n-gram features,’ which consist of groups of words that ‘appear in similar contexts.’[ 38] Matrix
  • 15. mathematics, which 3/22/2020 Prediction, persuasion, and the jurisprudence of behaviourism: UC MegaSearch eds.a.ebscohost.com/eds/detail/detail?vid=1&sid=80a53932- b932-4bf6-926e- 093727bceef6%40sessionmgr4007&bdata=JkF1dGhUeXBlPXNo aWIm… 5/14 are manipulations on two-dimensional tables, and vector space models, which are based on a single column within a table, were programmed to determine clusters of words that should be similar to one another based on textual context.[ 39] These clusters of words are called topics. The model prevented a word group from showing up in more than one topic. Thirty topics, or sets of similar word groupings, were also created for entire court opinions. Topics were similarly created for entire opinions for each article.[ 40] Since the court opinions all follow a standard format, the opinions could be easily dissected into different identifiable sections.[ 41] Note that these sorting methods are legally meaningless. N- grams and topics are not sorted the way a treatise writer might try to organize cases or a judge might try to parse divergent lines of precedent. Rather, they simply serve as potential independent variables to predict a dependent variable (was there a violation, or was there not a violation, of the Convention). Before going further into the technical details of the study, it is
  • 16. useful to compare it to prior successes of ML in facial or number recognition. When a facial recognition program successfully identifies a given picture as an image of a given person, it does not achieve that machine vision in the way a human being’s eye and brain would do so. Rather, an initial training set of images (or perhaps even a single image) of the person is processed, perhaps on a 1,000-by-1,000-pixel grid. Each box in the grid can be identified as either skin or not skin, smooth or not smooth, along hundreds or even thousands of binaries, many of which would never be noticed by a human being. Moreover, such parameters can be related to one another; so, for example, regions hued as ‘lips’ or ‘eyes’ might have a certain maximum length, width, or ratio to one another (such that a person’s facial ‘signature’ reliably has eyes that are 1.35 times as long as they are wide). Add up enough of these ratios for easily recognized features (ears, eyebrows, foreheads, and so on), and software can quickly find a set of mathematical parameters unique to a given person – or at least unique enough that an algorithm can predict that a given picture is, or is not, a picture of a given person, with a high degree of accuracy. The technology found early commercial success with banks, which needed a way to recognize numbers on cheques (given the wide variety of human handwriting). With enough examples of written numbers (properly reduced to data via dark or filled spaces on a grid), and computational power, this recognition can become nearly perfect.
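The grid-of-binaries idea above can be made concrete with a toy sketch in the spirit of the cheque-digit example. The 5×3 glyph ‘templates’ below are invented for illustration; a real recognizer learns its features (and richer ratios of the kind described above) rather than hard-coding them.

```python
# Each 'image' is a 5x3 grid of filled (1) / empty (0) cells, mirroring the
# reduction of handwriting to dark/filled spaces on a grid described above.
# These two templates are invented for this illustration.
TEMPLATES = {
    "1": ((0, 1, 0), (1, 1, 0), (0, 1, 0), (0, 1, 0), (1, 1, 1)),
    "0": ((1, 1, 1), (1, 0, 1), (1, 0, 1), (1, 0, 1), (1, 1, 1)),
}

def match_score(grid, template):
    """Fraction of grid cells that agree with the template."""
    cells = [g == t for grow, trow in zip(grid, template)
                    for g, t in zip(grow, trow)]
    return sum(cells) / len(cells)

def recognize(grid):
    """Classify a grid as whichever digit template it agrees with most."""
    return max(TEMPLATES, key=lambda d: match_score(grid, TEMPLATES[d]))

# A slightly smudged '0' (one cell flipped) still matches the '0' template
# on 14 of 15 cells, so it is classified correctly.
smudged_zero = ((1, 1, 1), (1, 0, 1), (1, 1, 1), (1, 0, 1), (1, 1, 1))
```

With enough examples and a finer grid, this nearest-template idea approaches the ‘nearly perfect’ cheque recognition described above, without ever engaging with what a digit means.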
  • 17. Before assenting too quickly to the application of such methods to words in cases (as we see them applied to features of faces), we should note that there are not professions of ‘face recognizers’ or ‘number recognizers’ among human beings. So while Facebook’s face recognition algorithm, or TD Bank’s cheque sorter, does not obviously challenge our intuitions about how we recognize faces or numbers, applying ML to legal cases should be marked as a jarring imperialism of ML methods into domains associated with a rich history of meaning (and, to use a classic term from the philosophy of social sciences, Verstehen). In the realm of face recognition, ‘whatever works’ as a pragmatic ethic of effectiveness underwrites some societies’ acceptance of width/length ratios and other methods to assure algorithmic recognition and classification of individuals.[ 42] The application of ML approaches devoid of apprehension of meaning in the legal context is more troubling. For example, Aletras and colleagues acknowledge that there are cases where the model predicts the incorrect outcome because of the similarity in words in cases that have opposite results. In this case, even if information regarding specific words that triggered the SVM classifier were output, users might not be able to easily determine that the case was likely misclassified.[ 43] Even with confidence interval outputs, this type of problem does not appear to have an easy solution. This is particularly troubling for due process if such an algorithm incorrectly classified someone’s case because it contained language similarities to another very different case.[ 44] When the cases are obviously misclassified
  • 18. in this way, models like this would likely ‘surreptitiously embed biases, mistakes and discrimination, and worse yet, even reiterate and reinforce them on the new cases processed.’[ 45] So, too, might a batch of training data representing a certain time period when a certain class of cases was dominant help ensure the dominance of such cases in the future. For example, the ‘most predictive topic’ for Article 8 decisions included prominently the words ‘son, body, result, Russian.’ If the system were used in the future to triage cases, ceteris paribus, it might prioritize cases involving sons over daughters or Russians over Poles.[ 46] But if those future cases do not share the characteristics of the cases in the training set that led to the ‘predictiveness’ of ‘son’ status or ‘Russian’ status, their prioritization would be a clear legal mistake. Troublingly, the entire ‘predictive’ project here may be riddled with spurious correlations. As any student of statistics knows, if one tests enough data sets against one another, spurious correlations will emerge. For example, Tyler Vigen has shown a very tight correlation between the divorce rate in Maine and the per capita consumption of margarine between 2000 and 2009.[ 47] It is unlikely that one variable there is driving the other. Nor is it likely that some intervening variable is affecting both margarine
  • 19. consumption and divorce rates in a similar way, to ensure a similar correlation in the future. Rather, this is just the type of random association one might expect to emerge once one has thrown enough computing power at enough data sets. It is hard not to draw similar conclusions with respect to Aletras and colleagues’ ‘predictive’ project. Draw enough variations from the ‘bag of words,’ and some relationships will emerge. Given that the algorithm only had to predict ‘violation’ or ‘no violation,’ even a random guessing program would be expected to have a 50 per cent accuracy rate. A thought experiment easily deflates the meaning of their trumpeted 79 per cent ‘accuracy.’ Imagine that the authors had continual real-time surveillance of every aspect of the judges’ lives before they wrote their opinions: the size of the buttons on their shirts and blouses, calories consumed at breakfast, average speed of commute, height and weight, and so forth. Given a near-infinite number of parameters of evaluation, it is altogether possible that they could find that a cluster of data around breakfast type, or button size, or some similarly irrelevant characteristics, also added an increment of roughly 29 per cent accuracy to the baseline 50 per cent accuracy achieved via randomness (or always guessing violation). Should scholars celebrate the ‘artificial
  • 20. intelligence’ behind such a finding? No. Ideally, they would chuckle at it, as readers of Vigen’s website find amusement at random relationships between, say, number of letters in winning words at the National Spelling Bee and number of people killed by venomous spiders (which enjoys an 80.57 per cent correlation). This may seem unfair to Aletras and colleagues since they are using so much more advanced math than Vigen is. However, their models do not factor in meaning, which is of paramount importance in rights determinations. To be sure, words like ‘burial,’ ‘attack,’ and ‘died’ do appear properly predictive, to some extent, in Article 8 decisions and cause no surprise when they are predictive of violations.[ 48] But what are we to make of inclusion of words …

Year  Quarter  Location  CarClass  Revenue     NumCars
2017  Q1       Downtown  Economy   $976,803    6,137
2017  Q1       Airport   Economy   $1,047,031  5,773
2015  Q3       Downtown  Economy   $804,931    5,564
2016  Q4       Airport   Economy   $958,989    5,370
2016  Q1       Downtown  Economy   $750,562    5,048
2015  Q3       Airport   Economy   $733,215    4,917
2016  Q4       Downtown  Economy   $735,993    4,751
2016  Q3       Downtown  Economy   $712,136    4,703
2016  Q2       Downtown  Economy   $670,068    4,459
2015  Q4       Airport   Economy   $639,838    4,256
2015  Q4       Airport   Premium   $663,293    4,137
2016  Q3       Airport   Premium   $688,190    4,081
2015  Q4       Downtown  Premium   $623,279    4,072
2017  Q1       Airport   Premium   $709,705    4,024
2017  Q2       Airport   Premium   $721,899    4,008
2016  Q2       Airport   Premium   $626,117    3,773
2017  Q2       Downtown  Economy   $600,403    3,748
2016  Q3       Airport   Economy   $620,543    3,665
2016  Q1       Airport   Premium   $590,987    3,621
2015  Q3       Downtown  Premium   $540,136    3,584
2015  Q4       Downtown  Economy   $531,619    3,582
2015  Q2       Airport   Economy   $501,606    3,470
2016  Q1       Airport   Economy   $521,223    3,406
2015  Q1       Airport   Economy   $469,217    3,387
2016  Q2       Downtown  Premium   $522,789    3,283
2017  Q2       Airport   Economy   $621,746    3,282
2015  Q2       Downtown  Premium   $487,304    3,274
2016  Q4       Airport   Premium   $564,853    3,260
2015  Q3       Airport   Premium   $504,843    3,194
2016  Q3       Downtown  Premium   $517,084    3,185
2016  Q1       Downtown  Premium   $444,067    2,840
2015  Q2       Downtown  Economy   $396,037    2,839
2015  Q1       Downtown  Economy   $374,342    2,817
2016  Q4       Downtown  Premium   $450,598    2,748
2017  Q1       Downtown  Premium   $451,848    2,695
2015  Q1       Downtown  Premium   $370,169    2,537
2015  Q1       Airport   Premium   $375,634    2,507
2016  Q2       Airport   Economy   $384,966    2,277
2015  Q2       Airport   Premium   $316,848    2,057
2017  Q2       Downtown  Premium   $344,292    2,008

Excel Project 3 – MS Excel Summer 2018
Use the project description HERE to complete this activity. For a review of the complete rubric used in grading this exercise, click on the Assignments tab, then on the title Excel Project #3. Click on Show Rubrics if the rubric is not already displayed.
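Before building the workbook, it can help to know what totals the project's later pivot-table steps should produce from the 2016 rows of the data above. The following sketch is not part of the assignment; it just cross-tabulates the 2016 NumCars values (copied from the listing above) in plain Python as a sanity check.

```python
from collections import defaultdict

# 2016 rentals from the data listing above: (quarter, location, car_class, num_cars).
ROWS_2016 = [
    ("Q1", "Downtown", "Economy", 5048), ("Q1", "Airport", "Economy", 3406),
    ("Q1", "Airport", "Premium", 3621), ("Q1", "Downtown", "Premium", 2840),
    ("Q2", "Downtown", "Economy", 4459), ("Q2", "Airport", "Premium", 3773),
    ("Q2", "Downtown", "Premium", 3283), ("Q2", "Airport", "Economy", 2277),
    ("Q3", "Downtown", "Economy", 4703), ("Q3", "Airport", "Premium", 4081),
    ("Q3", "Airport", "Economy", 3665), ("Q3", "Downtown", "Premium", 3185),
    ("Q4", "Airport", "Economy", 5370), ("Q4", "Downtown", "Economy", 4751),
    ("Q4", "Airport", "Premium", 3260), ("Q4", "Downtown", "Premium", 2748),
]

def pivot(rows, column_field):
    """Cross-tabulate num_cars with quarters as rows and the chosen field as
    columns (2 = car class, 1 = location); also return the grand total."""
    table = defaultdict(int)
    for quarter, location, car_class, num in rows:
        key = (quarter, (quarter, location, car_class)[column_field])
        table[key] += num
    return dict(table), sum(num for *_, num in rows)

by_class, grand_total = pivot(ROWS_2016, column_field=2)   # quarters x car class
by_location, _ = pivot(ROWS_2016, column_field=1)          # quarters x location
```

If the Excel PivotTables are built correctly, their cells should match these sums (for example, Q1 Economy rentals total 5,048 + 3,406 = 8,454, and the grand total of all 2016 rentals is 60,470).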
  • 22. Summary Create a Microsoft Excel file with four worksheets that provides extensive use of Excel capabilities for charting. The charts will be copied into a Microsoft PowerPoint file and the student will develop appropriate findings and recommendations based on analysis of the data. A large rental car company has two metropolitan locations, one at the airport and another centrally located in downtown. It has been operating since 2015 and each location summarizes its car rental revenue quarterly. Both locations rent two classes of cars: economy and premium. Rental revenue is maintained separately for the two classes of rental vehicles. The data for this case resides in the file summer2018rentalcars.txt and can be downloaded by clicking on the Assignments tab, then on the data file name. It is a text file (with the file type .txt). Do not create your own data; you must use the data provided and only the data provided. Default Formatting. All labels, text, and numbers will be Arial 10. There will be $ and comma and decimal point variations for numeric data, but Arial 10 will be the default font and font size. Step Requirement Points Allocated
  • 23. Comments 1 Open Excel and save a blank workbook with the following name: a. “Student’s First Initial Last Name Excel Project 3” Example: JSmith Excel Project 3 b. Set Page Layout Orientation to Landscape 0.2 Use Print Preview to review how the first worksheet would print. 2 Change the name of the worksheet to Analysis by. 0.1 3 In the Analysis by worksheet: a. Beginning in Row 1, enter the four labels in column A (one label per row) in the following order: Name:, Class/Section:, Project:, Date Due: b. Place a blank row between each label. Please note the colon : after each label. c. Align the labels to the right side in the cells. It may be necessary to adjust the column width so the four labels are clearly visible.
  • 24. 0.3 Format for text in column A: • Arial 10 point • Normal font • Right-align all four labels in the cells 4 In the Analysis by worksheet with all entries in column C: a. Enter the appropriate values for your Name, Class and Section, Project, Date Due across from the appropriate label in column A. 0.2 Format for text in column C: • Arial 10 point
  • 25. b. Use the formatting in the Comments column (to the right). • Bold • Left-align all four values in the cells 5 a. Create three new worksheets: Data, Slide 2, Slide 3. Upon completion, there must be the Analysis by worksheet as well as the three newly created worksheets. b. Delete any other worksheets. 0.2 6 After clicking on the blank cell A1 (to select it) in the Data worksheet: a. Import the text file summer2018rentalcars.txt into the Data worksheet. b. Adjust all column widths so there is no data or column header truncation. Though the intent is to import the text file into the Data
  • 26. worksheet, sometimes when text data is imported into a worksheet, a new worksheet is created. If this happens, delete the blank Data worksheet, and then rename the new worksheet which HAS the recently imported data as “Data.” It may be necessary to change Revenue data to Currency format (leading $ and thousands separators) with NO decimal points and to change NumCars data to Number format with NO decimal points, but with the comma (thousands separator) because of the import operation. This may or may not occur, but in case it does it needs to be corrected. Adjust all column widths so there is no data or column header truncation. 0.3 Format for all data (field names, data text, and data numbers) • Arial 10 point • Normal font. The field names must be in the top row of the worksheet with the data directly under it in rows. This action may not be necessary as this is part of the Excel table creation process. The data must begin in Column A.
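The reformatting in Step 6 happens inside Excel, but the intended result is easy to state in code. A sketch follows, assuming (since the file itself is not reproduced here) that summer2018rentalcars.txt is tab-delimited with unformatted numbers; the two data rows are copied from the listing earlier in this file.

```python
import csv
import io

# Two lines in the assumed shape of summer2018rentalcars.txt (tab-delimited,
# raw numbers); the values come from the data listing earlier in this file.
SAMPLE = (
    "Year\tQuarter\tLocation\tCarClass\tRevenue\tNumCars\n"
    "2016\tQ4\tAirport\tEconomy\t958989\t5370\n"
    "2016\tQ1\tDowntown\tEconomy\t750562\t5048\n"
)

# Parse the text data, keeping the field names from the top row.
rows = list(csv.DictReader(io.StringIO(SAMPLE), delimiter="\t"))

# Mimic the required display formats: Currency with no decimals for Revenue,
# thousands-separated Number with no decimals for NumCars.
formatted = [
    {**r, "Revenue": f"${int(r['Revenue']):,}", "NumCars": f"{int(r['NumCars']):,}"}
    for r in rows
]
```

Seeing `$958,989` and `5,048` here is exactly the appearance Step 6 asks you to produce with Excel's Currency and Number formats.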
  • 27. 7 In the Data worksheet: a. Create an Excel table with the recently imported data. b. Pick a style with the styles group to format the table (choose a style that shows banded rows, i.e., rows that alternate between 2 colors). c. The style must highlight the field names in the first row. These are your table headers. d. Ensure NO blank cells are part of the specified data range. e. Ensure that Header Row and Banded Rows are selected in the Table Style Options Group Box. Do NOT check the Total Row. 0.5 Some adjustment may be necessary to column widths to ensure all field names and all data are readable (not truncated or obscured). 8 In the Data worksheet,
  • 28. a. Sort the entire table by Year (Ascending). b. Delete rows that contain 2015 data as well as 2017 data. The resulting table must consist of Row 1 labels followed by 2016 data, with NO empty cells or rows within the table. 0.2 9 In the Data worksheet: 0.4 a. Select the entire table (data and headers) using a mouse. b. Copy the table to both the Slide 2 and Slide 3 worksheets. c. The upper left-hand corner of the header/data must be in cell A1 on Slide 2 and Slide 3. Adjust column widths if necessary to ensure all data and field names are readable. 10
  • 29. In the Slide 2 worksheet, based solely on the 2016 data: a. Create a Pivot Table that displays the total number of car rentals for each car class and the total number of car rentals for each of the four quarters of 2016. A grand total for the total number of rentals must also be displayed. The column labels must be the two car classes and the row labels must be the four quarters. b. Place the pivot table two rows below the data beginning at the left border of column A. Ensure that the formatting is as listed in the Comments column. c. Create a Pivot Table that displays the total number of car rentals for each location and the total number of car rentals for each of the four quarters of 2016. A grand total for the total number of rentals must also be displayed. The column labels must be the two locations and the row labels must be the four quarters. d. Place this pivot table two rows below the above pivot table beginning at the left border of column A. Ensure that the formatting is as listed in the Comments column. Adjust the column widths as necessary to preclude data and title and label truncation. 2.0 Format (for both pivot tables):
  • 30. • Number format with comma separators (for thousands) • No decimal places • Arial 10 point • Normal 11 In the Slide 2 worksheet, based solely on the 2016 data: a. Using the pivot table created in Step 10 a, create a bar or column chart that displays the number of car rentals by car class for the four 2016 quarters. Both car types and quarters must be clearly visible. b. Add a title that reflects the information presented by the chart. c. Position the top of the chart in row 1 and two or three columns to the right of the data table. Use this same type of bar or column chart for the remaining three charts to be created. d. Using the pivot table created in 10 c, create a bar or column chart that displays the number of car rentals by location for the four 2016 quarters. Both locations and quarters must be clearly visible. e. Add a title that reflects the information presented by the chart.
  • 31. f. Left-align this chart with the left side of the first chart and below it. The same type of bar or column chart must be used throughout this project. 1.8 The charts must allow a viewer to determine the approximate number of car rentals by car class (first chart) and the number of car rentals by location (second chart). The charts must have no more than eight bars or columns. ALL FOUR charts must be the same “format.” 12
  • 32. In the Slide 3 worksheet, based solely on the 2016 data: a. Create a Pivot Table that displays the total revenue for each car class and the total revenue for each of the four quarters of 2016. A grand total for the total revenue must also be displayed. The column labels must be the two car classes and the row labels must be the four quarters. b. Place the pivot table two rows below the data beginning at the left border of column A. c. Create a Pivot Table that displays the total revenue for each location and the total revenue for each of the four quarters of 2016. A grand total for the total revenue must also be displayed. The column labels must be the two locations and the row labels must be the four quarters. d. Place this pivot table two rows below the above pivot table beginning at the left border of column A. Adjust the column widths as necessary to preclude data and title and label truncation. 2.0 Format (for both pivot tables): • Currency ($) with comma separators (for thousands)
  • 33. • No decimal places • Arial 10 point • Normal 13 In the Slide 3 worksheet, based solely on the 2016 data: a. Using the pivot table created in Step 12 a, create a bar or column chart that displays the revenue from car rentals by car class for the four 2016 quarters. Ensure both car types and quarters are clearly visible. b. Add a title that reflects the information presented by the chart. c. Position the top of the chart in row 1 and two or three columns to the right of the data table. The same type of bar chart must be used throughout this project. d. Using the pivot table created in Step 12 c, create a bar or column chart that displays the revenue from car rentals by location for the four 2016 quarters. Ensure both locations and quarters are clearly visible. e. Add a title that reflects the information presented by the chart. f. Left-align this chart with the left side of the first chart and below it. The same type of bar chart must be used throughout this project. 1.8
  • 34. The charts must allow a viewer to determine the approximate revenue from car rentals by car class (first chart) and the revenue from car rentals by location (second chart). The charts must have no more than eight bars or columns. ALL FOUR charts must be the same “format.” 14 a. Open a new, blank PowerPoint presentation file. b. Save the Presentation using the following name: “Student’s First Initial Last Name Presentation” Example: JSmith Presentation 0.1
  • 35. 15 Slides are NOT Microsoft Word documents viewed horizontally. Be brief. Full sentences are not needed. Blank space in a slide enhances the viewer experience and contributes to readability. Slide 1: a. Select an appropriate Design to maintain a consistent look and feel for all slides in the presentation. Blank slides with text are not acceptable. b. This is your Title Slide. c. Select an appropriate title and subtitle layout that clearly conveys the purpose of your presentation. d. Name, Class/Section, and Date Due must be displayed. 0.8 No speaker notes required. Remember, the title on your slide must convey what the presentation is about. Your Name, Class/Section, and Date Due can be used in the subtitle area.
  • 36. 16 Slide 2: a. Title this slide "Number of Cars Rented in 2016" b. Add two charts created in the Slide 2 worksheet of the Excel file. c. The charts must be the same type and equal size and be symmetrically placed on the slide. d. A bullet or two of explanation of the charts may be included, but is not required if charts are self-explanatory. e. Use the speaker notes feature to help you discuss the bullet points and the charts (four complete sentences minimum). 1.1 Ensure that there are no grammar or spelling errors on your chart and in your speaker notes. 17 Slide 3: a. Title this slide "Car Rental Revenue in 2016" b. Add two charts, created in the Slide 3 worksheet of the Excel file. c. The charts must be the same type and equal size and
  • 37. be symmetrically placed on the slide. d. A bullet or two of explanation of the charts may be included, but is not required if charts are self-explanatory. e. Use the speaker notes feature to help you discuss the bullet points and the charts (four complete sentences minimum). 1.1 Ensure that there are no grammar or spelling errors on your chart and in your speaker notes. 18 Slide 4: a. Title this slide "And in Conclusion….." b. Write and add two major bullets, one for findings and one for recommendations. c. There must be a minimum of one finding based on slide 2 and one finding based on slide 3. Findings are facts that can be deduced by analyzing the charts. What happened? Trends? Observations? d. There must be a minimum of one recommendation based on slide 2 and one recommendation based on slide 3. Recommendations are strategies or suggestions to improve or enhance the business based on the findings above.
  • 38. 1.1 Ensure that there are no grammar or spelling errors on your chart and in your speaker notes. e. Use the speaker notes feature to help you discuss the findings and recommendations (four complete sentences minimum). 19 Add a relevant graphic that enhances the recommendations and conclusions on slide 4. If a photo is used, be sure to cite the source. The source citation must be no larger than font size 6, so it does not distract from the content of the slide. 0.2 20 Create a footer using "Courtesy of Your Name" so that it shows on all slides including the Title Slide. The text in this footer must be on the left side of the slides IF the theme selected allows. Otherwise let the theme determine the position of this text.
  • 39. 0.2 Replace the words "Your Name" with your actual name. 21 Create a footer for automated Slide Numbers that appears on all slides except the Title Slide. The page number must be on the right side of the slides IF the theme selected allows. Otherwise let the theme determine the position of the page number. Ensure that your name does appear on every slide, but the page numbers start on slide #2. This will involve slightly different steps to accomplish both. 0.2 Depending upon the theme you have chosen, the page number or your name may not appear in the lower portion of the slide. That is ok, as long as both appear somewhere on the slides. 22 Apply a transition scheme to all slides. 0.1 One transition scheme may be used OR different schemes for different slides 23 Apply an animation on at least one slide. The animation may
  • 40. be applied to text or a graphic. 0.1 TOTAL 15.0 Be sure you submit BOTH the Excel file and the PowerPoint file in the appropriate Assignment folder (Excel Project #3).

3/22/2020 Big Data Analytics for Rapid, Impactful, Sustained, and Efficient (RISE) Hu...: UC MegaSearch
eds.a.ebscohost.com/eds/detail/detail?vid=2&sid=2956bf6a-6493-4df7-9888-5624b87bfb48%40sessionmgr4007&bdata=JkF1dGhUeXBlPXNoaWImc… 1/9
Title: Big Data Analytics for Rapid, Impactful, Sustained, and Efficient (RISE) Humanitarian Operations. By: Swaminathan, Jayashankar M., Production & Operations Management, 10591478, Sep2018, Vol. 27, Issue 9
Database: Business Source Premier

Big Data Analytics for Rapid, Impactful, Sustained,
  • 41. and Efficient (RISE) Humanitarian Operations There has been a significant increase in the scale and scope of humanitarian efforts over the last decade. Humanitarian operations need to be rapid, impactful, sustained, and efficient (RISE). Big data offers many opportunities to enable RISE humanitarian operations. In this study, we introduce the role of big data in humanitarian settings and discuss data streams which could be utilized to develop descriptive, prescriptive, and predictive models to significantly impact the lives of people in need. big data; humanitarian operations; analytics Introduction Humanitarian efforts are increasing on a daily basis both in terms of scale and scope. The past year has been terrible in terms of devastation and losses from hurricanes and an earthquake in North America. Hurricanes Harvey and Irma are expected to lead to losses of more than $150 billion due to damages and lost productivity (Dillow [ 8] ). In addition, more than 200 lives have been lost and millions of people have suffered from power outages and shortage of basic necessities for an extended period of time in the United States and the Caribbean. In the same year, a magnitude 7.1 earthquake rattled Mexico City, killing more than 150 people and leaving thousands struggling to get their lives back to normalcy (Buchanan et al. [ 2] ). Based on the Intergovernmental Panel on Climate Change, NASA predicts that global warming could lead to an increase in natural calamities such as droughts and in the intensity of storms, hurricanes, monsoons, and mid‐latitude storms in the upcoming years. Simultaneously, geo‐political, social, and economic tensions have increased the need for humanitarian operations globally; such
  • 42. impacts have been experienced due to the crisis in the Middle East, refugees in Europe, the systemic needs related to drought, hunger, disease, and poverty in the developing world, and the increased frequency of random acts of terrorism. According to the Global Humanitarian Assistance Report, 164.2 million people across 46 countries needed some form of humanitarian assistance in 2016 and 65.6 million people were displaced from their homes, the highest number witnessed thus far. At the same time, international humanitarian aid increased to an all-time high of $27.3 billion from $16.1 billion in 2012. Despite that increase, common belief is that
  • 43. 6493-4df7-9888- 5624b87bfb48%40sessionmgr4007&bdata=JkF1dGhUeXBlPXN oaWImc… 2/9 funding is not sufficient to meet the growing humanitarian needs. Therefore, humanitarian organizations will continue to operate under capacity constraints and will need to innovate their operations to make them more efficient and responsive. There are many areas in which humanitarian operations can improve. Humanitarian operations are often blamed for being slow or unresponsive. For example, the most recent relief efforts for Puerto Rico have been criticized for slow response. These organizations also face challenges in being able to sustain a policy or best practice for an extended period of time because of constant turnover in personnel. They are often blamed for being inefficient in how they utilize resources (Vanrooyen [ 29] ). Some of the reasons that contribute to their inefficiency include operating environment such as infrastructure deficiencies in the last mile, socio‐political tensions, uncertainty in funding, randomness of events and presence of multiple agencies and stake holders. However, it is critical that humanitarian operations show high level of performance so that every dollar that is routed in these activities is utilized to have the maximum impact on the people in need. Twenty‐one donor governments and 16 agencies have pledged at the World Humanitarian Summit in 2016 to find at least one billion USD in savings by working more efficiently over the next 5 years (Rowling [ 24] ). We believe the best performing humanitarian operations need to have the following characteristics—they need to be Rapid, they have to be Impactful in terms of saving human lives, should be effective in terms of providing
  • 44. Sustained benefits and they should be highly Efficient. We coin RISE as an acronym that succinctly describes the characteristics of successful humanitarian operations; it stands for Rapid, Impactful, Sustained, and Efficient. One of the major opportunities for improving humanitarian operations lies in how data and information are leveraged to develop the above competencies. Traditionally, humanitarian operations have suffered from a lack of consistent data and information (Starr and Van Wassennhove [ 26] ). In these settings, information comes from a diverse set of stakeholders, and a common information technology is not readily deployable in remote parts of the world. However, the Big Data wave that is sweeping through all business environments is starting to have an impact in humanitarian operations as well. For example, after the 2010 Haiti Earthquake, population displacement was studied for a period of 341 days using mobile phone and SIM card tracking data from FlowMinder. The data analysis allowed researchers to predict refugee locations 3 months out with 85% accuracy. This analysis facilitated the identification of cholera outbreak areas (Lu et al. [ 18] ). Similarly, during Typhoon Pablo in 2012, the first official crisis map was created using social media data that gave situation reports on housing, infrastructure, crop damage, and population displacement using metadata from Twitter. The map became influential in guiding both UN and Philippines government agencies (Meier [ 21] ). Big Data is defined as a large volume of structured and unstructured data. The three V's of Big Data are Volume, Variety, and Velocity (McCafee and Brynjolfsson [ 19] ). Big Data Analytics examines large amounts of data to uncover hidden patterns and correlations which can then be utilized to develop intelligence around the operating
  • 45. environment to make better decisions. Our goal in this article is to lay out a framework and present examples around how Big Data Analytics could enable RISE humanitarian operations. Humanitarian Operations—Planning and Execution Planning and Execution are critical aspects of humanitarian operations that deal with emergencies (like hurricanes) and systemic needs (hunger). All humanitarian operations have activities during the preparedness phase (before) and the disaster phase (during). Emergencies also need additional focus on the recovery phase (after). Planning and execution decisions revolve around Where, When, How, and What. We will take the UNICEF RUTF supply chain for the Horn of Africa (Kenya, Ethiopia, and Somalia) as an example. RUTF (ready-to-use therapeutic food), also called Plumpy’Nut, is a packaged protein supplement that can be given to malnourished children under the age of 5 years. The supplement was found to be very effective; therefore, the demand for RUTF skyrocketed, and the UNICEF supply chain became overstretched (Swaminathan [ 27] ). The UNICEF supply chain showed many inefficiencies due to long lead times, high transportation costs, product shortages, funding uncertainties, severe production capacity constraints, and
government regulations (So and Swaminathan [25]). Our analysis using forecasted demand data from the region found that it was important to determine where inventory should be prepositioned (in Kenya or in Dubai); the decision greatly influenced the speed and efficiency of RUTF distribution. The amount of prepositioned inventory also needed to be appropriately computed and operationalized (Swaminathan et al. [28]). Given the considerable uncertainty in the amount and timing of funding, when funding was obtained and how inventory was procured and allocated dramatically influenced overall performance (Natarajan and Swaminathan [22], [23]). Finally, understanding the major roadblocks to execution and addressing them for a sustained solution had a great impact on overall performance. In the UNICEF example, solving the production bottleneck in France was critical. UNICEF was able to successfully diversify its global supply base and bring more local suppliers into the network. Along with the other changes that were incorporated, the UNICEF RUTF supply chain came closer to being a RISE humanitarian operation, and it is estimated that an additional one million malnourished children were fed RUTF over the next 5 years (Komrska et al. [15]). A number of other studies have developed robust optimization models and analyzed humanitarian settings along many dimensions. While not an exhaustive list, these areas include humanitarian transportation planning (Gralla et al. [13]), vehicle procurement and allocation (Eftekar et al. [9]), equity and fairness in delivery (McCoy and Lee [20]), funding processes and stock-outs (Gallien et al. [12]), post-disaster debris operations (Lorca et al. [17]), capacity planning (Deo et al. [6]), efficiency drivers in global health (Berenguer et al. [1]), and decentralized decision-making (Deo and Sohoni [5]). In a humanitarian setting, the following types of questions need to be answered.
Where
a. Where is the affected population? Where did it originate? Where is it moving to?
b. Where is supply going to be stored? Where is the supply coming from? Where will the distribution points be located?
c. Where is the source of the disruption (e.g., hurricane) located? Where is it coming from? Where is it moving to?
d. Where are the debris concentrated after the event?

When
a. When is the landfall or damage likely to occur?
b. When is the right time to alert the affected population to minimize damage as well as unwanted stress?
c. When should delivery vehicles be dispatched to the affected area?
d. When should supply be reordered to avoid stock-outs or long delays?
e. When should debris collection start?

How
a. How should critical resources be allocated to the affected population?
b. How much of the resources should be prepositioned?
c. How many suppliers or providers should be in the network?
d. How should much-needed supplies and personnel be transported to the affected areas?
e. How should the affected population be routed?

What
a. What types of calamities are likely to happen in the future?
b. What policies and procedures could help in planning and execution?
c. What are the needs of the affected population? What are the reasons for the distress or movement?
d. What needs are most urgent? What additional resources are needed?

Big Data Analytics

Big Data Analytics can help organizations obtain better answers to the above types of questions and, in the process, enable them to make sound real-time decisions during
and after the event, as well as help them plan and prepare before the event (see Figure). Descriptive analytics (which describes the situation) could be used for describing the current crisis state, identifying needs and key drivers, and advocating policies. Prescriptive analytics (which prescribes solutions) can be utilized in alert and dispatch, prepositioning of supplies, routing, supplier selection, scheduling, allocation, and capacity management. Predictive analytics (which predicts the future state) could be utilized for developing forecasts around societal needs, surge capacity needs in an emergency, supply planning, and financial needs. Four types of data streams that could be utilized to develop such models are social media data, SMS data, weather data, and enterprise data.

Social Media Data

The availability of data from social media such as Twitter has opened up several opportunities to improve humanitarian emergency response. Descriptive analytics on the data feed during an emergency could help create the emergency crisis map in rapid time and provide information about areas of acute need as well as the movement of the distressed population. This could enable rapid response in the areas that need the most help. Furthermore, such a data feed could also be used to predict the future movement of the
affected population, as well as surges in demand for certain types of products or services. A detailed analysis of these data after the event could inform humanitarian operations about the quality of the response during the disaster as well as better ways to prepare for future events of a similar type. This could include deciding where to stock inventory and when and how many supply vehicles should be dispatched, and could also make a case for funding needs with donors. Simulation using social media data could provide a solid underpinning for a request for increased funding. Analysis of information diffusion in the social network could present new insights on the speed and efficacy of messages relayed in the network (Yoo et al. [30]). Furthermore, analyzing population movement data in any given region of interest could provide valuable input for ground operations related to supply planning, positioning, and vehicle routing. Finally, social media data come directly from the public and may contain random or useless information even during an emergency; there is an opportunity to develop advanced filtering models so that social media data can be leveraged in real-time decision-making.

SMS Data

Big Data Analytics can also be adapted successfully to SMS-based mobile communications. For example, a number of areas in the United States have started using cell phone SMS to text subscribers with warnings and alerts. Timely and accurate alerts can save lives, particularly during emergencies. Predictive analytics models can be developed to determine when, where, and to whom these alerts should be broadcast in order to maximize their efficacy. The usage of mobile alerts is also gaining momentum in sustained humanitarian response. For example, frequent reporting of inventory at the warehouse for food and drugs can reduce
shortages. Analytics on these data could provide more nuance on demand patterns, which in turn could be used to plan for the correct amount and location of supplies. Mobile phone alerts have also been shown to improve antiretroviral treatment adherence in patients. In such situations, there is a great opportunity to analyze what kinds of alerts, and what levels of granularity, lead to the best response from the patient.

Weather Data

Most regions have highly sophisticated systems to track weather patterns. This type of real-time data is useful in improving the speed of response, so that the affected population can be alerted early and evacuations can be planned better. It also holds a lot of information for designing future humanitarian efforts. For example, by analyzing weather-change data along with population movement, one could develop robust prescriptive models around how shelter capacity should be planned as well as how the affected population should be routed to these locations. So, rather than trying to reach a shelter on their own, an affected person can be assigned a shelter and directed to go there. Prepositioning inventory at the right locations based on weather data could improve response dramatically, as reflected by the actions
of firms such as Wal-Mart and Home Depot, which have made it a routine process after successful implementation during Hurricane Katrina. Finally, weather pattern data could be utilized to develop predictive models around the needs of the population in the medium to long term. For example, the drought cycles in certain regions of Africa follow a typical time pattern; a predictive model of the chances of famine in those regions could then inform the needs and funding requirements for food supplements.

Enterprise Data

Most large humanitarian organizations such as UNICEF have information systems that collect a large amount of data about their operations. Analytics on such data can be useful for developing robust policies and guiding operational decisions well. For example, in both systemic and emergent humanitarian needs, analyzing demand and prepositioning inventory accordingly has been shown to improve operational performance. Furthermore, the analysis of long-term data could provide guidelines for the surge capacity needed under different environments, as well as predict long-term patterns in social needs across the globe due to changing demographics and socioeconomic conditions. As Big Data Analytics models and techniques develop further, there will be greater opportunities to leverage these data streams in more effective ways, particularly given that data coming from different sources may not have the same level of fidelity in a humanitarian setting. While data are available in abundance in the developed world, there are still geographical areas around the globe where cell phone service is limited, let alone social media data. In those situations, models that can handle incomplete or missing data need to be developed. Also,
the presence of multiple decentralized organizations with varying degrees of information technology competency and differing objectives limits their ability to effectively synthesize the different data streams to coordinate decision-making.

Concluding Remarks

Big data has enabled new opportunities in the value creation process, including product design and innovation (Lee [16]), manufacturing and supply chain (Feng and Shanthikumar [10]), service operations (Cohen [3]), and retailing (Fisher and Raman [11]). It is also likely to impact sustainability (Corbett [4]), agriculture (Devalkar et al. [7]), and healthcare (Guha and Kumar [14]). In our opinion, humanitarian organizations are also well positioned to benefit from this phenomenon. Operations Management researchers will have the opportunity to study newer topics and develop robust models and insights that could guide humanitarian operations and make them more Rapid, Impactful, Sustained, and Efficient.

Acknowledgments

The author wishes to thank Gemma Berenguer, Anand Bhatia, Mahyar Eftekhar, and Jarrod Goentzel for their comments on an earlier version of this study.

References
1 Berenguer, G., A. V. Iyer, P. Yadav. 2016. Disentangling the efficiency drivers in country-level global health programs: An empirical study. J. Oper. Manag. 45: 30–43.
2 Buchanan, L., J. C. Lee, S. Peçanha, K. K. R. Lai. 2017. Mexico City before and after the earthquake. New York Times, September 23, 2017.
3 Cohen, M. C. 2018. Big data and service operations. Prod. Oper. Manag. 27(9): 1709–1723. http://doi.org/10.1111/poms.12832.
4 Corbett, C. J. 2018. How sustainable is big data? Prod. Oper. Manag. 27(9): 1685–1695. http://doi.org/10.1111/poms.12837.
5 Deo, S., M. Sohoni. 2015. Optimal decentralization of early infant diagnosis of HIV in resource-limited settings. Manuf. Serv. Oper. Manag. 17(2): 191–207.
6 Deo, S., S. Iravani, T. Jiang, K. Smilowitz, S. Samuelson. 2013. Improving health outcomes through capacity allocation in a community based chronic care model. Oper. Res. 61(6): 1277–1294.
7 Devalkar, S. K., S. Seshadri, C. Ghosh, A. Mathias. 2018. Data science applications in Indian agriculture. Prod. Oper. Manag. 27(9): 1701–1708. http://doi.org/10.1111/poms.12834.
8 Dillow, C. 2017. The hidden costs of hurricanes. Fortune, September 22, 2017.
9 Eftekar, M., A. Masini, A. Robotis, L. Van Wassenhove. 2014. Vehicle procurement policy for humanitarian development programs. Prod. Oper. Manag. 23(6): 951–964.
10 Feng, Q., J. G. Shanthikumar. 2018. How research in production and operations management may evolve in the era of big data. Prod. Oper. Manag. 27(9): 1670–1684. http://doi.org/10.1111/poms.12836.
11 Fisher, M., A. Raman. 2018. Using data and big data in retailing. Prod. Oper. Manag. 27(9): 1665–1669. http://doi.org/10.1111/poms.12846.
12 Gallien, J., I. Rashkova, R. Atun, P. Yadav. 2017. National drug stockout risks and the Global Fund disbursement process for procurement. Prod. Oper. Manag. 26(6): 997–1014.
13 Gralla, E., J. Goentzel, C. Fine. 2016. Problem formulation and solution mechanisms: A behavioral study of humanitarian transportation planning. Prod. Oper. Manag. 25(1): 22–35.
14 Guha, S., S. Kumar. 2018. Emergence of big data research in operations management, information systems and healthcare: Past contributions and future roadmap. Prod. Oper. Manag. 27(9): 1724–1735. http://doi.org/10.1111/poms.12833.
15 Komrska, J., L. Kopczak, J. M. Swaminathan. 2013. When supply chains save lives. Supply Chain Manage. Rev. January–February: 42–49.
16 Lee, H. L. 2018. Big data and the innovation cycle. Prod. Oper. Manag. 27(9): 1642–1646. http://doi.org/10.1111/poms.12845.
17 Lorca, A., M. Celik, O. Ergun, P. Keskinocak. 2017. An optimization based decision support tool for post-disaster debris operations. Prod. Oper. Manag. 26(6): 1076–1091.
18 Lu, X., L. Bengtsson, P. Holme. 2012. Predictability of population displacement after the 2010 Haiti earthquake. Proc. Natl Acad. Sci. 109(29): 11576–11581.
19 McAfee, A., E. Brynjolfsson. 2012. Big data: The management revolution. Harvard Business Review, October 1–9, 2012.
20 McCoy, J., H. L. Lee. 2014. Using fairness models to improve equity in health delivery fleet management. Prod. Oper. Manag. 23(6): 965–977.
21 Meier, P. 2012. How UN used social media in response to Typhoon Pablo. Available at http://www.irevolutions.org (accessed December 12, 2012).
22 Natarajan, K., J. M. Swaminathan. 2014. Inventory management in humanitarian operations: Impact of amount, schedule, and uncertainty in funding. Manuf. Serv. Oper. Manag. 16(4): 595–603.
23 Natarajan, K., J. M. Swaminathan. 2017. Multi-treatment inventory allocation in humanitarian health settings under funding constraints. Prod. Oper. Manag. 26(6): 1015–1034.
24 Rowling, M. 2016. Aid efficiency bargain could save $1 billion per year. Reuters, May 23, 2016.
25 So, A., J. M. Swaminathan. 2009. The nutrition articulation project: A supply chain analysis of ready-to-use therapeutic foods to the Horn of Africa. UNICEF Technical Report.
26 Starr, M., L. Van Wassenhove. 2014. Introduction to the special issue on humanitarian operations and crisis management. Prod. Oper. Manag. 23(6): 925–937.
27 Swaminathan, J. M. 2010. Case study: Getting food to disaster victims. Financial Times, October 13, 2010.
28 Swaminathan, J. M., W. Gilland, V. Mani, C. M. Vickery, A. So. 2012. UNICEF employs prepositioning strategy to improve treatment of severely malnourished children. Working paper, Kenan-Flagler Business School, University of North Carolina, Chapel Hill.
29 Vanrooyen, M. 2013. Effective aid. Harvard International Review, September 30, 2013.
30 Yoo, E., W. Rand, M. Eftekhar, E. Rabinovich. 2016. Evaluating information diffusion speed and its determinants in social networks during humanitarian crisis. J. Oper. Manag. 45: 123–133.

PHOTO (COLOR): Big Data Analytics and Rapid, Impactful, Sustained, and Efficient Humanitarian Operations

~~~~~~~~

By Jayashankar M. Swaminathan

Copyright of Production & Operations Management is the property of Wiley-Blackwell and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use.
Annotated Bibliography for Research Paper

Name
Professor: University of the Cumberlands
Date

Pasquale, F., & Cashwell, G. (2018). Prediction, persuasion, and the jurisprudence of behaviorism. University of Toronto Law Journal, 68(supplement 1), 63-81. http://eds.a.ebscohost.com/eds/detail/detail?vid=1&sid=80a53932-b932-4bf6-926e-
093727bceef6%40sessionmgr4007&bdata=JkF1dGhUeXBlPXNoaWImc2l0ZT1lZHMtbGl2ZQ%3d%3d#db=edspmu&AN=edspmu.S1710117418000033

In the above article, Pasquale and Cashwell show how big data and artificial intelligence are used in the judicial system for the prediction and persuasion of behavior. The authors point out how decision-makers are using data algorithms to help predict whether judges will take cases and, if so, the merits they will use to reach a decision. The employment of natural language processing and machine learning (ML) techniques has become a trend in the twenty-first century. The authors also note that big data can be used to predict certain natural phenomena, such as the weather. The use of algorithmic predictions is an emerging jurisprudence of behaviorism in the context of judicial law. The article looks at the issues associated with analytic data predictions as used by judges. It also tries to answer questions about the use and purpose of predictive software, whether artificial intelligence is a valuable tool for highlighting violations of human rights (in court cases), and whether elements of bias are possible when using ML techniques. The article analyzes the use of predictive data technology in specific aspects of ordinary life and asks whether such artificial intelligence and trends in data analytics could be of more benefit than harm to society. The authors focus their argument on the judicial system. Predictive analytics is essential when it comes to the business world. Most businesses that use software to predict and persuade consumer behavior tend to be successful in terms of sales, revenues, and consumer loyalty. Unlike traditional approaches to data intelligence, behavioral predictive and persuasive data analytics help determine how a customer might behave in a future situation and how they may react to certain aspects a business shares with them.
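As a toy illustration of this kind of behavioral prediction, the Python sketch below scores customers on how recently and how often they have purchased in order to guess who is likely to buy again. The customer names, dates, and scoring weights are invented assumptions for illustration, not anything described in the article.

```python
from datetime import date

# Hypothetical purchase history: customer -> list of purchase dates (invented data).
purchases = {
    "alice": [date(2020, 1, 5), date(2020, 2, 3), date(2020, 3, 1)],
    "bob": [date(2019, 6, 10)],
}

def repurchase_score(history, today=date(2020, 3, 22)):
    """Toy recency/frequency score: frequent, recent buyers score higher."""
    recency_days = (today - max(history)).days  # days since the last purchase
    frequency = len(history)                    # total number of purchases
    # Weight frequency positively and the recency gap negatively (arbitrary weights).
    return frequency * 10 - recency_days / 10

for customer, history in purchases.items():
    label = "likely to buy again" if repurchase_score(history) > 0 else "at risk of churn"
    print(customer, round(repurchase_score(history), 1), label)
```

A real system would of course fit such weights from data rather than hard-code them, but the sketch shows the basic idea of turning behavioral history into a forward-looking prediction.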
Such predictive analytics can discover patterns and identify opportunities or problems in a market. Predictive analytics also allows companies to plan, thus avoiding certain uncertainties concerning their consumers. A good example is a clothing store that gathers consumer behavior data: the store can use the data to predict what a consumer will buy in the future and thus stock what consumers need beforehand.

Guha, S., & Kumar, S. (2018). Emergence of big data research in operations management, information systems, and healthcare: Past contributions and future roadmap. Production and Operations Management, 27(9), 1724-1735. http://eds.a.ebscohost.com/eds/detail/detail?vid=2&sid=5f272977-4f85-413f-a32c-6b3dd97f38a8%40sdc-v-sessmgr02&bdata=JkF1dGhUeXBlPXNoaWImc2l0ZT1lZHMtbGl2ZQ%3d%3d#AN=131754554&db=buh

According to Guha and Kumar, in their article 'Emergence of big data research in operations management…', there are various changing trends in data collection and management. The authors note that in the new century, data are generated whenever we use the Internet and that, aside from the information we create ourselves, interconnected devices on the Internet of Things also collect data. This information captures a considerable amount about real-world environmental factors, and extensive data analysis is required from both the technical perspective and the human dimension of data use. In the article, the authors discuss the contributions of big data to various domains such as healthcare, information systems and operations, and supply management. The article also touches on sub-areas of these domains and the ways in which big data techniques lead to improvements. The authors further discuss cloud computing, the Internet of Things (IoT), smart health, and predictive manufacturing, and how these areas have the potential for growth and exploration. Big data is important to a business and can be applied in various ways. It can be used for social listening: the availability of vast waves of data makes it possible for businesses to determine what is being said in society about the company.
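A minimal sketch of such social listening could simply count the most frequent terms in posts that mention a company. The posts, the brand name 'AcmeWear', and the stopword list below are all invented for illustration:

```python
from collections import Counter
import re

# Hypothetical social media posts mentioning a brand (invented sample text).
posts = [
    "Love the new jackets at AcmeWear, great quality",
    "AcmeWear shipping was slow this week",
    "great prices at AcmeWear, will buy again",
]

STOPWORDS = {"the", "at", "was", "this", "will", "a", "is"}

def top_terms(texts, n=3):
    """Count the most frequent non-trivial words across all posts."""
    words = []
    for text in texts:
        words += [w for w in re.findall(r"[a-z]+", text.lower()) if w not in STOPWORDS]
    return Counter(words).most_common(n)

print(top_terms(posts))  # 'acmewear' and 'great' dominate this toy sample
```

Production systems would use proper sentiment analysis rather than raw word counts, but even this simple tally surfaces what customers are talking about.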
Business owners also use big data for comparative and market analyses.
Business owners can compare their products and services with the competition through analysis of user behavior. Big data also allows for real-time monitoring of consumer engagement in the business sector. Information from marketing analytics helps promote and find new audiences for new products in the market. Big data thus helps businesses utilize outside intelligence in the decision-making process, improve customer care, create operational efficiency, and identify risks in the products and services a company offers. An excellent example of the benefits of big data is a business using information about consumer purchasing behavior to target tailored advertisements to that market segment.

Akl, S. G., & Salay, N. (2019). Artificial Intelligence A Promising Future? Queen's Quarterly, 126(1), 6-20. https://go.gale.com/ps/retrieve.do?tabID=T001&resultListType=RESULT_LIST&searchResultsType=SingleTab&searchType=AdvancedSearchForm&currentPosition=1&docId=GALE%7CA582622399&docType=Essay&sort=Pub+Date+Reverse+Chron&contentSegment=ZLRC-MOD1&prodId=LitRC&contentSet=GALE%7CA582622399&searchId=R2&userGroupName=cumberlandcol&inPS=true

Akl and Salay discuss artificial intelligence in their article 'Artificial Intelligence: A Promising Future?' In their research, they view AI as having a bright future and as shaping the way human beings carry out their day-to-day activities. The authors describe how artificial intelligence has developed over the years, citing examples of developments such as Deep Blue, Watson, Project Debater, and AlphaGo, among many others. The article discusses how artificial intelligence as a science has become a social phenomenon. The authors point out that artificial intelligence and machine learning serve a great purpose in the modern-day world, and that data and deep learning algorithms are used to extract the features underlying artificial intelligence technology.
The authors feel that AI's future is bright and that this positive trend will see technology greatly benefit human life.
In business, artificial intelligence is used to automate tasks that would otherwise be manual and time-consuming. Companies can use this technological development to create a competitive advantage and increase efficiency. AI also ensures that tasks are done efficiently, with fewer errors than human effort alone. Artificial intelligence can also be used to detect fraud, improve data security, and support marketing and security screening.
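As a toy illustration of the fraud detection mentioned above (not a method from the article), a simple anomaly check can flag transactions whose amounts deviate strongly from the mean. The amounts and the z-score threshold below are invented for illustration:

```python
import statistics

# Hypothetical transaction amounts; the outlier 480.0 simulates a suspicious charge.
amounts = [25.0, 30.0, 27.5, 22.0, 31.0, 26.0, 480.0, 28.5]

def flag_anomalies(values, threshold=2.0):
    """Flag values whose z-score (distance from the mean in standard
    deviations) exceeds the threshold as potential fraud."""
    mean = statistics.mean(values)
    stdev = statistics.stdev(values)
    return [v for v in values if abs(v - mean) / stdev > threshold]

print(flag_anomalies(amounts))  # the 480.0 outlier is flagged
```

Real fraud detection combines many such signals (location, merchant, timing) in a trained model, but the z-score check conveys the basic idea of separating unusual behavior from the norm.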