SHOULD ALGORITHMS DECIDE YOUR FUTURE?
This publication was prepared by Kilian Vieth and Joanna Bronowicka from the Centre for Internet and Human Rights at European University Viadrina. It is based on the publication “The Ethics of Algorithms: from radical content to self-driving cars”, with contributions from Zeynep Tufekci, Jillian C. York, Ben Wagner and Frederike Kaltheuner, and on an event on the Ethics of Algorithms, which took place on March 9-10, 2015 in Berlin. The research was supported by the Dutch Ministry of Foreign Affairs.
Find out more: cihr.eu/ethics-of-algorithms/
Follow the discussion on Twitter: #EoA2015
Graphic design by Thiago Parizi
cihr.eu @cihr_eu
ALGORITHMS SHAPE OUR WORLD(S)!
Our everyday life is shaped by computers, and our computers are shaped by algorithms. Digital computation is constantly changing how we communicate, work, move, and learn. In short, digitally connected computers are changing how we live our lives. This revolution is unlikely to stop any time soon.
Digitalization produces ever larger datasets, known as ‘big data’. So far, research has focused on how big data is produced and stored. Now we are beginning to scrutinize how algorithms make sense of this growing amount of data.
Algorithms are the brains of our computers, mobile phones, and the Internet of Things. They are increasingly used to make decisions for us, about us, or with us – oftentimes without us realizing it. This raises many questions about the ethical dimension of algorithms.
WHY DO ALGORITHMS RAISE ETHICAL CONCERNS?
First, let's take a closer look at some critical features of algorithms. What typical functions do they perform? What negative impacts can they have on human rights? Here are some examples that probably affect you too.
THEY KEEP INFORMATION AWAY FROM US
Increasingly, algorithms decide what gets attention and what is ignored – even what gets published at all and what is censored. This is true for all kinds of search rankings, and for the way your social media newsfeed looks. In other words, algorithms perform a gate-keeping function.
EXAMPLE
Hiring algorithms decide if you are invited for an interview.
• Algorithms, rather than managers, are more and more taking part in the hiring (and firing) of employees. Deciding who gets a job and who does not is among the most powerful gate-keeping functions in society.
• Research shows that human managers display many different biases in hiring decisions, for example based on social class, race and gender. Clearly, human hiring systems are far from perfect.
• Nevertheless, we should not simply assume that algorithmic hiring can easily overcome human biases. Algorithms might work more accurately in some areas, but they can also create new, sometimes unintended, problems depending on how they are programmed and what input data is used.
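The last point can be illustrated with a toy sketch (the data, feature names, and numbers below are entirely hypothetical, not any real hiring system): a scoring rule "learned" from biased historical decisions simply reproduces the bias already present in its input data.

```python
# Hypothetical past hiring decisions, made by biased human managers.
# Each record: (years_of_experience, attended_elite_school, was_hired)
past_decisions = [
    (5, True, True), (2, True, True), (1, True, True),
    (6, False, False), (7, False, False), (3, False, False),
]

def learned_hire_rate(records, feature):
    """'Train' a naive scoring rule: the historical hire rate
    for each value of the chosen feature."""
    counts = {}
    for experience, elite, hired in records:
        value = feature(experience, elite)
        seen, hires = counts.get(value, (0, 0))
        counts[value] = (seen + 1, hires + hired)
    return {value: hires / seen for value, (seen, hires) in counts.items()}

# The rule learns that elite-school candidates were always hired and
# everyone else never was -- years of experience play no role, because
# the historical data rewarded a social-class proxy, not competence.
print(learned_hire_rate(past_decisions, lambda exp, elite: elite))
```

Nothing in the code is "racist" or "classist"; the discrimination enters entirely through the training data.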
Ethical implications: Algorithms work as gatekeepers that influence how we perceive the world, often without us realizing it. They channel our attention, which implies tremendous power.
WHAT IS AN ALGORITHM?
The term ‘algorithm’ refers to any computer code that carries out a set of instructions. Algorithms are essential to the way computers process data. Theoretically speaking, they are encoded procedures that transform data based on specific calculations. They consist of a series of steps undertaken to solve a particular problem, like a recipe: an algorithm takes inputs (ingredients), breaks a task into its constituent parts, undertakes those parts one by one, and then produces an output (e.g. a cake). A simple example of an algorithm is “find the largest number in this series of numbers”.
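The “find the largest number” example can be written out in a few lines of Python (the language here is only an illustration):

```python
def find_largest(numbers):
    """A simple algorithm: scan the series once, step by step,
    keeping the largest value seen so far."""
    largest = numbers[0]        # input: a non-empty series of numbers
    for n in numbers[1:]:       # undertake the steps one by one
        if n > largest:
            largest = n
    return largest              # output: the largest number

print(find_largest([3, 17, 4, 11]))  # -> 17
```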
THEY MAKE SUBJECTIVE DECISIONS
Some algorithms deal with questions that do not have a clear ‘yes or no’ answer. Thus, they move away from checkbox questions such as “Is this right or wrong?” to more complex judgements, such as “What is important? Who is the right person for the job? Who is a threat to public safety? Who should I date?” Quietly, these types of subjective decisions previously made by humans are turned over to algorithms.
EXAMPLE
Predictive policing based on statistical forecasting:
• In early 2014, the Chicago Police Department made national headlines in the US for visiting residents who were considered most likely to be involved in violent crime. The selection of individuals, who were not necessarily under investigation, was guided by a computer-generated “heat list” – an algorithm that seeks to predict future involvement in violent crime.
• A key concern about predictive policing is that such automated systems may create an echo chamber or a self-fulfilling prophecy. Heavy policing of a specific area increases the likelihood that crime will be detected there: since more police means more opportunities to observe residents' activities, the algorithm might just confirm its own prediction.
• Right now, police departments around the globe are testing and implementing predictive policing algorithms, but safeguards against discriminatory biases are lacking.
Ethical implications: There is no guarantee that predictions made by algorithms are right. And officials acting on incorrect predictions may launch unjustified or biased investigations.
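The self-fulfilling-prophecy concern can be sketched in a toy simulation (all numbers are invented; this models no real police system): two districts with identical true offence rates, where each round more patrols are sent wherever more crime was detected in the previous round.

```python
import random

random.seed(1)

TRUE_RATE = 0.1      # identical underlying offence rate in both districts
POPULATION = 1000

def detected_crimes(patrols):
    """More patrols mean more opportunities to observe residents,
    so more offences are detected -- behaviour being equal."""
    offences = sum(random.random() < TRUE_RATE for _ in range(POPULATION))
    detection_prob = min(1.0, 0.1 * patrols)   # each patrol catches a share
    return sum(random.random() < detection_prob for _ in range(offences))

patrols = {"A": 5, "B": 1}   # district A happens to start with more policing
for round_number in range(5):
    detected = {d: detected_crimes(p) for d, p in patrols.items()}
    hottest = max(detected, key=detected.get)
    patrols[hottest] += 2    # reinforce wherever more crime "appeared"
    print(round_number, detected, patrols)

# District A ends up ever more heavily policed, although both districts
# behave identically: the prediction confirms itself.
```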
WE DON’T ALWAYS KNOW HOW THEY WORK
Complexity is huge: many present-day algorithms are very complicated and can be hard for humans to understand, even if their source code is shared with competent observers. What adds to the problem is opacity – the lack of transparency of the code. Algorithms perform complex calculations that follow many potential steps along the way, and they can draw on thousands, or even millions, of individual data points. Sometimes not even the programmers can predict how an algorithm will decide a certain case.
EXAMPLE
Facebook's newsfeed algorithm is complex and opaque:
• Many users are not aware that when we open Facebook, an algorithm puts together the postings, pictures and ads that we see. Indeed, it is an algorithm that decides what to show us and what to hold back. Facebook's newsfeed algorithm filters the content you see – but do you know the principles it uses to hold back information from you?
• In fact, a team of researchers tweaks this algorithm every week, taking thousands and thousands of metrics into consideration. This is why the effects of newsfeed algorithms are hard to predict – even by Facebook engineers!
• If we asked Facebook how the algorithm works, they would not tell us. The principles behind the way the newsfeed works (the source code) are in fact a business secret. Without knowing the exact code, nobody can evaluate how your newsfeed is composed.
Ethical implications: Complex algorithms are often practically incomprehensible to outsiders, yet they inevitably have values, biases, and potential discrimination built in.
CORE ISSUES – SHOULD AN ALGORITHM DECIDE YOUR FUTURE?
Without the help of algorithms, many present-day applications would be unusable. We need them to cope with the enormous amounts of data we produce every day. Algorithms make our lives easier and more productive, and we certainly don't want to lose those advantages. But we need to be aware of what they do and how they decide.
THEY CAN DISCRIMINATE AGAINST YOU, JUST LIKE HUMANS
Computers are often regarded as objective and rational machines. However, algorithms are made by humans and can be just as biased. We need to be critical of the assumption that algorithms can make “better” decisions than human beings. There are racist algorithms and sexist ones. Algorithms are not neutral; rather, they perpetuate the prejudices of their creators. Their creators, such as businesses or governments, can have different goals in mind than the users would have.
THEY MUST BE KNOWN TO THE USER
Since algorithms make increasingly important decisions about our lives, users need to be informed about them. Knowledge about automated decision-making in everyday services is still very limited among consumers. Raising awareness should be at the heart of the debate about the ethics of algorithms.
PUBLIC POLICY APPROACHES TO REGULATE ALGORITHMS
How do you regulate a black box? We need to open a discussion on how policy-makers are trying to deal with ethical concerns around algorithms. There have been some attempts to provide algorithmic accountability, but we need better data and more in-depth studies.
Where algorithms are written and used by corporations, government institutions such as antitrust or consumer protection agencies should provide appropriate regulation and oversight. But who regulates the use of algorithms by the government itself? For cases like predictive policing, ethical standards and legal safeguards are needed.
Recently, regulatory approaches to algorithms have circled around transparency, notification, and direct regulation. Yet experience shows that policy-makers face certain regulatory dilemmas when it comes to algorithms.
TRANSPARENCY | MAKE OPAQUE BIASES VISIBLE
When faced with a complex and obscure algorithm, one common reaction is to demand more transparency about what it does and how it works. The concern about black-box algorithms is that they make inherently subjective decisions, which might contain implicit or explicit biases. At the same time, making complex algorithms fully transparent can be extremely challenging:
• It is not enough to merely publish the source code of an algorithm, because machine-learning systems will inevitably make decisions that have not been programmed directly. Complete transparency would require that we are able to explain why any particular outcome was produced.
• Some investigations have reverse-engineered algorithms in order to create greater public awareness about them. That is one way the public can perform a watchdog function.
• Often there are good reasons why complex algorithms operate opaquely: public access would make them much more vulnerable to manipulation. If every company knew how Google ranks its search results, companies could optimize their behavior and render the ranking algorithm useless.
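The first point above can be made concrete with a minimal sketch (a toy nearest-neighbour rule; all data is invented): the complete source code is published, yet the decision it produces is determined by training data that the code alone does not reveal.

```python
# The complete "published source code" of a tiny decision rule:
# classify an input by copying the label of the nearest training example.
def classify(value, training_data):
    nearest = min(training_data, key=lambda example: abs(example[0] - value))
    return nearest[1]

# Identical code, two different (hypothetical) training sets:
data_a = [(1.0, "approve"), (9.0, "reject")]
data_b = [(1.0, "reject"), (9.0, "approve")]

print(classify(5.1, data_a))  # -> reject   (5.1 is nearer to 9.0)
print(classify(5.1, data_b))  # -> approve  (same code, opposite decision)
```

Reading the function tells you nothing about which applicants it will reject; explaining any particular outcome requires access to the data as well.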
NOTIFICATION | GIVING USERS THE RIGHT TO KNOW
A different form of transparency is to give consumers control over the personal information that feeds into algorithms. Notification includes the rights to correct that personal information and to demand that it be excluded from the databases of data vendors. Regaining control over your personal information ensures accountability to users.
DIRECT REGULATION | WHEN ALGORITHMS BECOME CRITICAL INFRASTRUCTURE
• In some cases, public regulators have sought ways to intervene in algorithms directly. This is especially relevant for core infrastructure. The debate about algorithmic regulation is most advanced in the area of finance: automated high-speed trading has potentially destabilizing effects on financial markets, so regulators have begun to demand the ability to modify these algorithms.
• The ongoing antitrust investigations into Google's ‘search neutrality’ revolve around the same question: may regulators require access to, and modification of, the search algorithm in the interest of the public? This approach rests on the contested assumption that it is possible to predict objectively how a certain algorithm will respond. Yet there is simply no ‘right’ answer to how Google should rank its results. Antitrust agencies in the US and the EU have not yet found a regulatory response to this issue.
In some cases, direct regulation or complete public transparency might be necessary. However, there is no one-size-fits-all regulatory response. More scrutiny of algorithms must be enabled, which requires new practices from industry and technologists. More consumer protection and direct regulation should be introduced where appropriate.
WHY DO ALGORITHMS RAISE ETHICAL
CONCERNS?
First, let's have a closer look at some of the critical features of
algorithms.
What are typical functions they perform? What are negative
impacts for
human rights? Here are some examples that probably affect you
too.
THEY KEEP INFORMATION AWAY FROM US
Increasingly, algorithms decide what gets attention, and what is
ignored;
and even what gets published at all, and what is censored. This
is true for
all kinds of search rankings, for example the way your social
media news-
feed looks. In other words, algorithms perform a gate-keeping
function.
EXAMPLE
Hiring algorithms decide if you are invited for an interview.
• Algorithms, rather than managers, are more and more taking
part in
hiring (and firing) of employees. Deciding who gets a job and
who does
not, is among the most powerful gate-keeping function in
society.
• Research shows that human managers display many different
biases in
hiring decisions, for example based on social class, race and
gender.
Clearly, human hiring systems are far from perfect.
• Nevertheless, we may not simply assume that algorithmic
hiring can
easily overcome human biases. Algorithms might work more
accurate
in some areas, but can also create new, sometimes unintended,
prob-
lems depending on how they are programmed and what input
data is
used.
Ethical implications: Algorithms work as gatekeepers that
influence how
we perceive the world, often without us realizing it. They
channel our
attention, which implies tremendous power.
THEY MAKE SUBJECTIVE DECISIONS
Some algorithms deal with questions, which do not have a clear
‘yes or no’
answer. Thus, they move a way from a checkbox answer “Is this
right or
wrong?” to more complex judgements, such as “What is
important? Who is
the right person for the job? Who is a threat to public safety?
Who should I
date?” Quietly, these types of subjective decisions previously
made by
humans are turned over to algorithms.
EXAMPLE
Predictive policing based on statistical forecasting:
• In early 2014, the Chicago Police Department made national
headlines
in the US for visiting residents who were considered to be most
likely
involved in violent crime. The selection of individuals, wo were
not
necessarily under investigation, was guided by a computer-
generated
“heat list” – an algorithm that seeks to predict future
involvement in
violent crime.
• A key concern about predictive policing is that such
automated systems
may create an echo chamber or a self-fulfilling prophecy. In
fact, heavy
policing of a specific area can increase the likelihood that crime
will be
detected. Since more police means more opportunities to
observe
residents' activities, the algorithm might just confirm its own
predic-
tion.
• Right now, police departments around the globe are testing
and imple-
menting predictive policing algorithms, but lack safeguards for
discrim-
inatory biases.
Ethical implications: Predictions made by algorithms provide
no guaran-
tee that they are right. And officials acting on incorrect
predictions may
even create unjustified or biased investigations.
5 | ETHICS OF ALGORITHMS ETHICS OF ALGORITHMS |
6
CORE ISSUES – SHOULD AN ALGORITHM
DECIDE YOUR FUTURE?
Without the help of algorithms, many present-day applications
would be
unusable. We need them to cope with the enormous amounts of
data we
produce every day. Algorithms make our lives easier and more
productive,
and we certainly don't want to lose those advantages. But we
need to be
aware of what they do and how they decide.
THEY CAN DISCRIMINATE AGAINST YOU,
JUST LIKE HUMANS
Computers are often regarded as objective and rational
machines. Howev-
er, algorithms are made by humans and can be just as biased.
We need to
be critical of the assumption, that algorithms can make “better”
decisions
than human beings. There are racist algorithms and sexist ones.
Algorithms
are not neutral, but rather they perpetuate the prejudices of their
creators.
Their creators, such as businesses or governments, can
potentially have
different goals in mind than the users would have.
THEY MUST BE KNOWN TO THE USER
Since algorithms make increasingly important decisions about
our lives,
users need to be informed about them. Knowledge about
automated
decision-making in everyday services is still very limited
among consumers.
Raising awareness should be at the heart of the debate about
ethics of
algorithms.
PUBLIC POLICY APPROACHES TO REGULATE
ALGORITHMS
How do you regulate a black box? We need to open a discussion
on how
policy-makers are trying to deal with ethical concerns around
algorithms.
There have been some attempts to provide algorithmic
accountability, but
we need better data and more in-depth studies.
If algorithms are written and used by corporations, it is
government
institutions like antitrust or consumer protection agencies, who
should
provide appropriate regulation and oversight. But who regulates
the use of
algorithms by the government itself? For cases like predictive
policing
ethical standards and legal safeguards are needed.
Recently, regulatory approaches to algorithms have circled
around trans-
parency, notification, and direct regulation. Yet, experience
shows that
policy-makers are facing certain dilemmas of regulation when it
comes to
algorithms.
TRANSPARENCY | MAKE OPAQUE BIASES VISIBLE
If you are faced with a complex and obscure algorithm, one
common
reaction is a demand for more transparency about what and how
it works.
The concern about black-box algorithms is that they make
inherently
subjective decisions, which might contain implicit or explicit
biases. At the
same time, making complex algorithms fully transparent can be
extremely
challenging:
• It is not enough to merely publish the source code of an
algorithm,
because machine-learning systems will inevitably make
decisions that
have not been programmed directly. Complete transparency
would
require that we are able to explain why any particular outcome
was
produced.
• Some investigations have reverse-engineered algorithms in
order to
create greater public awareness about them. That is one way
how the
public can perform a watchdog function.
• Often, there might be good reasons why complex algorithms
operate
opaquely, because public access would make them much more
vulnera-
ble to manipulation. If every company knew how Google ranks
its
search results, it could optimize their behavior and render the
ranking
algorithm useless.
NOTIFICATION | GIVING USERS THE RIGHT TO KNOW
A different form of transparency is to give consumers control
over their
personal information that feeds into algorithms. Notification
includes the
rights to correct that personal information and demand it be
excluded
from databases of data vendors. Regaining control over your
personal
information ensures accountability to the users.
DIRECT REGULATION | WHEN ALGORITHMS BECOME
CRITICAL INFRASTRUCTURE
• In some cases public regulators have been prone to create
ways to
manipulate algorithms directly. This is especially relevant for
core
infrastructure.
Debate about algorithmic regulation is most advanced in the
area of
finance. Automated high-speed trading has potentially
destabilizing
effects on financial markets, so regulators have begun to
demand the
ability to modify these algorithms.
• The ongoing antitrust investigations into Google's ‘search
neutrality’
revolve around the same question: can regulators may require
access to
and modification of the search algorithm in the interest of the
public?
This approach is based on a contested assumption that it is
possible to
predict objectively how a certain algorithms will respond. Yet,
there is
simply no ‘right’ answer to how Google should rank its results.
Antitrust
agencies in the US and the EU have not yet found an regulatory
response to this issue.
In some cases direct regulation or complete and public
transparency might
be necessary. However, there is no one-size-fits-all regulatory
response.
More scrutiny of algorithms must be enabled, which requires
new practices
from industry and technologists. More consumer protection and
direct
regulation should be introduced where appropriate.
WHY DO ALGORITHMS RAISE ETHICAL
CONCERNS?
First, let's have a closer look at some of the critical features of
algorithms.
What are typical functions they perform? What are negative
impacts for
human rights? Here are some examples that probably affect you
too.
THEY KEEP INFORMATION AWAY FROM US
Increasingly, algorithms decide what gets attention, and what is
ignored;
and even what gets published at all, and what is censored. This
is true for
all kinds of search rankings, for example the way your social
media news-
feed looks. In other words, algorithms perform a gate-keeping
function.
EXAMPLE
Hiring algorithms decide if you are invited for an interview.
• Algorithms, rather than managers, are more and more taking
part in
hiring (and firing) of employees. Deciding who gets a job and
who does
not, is among the most powerful gate-keeping function in
society.
• Research shows that human managers display many different
biases in
hiring decisions, for example based on social class, race and
gender.
Clearly, human hiring systems are far from perfect.
• Nevertheless, we may not simply assume that algorithmic
hiring can
easily overcome human biases. Algorithms might work more
accurate
in some areas, but can also create new, sometimes unintended,
prob-
lems depending on how they are programmed and what input
data is
used.
Ethical implications: Algorithms work as gatekeepers that
influence how
we perceive the world, often without us realizing it. They
channel our
attention, which implies tremendous power.
THEY MAKE SUBJECTIVE DECISIONS
Some algorithms deal with questions, which do not have a clear
‘yes or no’
answer. Thus, they move a way from a checkbox answer “Is this
right or
wrong?” to more complex judgements, such as “What is
important? Who is
the right person for the job? Who is a threat to public safety?
Who should I
date?” Quietly, these types of subjective decisions previously
made by
humans are turned over to algorithms.
EXAMPLE
Predictive policing based on statistical forecasting:
• In early 2014, the Chicago Police Department made national
headlines
in the US for visiting residents who were considered to be most
likely
involved in violent crime. The selection of individuals, wo were
not
necessarily under investigation, was guided by a computer-
generated
“heat list” – an algorithm that seeks to predict future
involvement in
violent crime.
• A key concern about predictive policing is that such
automated systems
may create an echo chamber or a self-fulfilling prophecy. In
fact, heavy
policing of a specific area can increase the likelihood that crime
will be
detected. Since more police means more opportunities to
observe
residents' activities, the algorithm might just confirm its own
predic-
tion.
• Right now, police departments around the globe are testing
and imple-
menting predictive policing algorithms, but lack safeguards for
discrim-
inatory biases.
Ethical implications: Predictions made by algorithms provide
no guaran-
tee that they are right. And officials acting on incorrect
predictions may
even create unjustified or biased investigations.
CORE ISSUES – SHOULD AN ALGORITHM
DECIDE YOUR FUTURE?
Without the help of algorithms, many present-day applications
would be
unusable. We need them to cope with the enormous amounts of
data we
produce every day. Algorithms make our lives easier and more
productive,
and we certainly don't want to lose those advantages. But we
need to be
aware of what they do and how they decide.
THEY CAN DISCRIMINATE AGAINST YOU,
JUST LIKE HUMANS
Computers are often regarded as objective and rational machines.
However, algorithms are made by humans and can be just as biased.
We need to be critical of the assumption that algorithms make
“better” decisions than human beings. There are racist algorithms
and sexist ones. Algorithms are not neutral; they perpetuate the
prejudices of their creators. And their creators, such as
businesses or governments, may have different goals in mind than
their users.
THEY MUST BE KNOWN TO THE USER
Since algorithms make increasingly important decisions about
our lives,
users need to be informed about them. Knowledge about
automated
decision-making in everyday services is still very limited
among consumers.
Raising awareness should be at the heart of the debate about
ethics of
algorithms.
PUBLIC POLICY APPROACHES TO REGULATE
ALGORITHMS
How do you regulate a black box? We need to open a discussion
on how
policy-makers are trying to deal with ethical concerns around
algorithms.
There have been some attempts to provide algorithmic
accountability, but
we need better data and more in-depth studies.
If algorithms are written and used by corporations, it is
government institutions, such as antitrust or consumer protection
agencies, that should provide appropriate regulation and
oversight. But who regulates the use of algorithms by the
government itself? For cases like predictive policing, ethical
standards and legal safeguards are needed.
Recently, regulatory approaches to algorithms have centered on
transparency, notification, and direct regulation. Yet experience
shows that policy-makers face recurring dilemmas when regulating
algorithms.
TRANSPARENCY | MAKE OPAQUE BIASES VISIBLE
If you are faced with a complex and obscure algorithm, one common
reaction is to demand more transparency about what it does and
how it works.
The concern about black-box algorithms is that they make
inherently
subjective decisions, which might contain implicit or explicit
biases. At the
same time, making complex algorithms fully transparent can be
extremely
challenging:
• It is not enough to merely publish the source code of an
algorithm,
because machine-learning systems will inevitably make
decisions that
have not been programmed directly. Complete transparency
would
require that we are able to explain why any particular outcome
was
produced.
• Some investigations have reverse-engineered algorithms in
order to
create greater public awareness about them. That is one way the
public can perform a watchdog function.
• Often, there are good reasons why complex algorithms operate
opaquely: public access would make them much more vulnerable to
manipulation. If every company knew how Google ranks its search
results, they could game the ranking and render the algorithm
useless.
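Reverse-engineering of the kind mentioned above typically means probing the black box with controlled inputs and comparing outputs. A minimal sketch (the scoring function and its zip-code proxy are invented for illustration, standing in for any opaque system):

```python
def black_box_score(applicant):
    # Stand-in for an opaque scoring system we cannot inspect directly.
    score = applicant["income"] / 1000
    if applicant["zip_code"] in {"10001", "10002"}:  # hidden proxy variable
        score -= 20
    return score

# Audit technique: hold every input constant except the suspected proxy.
base = {"income": 50000, "zip_code": "20500"}
probe = dict(base, zip_code="10001")

gap = black_box_score(base) - black_box_score(probe)
print(gap)  # 20.0 -- a nonzero gap exposes the zip-code dependence
```

Holding all other attributes fixed isolates the effect of a single input, which is how journalists and researchers have surfaced hidden biases without access to source code.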
NOTIFICATION | GIVING USERS THE RIGHT TO KNOW
A different form of transparency is to give consumers control
over the personal information that feeds into algorithms.
Notification includes the right to correct that personal
information and to demand its exclusion from data vendors'
databases. Regaining control over personal information helps
ensure accountability to users.
DIRECT REGULATION | WHEN ALGORITHMS BECOME
CRITICAL INFRASTRUCTURE
• In some cases, public regulators have sought ways to intervene
in algorithms directly. This is especially relevant for core
infrastructure.
Debate about algorithmic regulation is most advanced in the
area of
finance. Automated high-speed trading has potentially
destabilizing
effects on financial markets, so regulators have begun to
demand the
ability to modify these algorithms.
• The ongoing antitrust investigations into Google's ‘search
neutrality’ revolve around the same question: may regulators
require access to, and modification of, the search algorithm in
the public interest? This approach rests on the contested
assumption that it is possible to predict objectively how a
certain algorithm will respond. Yet there is simply no ‘right’
answer to how Google should rank its results. Antitrust agencies
in the US and the EU have not yet found a regulatory response to
this issue.
In some cases direct regulation or complete and public
transparency might
be necessary. However, there is no one-size-fits-all regulatory
response.
More scrutiny of algorithms must be enabled, which requires
new practices
from industry and technologists. More consumer protection and
direct
regulation should be introduced where appropriate.
EDS 1021 Week 6 Interactive Assignment
Radiometric Dating
Objective: Using a simulated practical application, explore the
concepts of radioactive decay and the half-life of
radioactive elements, and then apply the concept of radiometric
dating to estimate the age of various objects.
Background: Review the topics Half-Life, Radiometric Dating,
and Decay Chains in Chapter 12 of The Sciences.
Instructions:
1. PRINT a hard copy of this entire document, so that the
experiment instructions may be easily referred to,
and the data tables and questions (on the last three pages) can
be completed as a rough draft.
2. Download the Radiometric Dating Game Answer Sheet from
the course website. Transfer your data
values and question answers from the completed rough draft to
the answer sheet. Be sure to put your
NAME on the answer sheet where indicated. Save your
completed answer sheet on your computer.
3. SUBMIT ONLY the completed answer sheet, by uploading
your file to the digital drop box for the
assignment.
Introduction to the Simulation
1. After reviewing the background information for this
assignment, go to the website for the interactive
simulation “Radioactive Dating Game” at
http://phet.colorado.edu/en/simulation/radioactive-dating-game.
Click on DOWNLOAD to run the simulation locally on your
computer.
2. Software Requirements: You must have the latest version of
Java software (free) loaded on your computer
to run the simulation. If you do not or are not sure if you have
the latest version, go to
http://www.java.com/en/download/index.jsp .
3. Explore and experiment on the 4 different “tabs” (areas) of
the simulation. While playing around, think about
how the concepts of radioactive decay are being illustrated in
the simulation.
Half Life Tab – observe a sample of radioactive atoms decaying
- Carbon-14, Uranium-238, or ? (a custom-
made radioactive atom). Clicking on the “add 10” button adds
10 atoms at a time to the “decay area”. There
are a total of 100 atoms in the bucket, so clicking the “add 10”
button 10 times will empty the bucket into the
decay area. Observe the pie chart and time graph as atoms
decay. You can PAUSE and STEP the simulation (buttons at the
bottom of the screen) as atoms decay, and RESET it.
Decay Rates Tab – Similar to the half-life tab, but different!
Atom choices are carbon-14 and uranium-238.
The bucket has a total of 1000 atoms. Drag the slide bar on the
bucket to the right to increase the number
of atoms added to the decay area. Observe the pie chart and
time graph as atoms decay. Note that the
graph for the Decay Rates tab provides different information
than the graph for the Half Life tab. You can
PAUSE and STEP the simulation (buttons at the bottom of the
screen) as atoms decay, and RESET it.
Measurement Tab – Use a probe to virtually measure
radioactive decay within an object - a tree or a
volcanic rock. The probe can be set to detect either the decay of
carbon-14 atoms, or the decay of uranium-
238 atoms. Follow prompts on the screen to run a simulation of
a tree growing and dying, or of a volcano
erupting and creating a rock, and then measuring the decay of
atoms within each object.
Dating Game Tab – Use a probe to virtually measure the
percentage of radioactive atoms remaining within
various objects and, knowing the half-life of radioactive
elements being detected, estimate the ages of
objects. The probe can be set to either detect carbon-14,
uranium-238, or other “mystery” elements, as
appropriate for determining the age of the object. Drag the
probe over an object, select which element to
measure, and then slide the arrow on the graph to match the
percentage of atoms measured by the probe.
The time (t) shown for the matching percentage can then be
entered as the estimate in years of the object’s
age.
After playing around with the simulation, conduct the following
four (4) short experiments. As you conduct
the experiments and collect data, fill in the data tables and
answer the questions on the last three pages of
this document.
Experiment 1: Half Life
1. Click on the Half Life tab at the top of the simulation screen.
2. Procedure:
Part I - Carbon-14
a. Click the blue “Pause” button at the bottom of the screen
(i.e., set it so that it shows the “play” arrow).
Click the “Add 10” button below the “Bucket o’ Atoms”
repeatedly, until there are no more atoms left in
the bucket. There are now 100 carbon-14 atoms in the decay
area.
b. The half-life of carbon-14 is about 5700 years. Based on the
definition of half-life, if you left these 100
carbon-14 atoms to sit around for 5700 years, what is your
prediction of how many carbon-14 atoms will
decay into the stable element nitrogen-14 during that time?
Write your prediction in the “prediction”
column for the row labeled “carbon-14”, in data table 1.
c. Click the blue “Play” arrow at the bottom of the screen. As
the simulation runs, carefully observe what
is happening to the carbon-14 atoms in the decay area, and the
graphs at the top of the screen (both
the pie chart and the time graph). Once all atoms have decayed
into the stable isotope nitrogen-14,
click the blue “Pause” button at the bottom of the screen (i.e.,
set it so that it shows the “play” arrow),
and “Reset All Nuclei” button in the decay area.
d. Repeat step c. until you have a good idea of what is going on
in this simulation.
e. Repeat step c. again, but this time, watch the graph at the top
of the window carefully, and click
“pause” when TIME reaches 5700 years (when the carbon-14
atom moving across the graph reaches
the red dashed line labeled HALF LIFE on the TIME graph).
f. If you do NOT pause the simulation on or very close to the
red dashed line, click the “Reset All Nuclei”
button and repeat step e.
g. Once you have paused the simulation in the correct spot, look
at the pie graph and determine the
number of nuclei that have decayed into nitrogen-14 at Time =
HALF LIFE. Write this number in data
table 1, in the row labeled “carbon-14”, under “trial 1”.
h. Click the “Reset All Nuclei” button in the decay area.
i. Repeat steps e through h for two more trials. For each trial,
write down in data table 1, the number of
nuclei that have decayed into nitrogen-14 at Time = HALF
LIFE, in the row labeled “carbon-14”, under
“trial 2” and “trial 3” respectively.
Part II – Uranium-238
a. Click “reset all” on the right side of the screen in the Decay
Isotope box, and click “yes” in the box that
pops up. Click on the radio button for uranium-238 in the Decay
Isotope box. Click the blue “Pause”
button at the bottom of the screen (i.e., set it so that it shows
the “play” arrow). Click the “Add 10”
button below the “Bucket o’ Atoms” repeatedly, until there are
no more atoms left in the bucket. There
are now 100 uranium-238 atoms in the decay area.
b. The half-life of uranium-238 is 4.5 billion years!* Based on
the definition of half-life, if you left these 100
uranium-238 atoms to sit around for 4.5 billion years, what is
your prediction of how many uranium
atoms will decay into lead-206 during that time? Write your
prediction in the “prediction” column for the
second row, labeled “uranium-238”, in data table 1.
c. Click the “Play” button at the bottom of the window. Watch
the graph at the top of the window
carefully, and click “pause” when the time reaches 4.5 billion
years (when the uranium-238 atom
moving across the graph reaches the red dashed line labeled
HALF LIFE on the time graph).
d. If you don’t pause the simulation on or very close to the red
line, click the “Reset All Nuclei” button and
repeat step c.
e. Once you have paused the simulation in the correct spot, look
at the pie graph and determine the
number of nuclei that have decayed into lead-206 at Time =
HALF LIFE. Write this number in data table
1, in the row labeled “uranium-238”, under “trial 1”.
f. Click the “Reset All Nuclei” button in the decay area.
g. Repeat steps c through e for two more trials. For each trial,
write down in data table 1, the number of
nuclei that have decayed into lead-206 at Time = HALF LIFE,
in the row labeled “uranium-238”, under
“trial 2” and “trial 3” respectively.
3. Calculate the average number of atoms that decayed for all
three trials of carbon-14 decay. Do the same for all three trials
of uranium-238 decay. Write the average values for each element
under “averages”, the last column in data table 1.
4. Answer the five (5) questions for Experiment 1 on the
Questions page.
* Unlike Carbon-14, which undergoes only one radioactive
decay to reach the stable nitrogen-14, uranium-238 undergoes
MANY decays into many intermediate unstable elements before
finally getting to the stable element lead-206 (see the decay
chain for uranium-238 in chapter 12 for details).
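The predictions requested in step b. of Parts I and II follow directly from the definition of half-life: after time t, the expected fraction remaining is (1/2)^(t / half-life). A quick check of the expected values (a sketch; your simulated trials will scatter randomly around these numbers):

```python
def atoms_remaining(n0, t, half_life):
    """Expected number of atoms left after time t."""
    return n0 * 0.5 ** (t / half_life)

# 100 carbon-14 atoms, one half-life (~5700 years) later:
decayed_c14 = 100 - atoms_remaining(100, 5700, 5700)
print(decayed_c14)  # 50.0 atoms expected to have become nitrogen-14

# 100 uranium-238 atoms, 4.5 billion years later:
decayed_u238 = 100 - atoms_remaining(100, 4.5e9, 4.5e9)
print(decayed_u238)  # 50.0 atoms expected to have become lead-206
```

The same formula gives 25 atoms remaining after two half-lives and 12.5 after three, which is the pattern Data Table 2 asks you to read off the Decay Rates graph.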
Experiment 2: Decay Rates
1. Set Up: Click on the Decay Rates tab at the top of the
simulation screen.
2. Procedure
Part I – Carbon-14
a. In the Choose Isotope area on the right side of the screen,
click the button next to carbon-14. Recall
that carbon-14 has a half-life of about 5700 years.
b. Drag the slide bar on the bucket of atoms all the way to the
right. This will put 1,000 radioactive atoms
into the decay area. When you let go of the slide bar, the
simulation will start right away. If you missed
seeing the simulation run from the start, start it over again by
clicking “Reset All Nuclei”, and keep your
eyes on the screen. Watch the graph at the bottom of the screen
as you allow the simulation to run until
all atoms have decayed.
c. Carefully interpret the graph to obtain the data requested to
fill in data table 2 for “carbon-14”.
Part II – Uranium-238
a. In the Choose Isotope area on the right side of the screen,
click the button next to uranium-238.
Recall that the half-life of uranium-238 is about 4.5 billion
years.
b. Repeat step b. of part I for uranium-238.
c. Carefully interpret the graph to obtain the data requested to
fill in data table 2 for “uranium-238”.
3. Answer the two (2) questions for Experiment 2 on the
Questions page.
Experiment 3: Measurement
1. Set Up: Click on the Measurement tab at the top of the
simulation screen.
2. Procedure
a. In Choose an Object, click on the button for Tree. In the
Probe Type box, click on the buttons for
“carbon-14”, and “Objects”.
b. Click “Plant Tree” at the bottom of the screen. Observe that
the tree grows and lives for about 1200 years, then dies and
begins to decay. As the simulation runs, observe the probe reading (upper left box)
observe the probe reading (upper left box)
and graph (upper right box) at the top of the screen. The graph
is plotting the probe’s measurement of
the percentage of carbon-14 remaining in the tree over time.
c. On the right side of the screen, in the “Choose an Object”
box, click the reset button. Set the probe to
measure uranium-238 instead of carbon-14 (in the Probe Type
box, click on the button for “uranium-
238”). Click “Plant Tree” and again observe the probe reading
and graph. This time, the graph will plot
the probe’s measurement of uranium-238 in the tree.
d. On the right side of the screen, in the “Choose an Object”
box, click the reset button and click the
button for Rock. Keep the probe type set on “uranium-238”.
e. Click “Erupt Volcano” and observe the volcano creating and
ejecting HOT igneous rocks. As the
simulation runs, observe the probe reading (upper left box) and
graph (upper right box) at the top of the
screen. The graph is plotting the probe’s measurement of
uranium-238 in the rock over time.
f. On the right side of the screen, in the “Choose an Object”
box, click the reset button. Set the probe to
measure carbon-14 instead of uranium-238 (in the Probe Type
box, click on the button for “carbon-14”).
Click “erupt volcano” and again observe the probe reading and
graph. This time, the graph will plot the
probe’s measurement of carbon-14 in the rock.
3. Answer the six (6) questions for Experiment 3 on the
Questions page. Repeat the Measurement
experiments as necessary to answer the questions.
Experiment 4: Dating Game
1. Set Up: Click on the Dating Game tab at the top of the
simulation screen. Verify that the “objects” button is
clicked below the Probe Type box.
2. Procedure
a. Select an item, at or below the Earth’s surface, whose age you
will determine. Set the Probe Type to either carbon-14 or
uranium-238, as appropriate for the item you are measuring, based
on your findings in experiment 3.
b. Drag the probe directly over the item you chose. With the
probe over the item, the box in the upper left,
above “Probe Type”, shows the percentage of element that
remains in the item.
c. Drag the green arrow on the graph to the right or left, until
the percentage in the box on the graph
matches the percentage of element in the object. Once you have
the arrow positioned correctly, enter,
in the “estimate” box, the time (t) shown, as the estimate for the
age of the rock or fossil. Click “Check
Estimate” to see if the estimate is correct.
d. Example:
1) Drag the probe over the dead tree to the right of the house.
Since this is a once-living thing, set the
Probe Type to Carbon-14.
2) Look at the probe reading: it shows that there is 97.4% of
carbon-14 remaining in the dead tree.
3) Drag the green arrows on the graph at the top of the screen to
the right or left until the top line tells
you that the carbon-14 percentage is 97.4%, matching the
reading from the probe.
4) When the graph reads 97.4%, it shows that time (t) equals
220 years.
5) Type “220” into the box that says “Estimate age of dead
tree:” and click “Check Estimate”.
6) You should see a green smiley face, indicating that you have
correctly figured out the age of the
dead tree, 220 years. This means that the tree died 220 years
ago.
e. Repeat step c. for all the other items on the Dating Game
screen. Fill in data table 3.
Hints:
*When entering the estimate for the age of an object, you
MUST enter it in YEARS. So, if t = 134.8 MY
on the graph, which means 134.8 million years, you would enter
the number 134,800,000 as the
estimate in years for the age of a rock. The estimates entered
do not have to be EXACT to be
registered as correct, but must be CLOSE. For example, if
134,000,000 were entered for the above
example, the simulation would return a green smiley face for a
correct estimate.
**For the last four items on the list, neither carbon-14 nor
uranium-238 will work to determine the item’s
age. Select “Custom”, and pick a half-life from the drop-down
menu that allows the probe to detect
something (i.e., a reading other than 0.0%). All of the “custom”
elements are, like carbon-14, elements
that would be found in living or once living things.
3. Answer the (1) question for Experiment 4 on the Questions
page.
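Matching the green arrow on the graph amounts to inverting the decay law: t = half-life × log2(100 / percent remaining). The dead-tree example above can be checked this way (a sketch, using the assignment's approximate 5700-year carbon-14 half-life):

```python
import math

def age_from_percent(percent_remaining, half_life):
    """Invert N(t) = N0 * (1/2)**(t / half_life) to estimate an age."""
    return half_life * math.log2(100 / percent_remaining)

age = age_from_percent(97.4, 5700)
print(round(age))  # about 217 years, consistent with the ~220-year estimate
```

The small difference from the on-screen value of 220 years comes from reading the graph by eye and from rounding the half-life; the simulation accepts estimates that are close but not exact.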
Name:
Data
Each box to be filled in with a value is worth 1 point.
Data Table 1

| Radioactive Element | Atoms in sample at Time = 0 | Prediction of # atoms that will decay in one half-life | # decayed at Time = Half Life, Trial #1 | Trial #2 | Trial #3 | Average # decayed at Time = Half Life |
| Carbon-14           | 100                         |                                                        |                                         |          |          |                                       |
| Uranium-238         | 100                         |                                                        |                                         |          |          |                                       |
Data Table 2

| Radioactive Element | % original remaining after 1 half-life | after 2 half-lives | after 3 half-lives | % decay product after 1 half-life | after 2 half-lives | after 3 half-lives |
| Carbon-14           |                                        |                    |                    |                                   |                    |                    |
| Uranium-238         |                                        |                    |                    |                                   |                    |                    |
Data Table 3

| Item                | Age | Element Used |
| Animal Skull        |     |              |
| House               |     |              |
| Living Tree         |     |              |
| Dead Tree           |     |              |
| Bone                |     |              |
| Wooden Cup          |     |              |
| Large Human Skull   |     |              |
| Fish Bones          |     |              |
| Rock 1              |     |              |
| Rock 3              |     |              |
| Rock 5              |     |              |
| Fish Fossil*        |     |              |
| Dinosaur Skull*     |     |              |
| Trilobite*          |     |              |
| Small Human Skull*  |     |              |
Questions
For multiple choice questions, choose the letter of the best
answer choice from the list below the question.
For short answer questions, type your answer in the space
provided below the question.
Note that there are TWO PAGES OF QUESTIONS below. Each multiple
choice question is worth 3 points and each short answer question
is worth 5 points.
Experiment 1
1. In 1 or 2 sentences, describe, in the space below, what you
observed happening to the atoms in the decay area
when the simulation is in “play” mode.
2. In this simulation (and in nature), carbon-14 radioactively
decays to nitrogen-14. What type of radioactive decay is
carbon-14 undergoing?
a. alpha decay b. beta decay c. gamma decay
3. In data table 1, compare your calculations for the average
number of atoms that decayed to your predictions for
how many atoms would decay. How did your predictions
compare to the averages?
a. Close b. Exact c. Nowhere Close
4. Which of the following statements best describes the decay of
a sample of radioactive atoms?
a. A statistical process, where the sample as a whole decays
predictably, but individual atoms in the sample
decay like popcorn kernels popping.
b. An exact process where the time of decay of each atom in the
sample can be predicted.
c. A completely random process that is in no way predictable.
5. Based on your observations, the data you collected, and the
comparison of the calculated averages to
predictions in experiment 1, write in the space below, IN YOUR
OWN WORDS, a definition of “half-life”. Phrase
the definition in a way that you would explain the concept to
one of your classmates.
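The “popcorn kernel” behavior asked about in question 4 can be sketched numerically: each atom decays at an unpredictable moment, yet the sample as a whole halves predictably. (An illustrative sketch assuming each atom independently has a 1/2 probability of decaying within one half-life.)

```python
import random

random.seed(0)  # fixed seed so the run is repeatable

def decayed_in_one_half_life(n_atoms):
    # Each atom independently has a 1/2 chance of decaying in one half-life.
    return sum(random.random() < 0.5 for _ in range(n_atoms))

trials = [decayed_in_one_half_life(100) for _ in range(3)]
print(trials)           # individual trials scatter around 50
print(sum(trials) / 3)  # the average lands close to the predicted 50
```

This mirrors the three trials recorded in Data Table 1: no single trial gives exactly 50 decays, but the statistical behavior of the whole sample is predictable.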
Experiment 2
1. Suppose you found a rock. Using radiometric dating, you
determined that the rock contained the
same percentage of lead-206 and uranium-238. How old would
you conclude the rock to be?
a. 2.25 billion years old b. 4.5 billion years old c. 9 billion
years old
2. Using the data in data table 2, if the original sample
contained 24 grams of carbon-14, how many
grams of carbon-14 would be left, and how many grams of
nitrogen-14 would be present, after 2
half-lives?
a. 12; 12 b. 6; 18 c. 2; 22
Experiment 3
1. When half of the carbon-14 atoms in the tree have decayed
into nitrogen-14, it has been approximately 5700
years since the tree _____.
a. was planted b. died
2. The probe measures that the percentage of carbon-14 in the
tree is 100% while the tree is alive, and then
measures the percentage of carbon-14 decreasing with time after
the tree dies. This means that radiometric
dating using carbon-14 is applicable to:
a. living objects. b. once-living objects. c. both living and
once-living objects.
3. When half of the Uranium-238 in the rock has decayed,
______ years have passed since the volcano erupted.
a. 4.5 billion years b. 5700 years
4. In step c of experiment 3, the probe was used to measure
uranium-238 in the tree. In step f of experiment 3, the
probe was used to measure carbon-14 in the rock. In both steps
(c and f), the probe reading was _____.
a. 100%, then decreasing with time. b. zero (0). c. decreasing
with time.
5. Uranium-238 must be used to measure the age of the rock,
and not carbon-14, because:
a. The isotope carbon-14 did not exist when the rock was
created.
b. rocks do not contain carbon-14, and even if they did, the
carbon-14 would have likely decayed away
prior to measuring the rock’s age.
c. it is easier to measure uranium in rocks than it is carbon.
6. Based on your observations for experiment 3, and the answers
to your questions above, _______ should be
used to determine the ages of once-living things, and _______
should be used to determine the ages of rocks.
a. carbon-14; uranium-238 b. uranium-238; carbon-14 c.
either can be used for both objects.
Experiment 4
1. Briefly explain why neither carbon-14 nor uranium-238 could
be used to determine the ages of the last 4 items in
data table 3.
EDS 1021 Week 6 Interactive Assignment
Radiometric Dating
Objective: Using a simulated practical application, explore the
concepts of radioactive decay and the half-life of
radioactive elements, and then apply the concept of radiometric
dating to estimate the age of various objects.
Background: Review the topics Half-Life, Radiometric Dating,
and Decay Chains in Chapter 12 of The Sciences.
Instructions:
1. PRINT a hard copy of this entire document, so that the
experiment instructions may be easily referred to,
and the data tables and questions (on the last three pages) can
be completed as a rough draft.
2. Download the Radiometric Dating Game Answer Sheet from
the course website. Transfer your data
values and question answers from the completed rough draft to
the answer sheet. Be sure to put your
NAME on the answer sheet where indicated. Save your
completed answer sheet on your computer.
3. SUBMIT ONLY the completed answer sheet, by uploading
your file to the digital drop box for the
assignment.
Introduction to the Simulation
1. After reviewing the background information for this
assignment, go to the website for the interactive
simulation “Radioactive Dating Game” at
http://phet.colorado.edu/en/simulation/radioactive-dating-game.
Click on DOWNLOAD to run the simulation locally on your
computer.
2. Software Requirements: You must have the latest version of
Java software (free) loaded on your computer
to run the simulation. If you do not or are not sure if you have
the latest version, go to
http://www.java.com/en/download/index.jsp .
3. Explore and experiment on the 4 different “tabs” (areas) of
the simulation. While playing around, think about
how the concepts of radioactive decay are being illustrated in
the simulation.
Half Life Tab – observe a sample of radioactive atoms decaying
- Carbon-14, Uranium-238, or ? (a custom-
made radioactive atom). Clicking on the “add 10” button adds
10 atoms at a time to the “decay area”. There
are a total of 100 atoms in the bucket, so clicking the “add 10”
button 10 times will empty the bucket into the
decay area. Observe the pie chart and time graph as atoms
decay. You can PAUSE, STEP (buttons at the
bottom of the screen) the simulation in time as atoms are
decaying, and RESET the simulation.
Decay Rates Tab – Similar to the half-life tab, but different!
Atom choices are carbon-14 and uranium-238.
The bucket has a total of 1000 atoms. Drag the slide bar on the
bucket to the right to increase the number
of atoms added to the decay area. Observe the pie chart and
time graph as atoms decay. Note that the
graph for the Decay Rates tab provides different information
than the graph for the Half Life tab. You can
PAUSE, STEP (buttons at the bottom of the screen) the
simulation in time as atoms are decaying, and
RESET the simulation.
Measurement Tab – Use a probe to virtually measure
radioactive decay within an object - a tree or a
volcanic rock. The probe can be set to detect either the decay of
carbon-14 atoms, or the decay of uranium-
238 atoms. Follow prompts on the screen to run a simulation of
a tree growing and dying, or of a volcano
erupting and creating a rock, and then measuring the decay of
atoms within each object.
Dating Game Tab – Use a probe to virtually measure the
percentage of radioactive atoms remaining within
various objects and, knowing the half-life of radioactive
elements being detected, estimate the ages of
objects. The probe can be set to either detect carbon-14,
uranium-238, or other “mystery” elements, as
appropriate for determining the age of the object. Drag the
probe over an object, select which element to
measure, and then slide the arrow on the graph to match the
percentage of atoms measured by the probe.
The time (t) shown for the matching percentage can then be
entered as the estimate in years of the object’s
age.
After playing around with the simulation, conduct the following
four (4) short experiments. As you conduct
the experiments and collect data, fill in the data tables and
answer the questions on the last three pages of
this document.
http://phet.colorado.edu/en/simulation/radioactive-dating-game
http://www.java.com/en/download/index.jsp
http://www.java.com/en/download/index.jsp
Experiment 1: Half Life
1. Click on the Half Life tab at the top of the simulation screen.
2. Procedure:
Part I - Carbon-14
a. Click the blue “Pause” button at the bottom of the screen
(i.e., set it so that it shows the “play” arrow).
Click the “Add 10” button below the “Bucket o’ Atoms”
repeatedly, until there are no more atoms left in
the bucket. There are now 100 carbon-14 atoms in the decay
area.
b. The half-life of carbon-14 is about 5700 years. Based on the
definition of half-life, if you left these 100
carbon-14 atoms to sit around for 5700 years, what is your
prediction of how many carbon-14 atoms will
decay into the stable element nitrogen-14 during that time?
Write your prediction in the “prediction”
column for the row labeled “carbon-14”, in data table 1.
c. Click the blue “Play” arrow at the bottom of the screen. As
the simulation runs, carefully observe what
is happening to the carbon-14 atoms in the decay area, and the
graphs at the top of the screen (both
the pie chart and the time graph). Once all atoms have decayed
into the stable isotope nitrogen-14,
click the blue “Pause” button at the bottom of the screen (i.e.,
set it so that it shows the “play” arrow),
and “Reset All Nuclei” button in the decay area.
d. Repeat step c. until you have a good idea of what is going on
in this simulation.
e. Repeat step c. again, but this time, watch the graph at the top
of the window carefully, and click
“pause” when TIME reaches 5700 years (when the carbon-14
atom moving across the graph reaches
the red dashed line labeled HALF LIFE on the TIME graph).
f. If you do NOT pause the simulation on or very close to the
red dashed line, click the “Reset All Nuclei”
button and repeat step e.
g. Once you have paused the simulation in the correct spot, look
at the pie graph and determine the
number of nuclei that have decayed into nitrogen-14 at Time =
HALF LIFE. Write this number in data
table 1, in the row labeled “carbon-14”, under “trial 1”.
h. Click the “Reset All Nuclei” button in the decay area.
i. Repeat steps e through h for two more trials. For each trial,
write down in data table 1, the number of
nuclei that have decayed into nitrogen-14 at Time = HALF
LIFE, in the row labeled “carbon-14”, under
“trial 2” and “trial 3” respectively.
Part II – Uranium-238
a. Click “reset all” on the right side of the screen in the Decay
Isotope box, and click “yes” in the box that
pops up. Click on the radio button for uranium-238 in the Decay
Isotope box. Click the blue “Pause”
button at the bottom of the screen (i.e., set it so that it shows
the “play” arrow). Click the “Add 10”
button below the “Bucket o’ Atoms” repeatedly, until there are
no more atoms left in the bucket. There
are now 100 uranium-238 atoms in the decay area.
b. The half-life of uranium-238 is 4.5 billion years!* Based on
the definition of half-life, if you left these 100
uranium-238 atoms to sit around for 4.5 billion years, what is
your prediction of how many uranium
atoms will decay into lead-206 during that time? Write your
prediction in the “prediction” column for the
second row, labeled “uranium-238”, in data table 1.
c. Click the “Play” button at the bottom of the window. Watch
the graph at the top of the window
carefully, and click “pause” when the time reaches 4.5 billion
years (when the uranium-238 atom
moving across the graph reaches the red dashed line labeled
HALF LIFE on the time graph).
d. If you don’t pause the simulation on or very close to the red
line, click the “Reset All Nuclei” button and
repeat step c.
e. Once you have paused the simulation in the correct spot, look
at the pie graph and determine the
number of nuclei that have decayed into lead-206 at Time =
HALF LIFE. Write this number in data table
1, in the row labeled “uranium-238”, under “trial 1”.
f. Click the “Reset All Nuclei” button in the decay area.
g. Repeat steps c through e for two more trials. For each trial,
write down in data table 1, the number of
nuclei that have decayed into lead-206 at Time = HALF LIFE,
in the row labeled “uranium-238”, under
“trial 2” and “trial 3” respectively.
3. Calculate the average number of atoms that decayed for all
three trials of carbon-14 decay. Do the same for
all three trials uranium-238 decay. Write the average values for
each element under “averages”, the last
column in data table 1.
4. Answer the five (5) questions for Experiment 1 on the
Questions page.
* Unlike carbon-14, which undergoes only one radioactive
decay to reach the stable nitrogen-14, uranium-238 undergoes
MANY decays into many intermediate unstable elements before
finally getting to the stable element lead-206 (see the decay
chain for uranium-238 in chapter 12 for details).
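The trial-to-trial scatter you record in data table 1 comes from the fact that each atom decays independently and at random, like popcorn kernels popping. A minimal sketch of that idea in Python (this code is our own illustration, not part of the PhET simulation; the function name is made up):

```python
import random

def simulate_one_half_life(n_atoms=100, trials=3, seed=0):
    """Simulate how many of n_atoms decay during one half-life.

    Over one half-life each atom independently has a 1/2 chance of
    decaying, which is why individual trials scatter around
    n_atoms / 2 instead of hitting it exactly.
    """
    rng = random.Random(seed)
    results = []
    for _ in range(trials):
        decayed = sum(1 for _ in range(n_atoms) if rng.random() < 0.5)
        results.append(decayed)
    return results

trials = simulate_one_half_life()
average = sum(trials) / len(trials)
print(trials, average)  # each trial lands near, but rarely exactly at, 50
```

Averaging several trials, as step 3 asks you to do, smooths out this randomness.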
Experiment 2: Decay Rates
1. Set Up: Click on the Decay Rates tab at the top of the
simulation screen.
2. Procedure
Part I – Carbon-14
a. In the Choose Isotope area on the right side of the screen,
click the button next to carbon-14. Recall
that carbon-14 has a half-life of about 5700 years.
b. Drag the slide bar on the bucket of atoms all the way to the
right. This will put 1,000 radioactive atoms
into the decay area. When you let go of the slide bar, the
simulation will start right away. If you missed
seeing the simulation run from the start, start it over again by
clicking “Reset All Nuclei”, and keep your
eyes on the screen. Watch the graph at the bottom of the screen
as you allow the simulation to run until
all atoms have decayed.
c. Carefully interpret the graph to obtain the data requested to
fill in data table 2 for “carbon-14”.
Part II – Uranium-238
a. In the Choose Isotope area on the right side of the screen,
click the button next to uranium-238.
Recall that the half-life of uranium-238 is about 4.5 billion
years.
b. Repeat step b. of part I for uranium-238.
c. Carefully interpret the graph to obtain the data requested to
fill in data table 2 for “uranium-238”.
3. Answer the two (2) questions for Experiment 2 on the
Questions page.
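The decay-rate graph you interpret in step c follows the half-life relation N/N0 = (1/2)^n after n half-lives. A quick sketch of the percentages this relation predicts (our own code, not part of the simulation):

```python
def percent_remaining(n_half_lives):
    """Percent of the original radioactive element left after a given
    number of half-lives: N/N0 = (1/2) ** n."""
    return 100.0 * 0.5 ** n_half_lives

for n in (1, 2, 3):
    remaining = percent_remaining(n)
    print(f"{n} half-life(s): {remaining}% parent, {100.0 - remaining}% decay product")
# 1 half-life(s): 50.0% parent, 50.0% decay product
# 2 half-life(s): 25.0% parent, 75.0% decay product
# 3 half-life(s): 12.5% parent, 87.5% decay product
```

Whatever the parent element loses, the decay product gains, so the two percentages in each row of data table 2 should always sum to 100%.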
Experiment 3: Measurement
1. Set Up: Click on the Measurement tab at the top of the
simulation screen.
2. Procedure
a. In Choose an Object, click on the button for Tree. In the
Probe Type box, click on the buttons for
“carbon-14”, and “Objects”.
b. Click “Plant Tree” at the bottom of the screen. Observe that
the tree grows and lives for about 1200
years, then dies and begins to decay. As the simulation runs,
observe the probe reading (upper left box)
and graph (upper right box) at the top of the screen. The graph
is plotting the probe’s measurement of
the percentage of carbon-14 remaining in the tree over time.
c. On the right side of the screen, in the “Choose an Object”
box, click the reset button. Set the probe to
measure uranium-238 instead of carbon-14 (in the Probe Type
box, click on the button for “uranium-
238”). Click “Plant Tree” and again observe the probe reading
and graph. This time, the graph will plot
the probe’s measurement of uranium-238 in the tree.
d. On the right side of the screen, in the “Choose an Object”
box, click the reset button and click the
button for Rock. Keep the probe type set on “uranium-238”.
e. Click “Erupt Volcano” and observe the volcano creating and
ejecting HOT igneous rocks. As the
simulation runs, observe the probe reading (upper left box) and
graph (upper right box) at the top of the
screen. The graph is plotting the probe’s measurement of
uranium-238 in the rock over time.
f. On the right side of the screen, in the “Choose an Object”
box, click the reset button. Set the probe to
measure carbon-14 instead of uranium-238 (in the Probe Type
box, click on the button for “carbon-14”).
Click “erupt volcano” and again observe the probe reading and
graph. This time, the graph will plot the
probe’s measurement of carbon-14 in the rock.
3. Answer the six (6) questions for Experiment 3 on the
Questions page. Repeat the Measurement
experiments as necessary to answer the questions.
Experiment 4: Dating Game
1. Set Up: Click on the Dating Game tab at the top of the
simulation screen. Verify that the “objects” button is
clicked below the Probe Type box.
2. Procedure
a. Select an item either at or below the Earth’s surface to
determine the age of. Set the Probe Type to
either carbon-14 or uranium-238, as appropriate for the item
you are measuring, based on your findings
in experiment 3.
b. Drag the probe directly over the item you chose. With the
probe over the item, the box in the upper left,
above “Probe Type”, shows the percentage of element that
remains in the item.
c. Drag the green arrow on the graph to the right or left, until
the percentage in the box on the graph
matches the percentage of element in the object. Once you have
the arrow positioned correctly, enter,
in the “estimate” box, the time (t) shown, as the estimate for the
age of the rock or fossil. Click “Check
Estimate” to see if the estimate is correct.
d. Example:
1) Drag the probe over the dead tree to the right of the house.
Since this is a once-living thing, set the
Probe Type to Carbon-14.
2) Look at the probe reading: it shows that there is 97.4% of
carbon-14 remaining in the dead tree.
3) Drag the green arrows on the graph at the top of the screen to
the right or left until the top line tells
you that the carbon-14 percentage is 97.4%, matching the
reading from the probe.
4) When the graph reads 97.4%, it shows that time (t) equals
220 years.
5) Type “220” into the box that says “Estimate age of dead
tree:” and click “Check Estimate”.
6) You should see a green smiley face, indicating that you have
correctly figured out the age of the
dead tree, 220 years. This means that the tree died 220 years
ago.
e. Repeat steps b and c for each of the other items on the Dating
Game screen. Fill in data table 3.
Hints:
*When entering the estimate for the age of an object, you
MUST enter it in YEARS. So, if t = 134.8 MY
on the graph, which means 134.8 million years, you would enter
the number 134,800,000 as the
estimate in years for the age of a rock. The estimates entered
do not have to be EXACT to be
registered as correct, but must be CLOSE. For example, if
134,000,000 were entered for the above
example, the simulation would return a green smiley face for a
correct estimate.
**For the last four items on the list, neither carbon-14 nor
uranium-238 will work to determine the item’s
age. Select “Custom”, and pick a half-life from the drop-down
menu that allows the probe to detect
something (i.e., a reading other than 0.0%). All of the “custom”
elements are, like carbon-14, elements
that would be found in living or once living things.
3. Answer the (1) question for Experiment 4 on the Questions
page.
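The Dating Game estimate can be checked by inverting the decay law: solving N/N0 = (1/2)^(t/T) for t gives t = T × log2(N0/N). A small sketch (our own code; the simulation does the same thing graphically with the green arrow):

```python
import math

def estimate_age(percent_left, half_life_years):
    """Invert the decay law N/N0 = (1/2) ** (t / T) to solve for t:
    t = T * log2(100 / percent_left)."""
    return half_life_years * math.log2(100.0 / percent_left)

# The worked example above: 97.4% carbon-14 left, half-life ~5700 years.
age = estimate_age(97.4, 5700)
print(round(age))  # roughly 217 years, consistent with the ~220-year estimate
```

This is also why the simulation accepts estimates that are close but not exact: small differences in reading the percentage translate into small differences in t.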
Name:
Data
Each box to be filled in with a value is worth 1 point.
Data Table 1

Radioactive Element | Number of atoms in the sample at Time = 0 | Prediction of # atoms that will decay in one half-life | Atoms decayed at Time = Half Life, Trial #1 | Trial #2 | Trial #3 | Average atoms decayed at Time = Half Life
Carbon-14 | 100 | | | | |
Uranium-238 | 100 | | | | |
Data Table 2

Radioactive Element | % of original element remaining after 1 half-life | 2 half-lives | 3 half-lives | % of decay product present after 1 half-life | 2 half-lives | 3 half-lives
Carbon-14 | | | | | |
Uranium-238 | | | | | |
Data Table 3

Item | Age | Element Used
Animal Skull | |
House | |
Living Tree | |
Dead Tree | |
Bone | |
Wooden Cup | |
Large Human Skull | |
Fish Bones | |
Rock 1 | |
Rock 3 | |
Rock 5 | |
Fish Fossil* | |
Dinosaur Skull* | |
Trilobite* | |
Small Human Skull* | |
Questions
For multiple choice questions, choose the letter of the best
answer choice from the list below the question.
For short answer questions, type your answer in the space
provided below the question.
Note that there are TWO PAGES OF QUESTIONS below. Each
multiple choice question is worth 3 points and each
short answer question is worth 5 points.
Experiment 1
1. In 1 or 2 sentences, describe, in the space below, what you
observed happening to the atoms in the decay area
when the simulation is in “play” mode.
2. In this simulation (and in nature), carbon-14 radioactively
decays to nitrogen-14. What type of radioactive
decay is carbon-14 undergoing?
a. alpha decay b. beta decay c. gamma decay
3. In data table 1, compare your calculations for the average
number of atoms that decayed to your predictions for
how many atoms would decay. How did your predictions
compare to the averages?
a. Close b. Exact c. Nowhere Close
4. Which of the following statements best describes the decay of
a sample of radioactive atoms?
a. A statistical process, where the sample as a whole decays
predictably, but individual atoms in the sample
decay like popcorn kernels popping.
b. An exact process where the time of decay of each atom in the
sample can be predicted.
c. A completely random process that is in no way predictable.
5. Based on your observations, the data you collected, and the
comparison of the calculated averages to
predictions in experiment 1, write in the space below, IN YOUR
OWN WORDS, a definition of “half-life”. Phrase
the definition in a way that you would explain the concept to
one of your classmates.
Experiment 2
1. Suppose you found a rock. Using radiometric dating, you
determined that the rock contained the
same percentage of lead-206 and uranium-238. How old would
you conclude the rock to be?
a. 2.25 billion years old b. 4.5 billion years old c. 9 billion
years old
2. Using the data in data table 2, if the original sample
contained 24 grams of carbon-14, how many
grams of carbon-14 would be left, and how many grams of
nitrogen-14 would be present, after 2
half-lives?
a. 12; 12 b. 6; 18 c. 2; 22
Experiment 3
1. When half of the carbon-14 atoms in the tree have decayed
into nitrogen-14, it has been approximately 5700
years since the tree _____.
a. was planted b. died
2. The probe measures that the percentage of carbon-14 in the
tree is 100% while the tree is alive, and then
measures the percentage of carbon-14 decreasing with time after
the tree dies. This means that radiometric
dating using carbon-14 is applicable to:
a. living objects b. once-living objects c. both living and
once-living objects.
3. When half of the Uranium-238 in the rock has decayed,
______ years have passed since the volcano erupted.
a. 4.5 billion years b. 5700 years
4. In step c of experiment 3, the probe was used to measure
uranium-238 in the tree. In step f of experiment 3, the
probe was used to measure carbon-14 in the rock. In both steps
(c and f), the probe reading was _____.
a. 100%, then decreasing with time. b. zero (0) . c. decreasing
with time.
5. Uranium-238 must be used to measure the age of the rock,
and not carbon-14, because:
a. The isotope carbon-14 did not exist when the rock was
created.
b. rocks do not contain carbon-14, and even if they did, the
carbon-14 would have likely decayed away
prior to measuring the rock’s age.
c. it is easier to measure uranium in rocks than it is carbon.
6. Based on your observations for experiment 3, and the answers
to your questions above, _______ should be
used to determine the ages of once-living things, and _______
should be used to determine the ages of rocks.
a. carbon-14; uranium-238 b. uranium-238; carbon-14 c.
either can be used for both objects.
Experiment 4
1. Briefly explain why neither carbon-14 nor uranium-238 could
be used to determine the ages of the last 4 items in
data table 3.
Innovations
Perspective
Is AI the end of jobs or a new beginning?
By Vivek Wadhwa, May 31, 2017, Washington Post
A visitor plays chess against a robot during a COMPUTEX
event in Taiwan on May 30. This year’s COMPUTEX focuses
on artificial intelligence and robotics as well as virtual reality
and the Internet of things. (Ritchie B. Tongo/European
Pressphoto Agency)
Artificial Intelligence (AI) is advancing so rapidly that even its
developers are being caught off guard. Google co-founder
Sergey Brin said in Davos, Switzerland, in January that it
“touches every single one of our main projects, ranging from
search to photos to ads … everything we do … it definitely
surprised me, even though I was sitting right there.”
The long-promised AI, the stuff we’ve seen in science fiction, is
coming and we need to be prepared. Today, AI is powering
voice assistants such as Google Home, Amazon Alexa and
Apple Siri, allowing them to have increasingly natural
conversations with us and manage our lights, order food and
schedule meetings. Businesses are infusing AI into their
products to analyze the vast amounts of data and improve
decision-making. In a decade or two, we will have robotic
assistants that remind us of Rosie from “The Jetsons” and R2-
D2 of “Star Wars.”
This has profound implications for how we live and work, for
better and worse. AI is going to become our guide and
companion — and take millions of jobs away from people. We
can deny this is happening, be angry or simply ignore it. But if
we do, we will be the losers. As I discussed in my new book,
“Driver in the Driverless Car,” technology is now advancing on
an exponential curve and making science fiction a reality. We
can’t stop it. All we can do is to understand it and use it to
better ourselves — and humanity.
Rosie and R2-D2 may be on their way but AI is still very
limited in its capability, and will be for a long time. The voice
assistants are examples of what technologists call narrow AI:
systems that are useful, can interact with humans and bear some
of the hallmarks of intelligence — but would never be mistaken
for a human. They can, however, do a better job on a very
specific range of tasks than humans can. I couldn’t, for
example, recall the winning and losing pitcher in every baseball
game of the major leagues from the previous night.
Narrow-AI systems are much better than humans at accessing
information stored in complex databases, but their capabilities
exclude creative thought. If you asked Siri to find the perfect
gift for your mother for Valentine’s Day, she might make a
snarky comment but couldn’t venture an educated guess. If you
asked her to write your term paper on the Napoleonic Wars, she
couldn’t help. That is where the human element comes in and
where the opportunities are for us to benefit from AI — and
stay employed.
In his book “Deep Thinking: Where Machine Intelligence Ends
and Human Creativity Begins,” chess grandmaster Garry
Kasparov tells of his shock and anger at being defeated by
IBM’s Deep Blue supercomputer in 1997. He acknowledges that
he is a sore loser but was clearly traumatized by having a
machine outsmart him. He was aware of the evolution of the
technology but never believed it would beat him at his own
game. After coming to grips with his defeat, 20 years later, he
says fail-safes are required … but so is courage.
Kasparov wrote: “When I sat across from Deep Blue twenty
years ago I sensed something new, something unsettling.
Perhaps you will experience a similar feeling the first time you
ride in a driverless car, or the first time your new computer boss
issues an order at work. We must face these fears in order to get
the most out of our technology and to get the most out of
ourselves. Intelligent machines will continue that process,
taking over the more menial aspects of cognition and elevating
our mental lives toward creativity, curiosity, beauty, and joy.
These are what truly make us human, not any particular activity
or skill like swinging a hammer — or even playing chess.”
In other words, we better get used to it and ride the wave.
Human superiority over animals is based on our ability to create
and use tools. The mental capacity to make things that improved
our chances of survival led to a natural selection of better
toolmakers and tool users. Nearly everything a human does
involves technology. For adding numbers, we used abacuses and
mechanical calculators and now spreadsheets. To improve our
memory, we wrote on stones, parchment and paper, and now
have disk drives and cloud storage.
AI is the next step in improving our cognitive functions and
decision-making.
Think about it: When was the last time you tried memorizing
your calendar or Rolodex or used a printed map? Just as we
instinctively do everything on our smartphones, we will rely on
AI. We may have forfeited skills such as the ability to add up
the price of our groceries but we are smarter and more
productive. With the help of Google and Wikipedia, we can be
experts on any topic, and these don’t make us any dumber than
encyclopedias, phone books and librarians did.
A valid concern is that dependence on AI may cause us to
forfeit human creativity. As Kasparov observes, the chess games
on our smartphones are many times more powerful than the
supercomputers that defeated him, yet this didn’t cause human
chess players to become less capable — the opposite happened.
There are now stronger chess players all over the world, and the
game is played in a better way.
As Kasparov explains: “It used to be that young players might
acquire the style of their early coaches. If you worked with a
coach who preferred sharp openings and speculative attacking
play himself, it would influence his pupils to play similarly. …
What happens when the early influential coach is a computer?
The machine doesn’t care about style or patterns or hundreds of
years of established theory. It counts up the values of the chess
pieces, analyzes a few billion moves, and counts them up again.
It is entirely free of prejudice and doctrine. … The heavy use of
computers for practice and analysis has contributed to the
development of a generation of players who are almost as free
of dogma as the machines with which they train.”
Perhaps this is the greatest benefit that AI will bring —
humanity can be free of dogma and historical bias; it can do
more intelligent decision-making. And instead of doing
repetitive data analysis and number crunching, human workers
can focus on enhancing their knowledge and being more
creative.
https://www.technologyreview.com/s/607970/experts-predict-
when-artificial-intelligence-will-exceed-human-performance/
Intelligent Machines
Experts Predict When Artificial Intelligence Will Exceed
Human Performance
Trucking will be computerized long before surgery, computer
scientists say.
· by Emerging Technology from the arXiv
· May 31, 2017
Artificial intelligence is changing the world and doing it at
breakneck speed. The promise is that intelligent machines will
be able to do every task better and more cheaply than humans.
Rightly or wrongly, one industry after another is falling under
its spell, even though few have benefited significantly so far.
And that raises an interesting question: when will artificial
intelligence exceed human performance? More specifically,
when will a machine do your job better than you?
Today, we have an answer of sorts thanks to the work of Katja
Grace at the Future of Humanity Institute at the University of
Oxford and a few pals. To find out, these guys asked the
experts. They surveyed the world’s leading researchers in
artificial intelligence by asking them when they think intelligent
machines will better humans in a wide range of tasks. And many
of the answers are something of a surprise.
The experts that Grace and co co-opted were academics and
industry experts who gave papers at the International
Conference on Machine Learning in July 2015 and the Neural
Information Processing Systems conference in December 2015.
These are two of the most important events for experts in
artificial intelligence, so it’s a good bet that many of the
world’s experts were on this list.
Grace and co asked them all—1,634 of them—to fill in a survey
about when artificial intelligence would be better and cheaper
than humans at a variety of tasks. Of these experts, 352
responded. Grace and co then calculated their median responses.
The experts predict that AI will outperform humans in the next
10 years in tasks such as translating languages (by 2024),
writing high school essays (by 2026), and driving trucks (by
2027).
But many other tasks will take much longer for machines to
master. AI won’t be better than humans at working in retail
until 2031, able to write a bestselling book until 2049, or
capable of working as a surgeon until 2053.
The experts are far from infallible. They predicted that AI
would be better than humans at Go by about 2027. (This was in
2015, remember.) In fact, Google’s DeepMind subsidiary has
already developed an artificial intelligence capable of beating
the best humans. That took two years rather than 12. It’s easy to
think that this gives the lie to these predictions.
The experts go on to predict a 50 percent chance that AI will be
better than humans at more or less everything in about 45 years.
That’s the kind of prediction that needs to be taken with a pinch
of salt. The 40-year prediction horizon should always raise
alarm bells. According to some energy experts, cost-effective
fusion energy is about 40 years away—but it always has been. It
was 40 years away when researchers first explored fusion more
than 50 years ago. But it has stayed a distant dream because the
challenges have turned out to be more significant than anyone
imagined.
Forty years is an important number when humans make
predictions because it is the length of most people’s working
lives. So any predicted change that is further away than that
means the change will happen beyond the working lifetime of
everyone who is working today. In other words, it cannot
happen with any technology that today’s experts have any
practical experience with. That suggests it is a number to be
treated with caution.
But teasing apart the numbers shows something interesting. This
45-year prediction is the median figure from all the experts.
Perhaps some subset of this group is more expert than the
others?
To find out if different groups made different predictions, Grace
and co looked at how the predictions changed with the age of
the researchers, the number of their citations (i.e., their
expertise), and their region of origin.
It turns out that age and expertise make no difference to the
prediction, but origin does. While North American researchers
expect AI to outperform humans at everything in 74 years,
researchers from Asia expect it in just 30 years.
That’s a big difference that is hard to explain. And it raises an
interesting question: what do Asian researchers know that North
Americans don’t (or vice versa)?
Ref: arxiv.org/abs/1705.08807 : When Will AI Exceed Human
Performance? Evidence from AI Experts
InstructionsDEFINITION a brief definition of the key term fo.docxInstructionsDEFINITION a brief definition of the key term fo.docx
InstructionsDEFINITION a brief definition of the key term fo.docx
maoanderton
 
InstructionsCreate a PowerPoint presentation of 15 slides (not c.docx
InstructionsCreate a PowerPoint presentation of 15 slides (not c.docxInstructionsCreate a PowerPoint presentation of 15 slides (not c.docx
InstructionsCreate a PowerPoint presentation of 15 slides (not c.docx
maoanderton
 
InstructionsCookie Creations (Continued)Part INatalie is.docx
InstructionsCookie Creations (Continued)Part INatalie is.docxInstructionsCookie Creations (Continued)Part INatalie is.docx
InstructionsCookie Creations (Continued)Part INatalie is.docx
maoanderton
 
InstructionsCommunities do not exist in a bubble. Often changes .docx
InstructionsCommunities do not exist in a bubble. Often changes .docxInstructionsCommunities do not exist in a bubble. Often changes .docx
InstructionsCommunities do not exist in a bubble. Often changes .docx
maoanderton
 
InstructionsChoose only ONE of the following options and wri.docx
InstructionsChoose only ONE of the following options and wri.docxInstructionsChoose only ONE of the following options and wri.docx
InstructionsChoose only ONE of the following options and wri.docx
maoanderton
 
InstructionsChoose only ONE of the following options and.docx
InstructionsChoose only ONE of the following options and.docxInstructionsChoose only ONE of the following options and.docx
InstructionsChoose only ONE of the following options and.docx
maoanderton
 
InstructionsBeginning in the 1770s, an Age of Revolution swep.docx
InstructionsBeginning in the 1770s, an Age of Revolution  swep.docxInstructionsBeginning in the 1770s, an Age of Revolution  swep.docx
InstructionsBeginning in the 1770s, an Age of Revolution swep.docx
maoanderton
 

More from maoanderton (20)

InstructionsFor this assignment, select one of the following.docx
InstructionsFor this assignment, select one of the following.docxInstructionsFor this assignment, select one of the following.docx
InstructionsFor this assignment, select one of the following.docx
 
InstructionsFor this assignment, analyze the space race..docx
InstructionsFor this assignment, analyze the space race..docxInstructionsFor this assignment, analyze the space race..docx
InstructionsFor this assignment, analyze the space race..docx
 
InstructionsFor the initial post, address one of the fol.docx
InstructionsFor the initial post, address one of the fol.docxInstructionsFor the initial post, address one of the fol.docx
InstructionsFor the initial post, address one of the fol.docx
 
InstructionsFollow paper format and Chicago Style to complete t.docx
InstructionsFollow paper format and Chicago Style to complete t.docxInstructionsFollow paper format and Chicago Style to complete t.docx
InstructionsFollow paper format and Chicago Style to complete t.docx
 
InstructionsFind a NEWS article that addresses a recent t.docx
InstructionsFind a NEWS article that addresses a recent t.docxInstructionsFind a NEWS article that addresses a recent t.docx
InstructionsFind a NEWS article that addresses a recent t.docx
 
InstructionsFind a NEWS article that addresses a current .docx
InstructionsFind a NEWS article that addresses a current .docxInstructionsFind a NEWS article that addresses a current .docx
InstructionsFind a NEWS article that addresses a current .docx
 
InstructionsFinancial challenges associated with changes.docx
InstructionsFinancial challenges associated with changes.docxInstructionsFinancial challenges associated with changes.docx
InstructionsFinancial challenges associated with changes.docx
 
InstructionsExplain the role of the U.S. Office of Personnel.docx
InstructionsExplain the role of the U.S. Office of Personnel.docxInstructionsExplain the role of the U.S. Office of Personnel.docx
InstructionsExplain the role of the U.S. Office of Personnel.docx
 
InstructionsEvaluate Personality TestsEvaluation Title.docx
InstructionsEvaluate Personality TestsEvaluation Title.docxInstructionsEvaluate Personality TestsEvaluation Title.docx
InstructionsEvaluate Personality TestsEvaluation Title.docx
 
InstructionsEach of your responses will be graded not only for .docx
InstructionsEach of your responses will be graded not only for .docxInstructionsEach of your responses will be graded not only for .docx
InstructionsEach of your responses will be graded not only for .docx
 
InstructionsEffective communication skills can prevent many si.docx
InstructionsEffective communication skills can prevent many si.docxInstructionsEffective communication skills can prevent many si.docx
InstructionsEffective communication skills can prevent many si.docx
 
InstructionsEcologyTo complete this assignment, complete the.docx
InstructionsEcologyTo complete this assignment, complete the.docxInstructionsEcologyTo complete this assignment, complete the.docx
InstructionsEcologyTo complete this assignment, complete the.docx
 
InstructionsDevelop an iconographic essay. Select a work fro.docx
InstructionsDevelop an iconographic essay. Select a work fro.docxInstructionsDevelop an iconographic essay. Select a work fro.docx
InstructionsDevelop an iconographic essay. Select a work fro.docx
 
InstructionsDEFINITION a brief definition of the key term fo.docx
InstructionsDEFINITION a brief definition of the key term fo.docxInstructionsDEFINITION a brief definition of the key term fo.docx
InstructionsDEFINITION a brief definition of the key term fo.docx
 
InstructionsCreate a PowerPoint presentation of 15 slides (not c.docx
InstructionsCreate a PowerPoint presentation of 15 slides (not c.docxInstructionsCreate a PowerPoint presentation of 15 slides (not c.docx
InstructionsCreate a PowerPoint presentation of 15 slides (not c.docx
 
InstructionsCookie Creations (Continued)Part INatalie is.docx
InstructionsCookie Creations (Continued)Part INatalie is.docxInstructionsCookie Creations (Continued)Part INatalie is.docx
InstructionsCookie Creations (Continued)Part INatalie is.docx
 
InstructionsCommunities do not exist in a bubble. Often changes .docx
InstructionsCommunities do not exist in a bubble. Often changes .docxInstructionsCommunities do not exist in a bubble. Often changes .docx
InstructionsCommunities do not exist in a bubble. Often changes .docx
 
InstructionsChoose only ONE of the following options and wri.docx
InstructionsChoose only ONE of the following options and wri.docxInstructionsChoose only ONE of the following options and wri.docx
InstructionsChoose only ONE of the following options and wri.docx
 
InstructionsChoose only ONE of the following options and.docx
InstructionsChoose only ONE of the following options and.docxInstructionsChoose only ONE of the following options and.docx
InstructionsChoose only ONE of the following options and.docx
 
InstructionsBeginning in the 1770s, an Age of Revolution swep.docx
InstructionsBeginning in the 1770s, an Age of Revolution  swep.docxInstructionsBeginning in the 1770s, an Age of Revolution  swep.docx
InstructionsBeginning in the 1770s, an Age of Revolution swep.docx
 

Recently uploaded

MARY JANE WILSON, A “BOA MÃE” .
MARY JANE WILSON, A “BOA MÃE”           .MARY JANE WILSON, A “BOA MÃE”           .
MARY JANE WILSON, A “BOA MÃE” .
Colégio Santa Teresinha
 
The basics of sentences session 5pptx.pptx
The basics of sentences session 5pptx.pptxThe basics of sentences session 5pptx.pptx
The basics of sentences session 5pptx.pptx
heathfieldcps1
 
Azure Interview Questions and Answers PDF By ScholarHat
Azure Interview Questions and Answers PDF By ScholarHatAzure Interview Questions and Answers PDF By ScholarHat
Azure Interview Questions and Answers PDF By ScholarHat
Scholarhat
 
Chapter 4 - Islamic Financial Institutions in Malaysia.pptx
Chapter 4 - Islamic Financial Institutions in Malaysia.pptxChapter 4 - Islamic Financial Institutions in Malaysia.pptx
Chapter 4 - Islamic Financial Institutions in Malaysia.pptx
Mohd Adib Abd Muin, Senior Lecturer at Universiti Utara Malaysia
 
Biological Screening of Herbal Drugs in detailed.
Biological Screening of Herbal Drugs in detailed.Biological Screening of Herbal Drugs in detailed.
Biological Screening of Herbal Drugs in detailed.
Ashokrao Mane college of Pharmacy Peth-Vadgaon
 
Film vocab for eal 3 students: Australia the movie
Film vocab for eal 3 students: Australia the movieFilm vocab for eal 3 students: Australia the movie
Film vocab for eal 3 students: Australia the movie
Nicholas Montgomery
 
South African Journal of Science: Writing with integrity workshop (2024)
South African Journal of Science: Writing with integrity workshop (2024)South African Journal of Science: Writing with integrity workshop (2024)
South African Journal of Science: Writing with integrity workshop (2024)
Academy of Science of South Africa
 
How to Build a Module in Odoo 17 Using the Scaffold Method
How to Build a Module in Odoo 17 Using the Scaffold MethodHow to Build a Module in Odoo 17 Using the Scaffold Method
How to Build a Module in Odoo 17 Using the Scaffold Method
Celine George
 
S1-Introduction-Biopesticides in ICM.pptx
S1-Introduction-Biopesticides in ICM.pptxS1-Introduction-Biopesticides in ICM.pptx
S1-Introduction-Biopesticides in ICM.pptx
tarandeep35
 
PCOS corelations and management through Ayurveda.
PCOS corelations and management through Ayurveda.PCOS corelations and management through Ayurveda.
PCOS corelations and management through Ayurveda.
Dr. Shivangi Singh Parihar
 
Top five deadliest dog breeds in America
Top five deadliest dog breeds in AmericaTop five deadliest dog breeds in America
Top five deadliest dog breeds in America
Bisnar Chase Personal Injury Attorneys
 
Aficamten in HCM (SEQUOIA HCM TRIAL 2024)
Aficamten in HCM (SEQUOIA HCM TRIAL 2024)Aficamten in HCM (SEQUOIA HCM TRIAL 2024)
Aficamten in HCM (SEQUOIA HCM TRIAL 2024)
Ashish Kohli
 
Advantages and Disadvantages of CMS from an SEO Perspective
Advantages and Disadvantages of CMS from an SEO PerspectiveAdvantages and Disadvantages of CMS from an SEO Perspective
Advantages and Disadvantages of CMS from an SEO Perspective
Krisztián Száraz
 
How to Add Chatter in the odoo 17 ERP Module
How to Add Chatter in the odoo 17 ERP ModuleHow to Add Chatter in the odoo 17 ERP Module
How to Add Chatter in the odoo 17 ERP Module
Celine George
 
RPMS TEMPLATE FOR SCHOOL YEAR 2023-2024 FOR TEACHER 1 TO TEACHER 3
RPMS TEMPLATE FOR SCHOOL YEAR 2023-2024 FOR TEACHER 1 TO TEACHER 3RPMS TEMPLATE FOR SCHOOL YEAR 2023-2024 FOR TEACHER 1 TO TEACHER 3
RPMS TEMPLATE FOR SCHOOL YEAR 2023-2024 FOR TEACHER 1 TO TEACHER 3
IreneSebastianRueco1
 
The History of Stoke Newington Street Names
The History of Stoke Newington Street NamesThe History of Stoke Newington Street Names
The History of Stoke Newington Street Names
History of Stoke Newington
 
writing about opinions about Australia the movie
writing about opinions about Australia the moviewriting about opinions about Australia the movie
writing about opinions about Australia the movie
Nicholas Montgomery
 
Thesis Statement for students diagnonsed withADHD.ppt
Thesis Statement for students diagnonsed withADHD.pptThesis Statement for students diagnonsed withADHD.ppt
Thesis Statement for students diagnonsed withADHD.ppt
EverAndrsGuerraGuerr
 
A Independência da América Espanhola LAPBOOK.pdf
A Independência da América Espanhola LAPBOOK.pdfA Independência da América Espanhola LAPBOOK.pdf
A Independência da América Espanhola LAPBOOK.pdf
Jean Carlos Nunes Paixão
 
PIMS Job Advertisement 2024.pdf Islamabad
PIMS Job Advertisement 2024.pdf IslamabadPIMS Job Advertisement 2024.pdf Islamabad
PIMS Job Advertisement 2024.pdf Islamabad
AyyanKhan40
 

Recently uploaded (20)

MARY JANE WILSON, A “BOA MÃE” .
MARY JANE WILSON, A “BOA MÃE”           .MARY JANE WILSON, A “BOA MÃE”           .
MARY JANE WILSON, A “BOA MÃE” .
 
The basics of sentences session 5pptx.pptx
The basics of sentences session 5pptx.pptxThe basics of sentences session 5pptx.pptx
The basics of sentences session 5pptx.pptx
 
Azure Interview Questions and Answers PDF By ScholarHat
Azure Interview Questions and Answers PDF By ScholarHatAzure Interview Questions and Answers PDF By ScholarHat
Azure Interview Questions and Answers PDF By ScholarHat
 
Chapter 4 - Islamic Financial Institutions in Malaysia.pptx
Chapter 4 - Islamic Financial Institutions in Malaysia.pptxChapter 4 - Islamic Financial Institutions in Malaysia.pptx
Chapter 4 - Islamic Financial Institutions in Malaysia.pptx
 
Biological Screening of Herbal Drugs in detailed.
Biological Screening of Herbal Drugs in detailed.Biological Screening of Herbal Drugs in detailed.
Biological Screening of Herbal Drugs in detailed.
 
Film vocab for eal 3 students: Australia the movie
Film vocab for eal 3 students: Australia the movieFilm vocab for eal 3 students: Australia the movie
Film vocab for eal 3 students: Australia the movie
 
South African Journal of Science: Writing with integrity workshop (2024)
South African Journal of Science: Writing with integrity workshop (2024)South African Journal of Science: Writing with integrity workshop (2024)
South African Journal of Science: Writing with integrity workshop (2024)
 
How to Build a Module in Odoo 17 Using the Scaffold Method
How to Build a Module in Odoo 17 Using the Scaffold MethodHow to Build a Module in Odoo 17 Using the Scaffold Method
How to Build a Module in Odoo 17 Using the Scaffold Method
 
S1-Introduction-Biopesticides in ICM.pptx
S1-Introduction-Biopesticides in ICM.pptxS1-Introduction-Biopesticides in ICM.pptx
S1-Introduction-Biopesticides in ICM.pptx
 
PCOS corelations and management through Ayurveda.
PCOS corelations and management through Ayurveda.PCOS corelations and management through Ayurveda.
PCOS corelations and management through Ayurveda.
 
Top five deadliest dog breeds in America
Top five deadliest dog breeds in AmericaTop five deadliest dog breeds in America
Top five deadliest dog breeds in America
 
Aficamten in HCM (SEQUOIA HCM TRIAL 2024)
Aficamten in HCM (SEQUOIA HCM TRIAL 2024)Aficamten in HCM (SEQUOIA HCM TRIAL 2024)
Aficamten in HCM (SEQUOIA HCM TRIAL 2024)
 
Advantages and Disadvantages of CMS from an SEO Perspective
Advantages and Disadvantages of CMS from an SEO PerspectiveAdvantages and Disadvantages of CMS from an SEO Perspective
Advantages and Disadvantages of CMS from an SEO Perspective
 
How to Add Chatter in the odoo 17 ERP Module
How to Add Chatter in the odoo 17 ERP ModuleHow to Add Chatter in the odoo 17 ERP Module
How to Add Chatter in the odoo 17 ERP Module
 
RPMS TEMPLATE FOR SCHOOL YEAR 2023-2024 FOR TEACHER 1 TO TEACHER 3
RPMS TEMPLATE FOR SCHOOL YEAR 2023-2024 FOR TEACHER 1 TO TEACHER 3RPMS TEMPLATE FOR SCHOOL YEAR 2023-2024 FOR TEACHER 1 TO TEACHER 3
RPMS TEMPLATE FOR SCHOOL YEAR 2023-2024 FOR TEACHER 1 TO TEACHER 3
 
The History of Stoke Newington Street Names
The History of Stoke Newington Street NamesThe History of Stoke Newington Street Names
The History of Stoke Newington Street Names
 
writing about opinions about Australia the movie
writing about opinions about Australia the moviewriting about opinions about Australia the movie
writing about opinions about Australia the movie
 
Thesis Statement for students diagnonsed withADHD.ppt
Thesis Statement for students diagnonsed withADHD.pptThesis Statement for students diagnonsed withADHD.ppt
Thesis Statement for students diagnonsed withADHD.ppt
 
A Independência da América Espanhola LAPBOOK.pdf
A Independência da América Espanhola LAPBOOK.pdfA Independência da América Espanhola LAPBOOK.pdf
A Independência da América Espanhola LAPBOOK.pdf
 
PIMS Job Advertisement 2024.pdf Islamabad
PIMS Job Advertisement 2024.pdf IslamabadPIMS Job Advertisement 2024.pdf Islamabad
PIMS Job Advertisement 2024.pdf Islamabad
 

is true for all kinds of search rankings, for example the way your social media newsfeed looks. In other words, algorithms perform a gate-keeping function.

EXAMPLE Hiring algorithms decide if you are invited for an interview.
• Algorithms, rather than managers, are increasingly taking part in the hiring (and firing) of employees. Deciding who gets a job and who does not is among the most powerful gate-keeping functions in society.
• Research shows that human managers display many different biases in hiring decisions, for example based on social class, race and gender. Clearly, human hiring systems are far from perfect.
• Nevertheless, we should not simply assume that algorithmic hiring can easily overcome human biases. Algorithms might work more accurately in some areas, but can also create new, sometimes unintended, problems depending on how they are programmed and what input data is used.

Ethical implications: Algorithms work as gatekeepers that influence how we perceive the world, often without us realizing it. They channel our attention, which implies tremendous power.
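The third bullet point above can be made concrete with a minimal sketch. All names and numbers here are invented for illustration; no real hiring system works this simply. The point is that a screening rule "learned" from past human decisions reproduces whatever bias those decisions contained:

```python
# Hypothetical historical data: (attended_school_A, was_hired).
# Past managers favoured candidates from school A regardless of skill.
past_decisions = [
    (True, True), (True, True), (True, False),
    (False, False), (False, False), (False, True),
]

def hire_rate(school_a: bool) -> float:
    """Fraction of past candidates with this attribute who were hired."""
    group = [hired for attended, hired in past_decisions if attended == school_a]
    return sum(group) / len(group)

def screen(attended_school_a: bool) -> bool:
    """'Learned' rule: invite whoever the historical hire rate favours."""
    return hire_rate(attended_school_a) >= 0.5

# The old human bias becomes the new automated rule.
print(screen(True), screen(False))
```

Nothing in the input data says school A candidates are better, yet the rule systematically invites them and rejects the others, because the only signal it was given was the biased decisions themselves.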
The term ‘algorithm’ refers to any computer code that carries out a set of instructions. Algorithms are essential to the way computers process data. Theoretically speaking, they are encoded procedures that transform data based on specific calculations. They consist of a series of steps undertaken to solve a particular problem, like in a recipe: an algorithm takes inputs (ingredients), breaks a task into its constituent parts, undertakes those tasks one by one, and then produces an output (e.g. a cake). A simple example of an algorithm is "find the largest number in this series of numbers".

THEY MAKE SUBJECTIVE DECISIONS
Some algorithms deal with questions that do not have a clear ‘yes or no’ answer. Thus, they move away from a checkbox answer ("Is this right or wrong?") to more complex judgements, such as "What is important? Who is the right person for the job? Who is a threat to public safety? Who should I date?" Quietly, these types of subjective decisions previously made by humans are turned over to algorithms.

EXAMPLE Predictive policing based on statistical forecasting:
• In early 2014, the Chicago Police Department made national headlines in the US for visiting residents who were considered most likely to be involved in violent crime. The selection of individuals, who were not necessarily under investigation, was guided by a computer-generated "heat list" – an algorithm that seeks to predict future involvement in violent crime.
• A key concern about predictive policing is that such automated systems may create an echo chamber or a self-fulfilling prophecy. Heavy policing of a specific area can increase the likelihood that crime will be detected: since more police means more opportunities to observe residents' activities, the algorithm might just confirm its own prediction.
• Right now, police departments around the globe are testing and implementing predictive policing algorithms, but lack safeguards against discriminatory biases.

Ethical implications: Predictions made by algorithms come with no guarantee that they are right. And officials acting on incorrect predictions may even create unjustified or biased investigations.

CORE ISSUES – SHOULD AN ALGORITHM DECIDE YOUR FUTURE?
Without the help of algorithms, many present-day applications would be unusable. We need them to cope with the enormous amounts of data we produce every day. Algorithms make our lives easier and more productive, and we certainly don't want to lose those advantages. But we need to be aware of what they do and how they decide.

THEY CAN DISCRIMINATE AGAINST YOU, JUST LIKE HUMANS
Computers are often regarded as objective and rational machines. However, algorithms are made by humans and can be just as biased. We need to be critical of the assumption that algorithms can make "better" decisions than human beings. There are racist algorithms and sexist ones. Algorithms are not neutral; rather, they perpetuate the prejudices of their creators. And their creators, such as businesses or governments, can have different goals in mind than their users.

THEY MUST BE KNOWN TO THE USER
Since algorithms make increasingly important decisions about our lives, users need to be informed about them. Knowledge about automated decision-making in everyday services is still very limited among consumers. Raising awareness should be at the heart of the debate about the ethics of algorithms.

PUBLIC POLICY APPROACHES TO REGULATE ALGORITHMS
How do you regulate a black box? We need to open a discussion on how policy-makers are trying to deal with ethical concerns around algorithms. There have been some attempts to provide algorithmic accountability, but we need better data and more in-depth studies. If algorithms are written and used by corporations, government institutions like antitrust or consumer protection agencies should provide appropriate regulation and oversight. But who regulates the use of algorithms by the government itself? For cases like predictive policing, ethical standards and legal safeguards are needed.
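The self-fulfilling prophecy described in the predictive-policing example earlier can be sketched as a toy simulation. Every number here is hypothetical (identical true crime in both districts, assumed detection rates of 90% under heavy patrol and 30% otherwise); the sketch only shows the mechanism, not any real system:

```python
# Two districts with identical underlying crime per round.
TRUE_CRIME_RATE = {"north": 10, "south": 10}

def simulate(rounds: int = 5) -> dict:
    # Slightly uneven initial records are enough to start the loop.
    recorded = {"north": 6, "south": 5}
    for _ in range(rounds):
        # The "algorithm": patrol the district with the most recorded crime.
        target = max(recorded, key=recorded.get)
        for district, rate in TRUE_CRIME_RATE.items():
            # Heavier policing detects more of the crime that occurs everywhere.
            detection = 0.9 if district == target else 0.3
            recorded[district] += int(rate * detection)
    return recorded

# "north" ends far above "south" despite identical true crime rates:
# the prediction produced the data that confirms it.
print(simulate())
```

The recorded gap grows every round, so a human reading the output would conclude the algorithm was "right" about the north district, which is exactly the echo-chamber concern raised above.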
Recently, regulatory approaches to algorithms have circled around transparency, notification, and direct regulation. Yet experience shows that policy-makers face certain regulatory dilemmas when it comes to algorithms.

TRANSPARENCY | MAKE OPAQUE BIASES VISIBLE
If you are faced with a complex and obscure algorithm, one common reaction is to demand more transparency about what it does and how it works. The concern about black-box algorithms is that they make inherently subjective decisions, which might contain implicit or explicit biases. At the same time, making complex algorithms fully transparent can be extremely challenging:
• It is not enough to merely publish the source code of an algorithm, because machine-learning systems will inevitably make decisions that have not been programmed directly. Complete transparency would require that we are able to explain why any particular outcome was produced.
• Some investigations have reverse-engineered algorithms in order to create greater public awareness about them. That is one way the public can perform a watchdog function.
• Often, there might be good reasons why complex algorithms operate opaquely: public access would make them much more vulnerable to manipulation. If every company knew how Google ranks its search results, they could optimize their behavior and render the ranking algorithm useless.

NOTIFICATION | GIVING USERS THE RIGHT TO KNOW
A different form of transparency is to give consumers control over the personal information that feeds into algorithms. Notification includes the right to correct that personal information and to demand that it be excluded from the databases of data vendors. Regaining control over your personal information ensures accountability to users.

DIRECT REGULATION | WHEN ALGORITHMS BECOME CRITICAL INFRASTRUCTURE
• In some cases public regulators have sought ways to modify algorithms directly. This is especially relevant for core infrastructure. Debate about algorithmic regulation is most advanced in the area of finance: automated high-speed trading has potentially destabilizing effects on financial markets, so regulators have begun to demand the ability to modify these algorithms.
• The ongoing antitrust investigations into Google's ‘search neutrality’ revolve around the same question: can regulators require access to and modification of the search algorithm in the public interest? This approach rests on the contested assumption that it is possible to predict objectively how a certain algorithm will respond. Yet there is simply no ‘right’ answer to how Google should rank its results. Antitrust agencies in the US and the EU have not yet found a regulatory response to this issue.

In some cases direct regulation or complete and public transparency might be necessary. However, there is no one-size-fits-all regulatory response. More scrutiny of algorithms must be enabled, which requires new practices from industry and technologists. More consumer protection and direct regulation should be introduced where appropriate.

WE DON'T ALWAYS KNOW HOW THEY WORK
Complexity is huge: many present-day algorithms are very complicated and can be hard for humans to understand, even if their source
code is shared with competent observers. What adds to the problem is opacity – a lack of transparency about the code. Algorithms perform complex calculations, which follow many potential steps along the way. They can draw on thousands, or even millions, of individual data points. Sometimes not even the programmers can predict how an algorithm will decide a certain case.

EXAMPLE Facebook’s newsfeed algorithm is complex and opaque:
• Many users are not aware that when we open Facebook, an algorithm puts together the postings, pictures and ads that we see. Indeed, it is an algorithm that decides what to show us and what to hold back. Facebook’s newsfeed algorithm filters the content you see, but do you know the principles it uses to hold back information from you?
• In fact, a team of researchers tweaks this algorithm every week – they take thousands and thousands of metrics into consideration. This is why the effects of newsfeed algorithms are hard to predict – even by Facebook engineers!
• If we asked Facebook how the algorithm works, they would not tell us. The principles behind the way the newsfeed works (the source code) are in fact a business secret. Without knowing the exact code, nobody can evaluate how your newsfeed is composed.

Ethical implications: Complex algorithms are often practically incomprehensible to outsiders, yet they inevitably have values, biases, and potential discrimination built in.

CORE ISSUES – SHOULD AN ALGORITHM DECIDE YOUR FUTURE?

Without the help of algorithms, many present-day applications would be unusable. We need them to cope with the enormous amounts of
data we produce every day. Algorithms make our lives easier and more productive, and we certainly don't want to lose those advantages. But we need to be aware of what they do and how they decide.

THEY CAN DISCRIMINATE AGAINST YOU, JUST LIKE HUMANS

Computers are often regarded as objective and rational machines. However, algorithms are made by humans and can be just as biased. We need to be critical of the assumption that algorithms can make “better” decisions than human beings. There are racist algorithms and sexist ones. Algorithms are not neutral; rather, they perpetuate the prejudices of their creators. And their creators, such as businesses or governments, may have different goals in mind than their users do.

THEY MUST BE KNOWN TO THE USER

Since algorithms make increasingly important decisions about our lives, users need to be informed about them. Knowledge about automated decision-making in everyday services is still very limited among consumers. Raising awareness should be at the heart of the debate about the ethics of algorithms.

PUBLIC POLICY APPROACHES TO REGULATE ALGORITHMS

How do you regulate a black box? We need to open a discussion on how policy-makers are trying to deal with ethical concerns around algorithms. There have been some attempts to provide algorithmic accountability, but we need better data and more in-depth studies.

If algorithms are written and used by corporations, it is government institutions like antitrust or consumer protection agencies that should provide appropriate regulation and oversight. But who regulates the use of algorithms by the government itself? For cases like predictive
policing, ethical standards and legal safeguards are needed.

Recently, regulatory approaches to algorithms have centered on transparency, notification, and direct regulation. Yet experience shows that policy-makers face certain regulatory dilemmas when it comes to algorithms.

TRANSPARENCY | MAKE OPAQUE BIASES VISIBLE

If you are faced with a complex and obscure algorithm, one common reaction is to demand more transparency about what it does and how it works. The concern about black-box algorithms is that they make inherently subjective decisions, which might contain implicit or explicit biases. At the same time, making complex algorithms fully transparent can be extremely challenging:
• It is not enough to merely publish the source code of an algorithm, because machine-learning systems will inevitably make decisions that have not been programmed directly. Complete transparency would require that we be able to explain why any particular outcome was produced.
• Some investigations have reverse-engineered algorithms in order to create greater public awareness about them. That is one way the public can perform a watchdog function.
• Often there might be good reasons why complex algorithms operate opaquely: public access would make them much more vulnerable to manipulation. If every company knew how Google ranks its search results, they could game the ranking and render the algorithm useless.

NOTIFICATION | GIVING USERS THE RIGHT TO KNOW
A different form of transparency is to give consumers control over the personal information that feeds into algorithms. Notification includes the right to correct that personal information and to demand that it be excluded from the databases of data vendors. Regaining control over your personal information ensures accountability to users.

DIRECT REGULATION | WHEN ALGORITHMS BECOME CRITICAL INFRASTRUCTURE

• In some cases public regulators have sought ways to modify algorithms directly. This is especially relevant for core infrastructure. The debate about algorithmic regulation is most advanced in the area of finance: automated high-speed trading has potentially destabilizing effects on financial markets, so regulators have begun to demand the ability to modify these algorithms.
• The ongoing antitrust investigations into Google's ‘search neutrality’ revolve around the same question: may regulators require access to, and modification of, the search algorithm in the public interest? This approach rests on the contested assumption that it is possible to predict objectively how a certain algorithm will respond. Yet there is simply no ‘right’ answer to how Google should rank its results. Antitrust agencies in the US and the EU have not yet found a regulatory response to this issue.

In some cases direct regulation or complete and public transparency might be necessary. However, there is no one-size-fits-all regulatory response. More scrutiny of algorithms must be enabled, which requires new practices from industry and technologists. More consumer protection and direct regulation should be introduced where appropriate.
EDS 1021 Week 6 Interactive Assignment
Radiometric Dating

Objective: Using a simulated practical application, explore the concepts of radioactive decay and the half-life of radioactive elements, and then apply the concept of radiometric dating to estimate the age of various objects.

Background: Review the topics Half-Life, Radiometric Dating, and Decay Chains in Chapter 12 of The Sciences.

Instructions:
1. PRINT a hard copy of this entire document, so that the experiment instructions may be easily referred to, and the data tables and questions (on the last three pages) can be completed as a rough draft.
2. Download the Radiometric Dating Game Answer Sheet from the course website. Transfer your data values and question answers from the completed rough draft to the answer sheet. Be sure to put your NAME on the answer sheet where indicated. Save your completed answer sheet on your computer.
3. SUBMIT ONLY the completed answer sheet, by uploading your file to the digital drop box for the assignment.

Introduction to the Simulation
1. After reviewing the background information for this assignment, go to the website for the interactive simulation “Radioactive Dating Game” at http://phet.colorado.edu/en/simulation/radioactive-dating-game. Click on DOWNLOAD to run the simulation locally on your computer.
2. Software Requirements: You must have the latest version of Java software (free) loaded on your computer to run the simulation. If you do not, or are not sure if you have the latest version, go to http://www.java.com/en/download/index.jsp.
3. Explore and experiment on the 4 different “tabs” (areas) of the simulation. While playing around, think about how the concepts of radioactive decay are being illustrated in the simulation.
Half Life Tab – Observe a sample of radioactive atoms decaying: carbon-14, uranium-238, or ? (a custom-made radioactive atom). Clicking the “add 10” button adds 10 atoms at a time to the “decay area”. There are a total of 100 atoms in the bucket, so clicking the “add 10” button 10 times will empty the bucket into the decay area. Observe the pie chart and time graph as atoms decay. You can PAUSE and STEP (buttons at the bottom of the screen) the simulation in time as atoms are decaying, and RESET the simulation.

Decay Rates Tab – Similar to the Half Life tab, but different! Atom choices are carbon-14 and uranium-238. The bucket has a total of 1000 atoms. Drag the slide bar on the bucket to the right to increase the number of atoms added to the decay area. Observe the pie chart and time graph as atoms decay. Note that the graph for the Decay Rates tab provides different information than the graph for the Half Life tab. You can PAUSE and STEP the simulation in time as atoms are decaying, and RESET the simulation.

Measurement Tab – Use a probe to virtually measure radioactive decay within an object: a tree or a volcanic rock. The probe can be set to detect either the decay of carbon-14 atoms or the decay of uranium-238 atoms. Follow prompts on the screen to run a simulation of a tree growing and dying, or of a volcano erupting and creating a rock, and then measure the decay of atoms within each object.

Dating Game Tab – Use a probe to virtually measure the percentage of radioactive atoms remaining within various objects and, knowing the half-life of the radioactive elements being detected, estimate the ages of the objects. The probe can be set to detect carbon-14, uranium-238, or other “mystery” elements, as appropriate for determining the age of the object. Drag the probe over an object, select which element to measure, and then slide the arrow on the graph to match the percentage of atoms measured by the probe. The time (t) shown for the matching percentage can then be entered as the estimate in years of the object’s age.

After playing around with the simulation, conduct the following four (4) short experiments. As you conduct
the experiments and collect data, fill in the data tables and answer the questions on the last three pages of this document.

Experiment 1: Half Life
1. Click on the Half Life tab at the top of the simulation screen.
2. Procedure:
Part I – Carbon-14
a. Click the blue “Pause” button at the bottom of the screen (i.e., set it so that it shows the “play” arrow). Click the “Add 10” button below the “Bucket o’ Atoms” repeatedly, until there are no more atoms left in the bucket. There are now 100 carbon-14 atoms in the decay area.
b. The half-life of carbon-14 is about 5700 years. Based on the definition of half-life, if you left these 100 carbon-14 atoms to sit around for 5700 years, what is your prediction of how many carbon-14 atoms will decay into the stable element nitrogen-14 during that time? Write your prediction in the “prediction” column for the row labeled “carbon-14” in data table 1.
c. Click the blue “Play” arrow at the bottom of the screen. As the simulation runs, carefully observe what is happening to the carbon-14 atoms in the decay area, and the graphs at the top of the screen (both the pie chart and the time graph). Once all atoms have decayed into the stable isotope nitrogen-14, click the blue “Pause” button at the bottom of the screen, and the “Reset All Nuclei” button in the decay area.
d. Repeat step c until you have a good idea of what is going on in this simulation.
e. Repeat step c again, but this time watch the graph at the top of the window carefully, and click “Pause” when TIME reaches 5700 years (when the carbon-14 atom moving across the graph reaches the red dashed line labeled HALF LIFE on the TIME graph).
f. If you do NOT pause the simulation on or very close to the red dashed line, click the “Reset All Nuclei” button and repeat step e.
g. Once you have paused the simulation in the correct spot, look at the pie graph and determine the number of nuclei that have decayed into nitrogen-14 at Time = HALF LIFE. Write this number in data table 1, in the row labeled “carbon-14”, under “trial 1”.
h. Click the “Reset All Nuclei” button in the decay area.
i. Repeat steps e through h for two more trials. For each trial, write down in data table 1 the number of nuclei that have decayed into nitrogen-14 at Time = HALF LIFE, in the row labeled “carbon-14”, under “trial 2” and “trial 3” respectively.

Part II – Uranium-238
a. Click “Reset All” on the right side of the screen in the Decay Isotope box, and click “yes” in the box that pops up. Click on the radio button for uranium-238 in the Decay Isotope box. Click the blue “Pause” button at the bottom of the screen (i.e., set it so that it shows the “play” arrow). Click the “Add 10” button below the “Bucket o’ Atoms” repeatedly, until there are no more atoms left in the bucket. There are now 100 uranium-238 atoms in the decay area.
b. The half-life of uranium-238 is 4.5 billion years!* Based on the definition of half-life, if you left these 100 uranium-238 atoms to sit around for 4.5 billion years, what is your prediction of how many uranium atoms will decay into lead-206 during that time? Write your prediction in the “prediction” column for the second row, labeled “uranium-238”, in data table 1.
c. Click the “Play” button at the bottom of the window. Watch the graph at the top of the window carefully, and click “Pause” when the time reaches 4.5 billion years (when the uranium-238 atom moving across the graph reaches the red dashed line labeled HALF LIFE on the time graph).
d. If you don't pause the simulation on or very close to the red line, click the “Reset All Nuclei” button and repeat step c.
e. Once you have paused the simulation in the correct spot, look at the pie graph and determine the number of nuclei that have decayed into lead-206 at Time = HALF LIFE. Write this number in data table 1, in the row labeled “uranium-238”, under “trial 1”.
f. Click the “Reset All Nuclei” button in the decay area.
g. Repeat steps c through e for two more trials. For each trial, write down in data table 1 the number of nuclei that have decayed into lead-206 at Time = HALF LIFE, in the row labeled “uranium-238”, under “trial 2” and “trial 3” respectively.
3. Calculate the average number of atoms that decayed for all three trials of carbon-14 decay. Do the same for all three trials of uranium-238 decay. Write the average values for each element under “averages”, the last column in data table 1.
4. Answer the five (5) questions for Experiment 1 on the Questions page.

* Unlike carbon-14, which undergoes only one radioactive decay to reach the stable nitrogen-14, uranium-238 undergoes MANY decays into many intermediate unstable elements before finally getting to the stable element lead-206 (see the decay chain for uranium-238 in chapter 12 for details).
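The half-life prediction in step 2b can also be checked outside the applet. The short Python sketch below is not part of the assignment (the function name, trial count, and seed are my own choices); it treats each atom as independently surviving one half-life with probability 1/2, which is the same model the simulation uses:

```python
import random

def average_decayed(n_atoms, half_lives, trials=2000, seed=1):
    """Average number of atoms (out of n_atoms) that decay within the
    given number of half-lives, averaged over many repeated trials.

    Each atom independently survives one half-life with probability 1/2,
    so its survival probability after `half_lives` half-lives is
    (1/2) ** half_lives.
    """
    rng = random.Random(seed)
    p_survive = 0.5 ** half_lives
    total_decayed = 0
    for _ in range(trials):
        # Count the atoms in this trial whose random draw exceeds the
        # survival probability, i.e. the atoms that decayed.
        total_decayed += sum(1 for _ in range(n_atoms)
                             if rng.random() >= p_survive)
    return total_decayed / trials

# 100 carbon-14 atoms observed for one half-life (about 5700 years):
# the long-run average is close to 50 atoms decayed.
print(round(average_decayed(100, 1)))
```

Any single trial scatters around that average, which is exactly why the procedure above asks you to record three trials and average them in data table 1.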
Experiment 2: Decay Rates
1. Set Up: Click on the Decay Rates tab at the top of the simulation screen.
2. Procedure
Part I – Carbon-14
a. In the Choose Isotope area on the right side of the screen, click the button next to carbon-14. Recall that carbon-14 has a half-life of about 5700 years.
b. Drag the slide bar on the bucket of atoms all the way to the right. This will put 1,000 radioactive atoms into the decay area. When you let go of the slide bar, the simulation will start right away. If you missed seeing the simulation run from the start, start it over again by clicking “Reset All Nuclei”, and keep your eyes on the screen. Watch the graph at the bottom of the screen as you allow the simulation to run until all atoms have decayed.
c. Carefully interpret the graph to obtain the data requested to fill in data table 2 for “carbon-14”.

Part II – Uranium-238
a. In the Choose Isotope area on the right side of the screen, click the button next to uranium-238. Recall that the half-life of uranium-238 is about 4.5 billion years.
b. Repeat step b of Part I for uranium-238.
c. Carefully interpret the graph to obtain the data requested to fill in data table 2 for “uranium-238”.
3. Answer the two (2) questions for Experiment 2 on the Questions page.

Experiment 3: Measurement
1. Set Up: Click on the Measurement tab at the top of the simulation screen.
2. Procedure
a. In Choose an Object, click on the button for Tree. In the Probe Type box, click on the buttons for “carbon-14” and “Objects”.
b. Click “Plant Tree” at the bottom of the screen. Observe that the tree grows and lives for about 1200 years, then dies and begins to decay. As the simulation runs, observe the probe reading (upper left box) and graph (upper right box) at the top of the screen. The graph is plotting the probe's measurement of the percentage of carbon-14 remaining in the tree over time.
c. On the right side of the screen, in the “Choose an Object” box, click the reset button. Set the probe to measure uranium-238 instead of carbon-14 (in the Probe Type box, click on the button for “uranium-238”). Click “Plant Tree” and again observe the probe reading and graph. This time, the graph will plot the probe's measurement of uranium-238 in the tree.
d. On the right side of the screen, in the “Choose an Object” box, click the reset button and click the button for Rock. Keep the probe type set on “uranium-238”.
e. Click “Erupt Volcano” and observe the volcano creating and ejecting HOT igneous rocks. As the simulation runs, observe the probe reading (upper left box) and graph (upper right box) at the top of the screen. The graph is plotting the probe's measurement of uranium-238 in the rock over time.
f. On the right side of the screen, in the “Choose an Object” box, click the reset button. Set the probe to measure carbon-14 instead of uranium-238 (in the Probe Type box, click on the button for “carbon-14”). Click “Erupt Volcano” and again observe the probe reading and graph. This time, the graph will plot the probe's measurement of carbon-14 in the rock.
3. Answer the six (6) questions for Experiment 3 on the Questions page. Repeat the Measurement experiments as necessary to answer the questions.
  • 57. Experiment 4: Dating Game 1. Set Up: Click on the Dating Game tab at the top of the simulation screen. Verify that the “objects” button is clicked below the Probe Type box. 2. Procedure a. Select an item either at or below the Earth’s surface to determine the age of. Set the Probe Type to either carbon-14 or uranium-238, as appropriate for the item you are measuring, based on your findings in experiment 3. b. Drag the probe directly over the item you chose. With the probe over the item, the box in the upper left, above “Probe Type”, shows the percentage of element that remains in the item. c. Drag the green arrow on the graph to the right or left, until the percentage in the box on the graph matches the percentage of element in the object. Once you have the arrow positioned correctly, enter, in the “estimate” box, the time (t) shown, as the estimate for the age of the rock or fossil. Click “Check Estimate” to see if the estimate is correct. d. Example:
  • 58. 1) Drag the probe over the dead tree to the right of the house. Since this is a once-living thing, set the Probe Type to Carbon-14. 2) Look at the probe reading: it shows that there is 97.4% of carbon-14 remaining in the dead tree. 3) Drag the green arrows on the graph at the top of the screen to the right or left until the top line tells you that the carbon-14 percentage is 97.4%, matching the reading from the probe. 4) When the graph reads 97.4%, it shows that time (t) equals 220 years. 5) Type “220” into the box that says “Estimate age of dead tree:” and click “Check Estimate”. 6) You should see a green smiley face, indicating that you have correctly figured out the age of the dead tree, 220 years. This means that the tree died 220 years ago. e. Repeat step c. for all the other items on the Dating Game screen. Fill in data table 3. Hints: *When entering the estimate for the age of an object, you MUST enter it in YEARS. So, if t = 134.8 MY on the graph, which means 134.8 million years, you would enter the number 134,800,000 as the estimate in years for the age of a rock. The estimates entered
  • 59. do not have to be EXACT to be registered as correct, but must be CLOSE. For example, if 134,000,000 were entered for the above example, the simulation would return a green smiley face for a correct estimate. **For the last four items on the list, neither carbon-14 nor uranium-238 will work to determine the item’s age. Select “Custom”, and pick a half-life from the drop-down menu that allows the probe to detect something (i.e., a reading other than 0.0%). All of the “custom” elements are, like carbon-14, elements that would be found in living or once living things. 3. Answer the (1) question for Experiment 4 on the Questions page. Name: Data Each box to be filled in with a value is worth 1 point. Data Table 1
  • 60. Radioactive Element Number of atoms in the sample at Time = 0 Prediction of # atoms that will decay when time reaches one half- life Number of atoms that have decayed when Time = Half Life Average number of atoms that have decayed when Time = Half Life Trial #1 Trial #2
  • 61. Trial #3 Carbon-14 100 Uranium-238 100 Data Table 2 Radioactive Element Percentage of the original element remaining after: Percentage of the decay product present after: 1 half-life 2 half-lives 3 half-lives 1 half-life 2 half-lives 3 half-lives Carbon-14 Uranium-238 Data Table 3
  • 62. Item Age Element Used Animal Skull House Living Tree Dead Tree Bone Wooden Cup Large Human Skull Fish Bones Rock 1 Rock 3 Rock 5
  • 63. Fish Fossil* Dinosaur Skull* Trilobite* Small Human Skull* Questions For multiple choice questions, choose the letter of the best answer choice from the list below the question. For short answer questions, type your answer in the space provided below the question. Note that there are TWO PAGES OF QUESTIONS below. Each multiple question is worth 3 points and each short answer question is worth 5 points. Experiment 1 1. In 1 or 2 sentences, describe, in the space below, what you observed happening to the atoms in the decay area
  • 64. when the simulation is in “play” mode. 2. In this simulation (and in nature), carbon-14 is radioactively decays to nitrogen-14. What type of radioactive decay is carbon-14 undergoing? a. alpha decay b. beta decay c. gamma decay 3. In data table 1, compare your calculations for the average number of atoms that decayed to your predictions for how many atoms would decay. How did your predictions compare to the averages? a. Close b. Exact c. Nowhere Close 4. Which of the following statements best describes the decay of a sample of radioactive atoms? a. A statistical process, where the sample as a whole decays predictably, but individual atoms in the sample decay like popcorn kernels popping. b. An exact process where the time of decay of each atom in the sample can be predicted. c. A completely random process that is in no way predictable.
  • 65. 5. Based on your observations, the data you collected, and the comparison of the calculated averages to predictions in experiment 1, write in the space below, IN YOUR OWN WORDS, a definition of “half-life”. Phrase the definition in a way that you would explain the concept to one of your classmates. Experiment 2 1. Suppose you found a rock. Using radiometric dating, you determined that the rock contained the same percentage of lead-206 and uranium-238. How old would you conclude the rock to be? a. 2.25 billion years old b. 4.5 billion years old c. 9 billion years old 2. Using the data in data table 2, if the original sample contained 24 grams of carbon-14, how many grams of carbon-14 would be left, and how many grams of nitrogen-14 would be present, after 2 half-lives? a. 12; 12 b. 6; 18 c. 2; 22
  • 66. Experiment 3 1. When half of the carbon-14 atoms in the tree have decayed into nitrogen-14, it has been approximately 5700 years since the tree _____. a. was planted b. died 2. The probe measures that the percentage of carbon-14 in the tree is 100% while the tree is alive, and then measures the percentage of carbon-14 decreasing with time after the tree dies. This means that radiometric dating using carbon-14 is applicable to: a. living objects. B. once-living objects c. both living and once-living objects. 3. When half of the Uranium-238 in the rock has decayed, ______ years have passed since the volcano erupted. a. 4.5 billion years b. 5700 years
  • 67. 4. In step c of experiment 3, the probe was used to measure uranium-238 in the tree. In step f of experiment 3, the probe was used to measure carbon-14 in the rock. In both steps (c and f), the probe reading was _____. a. 100%, then decreasing with time. b. zero (0) . c. decreasing with time. 5. Uranium-238 must be used to measure the age of the rock, and not carbon-14, because: a. The isotope carbon-14 did not exist when the rock was created. b. rocks do not contain carbon-14, and even if they did, the carbon-14 would have likely decayed away prior to measuring the rock’s age. c. it is easier to measure uranium in rocks than it is carbon. 6. Based on your observations for experiment 3, and the answers to your questions above, _______ should be used to determine the ages of once-living things, and _______ should be used to determine the ages of rocks. a. carbon-14; uranium-238 b. uranium-238; carbon-14 c.
EDS 1021 Week 6 Interactive Assignment
Radiometric Dating

Objective: Using a simulated practical application, explore the concepts of radioactive decay and the half-life of radioactive elements, and then apply the concept of radiometric dating to estimate the age of various objects.

Background: Review the topics Half-Life, Radiometric Dating, and Decay Chains in Chapter 12 of The Sciences.

Instructions:

1. PRINT a hard copy of this entire document, so that the experiment instructions may be easily referred to, and the data tables and questions (on the last three pages) can be completed as a rough draft.

2. Download the Radiometric Dating Game Answer Sheet from the course website. Transfer your data values and question answers from the completed rough draft to the answer sheet. Be sure to put your NAME on the answer sheet where indicated. Save your completed answer sheet on your computer.

3. SUBMIT ONLY the completed answer sheet, by uploading your file to the digital drop box for the assignment.

Introduction to the Simulation

1. After reviewing the background information for this assignment, go to the website for the interactive simulation "Radioactive Dating Game" at http://phet.colorado.edu/en/simulation/radioactive-dating-game. Click on DOWNLOAD to run the simulation locally on your computer.

2. Software Requirements: You must have the latest version of Java software (free) loaded on your computer to run the simulation. If you do not, or are not sure whether you have the latest version, go to http://www.java.com/en/download/index.jsp.
3. Explore and experiment on the 4 different "tabs" (areas) of the simulation. While playing around, think about how the concepts of radioactive decay are being illustrated in the simulation.

Half Life Tab – Observe a sample of radioactive atoms decaying: carbon-14, uranium-238, or ? (a custom-made radioactive atom). Clicking the "Add 10" button adds 10 atoms at a time to the "decay area". There are a total of 100 atoms in the bucket, so clicking the "Add 10" button 10 times will empty the bucket into the decay area. Observe the pie chart and time graph as atoms decay. You can PAUSE and STEP (buttons at the bottom of the screen) the simulation in time as atoms are decaying, and RESET the simulation.

Decay Rates Tab – Similar to the Half Life tab, but different! Atom choices are carbon-14 and uranium-238. The bucket has a total of 1000 atoms. Drag the slide bar on the bucket to the right to increase the number of atoms added to the decay area. Observe the pie chart and time graph as atoms decay. Note that the graph for the Decay Rates tab provides different information than the graph for the Half Life tab. You can PAUSE and STEP (buttons at the bottom of the screen) the simulation in time as atoms are decaying, and RESET the simulation.

Measurement Tab – Use a probe to virtually measure radioactive decay within an object: a tree or a volcanic rock. The probe can be set to detect either the decay of carbon-14 atoms or the decay of uranium-238 atoms. Follow the prompts on the screen to run a simulation of a tree growing and dying, or of a volcano erupting and creating a rock, and then measure the decay of atoms within each object.

Dating Game Tab – Use a probe to virtually measure the percentage of radioactive atoms remaining within various objects and, knowing the half-life of the radioactive elements being detected, estimate the ages of the objects. The probe can be set to detect carbon-14, uranium-238, or other "mystery" elements, as appropriate for determining the age of the object. Drag the probe over an object, select which element to measure, and then slide the arrow on the graph to match the percentage of atoms measured by the probe. The time (t) shown for the matching percentage can then be entered as the estimate in years of the object's age.

After playing around with the simulation, conduct the following four (4) short experiments. As you conduct the experiments and collect data, fill in the data tables and answer the questions on the last three pages of this document.

Experiment 1: Half Life

1. Click on the Half Life tab at the top of the simulation screen.

2. Procedure:

Part I – Carbon-14

a. Click the blue "Pause" button at the bottom of the screen (i.e., set it so that it shows the "play" arrow). Click the "Add 10" button below the "Bucket o' Atoms" repeatedly, until there are no more atoms left in the bucket. There are now 100 carbon-14 atoms in the decay area.
b. The half-life of carbon-14 is about 5700 years. Based on the definition of half-life, if you left these 100 carbon-14 atoms to sit around for 5700 years, what is your prediction of how many carbon-14 atoms will decay into the stable element nitrogen-14 during that time? Write your prediction in the "prediction" column for the row labeled "carbon-14" in data table 1.

c. Click the blue "Play" arrow at the bottom of the screen. As the simulation runs, carefully observe what is happening to the carbon-14 atoms in the decay area, and the graphs at the top of the screen (both the pie chart and the time graph). Once all atoms have decayed into the stable isotope nitrogen-14, click the blue "Pause" button at the bottom of the screen (i.e., set it so that it shows the "play" arrow), and then click the "Reset All Nuclei" button in the decay area.

d. Repeat step c until you have a good idea of what is going on in this simulation.

e. Repeat step c again, but this time watch the graph at the top of the window carefully, and click "Pause" when TIME reaches 5700 years (when the carbon-14 atom moving across the graph reaches the red dashed line labeled HALF LIFE on the TIME graph).

f. If you do NOT pause the simulation on or very close to the red dashed line, click the "Reset All Nuclei" button and repeat step e.

g. Once you have paused the simulation in the correct spot, look at the pie graph and determine the number of nuclei that have decayed into nitrogen-14 at Time = HALF LIFE. Write this number in data table 1, in the row labeled "carbon-14", under "trial 1".

h. Click the "Reset All Nuclei" button in the decay area.

i. Repeat steps e through h for two more trials. For each trial, write down in data table 1 the number of nuclei that have decayed into nitrogen-14 at Time = HALF LIFE, in the row labeled "carbon-14", under "trial 2" and "trial 3" respectively.

Part II – Uranium-238

a. Click "Reset All" on the right side of the screen in the Decay Isotope box, and click "yes" in the box that pops up. Click the radio button for uranium-238 in the Decay Isotope box. Click the blue "Pause" button at the bottom of the screen (i.e., set it so that it shows the "play" arrow). Click the "Add 10" button below the "Bucket o' Atoms" repeatedly, until there are no more atoms left in the bucket. There are now 100 uranium-238 atoms in the decay area.

b. The half-life of uranium-238 is 4.5 billion years!* Based on the definition of half-life, if you left these 100 uranium-238 atoms to sit around for 4.5 billion years, what is your prediction of how many uranium atoms will decay into lead-206 during that time? Write your prediction in the "prediction" column for the second row, labeled "uranium-238", in data table 1.

c. Click the "Play" button at the bottom of the window. Watch the graph at the top of the window carefully, and click "Pause" when the time reaches 4.5 billion years (when the uranium-238 atom moving across the graph reaches the red dashed line labeled HALF LIFE on the time graph).

d. If you don't pause the simulation on or very close to the red line, click the "Reset All Nuclei" button and repeat step c.

e. Once you have paused the simulation in the correct spot, look at the pie graph and determine the number of nuclei that have decayed into lead-206 at Time = HALF LIFE. Write this number in data table 1, in the row labeled "uranium-238", under "trial 1".

f. Click the "Reset All Nuclei" button in the decay area.

g. Repeat steps c through e for two more trials. For each trial, write down in data table 1 the number of nuclei that have decayed into lead-206 at Time = HALF LIFE, in the row labeled "uranium-238", under "trial 2" and "trial 3" respectively.

3. Calculate the average number of atoms that decayed for all three trials of carbon-14 decay. Do the same for all three trials of uranium-238 decay. Write the average values for each element under "averages", the last column in data table 1.

4. Answer the five (5) questions for Experiment 1 on the Questions page.
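The behavior Experiment 1 asks you to observe (individually unpredictable atoms producing a predictable total) can also be sketched in a few lines of code. The following Python snippet is an editorial illustration, not part of the PhET simulation or the graded assignment: each of 100 atoms is given an independent 1/2 chance of decaying within one half-life, and three trials are averaged, just as in data table 1.

```python
import random

def decayed_after_one_half_life(n_atoms=100, seed=None):
    """Count how many of n_atoms decay within one half-life.

    By the definition of half-life, each atom independently has a
    probability of 1/2 of decaying during that interval.
    """
    rng = random.Random(seed)
    return sum(1 for _ in range(n_atoms) if rng.random() < 0.5)

# Three trials, as in data table 1, then their average.
trials = [decayed_after_one_half_life(seed=s) for s in (1, 2, 3)]
average = sum(trials) / len(trials)
print(trials, average)  # counts scatter around 50, rarely hitting it exactly
```

Individual runs vary (like popcorn kernels popping), but averaging many trials converges toward 50 of the 100 atoms, which is exactly what the definition of half-life predicts.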
* Unlike carbon-14, which undergoes only one radioactive decay to reach the stable nitrogen-14, uranium-238 undergoes MANY decays into many intermediate unstable elements before finally getting to the stable element lead-206 (see the decay chain for uranium-238 in chapter 12 for details).

Experiment 2: Decay Rates

1. Set Up: Click on the Decay Rates tab at the top of the simulation screen.

2. Procedure

Part I – Carbon-14

a. In the Choose Isotope area on the right side of the screen, click the button next to carbon-14. Recall that carbon-14 has a half-life of about 5700 years.

b. Drag the slide bar on the bucket of atoms all the way to the right. This will put 1,000 radioactive atoms into the decay area. When you let go of the slide bar, the simulation will start right away. If you missed seeing the simulation run from the start, start it over again by clicking "Reset All Nuclei", and keep your eyes on the screen. Watch the graph at the bottom of the screen as you allow the simulation to run until all atoms have decayed.
c. Carefully interpret the graph to obtain the data requested to fill in data table 2 for "carbon-14".

Part II – Uranium-238

a. In the Choose Isotope area on the right side of the screen, click the button next to uranium-238. Recall that the half-life of uranium-238 is about 4.5 billion years.

b. Repeat step b of Part I for uranium-238.

c. Carefully interpret the graph to obtain the data requested to fill in data table 2 for "uranium-238".

3. Answer the two (2) questions for Experiment 2 on the Questions page.

Experiment 3: Measurement

1. Set Up: Click on the Measurement tab at the top of the simulation screen.

2. Procedure

a. In Choose an Object, click on the button for Tree. In the Probe Type box, click on the buttons for "carbon-14" and "Objects".

b. Click "Plant Tree" at the bottom of the screen. Observe that the tree grows and lives for about 1200 years, then dies and begins to decay. As the simulation runs, observe the probe reading (upper left box) and graph (upper right box) at the top of the screen. The graph plots the probe's measurement of the percentage of carbon-14 remaining in the tree over time.

c. On the right side of the screen, in the "Choose an Object" box, click the reset button. Set the probe to measure uranium-238 instead of carbon-14 (in the Probe Type box, click on the button for "uranium-238"). Click "Plant Tree" and again observe the probe reading and graph. This time, the graph will plot the probe's measurement of uranium-238 in the tree.

d. On the right side of the screen, in the "Choose an Object" box, click the reset button and click the button for Rock. Keep the probe type set on "uranium-238".

e. Click "Erupt Volcano" and observe the volcano creating and ejecting HOT igneous rocks. As the simulation runs, observe the probe reading (upper left box) and graph (upper right box) at the top of the screen. The graph plots the probe's measurement of uranium-238 in the rock over time.

f. On the right side of the screen, in the "Choose an Object" box, click the reset button. Set the probe to measure carbon-14 instead of uranium-238 (in the Probe Type box, click on the button for "carbon-14"). Click "Erupt Volcano" and again observe the probe reading and graph. This time, the graph will plot the probe's measurement of carbon-14 in the rock.
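Every graph the probe produces follows from one relationship: after time t, the fraction of the original element remaining is (1/2)^(t/T), where T is the half-life. As a hedged sketch (the function names below are our own, not part of the simulation), the percentages requested in data table 2 and the age implied by a probe reading can be computed directly:

```python
import math

def fraction_remaining(n_half_lives):
    """Fraction of the original element left after n half-lives."""
    return 0.5 ** n_half_lives

def age_from_percent(percent_remaining, half_life_years):
    """Age implied by a measured percentage of the element remaining.

    Inverts N/N0 = (1/2)**(t/T):  t = T * log2(100 / percent).
    """
    return half_life_years * math.log2(100.0 / percent_remaining)

# After 1, 2 and 3 half-lives: 50.0%, 25.0% and 12.5% remain.
percents = [fraction_remaining(n) * 100 for n in (1, 2, 3)]
print(percents)

# The worked Dating Game example below: 97.4% carbon-14 remaining
# implies an age close to the 220 years read off the graph.
print(round(age_from_percent(97.4, 5700)))  # → 217
```

The small gap between 217 and 220 is just graph-reading precision, which is why the simulation accepts estimates that are CLOSE rather than exact.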
3. Answer the six (6) questions for Experiment 3 on the Questions page. Repeat the Measurement experiments as necessary to answer the questions.

Experiment 4: Dating Game

1. Set Up: Click on the Dating Game tab at the top of the simulation screen. Verify that the "Objects" button is clicked below the Probe Type box.

2. Procedure

a. Select an item, either at or below the Earth's surface, whose age you will determine. Set the Probe Type to either carbon-14 or uranium-238, as appropriate for the item you are measuring, based on your findings in Experiment 3.

b. Drag the probe directly over the item you chose. With the probe over the item, the box in the upper left, above "Probe Type", shows the percentage of the element that remains in the item.

c. Drag the green arrow on the graph to the right or left until the percentage in the box on the graph matches the percentage of the element in the object. Once you have the arrow positioned correctly, enter, in the "estimate" box, the time (t) shown, as the estimate for the age of the rock or fossil. Click "Check Estimate" to see if the estimate is correct.

d. Example:

1) Drag the probe over the dead tree to the right of the house. Since this is a once-living thing, set the Probe Type to carbon-14.

2) Look at the probe reading: it shows that there is 97.4% of carbon-14 remaining in the dead tree.

3) Drag the green arrows on the graph at the top of the screen to the right or left until the top line tells you that the carbon-14 percentage is 97.4%, matching the reading from the probe.

4) When the graph reads 97.4%, it shows that time (t) equals 220 years.

5) Type "220" into the box that says "Estimate age of dead tree:" and click "Check Estimate".

6) You should see a green smiley face, indicating that you have correctly figured out the age of the dead tree, 220 years. This means that the tree died 220 years ago.

e. Repeat step c for all the other items on the Dating Game screen. Fill in data table 3.

Hints:

* When entering the estimate for the age of an object, you MUST enter it in YEARS. So, if t = 134.8 MY on the graph, which means 134.8 million years, you would enter the number 134,800,000 as the estimate in years for the age of a rock. The estimates entered do not have to be EXACT to be registered as correct, but must be CLOSE. For example, if 134,000,000 were entered for the above example, the simulation would return a green smiley face for a correct estimate.

** For the last four items on the list, neither carbon-14 nor uranium-238 will work to determine the item's age. Select "Custom", and pick a half-life from the drop-down menu that allows the probe to detect something (i.e., a reading other than 0.0%). All of the "custom" elements are, like carbon-14, elements that would be found in living or once-living things.

3. Answer the one (1) question for Experiment 4 on the Questions page.

Name:

Data
Each box to be filled in with a value is worth 1 point.

Data Table 1

Radioactive | Number of atoms  | Prediction of # atoms | Number of atoms decayed        | Average number of
Element     | in the sample    | that will decay at    | when Time = Half Life          | atoms decayed when
            | at Time = 0      | one half-life         | Trial #1 | Trial #2 | Trial #3 | Time = Half Life
Carbon-14   | 100              |                       |          |          |          |
Uranium-238 | 100              |                       |          |          |          |

Data Table 2

Radioactive | Percentage of the original element remaining after: | Percentage of the decay product present after:
Element     | 1 half-life | 2 half-lives | 3 half-lives           | 1 half-life | 2 half-lives | 3 half-lives
Carbon-14   |             |              |                        |             |              |
Uranium-238 |             |              |                        |             |              |

Data Table 3

Item               | Age | Element Used
Animal Skull       |     |
House              |     |
Living Tree        |     |
Dead Tree          |     |
Bone               |     |
Wooden Cup         |     |
Large Human Skull  |     |
Fish Bones         |     |
Rock 1             |     |
Rock 3             |     |
Rock 5             |     |
Fish Fossil*       |     |
Dinosaur Skull*    |     |
Trilobite*         |     |
Small Human Skull* |     |

Questions

For multiple choice questions, choose the letter of the best answer choice from the list below the question. For short answer questions, type your answer in the space provided below the question. Note that there are TWO PAGES OF QUESTIONS below. Each multiple choice question is worth 3 points and each short answer question is worth 5 points.
Experiment 1

1. In 1 or 2 sentences, describe, in the space below, what you observed happening to the atoms in the decay area when the simulation is in "play" mode.

2. In this simulation (and in nature), carbon-14 radioactively decays to nitrogen-14. What type of radioactive decay is carbon-14 undergoing?
a. alpha decay
b. beta decay
c. gamma decay

3. In data table 1, compare your calculations for the average number of atoms that decayed to your predictions for how many atoms would decay. How did your predictions compare to the averages?
a. Close
b. Exact
c. Nowhere close

4. Which of the following statements best describes the decay of a sample of radioactive atoms?
a. A statistical process, where the sample as a whole decays predictably, but individual atoms in the sample decay like popcorn kernels popping.
b. An exact process where the time of decay of each atom in the sample can be predicted.
c. A completely random process that is in no way predictable.

5. Based on your observations, the data you collected, and the comparison of the calculated averages to predictions in Experiment 1, write in the space below, IN YOUR OWN WORDS, a definition of "half-life". Phrase the definition in a way that you would use to explain the concept to one of your classmates.

Experiment 2

1. Suppose you found a rock. Using radiometric dating, you determined that the rock contained the same percentage of lead-206 and uranium-238. How old would you conclude the rock to be?
a. 2.25 billion years old
b. 4.5 billion years old
c. 9 billion years old

2. Using the data in data table 2, if the original sample contained 24 grams of carbon-14, how many grams of carbon-14 would be left, and how many grams of nitrogen-14 would be present, after 2 half-lives?
a. 12; 12
b. 6; 18
c. 2; 22

Experiment 3

1. When half of the carbon-14 atoms in the tree have decayed into nitrogen-14, it has been approximately 5700 years since the tree _____.
a. was planted
b. died

2. The probe measures that the percentage of carbon-14 in the tree is 100% while the tree is alive, and then measures the percentage of carbon-14 decreasing with time after the tree dies. This means that radiometric dating using carbon-14 is applicable to:
a. living objects.
b. once-living objects.
c. both living and once-living objects.
3. When half of the uranium-238 in the rock has decayed, ______ have passed since the volcano erupted.
a. 4.5 billion years
b. 5700 years

4. In step c of Experiment 3, the probe was used to measure uranium-238 in the tree. In step f of Experiment 3, the probe was used to measure carbon-14 in the rock. In both steps (c and f), the probe reading was _____.
a. 100%, then decreasing with time.
b. zero (0).
c. decreasing with time.

5. Uranium-238 must be used to measure the age of the rock, and not carbon-14, because:
a. the isotope carbon-14 did not exist when the rock was created.
b. rocks do not contain carbon-14, and even if they did, the carbon-14 would have likely decayed away prior to measuring the rock's age.
c. it is easier to measure uranium in rocks than it is carbon.

6. Based on your observations for Experiment 3, and the answers to your questions above, _______ should be used to determine the ages of once-living things, and _______ should be used to determine the ages of rocks.
a. carbon-14; uranium-238
b. uranium-238; carbon-14
c. either can be used for both objects.

Experiment 4

1. Briefly explain why neither carbon-14 nor uranium-238 could be used to determine the ages of the last 4 items in data table 3.

Innovations Perspective
Is AI the end of jobs or a new beginning?
By Vivek Wadhwa, May 31, 2017, Washington Post

[Photo: A visitor plays chess against a robot during a COMPUTEX event in Taiwan on May 30. This year's COMPUTEX focuses on artificial intelligence and robotics as well as virtual reality and the Internet of things. (Ritchie B. Tongo/European Pressphoto Agency)]

Artificial Intelligence (AI) is advancing so rapidly that even its developers are being caught off guard. Google co-founder Sergey Brin said in Davos, Switzerland, in January that it "touches every single one of our main projects, ranging from search to photos to ads … everything we do … it definitely surprised me, even though I was sitting right there." The long-promised AI, the stuff we've seen in science fiction, is coming, and we need to be prepared.

Today, AI is powering voice assistants such as Google Home, Amazon Alexa and Apple Siri, allowing them to have increasingly natural conversations with us and manage our lights, order food and schedule meetings. Businesses are infusing AI into their products to analyze vast amounts of data and improve decision-making. In a decade or two, we will have robotic assistants that remind us of Rosie from "The Jetsons" and R2-D2 of "Star Wars."

This has profound implications for how we live and work, for better and worse. AI is going to become our guide and companion — and take millions of jobs away from people. We can deny this is happening, be angry or simply ignore it. But if we do, we will be the losers. As I discussed in my new book, "Driver in the Driverless Car," technology is now advancing on an exponential curve and making science fiction a reality. We can't stop it. All we can do is to understand it and use it to better ourselves — and humanity.

Rosie and R2-D2 may be on their way, but AI is still very limited in its capability, and will be for a long time. The voice assistants are examples of what technologists call narrow AI: systems that are useful, can interact with humans and bear some of the hallmarks of intelligence — but would never be mistaken for a human. They can, however, do a better job on a very specific range of tasks than humans can. I couldn't, for example, recall the winning and losing pitcher in every baseball game of the major leagues from the previous night. Narrow-AI systems are much better than humans at accessing information stored in complex databases, but their capabilities exclude creative thought.
If you asked Siri to find the perfect gift for your mother for Valentine’s Day, she might make a snarky comment but couldn’t venture an educated guess. If you asked her to write your term paper on the Napoleonic Wars, she
couldn’t help. That is where the human element comes in and where the opportunities are for us to benefit from AI — and stay employed.

In his book “Deep Thinking: Where Machine Intelligence Ends and Human Creativity Begins,” chess grandmaster Garry Kasparov tells of his shock and anger at being defeated by IBM’s Deep Blue supercomputer in 1997. He acknowledges that he is a sore loser but was clearly traumatized by having a machine outsmart him. He was aware of the evolution of the technology but never believed it would beat him at his own game. Twenty years later, having come to grips with his defeat, he says fail-safes are required … but so is courage.

Kasparov wrote: “When I sat across from Deep Blue twenty years ago I sensed something new, something unsettling. Perhaps you will experience a similar feeling the first time you ride in a driverless car, or the first time your new computer boss issues an order at work. We must face these fears in order to get the most out of our technology and to get the most out of ourselves. Intelligent machines will continue that process, taking over the more menial aspects of cognition and elevating our mental lives toward creativity, curiosity, beauty, and joy. These are what truly make us human, not any particular activity or skill like swinging a hammer — or even playing chess.”

In other words, we had better get used to it and ride the wave.

Human superiority over animals is based on our ability to create and use tools. The mental capacity to make things that improved our chances of survival led to a natural selection of better toolmakers and tool users. Nearly everything a human does involves technology. For adding numbers, we used abacuses and mechanical calculators and now spreadsheets. To improve our memory, we wrote on stones, parchment and paper, and now have disk drives and cloud storage. AI is the next step in improving our cognitive functions and decision-making.
Think about it: When was the last time you tried memorizing your calendar or Rolodex or used a printed map? Just as we
instinctively do everything on our smartphones, we will rely on AI. We may have forfeited skills such as the ability to add up the price of our groceries, but we are smarter and more productive. With the help of Google and Wikipedia, we can be experts on any topic, and these don’t make us any dumber than encyclopedias, phone books and librarians did.

A valid concern is that dependence on AI may cause us to forfeit human creativity. As Kasparov observes, the chess programs on our smartphones are many times more powerful than the supercomputer that defeated him, yet this didn’t cause human chess players to become less capable — the opposite happened. There are now stronger chess players all over the world, and the game is played in a better way.

As Kasparov explains: “It used to be that young players might acquire the style of their early coaches. If you worked with a coach who preferred sharp openings and speculative attacking play himself, it would influence his pupils to play similarly. … What happens when the early influential coach is a computer? The machine doesn’t care about style or patterns or hundreds of years of established theory. It counts up the values of the chess pieces, analyzes a few billion moves, and counts them up again. It is entirely free of prejudice and doctrine. … The heavy use of computers for practice and analysis has contributed to the development of a generation of players who are almost as free of dogma as the machines with which they train.”

Perhaps this is the greatest benefit that AI will bring — humanity can be free of dogma and historical bias, and make more intelligent decisions. And instead of doing repetitive data analysis and number crunching, human workers can focus on enhancing their knowledge and being more creative.

https://www.technologyreview.com/s/607970/experts-predict-when-artificial-intelligence-will-exceed-human-performance/

Intelligent Machines
Experts Predict When Artificial Intelligence Will Exceed Human Performance
Trucking will be computerized long before surgery, computer scientists say.
by Emerging Technology from the arXiv · May 31, 2017

Artificial intelligence is changing the world and doing it at breakneck speed. The promise is that intelligent machines will be able to do every task better and more cheaply than humans. Rightly or wrongly, one industry after another is falling under its spell, even though few have benefited significantly so far. And that raises an interesting question: when will artificial intelligence exceed human performance? More specifically, when will a machine do your job better than you?

Today, we have an answer of sorts thanks to the work of Katja Grace at the Future of Humanity Institute at the University of Oxford and a few pals. To find out, these guys asked the experts. They surveyed the world’s leading researchers in artificial intelligence, asking them when they think intelligent machines will outperform humans in a wide range of tasks. And many of the answers are something of a surprise.

The experts that Grace and co co-opted were academics and industry experts who gave papers at the International Conference on Machine Learning in July 2015 and the Neural Information Processing Systems conference in December 2015. These are two of the most important events for experts in artificial intelligence, so it’s a good bet that many of the world’s experts were on this list. Grace and co asked them all—1,634 of them—to fill in a survey about when artificial intelligence would be better and cheaper than humans at a variety of tasks. Of these experts, 352 responded. Grace and co then calculated their median responses.

The experts predict that AI will outperform humans in the next 10 years in tasks such as translating languages (by 2024), writing high school essays (by 2026), and driving trucks (by
2027). But many other tasks will take much longer for machines to master. AI won’t be better than humans at working in retail until 2031, able to write a bestselling book until 2049, or capable of working as a surgeon until 2053.

The experts are far from infallible. They predicted that AI would be better than humans at Go by about 2027. (This was in 2015, remember.) In fact, Google’s DeepMind subsidiary has already developed an artificial intelligence capable of beating the best humans. That took two years rather than 12. It’s easy to think that this gives the lie to these predictions.

The experts go on to predict a 50 percent chance that AI will be better than humans at more or less everything in about 45 years. That’s the kind of prediction that needs to be taken with a pinch of salt. The 40-year prediction horizon should always raise alarm bells. According to some energy experts, cost-effective fusion energy is about 40 years away—but it always has been. It was 40 years away when researchers first explored fusion more than 50 years ago. But it has stayed a distant dream because the challenges have turned out to be more significant than anyone imagined.

Forty years is an important number when humans make predictions because it is the length of most people’s working lives. So any predicted change that is further away than that means the change will happen beyond the working lifetime of everyone who is working today. In other words, it cannot happen with any technology that today’s experts have any practical experience with. That suggests it is a number to be treated with caution.

But teasing apart the numbers shows something interesting. This 45-year prediction is the median figure from all the experts. Perhaps some subset of this group is more expert than the others? To find out if different groups made different predictions, Grace and co looked at how the predictions changed with the age of the researchers, the number of their citations (i.e., their
expertise), and their region of origin. It turns out that age and expertise make no difference to the prediction, but origin does. While North American researchers expect AI to outperform humans at everything in 74 years, researchers from Asia expect it in just 30 years. That’s a big difference that is hard to explain. And it raises an interesting question: what do Asian researchers know that North Americans don’t (or vice versa)?

Ref: arxiv.org/abs/1705.08807 : When Will AI Exceed Human Performance? Evidence from AI Experts
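The analysis the article describes — taking the median of expert responses per task, then breaking the medians down by subgroup such as region — can be sketched in a few lines. This is a minimal illustration only: the task names, regions, and predicted years below are made up for the example and are not the actual data from Grace et al.’s survey.

```python
from statistics import median
from collections import defaultdict

# Hypothetical survey responses: (respondent_region, task, predicted_year).
# Purely illustrative values, not the real survey data.
responses = [
    ("Asia", "translate languages", 2022),
    ("North America", "translate languages", 2024),
    ("Europe", "translate languages", 2026),
    ("Asia", "drive trucks", 2025),
    ("North America", "drive trucks", 2027),
    ("Europe", "drive trucks", 2029),
]

def median_by(responses, key):
    """Group predicted years by `key` and return each group's median."""
    groups = defaultdict(list)
    for region, task, year in responses:
        groups[key(region, task)].append(year)
    return {k: median(v) for k, v in sorted(groups.items())}

# Median prediction per task: how headline numbers like "trucks by 2027"
# are derived from many individual answers.
per_task = median_by(responses, lambda region, task: task)

# Median prediction per region: how a subgroup comparison (e.g. Asia vs.
# North America) can reveal very different expectations.
per_region = median_by(responses, lambda region, task: region)
```

The median, rather than the mean, keeps a few extreme answers (“never” or “next year”) from dominating the aggregate — which is presumably why it is the statistic the survey reports.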