SHOULD ALGORITHMS DECIDE YOUR FUTURE?
This publication was prepared by Kilian Vieth and Joanna Bronowicka of the Centre for Internet and Human Rights at European University Viadrina. It draws on the publication “The Ethics of Algorithms: from radical content to self-driving cars”, with contributions from Zeynep Tufekci, Jillian C. York, Ben Wagner and Frederike Kaltheuner, and on an event on the Ethics of Algorithms that took place on March 9-10, 2015 in Berlin. The research was supported by the Dutch Ministry of Foreign Affairs.
Find out more: cihr.eu/ethics-of-algorithms/
Follow the discussion on Twitter: #EoA2015
Graphic design by Thiago Parizi
cihr.eu @cihr_eu
WHAT IS AN ALGORITHM?
ALGORITHMS SHAPE OUR WORLD(S)!
Our everyday life is shaped by computers, and our computers are shaped by algorithms. Digital computation is constantly changing how we communicate, work, move, and learn. In short, digitally connected computers are changing how we live our lives. This revolution is unlikely to stop any time soon.
Digitalization produces ever-larger datasets, known as ‘big data’. So far, research has focused on how big data is produced and stored. Now, we are beginning to scrutinize how algorithms make sense of this growing amount of data.
Algorithms are the brains of our computers, our mobile phones, and the Internet of Things. They are increasingly used to make decisions for us, about us, or with us – often without us realizing it. This raises many questions about the ethical dimension of algorithms.
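To make this concrete, here is a deliberately simple, hypothetical sketch in Python of an algorithm making a decision about a person. The loan-approval scenario, the rules, and the thresholds are invented for illustration only; real systems are far more complex, but the principle is the same: a fixed set of instructions turns data about you into a decision.

    # Hypothetical sketch: a few lines of code decide whether a loan
    # applicant is approved, without any human looking at the case.
    # All rules and thresholds are invented for illustration.

    def approve_loan(income: float, existing_debt: float, years_employed: int) -> bool:
        """Return True if the applicant is automatically approved."""
        debt_ratio = existing_debt / max(income, 1.0)  # avoid division by zero
        score = 0
        if debt_ratio < 0.35:
            score += 2
        if years_employed >= 3:
            score += 1
        if income > 30_000:
            score += 1
        return score >= 3  # hard cut-off the applicant never gets to see

    print(approve_loan(income=55_000, existing_debt=10_000, years_employed=5))  # True
    print(approve_loan(income=28_000, existing_debt=12_000, years_employed=1))  # False

The applicant in the second case is rejected by a rule they cannot see, question, or appeal – which is exactly why such decisions raise ethical concerns.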
WHY DO ALGORITHMS RAISE ETHICAL CONCERNS?
First, let's take a closer look at some of the critical features of algorithms. What typical functions do they perform? What are their negative impacts on human rights? Here are some examples that probably affect you too.
THEY KEEP INFORMATION AWAY FROM US
Increasingly, algorithms decide what gets attention and what is ignored – and even what gets published at all and what is censored. This is true for all kinds of rankings, from search results to the way your social media newsfeed is ordered. In other words, algorithms perform a gate-keeping function.
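As a small illustration of this gate-keeping function, a newsfeed-style ranker can be sketched in a few lines of Python: every post gets a score, the list is sorted, and everything below the cut-off simply never reaches you. The features, weights, and two-post cut-off below are all invented assumptions, not a description of any real platform.

    # Hypothetical newsfeed gate-keeper: score each post, show only the top few.
    posts = [
        {"id": 1, "likes": 120, "from_close_friend": False, "paid_promotion": True},
        {"id": 2, "likes": 45,  "from_close_friend": True,  "paid_promotion": False},
        {"id": 3, "likes": 300, "from_close_friend": False, "paid_promotion": False},
        {"id": 4, "likes": 8,   "from_close_friend": True,  "paid_promotion": False},
    ]

    def rank(post: dict) -> float:
        # The choice of weights encodes editorial judgement about what matters.
        return (post["likes"] * 1.0
                + post["from_close_friend"] * 200
                + post["paid_promotion"] * 150)

    feed = sorted(posts, key=rank, reverse=True)[:2]  # everything else stays invisible
    print([p["id"] for p in feed])  # [3, 1] -- the paid post outranks the close friends' posts

Whoever sets those weights decides what you see; the users affected by the cut-off usually never know it exists.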
EXAMPLE
Hiring algorithms decide whether you are invited for an interview.
• Algorithms, rather than managers, increasingly take part in the hiring (and firing) of employees. Deciding who gets a job and who does not is among the most powerful gate-keeping functions in society.
• Research shows that human managers display many different biases in hiring decisions, for example based on social class, race, and gender. Clearly, human hiring systems are far from perfect.
• Nevertheless, we should not simply assume that algorithmic hiring can easily overcome human biases. Algorithms might work more accurately in some areas, but they can also create new, sometimes unintended, problems depending on how they are programmed and what input data is used – as the sketch below illustrates.
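The following sketch, using made-up data, shows one way this can happen: an algorithm that merely imitates past hiring decisions will faithfully reproduce the biases baked into those decisions, even when no protected attribute appears in the data. The feature names and numbers are purely illustrative and do not describe any real hiring system.

    # Hypothetical sketch of bias inherited from training data.
    # "employment_gap" acts as a proxy (e.g. for parental leave), so the
    # learned rule discriminates even though gender is never recorded.

    past_hires = [
        # (employment_gap_years, was_hired) -- historical decisions by biased managers
        (0, True), (0, True), (1, True), (0, True),
        (2, False), (3, False), (2, False), (1, False),
    ]

    def hire_rate(gap: int) -> float:
        """Fraction of past candidates with this employment gap who were hired."""
        matching = [hired for g, hired in past_hires if g == gap]
        return sum(matching) / len(matching) if matching else 0.0

    def algorithmic_decision(gap: int) -> bool:
        # "Learned" rule: invite only if similar past candidates were usually hired.
        return hire_rate(gap) >= 0.5

    print(algorithmic_decision(0))  # True  -- no career break, invited
    print(algorithmic_decision(2))  # False -- career break, rejected: the old bias returns

The algorithm looks objective because it only counts past outcomes, yet it quietly automates the very bias it was supposed to remove.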
Pick one environmental artist (eco-artist) that has been part of the modules so far. Do some outside research on them and explore their website and artworks. Choose one of their artworks to discuss, paying careful attention to choose one that has not mentioned so far in the readings/interviews/modules. Clearly state what materials and methods they use to make their artwork. Describe what influenced the artwork. Explain how this work can be or is considered environmental art? Use Linda Weintraub's readings to help you to define environmental art (eco-art) and defend your arguments as to how the work can be considered environmental art. Also look at other writings and articles to help understand and explain the context in which the artist makes their work.
How have different artists used walking to activate connections to land/landscape and how can artists represent an environmental issue through walking? Please name at least 2 specific artists we have discussed in class and cite examples of their walking pieces. Describe how these artists use embodiment in their work and are they effective in terms of heightening awareness.
InstructionsEffective communication skills can prevent many si.docxmaoanderton
Instructions
Effective communication skills can prevent many situations from escalating to a physical altercation. Officers revert to their training when involved in situations. For this assignment, prepare a PowerPoint training presentation for officers on how to communicate with hostile citizens. Include recommended techniques in verbal and nonverbal communications, along with how technology can play a role. Be sure to explore any legal and ethical considerations the officers must remember and do when dealing with hostile individuals.
Incorporate appropriate animations, transitions, and graphics as well as speaker notes for each slide. The speaker notes may be comprised of brief paragraphs or bulleted lists and should cite material appropriately. Add audio to each slide using the
Media
section of the
Insert
tab in the top menu bar for each slide.
Support your presentation with at least three scholarly resources
.
In addition to these specified resources, other appropriate scholarly resources may be included.
Length: 10-12 slides (with a separate reference slide)
Notes Length: 200-350 words for
each slide
.
InstructionsEcologyTo complete this assignment, complete the.docxmaoanderton
Instructions
Ecology
To complete this assignment, complete the steps below.
Download the Unit V Assignment Worksheet.
Save the document to your computer using your name and student ID in the file name.
Follow the directions to review and research the website.
After selecting and studying your species, answer the questions in the “What Information Did You Find?” section.
Once you have completed the worksheet, upload it to Blackboard for grading (make sure your name and student ID are provided).
.
InstructionsDevelop an iconographic essay. Select a work fro.docxmaoanderton
Instructions
Develop an iconographic essay. Select a work from this module to write the essay on. Utilize the objectives and above information to develop the statement. The essay must include a thesis statement/introduction, supporting body, and conclusion. The conclusion should provide a synthesis of the statements and thesis into a final idea about what the audience should remember and take away from the assignment.
List the objects and subjects included in the painting
Provide an iconographic definition (ala dictionary or glossary for each item)
Identify the narrative source for each item (artist invention, poem, narrative, biblical, greco-roman, other presence, real life historical or cultural artifact as symbol, other artworks and ideas as symbols, etc.) (
ex. Alexander the Great as a symbol; American Flag as a symbol; Greek mythology as symbols; The hammer and sickle in Russia, the american eagle, a soldier, etc.)
Is there more than one context for the iconographic or symbolic representation of the idea? How many contexts or roles is each symbolic object fulfilling in presenting, adding, or relaying meaning about the subject; (current age vs. past age; multiple metaphors; multiple layers of ideas etc.)
Symbols and Icons in different contexts - define how the symbols and subjects make one idea serve another idea (A Greek god is rebranded and used in Rome. What does this act mean for Rome to use a Greek Religious concept)
What do the symbols mean in the age, or multiple ages, and how are we supposed to connect to them in the context of the work? - ex. How are greco-roman subject and symbols used and perceived in a Renaissance age? Why would someone in the Renaissance care about Ancient Greco-Roman gods and subjects, what’s in it for a Renaissance Italian or Renaissance Italy for that matter?
Analysis through Iconology
This kind of analysis usually is most useful for narrative works and art before the Modern period. Non-objective art or art with arbitrary subjects (such as DADA) don't work as well with this kind of analysis because narratives and conventional symbols are not a part of these works. Here you will look for a particular element that occurs in the object (an object, action, gesture, pose) and explain either:
when that same element occurs in other objects through history and how this object’s representation of it is unique, or
what that element means generally in art or to art historians—in other words, the traditional association an art historian might make between that depiction and some other thing.
The following video provides an example of Iconological analysis. The video speaks to the meaning of the gestures, iconography, meaning of certain types of depiction, and other narrative imagery and symbolism related to those narrative elements. https://youtu.be/rKhfFBbVtFg
If you are confused, read Erwin Panofsky’s essays on iconology and iconography, in which he defines these terms more extensively. Be w.
InstructionsDEFINITION a brief definition of the key term fo.docxmaoanderton
Instructions:
DEFINITION:
a brief definition of the key term followed by the APA reference for the term; this does not count in the word requirement.
SUMMARY:
Summarize the article in your own words-this should be in the 150-200-word range. Be sure to note the article's author, note their credentials and why we should put any weight behind his/her opinions, research or findings regarding the key term.
DISCUSSION:
Using 300-350 words, write a brief discussion, in your own words of how the article relates to the selected weekly reading assignment Key Term. A discussion is not rehashing what was already stated in the article, but the opportunity for you to add value by sharing your experiences, thoughts and opinions. This is the most important part of the assignment.
REFERENCES:
All references must be listed at the bottom of the submission--in APA format.
.
InstructionsCreate a PowerPoint presentation of 15 slides (not c.docxmaoanderton
Instructions
Create a PowerPoint presentation of 15 slides (not counting title and reference slides) that provides an overview of the three major environmental, health, and safety (EHS) disciplines. Include each of the following elements:
summary of the responsibilities for the discipline,
evaluation of types of hazards addressed by the discipline,
description of how industrial hygiene practices relate to safety and health programs,
description of how industrial hygiene practices relate to environmental programs,
evaluation of types of control methods commonly used by the discipline,
interactions with the other two disciplines, and
major organizations associated with the discipline.
Construct your presentation using a serif type font such as Times New Roman. A serif type font is easier to read than a non-serif type font. For ease of reading, do not use a font smaller than 28 points.
.
Instructions
Cookie Creations (Continued)
Part I
Natalie is struggling to keep up with the recording of her accounting transactions. She is spending a lot of time marketing and selling mixers and giving her cookie classes. Her friend John is an accounting student who runs his own accounting service. He has asked Natalie if she would like to have him do her accounting.
John and Natalie meet and discuss her business. John suggests that he do the tasks listed below for Natalie.
Hold cash until there is enough to be deposited. (He would keep the cash locked up in his vehicle). He would also take all of the deposits to the bank at least twice a month.
Write and sign all of the checks.
Record all of the deposits in the accounting records.
Record all of the checks in the accounting records.
Prepare the monthly bank reconciliation.
Transfer all of Natalie’s manual accounting records to his computer accounting program. (John would maintain all of the accounting information that he keeps for his clients on his laptop computer.)
Prepare monthly financial statements for Natalie to review.
Write himself a check every month for the work he has done for Natalie.
For Part I of the assignment, identify the weaknesses in internal control that you see in the system that John is recommending. Can you suggest any improvements if Natalie hires John to do the accounting?
Part I should be a minimum of two pages in length. Please use APA format. While there are no required resources, please be sure that any sources used have proper citations.
Part II
Natalie decides that she cannot afford to hire John to do her accounting. One way that she can ensure that her cash account does not have any errors and is accurate and up-to-date is to prepare a bank reconciliation at the end of each month. Natalie would like you to help her. She asks you to prepare a bank reconciliation for June 2020 using the information below.
Additionally, take the following information into account.
On June 30th, there were two outstanding checks: #595 for $238 and #604 for $297.
Premier Bank made a posting error to the bank statement: Check #603 was issued for $425, not $452.
The deposit made on June 20 was for $125, which Natalie received for teaching a class. Natalie made an error in recording this transaction.
The electronic funds transfer (EFT) was for Natalie’s cell phone use. Remember that she uses this phone only for business.
The NSF check was from Ron Black. Natalie received this check for teaching a class to Ron’s children. Natalie contacted Ron, and he assured her that she will receive a check in the mail for the outstanding amount of the invoice and the NSF bank charge.
For Part II of the assignment, complete the tasks below.
Prepare Cookie Creations’ bank reconciliation for June 30.
Prepare any necessary adjusting entries at June 30.
If a balance sheet is prepared for Cookie Creations at June 30, what balance will be reported as cash in the Current Asse.
InstructionsCommunities do not exist in a bubble. Often changes .docxmaoanderton
Instructions
Communities do not exist in a bubble. Often changes made in the larger society, driven by technology, have an unexpected effect on local communities. Consider the effects of the advancements in transportation technologies on communities. Routes of transportation have evolved from water to train to the road and then to air. Each of these advancements led to job displacement and changes in travel routes. In the United States, prior to the mid-1800s, the communities that thrived had water access and ports where the exchange of goods and services occurred. With the building of the transcontinental railway and the growth of transportation by rail, the communities that thrived had train depots. With the building of major highways in the 1900s, access to those roads became critical to survival. With each advancement, local communities were impacted, such that many communities that grew around train depots became ghost towns full of poverty, homelessness, and despair once train travel was no longer the primary means of human transportation.
In this assignment, you are asked to create a presentation on one of the following topics:
Green Energy
Globalization
Communication Technology
Remote Elementary Education
Remote High School Education
Remote Work
You can create your presentation in your choice of presentation media. For example, you could choose to create a PowerPoint presentation, a video, or use Prezi. (refer to the Unit 1 assignment) If you choose to use video, you are required to supply a script as well as the URL of the video.
In your presentation, you must address the following:
Describe the specific technological development and its association with your chosen topic.
Explain why you chose that topic and the technological advancement associated with it.
Discuss which demographic (income, age, sexual preference, ethnicity) and/or geographic feature (urban, suburban, rural) might be most affected by the changes. Support your opinion with three (3) external references.
Explore the societal impacts (good and bad) associated with the technology. include a discussion for each of the following:
Economic impacts (unemployment, loss of revenue, poverty), include one (1) external reference
Health impacts (including mental health), include one (1) external reference
Privacy concerns, include one (1) external reference
Community life, include one (1) external reference
Be sure to use appropriate sources for the external references required for this assignment.
.
InstructionsChoose only ONE of the following options and wri.docxmaoanderton
Instructions
Choose only
ONE
of the following options and write a post that agrees OR disagrees with the assertion.
Cite specific scenes and/or use specific quotes
from the novel to support your position. Your answer should be written in no fewer than
200 words
.
When you are done posting your response, reply to at least
one classmate
in no fewer than
75 words
.
Although the novel is titled
Sula
, the real protagonist is Nel because she is the one who is transformed by the end.
80% - Thoughtful original post that includes specific scenes from the novel to support your position
(at least 200 words)
.
InstructionsChoose only ONE of the following options and.docxmaoanderton
Instructions
Choose only
ONE
of the following options and write a post that agrees OR disagrees with the assertion.
Cite specific scenes and/or use specific quotes
from the novel to support your position. Your answer should be written in no fewer than
200 words
.
When you are done posting your response, reply to at least
one classmate
in no fewer than
75 words
.
Although the novel is titled
Sula
, the real protagonist is Nel because she is the one who is transformed by the end.
80% - Thoughtful original post that includes specific scenes from the novel to support your position
(at least 200 words)
.
InstructionsBeginning in the 1770s, an Age of Revolution swep.docxmaoanderton
This document provides instructions for a student to write a 5-7 paragraph essay analyzing the outcomes and impacts of one of three suggested revolutions: the French Revolution, Haitian Revolution, or American Revolution. Students are asked to provide background on the revolution they choose, key events and turning points, and consider how the revolution impacted different groups of people in varying ways. They are to cite evidence from suggested sources using APA style citations and references.
This slide is special for master students (MIBS & MIFB) in UUM. Also useful for readers who are interested in the topic of contemporary Islamic banking.
Biological screening of herbal drugs: Introduction and Need for
Phyto-Pharmacological Screening, New Strategies for evaluating
Natural Products, In vitro evaluation techniques for Antioxidants, Antimicrobial and Anticancer drugs. In vivo evaluation techniques
for Anti-inflammatory, Antiulcer, Anticancer, Wound healing, Antidiabetic, Hepatoprotective, Cardio protective, Diuretics and
Antifertility, Toxicity studies as per OECD guidelines
A workshop hosted by the South African Journal of Science aimed at postgraduate students and early career researchers with little or no experience in writing and publishing journal articles.
How to Build a Module in Odoo 17 Using the Scaffold MethodCeline George
Odoo provides an option for creating a module by using a single line command. By using this command the user can make a whole structure of a module. It is very easy for a beginner to make a module. There is no need to make each file manually. This slide will show how to create a module using the scaffold method.
This presentation includes basic of PCOS their pathology and treatment and also Ayurveda correlation of PCOS and Ayurvedic line of treatment mentioned in classics.
Thinking of getting a dog? Be aware that breeds like Pit Bulls, Rottweilers, and German Shepherds can be loyal and dangerous. Proper training and socialization are crucial to preventing aggressive behaviors. Ensure safety by understanding their needs and always supervising interactions. Stay safe, and enjoy your furry friends!
How to Add Chatter in the odoo 17 ERP ModuleCeline George
In Odoo, the chatter is like a chat tool that helps you work together on records. You can leave notes and track things, making it easier to talk with your team and partners. Inside chatter, all communication history, activity, and changes will be displayed.
Ethical implications: Algorithms work as gatekeepers that influence how we perceive the world, often without us realizing it. They channel our attention, which implies tremendous power.
The term ‘algorithm’ refers to any computer code that carries out a set of instructions. Algorithms are essential to the way computers process data. Theoretically speaking, they are encoded procedures that transform data based on specific calculations. They consist of a series of steps that are undertaken to solve a particular problem, like in a recipe. An algorithm takes inputs (ingredients), breaks a task into its constituent parts, undertakes those tasks one by one, and then produces an output (e.g. a cake). A simple example of an algorithm is “find the largest number in this series of numbers”.
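To make the recipe analogy concrete, here is a minimal sketch of that last example written out as a program. Python is used purely for illustration; the publication itself does not tie the definition to any particular language.

    def largest_number(numbers):
        """Return the largest value in a non-empty series of numbers."""
        largest = numbers[0]        # take the first input as the starting point
        for n in numbers[1:]:       # work through the remaining inputs one by one
            if n > largest:         # compare each value with the current best
                largest = n
        return largest              # the output of the algorithm

    print(largest_number([3, 17, 8, 42, 5]))   # prints 42

Every step here is explicit and fixed in advance, which is what makes such a simple algorithm easy to inspect – unlike the more complex, self-adjusting systems discussed below.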
THEY MAKE SUBJECTIVE DECISIONS
Some algorithms deal with questions that do not have a clear ‘yes or no’ answer. Thus, they move away from a checkbox answer (“Is this right or wrong?”) to more complex judgements, such as “What is important? Who is the right person for the job? Who is a threat to public safety? Who should I date?” Quietly, these types of subjective decisions previously made by humans are turned over to algorithms.
EXAMPLE
Predictive policing based on statistical forecasting:
• In early 2014, the Chicago Police Department made national headlines in the US for visiting residents who were considered most likely to become involved in violent crime. The selection of individuals, who were not necessarily under investigation, was guided by a computer-generated “heat list” – an algorithm that seeks to predict future involvement in violent crime.
• A key concern about predictive policing is that such automated systems may create an echo chamber or a self-fulfilling prophecy. Heavy policing of a specific area increases the likelihood that crime will be detected: since more police means more opportunities to observe residents' activities, the algorithm might just confirm its own prediction.
• Right now, police departments around the globe are testing and implementing predictive policing algorithms, but lack safeguards against discriminatory biases.
Ethical implications: Predictions made by algorithms provide no guarantee of being right, and officials acting on incorrect predictions may even create unjustified or biased investigations.
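The self-fulfilling-prophecy concern can be illustrated with a toy simulation. The sketch below is a hypothetical model, not a description of any real police system: two districts have the same underlying crime rate, but patrols are allocated in proportion to previously detected crime, so an initial imbalance in the records keeps reproducing itself.

    import random
    random.seed(1)

    true_rate = {"district_a": 0.10, "district_b": 0.10}   # identical real crime rates
    detected = {"district_a": 12, "district_b": 8}         # a small historical imbalance

    for year in range(5):
        total = sum(detected.values())
        for district in detected:
            # Patrols follow past detections ...
            patrols = int(100 * detected[district] / total)
            # ... and more patrols mean more opportunities to record incidents.
            new_hits = sum(random.random() < true_rate[district] for _ in range(patrols))
            detected[district] += new_hits
        print(year, detected)

District A keeps receiving more patrols and logging more incidents than district B, even though nothing about the underlying behaviour differs – the forecast confirms itself.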
WE DON’T ALWAYS KNOW HOW THEY WORK
Complexity is huge: many present-day algorithms are very complicated and can be hard for humans to understand, even if their source code is shared with competent observers. What adds to the problem is opacity, as in a lack of transparency of the code. Algorithms perform complex calculations, which follow many potential steps along the way. They can consist of thousands, or even millions, of individual data points. Sometimes not even the programmers can predict how an algorithm will decide on a certain case.
EXAMPLE
Facebook's newsfeed algorithm is complex and opaque:
• Many users are not aware that when we open Facebook, an algorithm puts together the postings, pictures and ads that we see. Indeed, it is an algorithm that decides what to show us and what to hold back. Facebook's newsfeed algorithm filters the content you see, but do you know the principles it uses to hold back information from you?
• In fact, a team of researchers tweaks this algorithm every week – they take thousands and thousands of metrics into consideration. This is why the effects of newsfeed algorithms are hard to predict – even by Facebook engineers!
• If we asked Facebook how the algorithm works, they would not tell us. The principles behind the way the newsfeed works (the source code) are in fact a business secret. Without knowing the exact code, nobody can evaluate how your newsfeed is composed.
Ethical implications: Complex algorithms are often practically incomprehensible to outsiders, but they inevitably have values, biases, and potential discrimination built in.
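What such a ranking algorithm might look like can only be guessed at from the outside, which is precisely the point. The sketch below is an invented, deliberately simplified scoring function – not Facebook's actual code – showing how several signals and hand-tuned weights can feed into a single ordering decision, and why changing the weights quietly reshuffles what users see. All field names and numbers are assumptions.

    # Hypothetical newsfeed scoring: every signal name and weight here is invented.
    WEIGHTS = {
        "friend_closeness": 3.0,
        "recency_hours":   -0.5,
        "likes":            0.2,
        "is_advertiser":    1.5,
    }

    def score(post):
        """Combine many signals about a post into one ranking score."""
        return sum(WEIGHTS[name] * post.get(name, 0.0) for name in WEIGHTS)

    posts = [
        {"id": "wedding photo",  "friend_closeness": 2.0, "recency_hours": 6, "likes": 40},
        {"id": "sponsored post", "is_advertiser": 1, "recency_hours": 1, "likes": 5},
    ]
    for post in sorted(posts, key=score, reverse=True):
        print(post["id"], round(score(post), 2))

A real system reportedly weighs thousands of such metrics, and the weights are adjusted continuously, which is why even its own engineers struggle to predict the outcome for any individual user.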
CORE ISSUES – SHOULD AN ALGORITHM DECIDE YOUR FUTURE?
Without the help of algorithms, many present-day applications would be unusable. We need them to cope with the enormous amounts of data we produce every day. Algorithms make our lives easier and more productive, and we certainly don't want to lose those advantages. But we need to be aware of what they do and how they decide.
THEY CAN DISCRIMINATE AGAINST YOU, JUST LIKE HUMANS
Computers are often regarded as objective and rational machines. However, algorithms are made by humans and can be just as biased. We need to be critical of the assumption that algorithms can make “better” decisions than human beings. There are racist algorithms and sexist ones. Algorithms are not neutral; rather, they perpetuate the prejudices of their creators. Their creators, such as businesses or governments, can have different goals in mind than the users would have.
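One common route for such bias is worth spelling out. The example below is purely illustrative, not taken from any documented system: if a scoring rule is "learned" from historical decisions that were already skewed, it will faithfully reproduce the skew while looking perfectly objective.

    # Toy illustration: a hiring score derived from biased historical decisions.
    # All data, group labels, and field names are invented for the example.
    past_hires = [
        {"experience": 5, "group": "A", "hired": 1},
        {"experience": 5, "group": "B", "hired": 0},   # equally qualified, rejected
        {"experience": 3, "group": "A", "hired": 1},
        {"experience": 7, "group": "B", "hired": 0},
    ]

    def hire_rate(group):
        """The simplest possible 'model': the historical hire rate per group."""
        rows = [r for r in past_hires if r["group"] == group]
        return sum(r["hired"] for r in rows) / len(rows)

    def predicted_score(candidate):
        # The rule never mentions prejudice, yet group membership decides everything.
        return hire_rate(candidate["group"])

    print(predicted_score({"experience": 6, "group": "A"}))   # 1.0
    print(predicted_score({"experience": 6, "group": "B"}))   # 0.0

Nothing in the code says "discriminate", but the output does – which is why neutral-looking automation is no guarantee of neutral decisions.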
THEY MUST BE KNOWN TO THE USER
Since algorithms make increasingly important decisions about our lives, users need to be informed about them. Knowledge about automated decision-making in everyday services is still very limited among consumers. Raising awareness should be at the heart of the debate about the ethics of algorithms.
PUBLIC POLICY APPROACHES TO REGULATE ALGORITHMS
How do you regulate a black box? We need to open a discussion on how policy-makers are trying to deal with ethical concerns around algorithms. There have been some attempts to provide algorithmic accountability, but we need better data and more in-depth studies.
Where algorithms are written and used by corporations, it is government institutions like antitrust or consumer protection agencies that should provide appropriate regulation and oversight. But who regulates the use of algorithms by the government itself? For cases like predictive policing, ethical standards and legal safeguards are needed.
Recently, regulatory approaches to algorithms have circled around transparency, notification, and direct regulation. Yet experience shows that policy-makers face certain dilemmas of regulation when it comes to algorithms.
TRANSPARENCY | MAKE OPAQUE BIASES VISIBLE
If you are faced with a complex and obscure algorithm, one common reaction is to demand more transparency about what it does and how it works. The concern about black-box algorithms is that they make inherently subjective decisions, which might contain implicit or explicit biases. At the same time, making complex algorithms fully transparent can be extremely challenging:
• It is not enough to merely publish the source code of an algorithm, because machine-learning systems will inevitably make decisions that have not been programmed directly (see the sketch after this list). Complete transparency would require that we are able to explain why any particular outcome was produced.
• Some investigations have reverse-engineered algorithms in order to create greater public awareness about them. That is one way the public can perform a watchdog function.
• Often, there might be good reasons why complex algorithms operate opaquely, because public access would make them much more vulnerable to manipulation. If every company knew how Google ranks its search results, it could optimize its behavior and render the ranking algorithm useless.
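The first bullet deserves a concrete illustration. In the hypothetical snippet below, the published "source code" is only a few lines, yet it says almost nothing about how the system will actually decide: the behaviour lives in learned weights that come from training data, not from anything a reviewer can read in the code. All names and numbers are assumptions for the sake of the example.

    # The entire "algorithm" as it might be published:
    def decide(features, weights):
        """Approve if the weighted sum of the input features is positive."""
        score = sum(w * x for w, x in zip(weights, features))
        return score > 0

    # The behaviour, however, depends on weights learned elsewhere from data.
    weights_from_training = [0.8, -1.9, 0.3]    # invented values for illustration

    applicant = [1.0, 0.7, 0.2]
    print(decide(applicant, weights_from_training))   # False - but why?

Reading decide() tells us the system computes a weighted sum; it does not tell us why this applicant was rejected, because the decisive information is encoded in numbers that were never "programmed directly".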
NOTIFICATION | GIVING USERS THE RIGHT TO KNOW
A different form of transparency is to give consumers control over the personal information that feeds into algorithms. Notification includes the right to correct that personal information and to demand that it be excluded from the databases of data vendors. Regaining control over your personal information ensures accountability to the users.
DIRECT REGULATION | WHEN ALGORITHMS BECOME CRITICAL INFRASTRUCTURE
• In some cases, public regulators have sought ways to manipulate algorithms directly. This is especially relevant for core infrastructure. Debate about algorithmic regulation is most advanced in the area of finance: automated high-speed trading has potentially destabilizing effects on financial markets, so regulators have begun to demand the ability to modify these algorithms.
• The ongoing antitrust investigations into Google's ‘search neutrality’ revolve around the same question: may regulators require access to, and modification of, the search algorithm in the interest of the public? This approach is based on a contested assumption that it is possible to predict objectively how a certain algorithm will respond. Yet there is simply no ‘right’ answer to how Google should rank its results. Antitrust agencies in the US and the EU have not yet found a regulatory response to this issue.
In some cases direct regulation or complete and public transparency might be necessary. However, there is no one-size-fits-all regulatory response. More scrutiny of algorithms must be enabled, which requires new practices from industry and technologists. More consumer protection and direct regulation should be introduced where appropriate.
26. WHY DO ALGORITHMS RAISE ETHICAL
CONCERNS?
First, let's have a closer look at some of the critical features of
algorithms.
What are typical functions they perform? What are negative
impacts for
human rights? Here are some examples that probably affect you
too.
THEY KEEP INFORMATION AWAY FROM US
Increasingly, algorithms decide what gets attention, and what is
ignored;
and even what gets published at all, and what is censored. This
is true for
all kinds of search rankings, for example the way your social
media news-
feed looks. In other words, algorithms perform a gate-keeping
function.
EXAMPLE
Hiring algorithms decide if you are invited for an interview.
• Algorithms, rather than managers, are more and more taking
part in
hiring (and firing) of employees. Deciding who gets a job and
who does
not, is among the most powerful gate-keeping function in
society.
27. • Research shows that human managers display many different
biases in
hiring decisions, for example based on social class, race and
gender.
Clearly, human hiring systems are far from perfect.
• Nevertheless, we may not simply assume that algorithmic
hiring can
easily overcome human biases. Algorithms might work more
accurate
in some areas, but can also create new, sometimes unintended,
prob-
lems depending on how they are programmed and what input
data is
used.
Ethical implications: Algorithms work as gatekeepers that
influence how
we perceive the world, often without us realizing it. They
channel our
attention, which implies tremendous power.
THEY MAKE SUBJECTIVE DECISIONS
Some algorithms deal with questions, which do not have a clear
‘yes or no’
28. answer. Thus, they move a way from a checkbox answer “Is this
right or
wrong?” to more complex judgements, such as “What is
important? Who is
the right person for the job? Who is a threat to public safety?
Who should I
date?” Quietly, these types of subjective decisions previously
made by
humans are turned over to algorithms.
EXAMPLE
Predictive policing based on statistical forecasting:
• In early 2014, the Chicago Police Department made national
headlines
in the US for visiting residents who were considered to be most
likely
involved in violent crime. The selection of individuals, wo were
not
necessarily under investigation, was guided by a computer-
generated
“heat list” – an algorithm that seeks to predict future
involvement in
violent crime.
• A key concern about predictive policing is that such
automated systems
29. may create an echo chamber or a self-fulfilling prophecy. In
fact, heavy
policing of a specific area can increase the likelihood that crime
will be
detected. Since more police means more opportunities to
observe
residents' activities, the algorithm might just confirm its own
predic-
tion.
• Right now, police departments around the globe are testing
and imple-
menting predictive policing algorithms, but lack safeguards for
discrim-
inatory biases.
Ethical implications: Predictions made by algorithms provide
no guaran-
tee that they are right. And officials acting on incorrect
predictions may
even create unjustified or biased investigations.
5 | ETHICS OF ALGORITHMS ETHICS OF ALGORITHMS |
6
CORE ISSUES – SHOULD AN ALGORITHM
DECIDE YOUR FUTURE?
30. Without the help of algorithms, many present-day applications
would be
unusable. We need them to cope with the enormous amounts of
data we
produce every day. Algorithms make our lives easier and more
productive,
and we certainly don't want to lose those advantages. But we
need to be
aware of what they do and how they decide.
THEY CAN DISCRIMINATE AGAINST YOU,
JUST LIKE HUMANS
Computers are often regarded as objective and rational
machines. Howev-
er, algorithms are made by humans and can be just as biased.
We need to
be critical of the assumption, that algorithms can make “better”
decisions
than human beings. There are racist algorithms and sexist ones.
Algorithms
are not neutral, but rather they perpetuate the prejudices of their
creators.
Their creators, such as businesses or governments, can
potentially have
different goals in mind than the users would have.
31. THEY MUST BE KNOWN TO THE USER
Since algorithms make increasingly important decisions about
our lives,
users need to be informed about them. Knowledge about
automated
decision-making in everyday services is still very limited
among consumers.
Raising awareness should be at the heart of the debate about
ethics of
algorithms.
PUBLIC POLICY APPROACHES TO REGULATE
ALGORITHMS
How do you regulate a black box? We need to open a discussion
on how
policy-makers are trying to deal with ethical concerns around
algorithms.
There have been some attempts to provide algorithmic
accountability, but
we need better data and more in-depth studies.
If algorithms are written and used by corporations, it is
government
institutions like antitrust or consumer protection agencies, who
should
32. provide appropriate regulation and oversight. But who regulates
the use of
algorithms by the government itself? For cases like predictive
policing
ethical standards and legal safeguards are needed.
Recently, regulatory approaches to algorithms have circled
around trans-
parency, notification, and direct regulation. Yet, experience
shows that
policy-makers are facing certain dilemmas of regulation when it
comes to
algorithms.
TRANSPARENCY | MAKE OPAQUE BIASES VISIBLE
If you are faced with a complex and obscure algorithm, one
common
reaction is a demand for more transparency about what and how
it works.
The concern about black-box algorithms is that they make
inherently
subjective decisions, which might contain implicit or explicit
biases. At the
same time, making complex algorithms fully transparent can be
extremely
33. challenging:
• It is not enough to merely publish the source code of an
algorithm,
because machine-learning systems will inevitably make
decisions that
have not been programmed directly. Complete transparency
would
require that we are able to explain why any particular outcome
was
produced.
• Some investigations have reverse-engineered algorithms in
order to
create greater public awareness about them. That is one way
how the
public can perform a watchdog function.
• Often, there might be good reasons why complex algorithms
operate
opaquely, because public access would make them much more
vulnera-
ble to manipulation. If every company knew how Google ranks
its
search results, it could optimize their behavior and render the
ranking
34. algorithm useless.
NOTIFICATION | GIVING USERS THE RIGHT TO KNOW
A different form of transparency is to give consumers control
over their
personal information that feeds into algorithms. Notification
includes the
rights to correct that personal information and demand it be
excluded
from databases of data vendors. Regaining control over your
personal
information ensures accountability to the users.
DIRECT REGULATION | WHEN ALGORITHMS BECOME
CRITICAL INFRASTRUCTURE
• In some cases public regulators have been prone to create
ways to
manipulate algorithms directly. This is especially relevant for
core
infrastructure.
Debate about algorithmic regulation is most advanced in the
area of
finance. Automated high-speed trading has potentially
destabilizing
effects on financial markets, so regulators have begun to
demand the
35. ability to modify these algorithms.
• The ongoing antitrust investigations into Google's ‘search
neutrality’
revolve around the same question: can regulators may require
access to
and modification of the search algorithm in the interest of the
public?
This approach is based on a contested assumption that it is
possible to
predict objectively how a certain algorithms will respond. Yet,
there is
simply no ‘right’ answer to how Google should rank its results.
Antitrust
agencies in the US and the EU have not yet found an regulatory
response to this issue.
In some cases direct regulation or complete and public
transparency might
be necessary. However, there is no one-size-fits-all regulatory
response.
More scrutiny of algorithms must be enabled, which requires
new practices
from industry and technologists. More consumer protection and
direct
36. regulation should be introduced where appropriate.
WHY DO ALGORITHMS RAISE ETHICAL
CONCERNS?
First, let's have a closer look at some of the critical features of
algorithms.
What are typical functions they perform? What are negative
impacts for
human rights? Here are some examples that probably affect you
too.
THEY KEEP INFORMATION AWAY FROM US
Increasingly, algorithms decide what gets attention, and what is
ignored;
and even what gets published at all, and what is censored. This
is true for
all kinds of search rankings, for example the way your social
media news-
feed looks. In other words, algorithms perform a gate-keeping
function.
EXAMPLE
Hiring algorithms decide if you are invited for an interview.
• Algorithms, rather than managers, are more and more taking
part in
hiring (and firing) of employees. Deciding who gets a job and
37. who does
not, is among the most powerful gate-keeping function in
society.
• Research shows that human managers display many different
biases in
hiring decisions, for example based on social class, race and
gender.
Clearly, human hiring systems are far from perfect.
• Nevertheless, we may not simply assume that algorithmic
hiring can
easily overcome human biases. Algorithms might work more
accurate
in some areas, but can also create new, sometimes unintended,
prob-
lems depending on how they are programmed and what input
data is
used.
Ethical implications: Algorithms work as gatekeepers that
influence how
we perceive the world, often without us realizing it. They
channel our
attention, which implies tremendous power.
THEY MAKE SUBJECTIVE DECISIONS
38. Some algorithms deal with questions, which do not have a clear
‘yes or no’
answer. Thus, they move a way from a checkbox answer “Is this
right or
wrong?” to more complex judgements, such as “What is
important? Who is
the right person for the job? Who is a threat to public safety?
Who should I
date?” Quietly, these types of subjective decisions previously
made by
humans are turned over to algorithms.
EXAMPLE
Predictive policing based on statistical forecasting:
• In early 2014, the Chicago Police Department made national
headlines
in the US for visiting residents who were considered to be most
likely
involved in violent crime. The selection of individuals, wo were
not
necessarily under investigation, was guided by a computer-
generated
“heat list” – an algorithm that seeks to predict future
involvement in
39. violent crime.
• A key concern about predictive policing is that such
automated systems
may create an echo chamber or a self-fulfilling prophecy. In
fact, heavy
policing of a specific area can increase the likelihood that crime
will be
detected. Since more police means more opportunities to
observe
residents' activities, the algorithm might just confirm its own
predic-
tion.
• Right now, police departments around the globe are testing
and imple-
menting predictive policing algorithms, but lack safeguards for
discrim-
inatory biases.
Ethical implications: Predictions made by algorithms provide
no guaran-
tee that they are right. And officials acting on incorrect
predictions may
even create unjustified or biased investigations.
CORE ISSUES – SHOULD AN ALGORITHM
40. DECIDE YOUR FUTURE?
Without the help of algorithms, many present-day applications
would be
unusable. We need them to cope with the enormous amounts of
data we
produce every day. Algorithms make our lives easier and more
productive,
and we certainly don't want to lose those advantages. But we
need to be
aware of what they do and how they decide.
THEY CAN DISCRIMINATE AGAINST YOU,
JUST LIKE HUMANS
Computers are often regarded as objective and rational
machines. Howev-
er, algorithms are made by humans and can be just as biased.
We need to
be critical of the assumption, that algorithms can make “better”
decisions
than human beings. There are racist algorithms and sexist ones.
Algorithms
are not neutral, but rather they perpetuate the prejudices of their
creators.
Their creators, such as businesses or governments, can
potentially have
41. different goals in mind than the users would have.
THEY MUST BE KNOWN TO THE USER
Since algorithms make increasingly important decisions about
our lives,
users need to be informed about them. Knowledge about
automated
decision-making in everyday services is still very limited
among consumers.
Raising awareness should be at the heart of the debate about
ethics of
algorithms.
PUBLIC POLICY APPROACHES TO REGULATE
ALGORITHMS
How do you regulate a black box? We need to open a discussion
on how
policy-makers are trying to deal with ethical concerns around
algorithms.
There have been some attempts to provide algorithmic
accountability, but
we need better data and more in-depth studies.
If algorithms are written and used by corporations, it is
government
institutions like antitrust or consumer protection agencies, who
should
42. provide appropriate regulation and oversight. But who regulates
the use of
algorithms by the government itself? For cases like predictive
policing
ethical standards and legal safeguards are needed.
Recently, regulatory approaches to algorithms have circled
around trans-
parency, notification, and direct regulation. Yet, experience
shows that
policy-makers are facing certain dilemmas of regulation when it
comes to
algorithms.
TRANSPARENCY | MAKE OPAQUE BIASES VISIBLE
If you are faced with a complex and obscure algorithm, one
common
reaction is a demand for more transparency about what and how
it works.
The concern about black-box algorithms is that they make
inherently
subjective decisions, which might contain implicit or explicit
biases. At the
same time, making complex algorithms fully transparent can be
extremely
43. challenging:
• It is not enough to merely publish the source code of an
algorithm,
because machine-learning systems will inevitably make
decisions that
have not been programmed directly. Complete transparency
would
require that we are able to explain why any particular outcome
was
produced.
• Some investigations have reverse-engineered algorithms in
order to
create greater public awareness about them. That is one way
how the
public can perform a watchdog function.
• Often, there might be good reasons why complex algorithms
operate
opaquely, because public access would make them much more
vulnera-
ble to manipulation. If every company knew how Google ranks
its
search results, it could optimize their behavior and render the
ranking
44. algorithm useless.
NOTIFICATION | GIVING USERS THE RIGHT TO KNOW
A different form of transparency is to give consumers control
over their
personal information that feeds into algorithms. Notification
includes the
rights to correct that personal information and demand it be
excluded
from databases of data vendors. Regaining control over your
personal
information ensures accountability to the users.
DIRECT REGULATION | WHEN ALGORITHMS BECOME
CRITICAL INFRASTRUCTURE
• In some cases public regulators have been prone to create
ways to
manipulate algorithms directly. This is especially relevant for
core
infrastructure.
Debate about algorithmic regulation is most advanced in the
area of
finance. Automated high-speed trading has potentially
destabilizing
effects on financial markets, so regulators have begun to
45. demand the
ability to modify these algorithms.
• The ongoing antitrust investigations into Google's ‘search
neutrality’
revolve around the same question: can regulators may require
access to
and modification of the search algorithm in the interest of the
public?
This approach is based on a contested assumption that it is
possible to
predict objectively how a certain algorithms will respond. Yet,
there is
simply no ‘right’ answer to how Google should rank its results.
Antitrust
agencies in the US and the EU have not yet found an regulatory
response to this issue.
In some cases direct regulation or complete and public transparency might be necessary. However, there is no one-size-fits-all regulatory response. More scrutiny of algorithms must be enabled, which requires new practices from industry and technologists. More consumer protection and direct regulation should be introduced where appropriate.
EDS 1021 Week 6 Interactive Assignment
Radiometric Dating
Objective: Using a simulated practical application, explore the
concepts of radioactive decay and the half-life of
radioactive elements, and then apply the concept of radiometric
dating to estimate the age of various objects.
Background: Review the topics Half-Life, Radiometric Dating,
and Decay Chains in Chapter 12 of The Sciences.
Instructions:
1. PRINT a hard copy of this entire document, so that the
experiment instructions may be easily referred to,
and the data tables and questions (on the last three pages) can
be completed as a rough draft.
2. Download the Radiometric Dating Game Answer Sheet from
the course website. Transfer your data
values and question answers from the completed rough draft to
the answer sheet. Be sure to put your
NAME on the answer sheet where indicated. Save your
completed answer sheet on your computer.
3. SUBMIT ONLY the completed answer sheet, by uploading
your file to the digital drop box for the
assignment.
Introduction to the Simulation
1. After reviewing the background information for this
assignment, go to the website for the interactive
simulation “Radioactive Dating Game” at
http://phet.colorado.edu/en/simulation/radioactive-dating-game.
Click on DOWNLOAD to run the simulation locally on your
computer.
2. Software Requirements: You must have the latest version of
Java software (free) loaded on your computer
to run the simulation. If you do not or are not sure if you have
the latest version, go to
http://www.java.com/en/download/index.jsp .
3. Explore and experiment on the 4 different “tabs” (areas) of
the simulation. While playing around, think about
how the concepts of radioactive decay are being illustrated in
the simulation.
Half Life Tab – observe a sample of radioactive atoms decaying
- Carbon-14, Uranium-238, or ? (a custom-
made radioactive atom). Clicking on the “add 10” button adds
10 atoms at a time to the “decay area”. There
are a total of 100 atoms in the bucket, so clicking the “add 10”
button 10 times will empty the bucket into the
decay area. Observe the pie chart and time graph as atoms
decay. You can PAUSE, STEP (buttons at the
bottom of the screen) the simulation in time as atoms are
decaying, and RESET the simulation.
Decay Rates Tab – Similar to the half-life tab, but different!
Atom choices are carbon-14 and uranium-238.
The bucket has a total of 1000 atoms. Drag the slide bar on the
bucket to the right to increase the number
of atoms added to the decay area. Observe the pie chart and
time graph as atoms decay. Note that the
graph for the Decay Rates tab provides different information
than the graph for the Half Life tab. You can
PAUSE, STEP (buttons at the bottom of the screen) the
simulation in time as atoms are decaying, and
RESET the simulation.
Measurement Tab – Use a probe to virtually measure
radioactive decay within an object - a tree or a
volcanic rock. The probe can be set to detect either the decay of
carbon-14 atoms, or the decay of uranium-
238 atoms. Follow prompts on the screen to run a simulation of
a tree growing and dying, or of a volcano
erupting and creating a rock, and then measuring the decay of
atoms within each object.
Dating Game Tab – Use a probe to virtually measure the
percentage of radioactive atoms remaining within
various objects and, knowing the half-life of radioactive
elements being detected, estimate the ages of
objects. The probe can be set to either detect carbon-14,
uranium-238, or other “mystery” elements, as
appropriate for determining the age of the object. Drag the
probe over an object, select which element to
measure, and then slide the arrow on the graph to match the
percentage of atoms measured by the probe.
The time (t) shown for the matching percentage can then be
entered as the estimate in years of the object’s
age.
After playing around with the simulation, conduct the following
four (4) short experiments. As you conduct
the experiments and collect data, fill in the data tables and
answer the questions on the last three pages of
this document.
Experiment 1: Half Life
1. Click on the Half Life tab at the top of the simulation screen.
2. Procedure:
Part I - Carbon-14
a. Click the blue “Pause” button at the bottom of the screen
(i.e., set it so that it shows the “play” arrow).
Click the “Add 10” button below the “Bucket o’ Atoms”
repeatedly, until there are no more atoms left in
the bucket. There are now 100 carbon-14 atoms in the decay
area.
b. The half-life of carbon-14 is about 5700 years. Based on the
definition of half-life, if you left these 100
carbon-14 atoms to sit around for 5700 years, what is your
prediction of how many carbon-14 atoms will
decay into the stable element nitrogen-14 during that time?
Write your prediction in the “prediction”
column for the row labeled “carbon-14”, in data table 1. (A short sketch of the decay law behind this prediction follows the Experiment 1 instructions.)
c. Click the blue “Play” arrow at the bottom of the screen. As
the simulation runs, carefully observe what
is happening to the carbon-14 atoms in the decay area, and the
graphs at the top of the screen (both
the pie chart and the time graph). Once all atoms have decayed
into the stable isotope nitrogen-14,
click the blue “Pause” button at the bottom of the screen (i.e.,
set it so that it shows the “play” arrow),
and “Reset All Nuclei” button in the decay area.
d. Repeat step c. until you have a good idea of what is going on
in this simulation.
e. Repeat step c. again, but this time, watch the graph at the top
of the window carefully, and click
“pause” when TIME reaches 5700 years (when the carbon-14
atom moving across the graph reaches
the red dashed line labeled HALF LIFE on the TIME graph).
f. If you do NOT pause the simulation on or very close to the
red dashed line, click the “Reset All Nuclei”
button and repeat step e.
g. Once you have paused the simulation in the correct spot, look
at the pie graph and determine the
number of nuclei that have decayed into nitrogen-14 at Time =
HALF LIFE. Write this number in data
table 1, in the row labeled “carbon-14”, under “trial 1”.
h. Click the “Reset All Nuclei” button in the decay area.
i. Repeat steps e through h for two more trials. For each trial,
write down in data table 1, the number of
nuclei that have decayed into nitrogen-14 at Time = HALF
LIFE, in the row labeled “carbon-14”, under
“trial 2” and “trial 3” respectively.
Part II – Uranium-238
a. Click “reset all” on the right side of the screen in the Decay
Isotope box, and click “yes” in the box that
pops up. Click on the radio button for uranium-238 in the Decay
Isotope box. Click the blue “Pause”
button at the bottom of the screen (i.e., set it so that it shows
the “play” arrow). Click the “Add 10”
button below the “Bucket o’ Atoms” repeatedly, until there are
no more atoms left in the bucket. There
are now 100 uranium-238 atoms in the decay area.
b. The half-life of uranium-238 is 4.5 billion years!* Based on
the definition of half-life, if you left these 100
uranium-238 atoms to sit around for 4.5 billion years, what is
your prediction of how many uranium
atoms will decay into lead-206 during that time? Write your
prediction in the “prediction” column for the
second row, labeled “uranium-238”, in data table 1.
c. Click the “Play” button at the bottom of the window. Watch
the graph at the top of the window
carefully, and click “pause” when the time reaches 4.5 billion
years (when the uranium-238 atom
moving across the graph reaches the red dashed line labeled
HALF LIFE on the time graph).
d. If you don’t pause the simulation on or very close to the red
line, click the “Reset All Nuclei” button and
repeat step c.
e. Once you have paused the simulation in the correct spot, look
at the pie graph and determine the
number of nuclei that have decayed into lead-206 at Time =
HALF LIFE. Write this number in data table
1, in the row labeled “uranium-238”, under “trial 1”.
f. Click the “Reset All Nuclei” button in the decay area.
g. Repeat steps c through e for two more trials. For each trial,
write down in data table 1, the number of
nuclei that have decayed into lead-206 at Time = HALF LIFE,
in the row labeled “uranium-238”, under
“trial 2” and “trial 3” respectively.
3. Calculate the average number of atoms that decayed for all
three trials of carbon-14 decay. Do the same for
all three trials of uranium-238 decay. Write the average values for
each element under “averages”, the last
column in data table 1.
4. Answer the five (5) questions for Experiment 1 on the
Questions page.
* Unlike Carbon-14, which undergoes only one radioactive
decay to reach the stable nitrogen-14, uranium-238 undergoes
MANY decays into many intermediate unstable elements before
finally getting to the stable element lead-206 (see the decay
chain for uranium-238 in chapter 12 for details).
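As a brief aside for readers working through the predictions above, the behavior the Half Life tab illustrates follows the standard radioactive decay law: a sample of N0 atoms with half-life T is expected to have about N0 × (1/2)^(t/T) undecayed atoms left after time t. Below is a minimal sketch using the isotope values quoted in this assignment; the function and variable names are illustrative only.

```python
# Sketch of the decay law behind the Half Life and Decay Rates tabs:
# expected number of undecayed atoms after time t for half-life T.
def atoms_remaining(n0: float, t_years: float, half_life_years: float) -> float:
    return n0 * 0.5 ** (t_years / half_life_years)

# Values quoted in this assignment: one half-life leaves about half the sample.
print(atoms_remaining(100, 5_700, 5_700))      # carbon-14 after 5,700 yr: 50.0
print(atoms_remaining(100, 4.5e9, 4.5e9))      # uranium-238 after 4.5 billion yr: 50.0
print(atoms_remaining(100, 3 * 5_700, 5_700))  # after 3 half-lives: 12.5
```

Individual atoms still decay at random, so the counts recorded in each trial of Data Table 1 will scatter around this expected value rather than match it exactly.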
Experiment 2: Decay Rates
1. Set Up: Click on the Decay Rates tab at the top of the
simulation screen.
2. Procedure
Part I – Carbon-14
a. In the Choose Isotope area on the right side of the screen,
click the button next to carbon-14. Recall
that carbon-14 has a half-life of about 5700 years.
b. Drag the slide bar on the bucket of atoms all the way to the
right. This will put 1,000 radioactive atoms
into the decay area. When you let go of the slide bar, the
simulation will start right away. If you missed
seeing the simulation run from the start, start it over again by
clicking “Reset All Nuclei”, and keep your
eyes on the screen. Watch the graph at the bottom of the screen
as you allow the simulation to run until
all atoms have decayed.
c. Carefully interpret the graph to obtain the data requested to
fill in data table 2 for “carbon-14”.
Part II – Uranium-238
a. In the Choose Isotope area on the right side of the screen,
click the button next to uranium-238.
Recall that the half-life of uranium-238 is about 4.5 billion
years.
b. Repeat step b. of part I for uranium-238.
c. Carefully interpret the graph to obtain the data requested to
fill in data table 2 for “uranium-238”.
3. Answer the two (2) questions for Experiment 2 on the
Questions page.
Experiment 3: Measurement
1. Set Up: Click on the Measurement tab at the top of the
simulation screen.
2. Procedure
a. In Choose an Object, click on the button for Tree. In the
Probe Type box, click on the buttons for
“carbon-14”, and “Objects”.
b. Click “Plant Tree” at the bottom of the screen. Observe that
the tree grows and lives for about 1200
years, then dies and begins to decay. As the simulation runs,
observe the probe reading (upper left box)
and graph (upper right box) at the top of the screen. The graph
is plotting the probe’s measurement of
the percentage of carbon-14 remaining in the tree over time.
c. On the right side of the screen, in the “Choose an Object”
box, click the reset button. Set the probe to
measure uranium-238 instead of carbon-14 (in the Probe Type
box, click on the button for “uranium-
238”). Click “Plant Tree” and again observe the probe reading
and graph. This time, the graph will plot
the probe’s measurement of uranium-238 in the tree.
d. On the right side of the screen, in the “Choose an Object”
box, click the reset button and click the
button for Rock. Keep the probe type set on “uranium-238”.
e. Click “Erupt Volcano” and observe the volcano creating and
ejecting HOT igneous rocks. As the
simulation runs, observe the probe reading (upper left box) and
graph (upper right box) at the top of the
screen. The graph is plotting the probe’s measurement of
uranium-238 in the rock over time.
f. On the right side of the screen, in the “Choose an Object”
box, click the reset button. Set the probe to
measure carbon-14 instead of uranium-238 (in the Probe Type
box, click on the button for “carbon-14”).
Click “erupt volcano” and again observe the probe reading and
graph. This time, the graph will plot the
probe’s measurement of carbon-14 in the rock.
3. Answer the six (6) questions for Experiment 3 on the
Questions page. Repeat the Measurement
experiments as necessary to answer the questions.
Experiment 4: Dating Game
1. Set Up: Click on the Dating Game tab at the top of the
simulation screen. Verify that the “objects” button is
clicked below the Probe Type box.
2. Procedure
a. Select an item either at or below the Earth’s surface to
determine the age of. Set the Probe Type to
either carbon-14 or uranium-238, as appropriate for the item
you are measuring, based on your findings
in experiment 3.
b. Drag the probe directly over the item you chose. With the
probe over the item, the box in the upper left,
above “Probe Type”, shows the percentage of element that
remains in the item.
c. Drag the green arrow on the graph to the right or left, until
the percentage in the box on the graph
matches the percentage of element in the object. Once you have
the arrow positioned correctly, enter,
in the “estimate” box, the time (t) shown, as the estimate for the
age of the rock or fossil. Click “Check
Estimate” to see if the estimate is correct.
d. Example:
1) Drag the probe over the dead tree to the right of the house.
Since this is a once-living thing, set the
Probe Type to Carbon-14.
2) Look at the probe reading: it shows that there is 97.4% of
carbon-14 remaining in the dead tree.
3) Drag the green arrows on the graph at the top of the screen to
the right or left until the top line tells
you that the carbon-14 percentage is 97.4%, matching the
reading from the probe.
4) When the graph reads 97.4%, it shows that time (t) equals
220 years.
5) Type “220” into the box that says “Estimate age of dead
tree:” and click “Check Estimate”.
6) You should see a green smiley face, indicating that you have
correctly figured out the age of the
dead tree, 220 years. This means that the tree died 220 years ago. (A short sketch of this percentage-to-age calculation appears after the Experiment 4 instructions.)
e. Repeat step c. for all the other items on the Dating Game
screen. Fill in data table 3.
Hints:
*When entering the estimate for the age of an object, you
MUST enter it in YEARS. So, if t = 134.8 MY
on the graph, which means 134.8 million years, you would enter
the number 134,800,000 as the
estimate in years for the age of a rock. The estimates entered
do not have to be EXACT to be
registered as correct, but must be CLOSE. For example, if
134,000,000 were entered for the above
example, the simulation would return a green smiley face for a
correct estimate.
**For the last four items on the list, neither carbon-14 nor
uranium-238 will work to determine the item’s
age. Select “Custom”, and pick a half-life from the drop-down
menu that allows the probe to detect
something (i.e., a reading other than 0.0%). All of the “custom”
elements are, like carbon-14, elements
that would be found in living or once living things.
3. Answer the one (1) question for Experiment 4 on the Questions
page.
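For reference, the percentage-to-age conversion that the Dating Game graph performs can also be written directly: if a fraction f of the original isotope remains and the half-life is T, the elapsed time is t = T × log2(1/f). Below is a minimal sketch using the worked example above (97.4% of carbon-14 remaining, half-life of roughly 5,700 years); the function name is illustrative only.

```python
# Sketch of the age estimate behind the Dating Game tab:
# solve N/N0 = (1/2)^(t/T) for t, giving t = T * log2(N0/N).
import math

def estimate_age(fraction_remaining: float, half_life_years: float) -> float:
    return half_life_years * math.log2(1.0 / fraction_remaining)

# Worked example from the instructions: 97.4% carbon-14 remaining.
print(round(estimate_age(0.974, 5_700)))  # about 217 years, close to the ~220 the game reports
```

The small gap between this estimate and the 220 years shown in the simulation is expected, since the half-life is quoted here to only two significant figures.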
Name:
Data
Each box to be filled in with a value is worth 1 point.
Data Table 1
Radioactive Element | # of atoms in the sample at Time = 0 | Prediction: # of atoms that will decay after one half-life | # decayed at Time = Half Life (Trial #1) | (Trial #2) | (Trial #3) | Average # decayed at Time = Half Life
Carbon-14 | 100 | | | | |
Uranium-238 | 100 | | | | |
Data Table 2
Radioactive Element | % of original element remaining after 1 half-life | after 2 half-lives | after 3 half-lives | % of decay product present after 1 half-life | after 2 half-lives | after 3 half-lives
Carbon-14 | | | | | |
Uranium-238 | | | | | |
Data Table 3
Item | Age | Element Used
Animal Skull | |
House | |
Living Tree | |
Dead Tree | |
Bone | |
Wooden Cup | |
Large Human Skull | |
Fish Bones | |
Rock 1 | |
Rock 3 | |
Rock 5 | |
Fish Fossil* | |
Dinosaur Skull* | |
Trilobite* | |
Small Human Skull* | |
Questions
For multiple choice questions, choose the letter of the best
answer choice from the list below the question.
For short answer questions, type your answer in the space
provided below the question.
Note that there are TWO PAGES OF QUESTIONS below. Each
multiple-choice question is worth 3 points and each
short answer question is worth 5 points.
Experiment 1
1. In 1 or 2 sentences, describe, in the space below, what you
observed happening to the atoms in the decay area
when the simulation is in “play” mode.
2. In this simulation (and in nature), carbon-14 radioactively decays to nitrogen-14. What type of radioactive
decay is carbon-14 undergoing?
a. alpha decay b. beta decay c. gamma decay
3. In data table 1, compare your calculations for the average
number of atoms that decayed to your predictions for
how many atoms would decay. How did your predictions
compare to the averages?
a. Close b. Exact c. Nowhere Close
4. Which of the following statements best describes the decay of
a sample of radioactive atoms?
a. A statistical process, where the sample as a whole decays
predictably, but individual atoms in the sample
decay like popcorn kernels popping.
b. An exact process where the time of decay of each atom in the
sample can be predicted.
c. A completely random process that is in no way predictable.
5. Based on your observations, the data you collected, and the
comparison of the calculated averages to
predictions in experiment 1, write in the space below, IN YOUR
OWN WORDS, a definition of “half-life”. Phrase
the definition in a way that you would explain the concept to
one of your classmates.
Experiment 2
1. Suppose you found a rock. Using radiometric dating, you
determined that the rock contained the
same percentage of lead-206 and uranium-238. How old would
you conclude the rock to be?
a. 2.25 billion years old b. 4.5 billion years old c. 9 billion
years old
2. Using the data in data table 2, if the original sample
contained 24 grams of carbon-14, how many
grams of carbon-14 would be left, and how many grams of
nitrogen-14 would be present, after 2
half-lives?
a. 12; 12 b. 6; 18 c. 2; 22
Experiment 3
1. When half of the carbon-14 atoms in the tree have decayed
into nitrogen-14, it has been approximately 5700
years since the tree _____.
a. was planted b. died
2. The probe measures that the percentage of carbon-14 in the
tree is 100% while the tree is alive, and then
measures the percentage of carbon-14 decreasing with time after
the tree dies. This means that radiometric
dating using carbon-14 is applicable to:
a. living objects b. once-living objects c. both living and once-living objects
3. When half of the Uranium-238 in the rock has decayed,
______ years have passed since the volcano erupted.
a. 4.5 billion years b. 5700 years
4. In step c of experiment 3, the probe was used to measure
uranium-238 in the tree. In step f of experiment 3, the
probe was used to measure carbon-14 in the rock. In both steps
(c and f), the probe reading was _____.
a. 100%, then decreasing with time b. zero (0) c. decreasing with time
5. Uranium-238 must be used to measure the age of the rock,
and not carbon-14, because:
a. The isotope carbon-14 did not exist when the rock was
created.
b. rocks do not contain carbon-14, and even if they did, the
carbon-14 would have likely decayed away
prior to measuring the rock’s age.
c. it is easier to measure uranium in rocks than it is carbon.
6. Based on your observations for experiment 3, and the answers
to your questions above, _______ should be
used to determine the ages of once-living things, and _______
should be used to determine the ages of rocks.
a. carbon-14; uranium-238 b. uranium-238; carbon-14 c.
either can be used for both objects.
Experiment 4
1. Briefly explain why neither carbon-14 nor uranium-238 could
be used to determine the ages of the last 4 items in
data table 3.
Is AI the end of jobs or a new beginning?
By Vivek Wadhwa, May 31, 2017, The Washington Post
[Photo: A visitor plays chess against a robot during a COMPUTEX event in Taiwan on May 30. This year’s COMPUTEX focuses on artificial intelligence and robotics as well as virtual reality and the Internet of Things. (Ritchie B. Tongo/European Pressphoto Agency)]
Artificial Intelligence (AI) is advancing so rapidly that even its
developers are being caught off guard. Google co-founder
Sergey Brin said in Davos, Switzerland, in January that it
“touches every single one of our main projects, ranging from
search to photos to ads … everything we do … it definitely
surprised me, even though I was sitting right there.”
The long-promised AI, the stuff we’ve seen in science fiction, is
coming and we need to be prepared. Today, AI is powering
voice assistants such as Google Home, Amazon Alexa and
Apple Siri, allowing them to have increasingly natural
conversations with us and manage our lights, order food and
schedule meetings. Businesses are infusing AI into their
products to analyze the vast amounts of data and improve
decision-making. In a decade or two, we will have robotic
assistants that remind us of Rosie from “The Jetsons” and R2-
D2 of “Star Wars.”
This has profound implications for how we live and work, for
better and worse. AI is going to become our guide and
companion — and take millions of jobs away from people. We
can deny this is happening, be angry or simply ignore it. But if
we do, we will be the losers. As I discussed in my new book,
“Driver in the Driverless Car,” technology is now advancing on
an exponential curve and making science fiction a reality. We
can’t stop it. All we can do is to understand it and use it to
better ourselves — and humanity.
Rosie and R2-D2 may be on their way but AI is still very
limited in its capability, and will be for a long time. The voice
assistants are examples of what technologists call narrow AI:
systems that are useful, can interact with humans and bear some
of the hallmarks of intelligence — but would never be mistaken
for a human. They can, however, do a better job on a very
specific range of tasks than humans can. I couldn’t, for
example, recall the winning and losing pitcher in every baseball
game of the major leagues from the previous night.
Narrow-AI systems are much better than humans at accessing
information stored in complex databases, but their capabilities
exclude creative thought. If you asked Siri to find the perfect
gift for your mother for Valentine’s Day, she might make a
snarky comment but couldn’t venture an educated guess. If you
asked her to write your term paper on the Napoleonic Wars, she
couldn’t help. That is where the human element comes in and
where the opportunities are for us to benefit from AI — and
stay employed.
In his book “Deep Thinking: Where Machine Intelligence Ends
and Human Creativity Begins,” chess grandmaster Garry
Kasparov tells of his shock and anger at being defeated by
IBM’s Deep Blue supercomputer in 1997. He acknowledges that
he is a sore loser but was clearly traumatized by having a
machine outsmart him. He was aware of the evolution of the
technology but never believed it would beat him at his own
game. After coming to grips with his defeat, 20 years later, he
says fail-safes are required … but so is courage.
Kasparov wrote: “When I sat across from Deep Blue twenty
years ago I sensed something new, something unsettling.
Perhaps you will experience a similar feeling the first time you
ride in a driverless car, or the first time your new computer boss
issues an order at work. We must face these fears in order to get
the most out of our technology and to get the most out of
ourselves. Intelligent machines will continue that process,
taking over the more menial aspects of cognition and elevating
our mental lives toward creativity, curiosity, beauty, and joy.
These are what truly make us human, not any particular activity
or skill like swinging a hammer — or even playing chess.”
In other words, we better get used to it and ride the wave.
Human superiority over animals is based on our ability to create
and use tools. The mental capacity to make things that improved
our chances of survival led to a natural selection of better
toolmakers and tool users. Nearly everything a human does
involves technology. For adding numbers, we used abacuses and
mechanical calculators and now spreadsheets. To improve our
memory, we wrote on stones, parchment and paper, and now
have disk drives and cloud storage.
AI is the next step in improving our cognitive functions and
decision-making.
Think about it: When was the last time you tried memorizing
your calendar or Rolodex or used a printed map? Just as we
instinctively do everything on our smartphones, we will rely on
AI. We may have forfeited skills such as the ability to add up
the price of our groceries but we are smarter and more
productive. With the help of Google and Wikipedia, we can be
experts on any topic, and these don’t make us any dumber than
encyclopedias, phone books and librarians did.
A valid concern is that dependence on AI may cause us to
forfeit human creativity. As Kasparov observes, the chess games
on our smartphones are many times more powerful than the
supercomputers that defeated him, yet this didn’t cause human
chess players to become less capable — the opposite happened.
There are now stronger chess players all over the world, and the
game is played in a better way.
As Kasparov explains: “It used to be that young players might
acquire the style of their early coaches. If you worked with a
coach who preferred sharp openings and speculative attacking
play himself, it would influence his pupils to play similarly. …
What happens when the early influential coach is a computer?
The machine doesn’t care about style or patterns or hundreds of
years of established theory. It counts up the values of the chess
pieces, analyzes a few billion moves, and counts them up again.
It is entirely free of prejudice and doctrine. … The heavy use of
computers for practice and analysis has contributed to the
development of a generation of players who are almost as free
of dogma as the machines with which they train.”
Perhaps this is the greatest benefit that AI will bring —
humanity can be free of dogma and historical bias; it can do
more intelligent decision-making. And instead of doing
repetitive data analysis and number crunching, human workers
can focus on enhancing their knowledge and being more
creative.
https://www.technologyreview.com/s/607970/experts-predict-when-artificial-intelligence-will-exceed-human-performance/
Experts Predict When Artificial Intelligence Will Exceed
Human Performance
Trucking will be computerized long before surgery, computer
scientists say.
by Emerging Technology from the arXiv, May 31, 2017
Artificial intelligence is changing the world and doing it at
breakneck speed. The promise is that intelligent machines will
be able to do every task better and more cheaply than humans.
Rightly or wrongly, one industry after another is falling under
its spell, even though few have benefited significantly so far.
And that raises an interesting question: when will artificial
intelligence exceed human performance? More specifically,
when will a machine do your job better than you?
Today, we have an answer of sorts thanks to the work of Katja
Grace at the Future of Humanity Institute at the University of
Oxford and a few pals. To find out, these guys asked the
experts. They surveyed the world’s leading researchers in
artificial intelligence by asking them when they think intelligent
machines will better humans in a wide range of tasks. And many
of the answers are something of a surprise.
The experts that Grace and co coopted were academics and
industry experts who gave papers at the International
Conference on Machine Learning in July 2015 and the Neural
Information Processing Systems conference in December 2015.
These are two of the most important events for experts in
artificial intelligence, so it’s a good bet that many of the
world’s experts were on this list.
Grace and co asked them all—1,634 of them—to fill in a survey
about when artificial intelligence would be better and cheaper
than humans at a variety of tasks. Of these experts, 352
responded. Grace and co then calculated their median responses.
The experts predict that AI will outperform humans in the next
10 years in tasks such as translating languages (by 2024),
writing high school essays (by 2026), and driving trucks (by
2027).
But many other tasks will take much longer for machines to
master. AI won’t be better than humans at working in retail
until 2031, able to write a bestselling book until 2049, or
capable of working as a surgeon until 2053.
The experts are far from infallible. They predicted that AI
would be better than humans at Go by about 2027. (This was in
2015, remember.) In fact, Google’s DeepMind subsidiary has
already developed an artificial intelligence capable of beating
the best humans. That took two years rather than 12. It’s easy to
think that this gives the lie to these predictions.
The experts go on to predict a 50 percent chance that AI will be
better than humans at more or less everything in about 45 years.
That’s the kind of prediction that needs to be taken with a pinch
of salt. The 40-year prediction horizon should always raise
alarm bells. According to some energy experts, cost-effective
fusion energy is about 40 years away—but it always has been. It
was 40 years away when researchers first explored fusion more
than 50 years ago. But it has stayed a distant dream because the
challenges have turned out to be more significant than anyone
imagined.
Forty years is an important number when humans make
predictions because it is the length of most people’s working
lives. So any predicted change that is further away than that
means the change will happen beyond the working lifetime of
everyone who is working today. In other words, it cannot
happen with any technology that today’s experts have any
practical experience with. That suggests it is a number to be
treated with caution.
But teasing apart the numbers shows something interesting. This
45-year prediction is the median figure from all the experts.
Perhaps some subset of this group is more expert than the
others?
To find out if different groups made different predictions, Grace
and co looked at how the predictions changed with the age of
the researchers, the number of their citations (i.e., their
expertise), and their region of origin.
It turns out that age and expertise make no difference to the
prediction, but origin does. While North American researchers
expect AI to outperform humans at everything in 74 years,
researchers from Asia expect it in just 30 years.
That’s a big difference that is hard to explain. And it raises an
interesting question: what do Asian researchers know that North
Americans don’t (or vice versa)?
Ref: arxiv.org/abs/1705.08807 : When Will AI Exceed Human
Performance? Evidence from AI Experts