Anthony Dalke
Case Study Analysis: “Rise of the Machines”, from The Economist, 05/2015
In this case study analysis paper, I shall address the question, “Should
companies like Google, Facebook, Baidu, and Amazon morally lead cutting-edge
private-sector research on artificial intelligence (A.I.)?” in the affirmative, arguing that
mutually beneficial collaboration between private business, government, and society at
large would best balance the benefits from A.I. with the ethical issues its growth poses.
Case Facts
We begin this discussion by defining key terminology interspersed in the
Economist article. According to Merriam-Webster, “artificial intelligence” refers to “the
branch of computer science dealing with the simulation of intelligent behavior in computers”.
Machine learning entails the application of mathematical or statistical formulas or
techniques to derive patterns, classifications, or predictions from data sets. Neural
networks represent a form of machine learning that involves multiple layers of
calculations building upon one another, similar to neurons in the human brain, to classify
or predict data. And lastly, deep learning has emerged as a particularly advanced
variety of neural networks, incorporating even greater numbers of layers of calculations.
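The layered calculations described above can be sketched in a few lines of Python. This is an illustrative toy, not drawn from the article: the two-layer network, its hand-picked weights, and the sigmoid activation are all assumptions chosen simply to show how each layer builds on the previous one's outputs.

```python
# A minimal sketch of a neural network's layered calculations: each layer
# computes weighted sums of its inputs, adds a bias, and applies a
# nonlinearity; layers build on one another's outputs, loosely like neurons.
import math

def dense_layer(inputs, weights, biases):
    """One layer: a weighted sum per neuron, passed through a sigmoid."""
    outputs = []
    for neuron_weights, bias in zip(weights, biases):
        total = sum(w * x for w, x in zip(neuron_weights, inputs)) + bias
        outputs.append(1.0 / (1.0 + math.exp(-total)))  # sigmoid activation
    return outputs

def tiny_network(x):
    # Hypothetical hand-picked weights; a real network learns these from data.
    hidden = dense_layer(x, weights=[[2.0, -1.0], [-1.5, 1.0]], biases=[0.0, 0.5])
    output = dense_layer(hidden, weights=[[1.0, -1.0]], biases=[0.0])
    return output[0]  # a value in (0, 1), usable as a classification score

print(tiny_network([0.8, 0.2]))
```

A deep network simply stacks many more such layers; training consists of adjusting the weights and biases so the final outputs match known examples in the data.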
A.I. research has existed for many years – decades, in fact. After undergoing
periods of growth and decline, the field has risen to new prominence in recent years,
for two key reasons: exponentially increasing quantities of data produced by ever-more-
ubiquitous connected devices (smartphones, sensors, wearables, tablets, etc.) and
declining costs of processing power. As a result, businesses, governments, and
researchers have identified greater ranges of A.I. applications.
Another recent trend has seen private industry, especially technology
companies like Google, Facebook, Baidu, and Amazon, surge to the cutting edge of
A.I., developing capabilities such as facial, image, and voice recognition and text
language translation. While these firms have cited potential future applications, like
medical diagnoses, crime prevention, and Internet access for illiterate individuals, that
could have broader social benefits and implications, to this point their usage of A.I. has
primarily concentrated on enhancing their core search engine optimization, social
networking, and product marketing capabilities.
These dynamics give rise to three key ethical issues: the potential for A.I.-
induced job displacement, the fear of A.I. self-awareness, and the socially sub-optimal
allocation of research resources.
The primary stakeholders in this case consist of the aforementioned private
businesses and consumers; as the typical users of the technologies, consumers most
directly feel the impacts of A.I. Governments and universities serve as indirect
stakeholders, as the former stands to benefit from increased economic growth and,
therefore, tax revenue, but bears a responsibility for the impact of A.I. on its citizens, and
the latter has suffered a version of “brain drain” from the previously cited businesses
hiring researchers. Lastly, the individuals prone to job displacement and religious
institutions and think tanks evaluating ethical implications of these technologies act as
secondary stakeholders.
Utilitarianism Moral Standard Analysis
The ethical issues of job displacement and resource allocation carry significant
weight in a utilitarian consideration of this case. Regarding the former, A.I. can reduce
the need for, or even eliminate, numerous occupations, including but not limited to
speech translators, financial fraud investigators, medical professionals, semi-truck
drivers, taxicab operators, and even manufacturing workers. This would exacerbate trends
developed nations have witnessed with transitions from industrial to information-
centered economies. This would presumably also contribute to growing concerns about
income inequality, even facilitating civil unrest in worst-case scenarios. While aggregate
wealth may rise with the usage of A.I., the net overall benefit to society, especially when
measured in terms of net employment, median income, or income distribution, remains a
question.
As previously explained, cutting-edge A.I. research has recently originated from
companies like Google, Facebook, Baidu, and Amazon, chiefly for applications directly
related to their core operations. While society derives intangible benefits from enhanced
connectedness, greater access to information on the Internet, and more efficient and
enjoyable shopping experiences, these functions clearly offer less social impact than
healthcare, national security, and particle physics – examples of A.I. research applications
that receive far lower levels of supporting funding and brainpower. For example, one might
question the additional benefit to society when leading A.I. experts research more
effective photo storage instead of facial recognition for crime detection.
With this in mind, multiple decision alternatives arise. On the binary side,
companies, governments, and we as society can simply adopt a “laissez-faire”
approach, trusting that the inherent efficiency of the marketplace will result in wealth
accumulating according to the laws of economics – that the marketplace will reward the
firms that implement A.I. in the fashion most attuned to demand, with the accompanying
wealth and profit allocated to new outlets for business investment or personal spending.
This could, in turn, create demand for new products and services the individuals who
suffered job displacement could provide. This alternative would also require allowing
market forces to determine the applications of A.I. By this line of reasoning, we would
assume that A.I. research would begin to concentrate on more egalitarian outlets like
healthcare and national security once the profit, whether tangible or intangible, starts to
exceed the costs. For example, in the event of a national security scare, the public
mood could drive up the intangible benefits of facial recognition to identify terrorist
threats. This “either/or” approach would have a short-term nature, as the described
decisions would take place within a short time horizon.
On the other side of the binary aisle, governments could assume a more
interventionist position in the name of prioritizing the broader welfare of society. This
would entail strict regulations protecting incumbent industries or firms at risk of “creative
destruction” at the hands of A.I. or heavily taxing firms participating in A.I. research.
After all, the Economist article clearly suggests A.I. may very well render certain careers
obsolete. Of course, those firms could also voluntarily opt to refocus their A.I. research
on more utilitarian applications. However, this seems impractical, as it would diminish
profitability of publicly traded firms under the watchful eye of Wall Street analysts. With
respect to the employment of some of the brightest minds in A.I. research in the
previously mentioned companies, government could take a more active role in
redirecting them to goals more in line with the public interest. For example,
governments could recruit and hire those individuals and assign them to A.I. applications
like crime prevention and national defense. Like its laissez-faire counterpart, this
“either/or” approach is short-term in nature, as the described decisions would take
place within a short time horizon.
A more moderate approach – in favor of which I argue – grounded in
compromise would involve government crafting initiatives to invest added tax revenue
generated from economic growth aided by A.I. in retraining of those suffering job
displacement. Businesses could also increase charitable giving to anti-poverty and
educational programs, in the effort of equipping individuals with marketable skills and
aiding the most vulnerable. This approach amounts to a long-term, sustained endeavor
that would necessitate consistent vigilance and monitoring. In spite of this, it stands as
preferable, as it would harness the power of the market to realize growth in economic
output and wealth while remediating the negative impacts of A.I.’s expansion. The
government action outlined here would also help assuage and manage public opinion.
On the topic of resource allocation, innovation competitions offer the possibility of
attracting talented researchers to tackle clearly defined problems and topics, without
requiring government employment. For example, the Department of Defense could offer
monetary prizes for researchers who leverage A.I. techniques to address challenges
DoD personnel lack the training to tackle. Firms could assign entire teams to participate
in these events, with the goal of bolstering reputations for innovation (a powerful
branding tool) in addition to securing prize money. This approach also maintains
consistency with Catholic Social Teaching, which emphasizes the greater good and the
dignity of the individual – by balancing economic growth with its effects on income
distribution and net employment, we stand the greatest chance of achieving these
challenging goals.
Human Rights Theory Moral Standard Analysis
An analysis of these ethical questions from the standpoint of Human Rights
Theory revolves around the more dystopian doomsday scenarios predicted by some
observers of A.I. These people caution about A.I.-infused machines becoming self-
aware, negating humans’ ability to control them and raising the specter of competition or
genuine conflict with them. Furthermore, many warn of the implications of the
“Technological Singularity”, the moment when computers begin designing and creating
more powerful versions of themselves, thus outpacing the capacity for humans to
operate them.
These scenarios would threaten damage to, and outright destruction of, wealth,
property, food, clothing, and even life. Even less dire circumstances – say, the
human-driven development of A.I.-powered replacements for human body parts – could
endanger human rights. After all, if life-changing yet scarce or high-priced innovations
become reality, debates about the distribution of these technologies could venture to
human rights grounds, with activists arguing denial constitutes violations of the
fundamental right to life.
Here, we also observe alternatives ranging from binary to compromising options.
In the case of the former, society can embrace a hands-off approach, treating the dire
predictions as hyperbole. Indeed, the article states that researchers have no desire to
“program intent” into their technologies. Unless this changes, computers should never
seek to “break free” of their human creators. Furthermore, researchers have never
discovered examples of A.I. attempting to step outside the boundaries of its narrowly
defined environments. These two facts designate this long-term-oriented option as the
optimal path forward; if we see no evidence corroborating those vivid fears, we can
conclude they stand as simply that: fear, not rational prediction.
Aside from this, governments can choose a different binary approach with a more
short-term nature: imposing limits on A.I. research – penalties on private companies for
attempting to further recreate the workings of the brain, public relations campaigns to
generate public disapproval of the technology, and reductions or elimination of any
public funding of A.I. This would constitute a short-term course of action that would
reduce or eliminate risks imposed by A.I. but also prevent benefits from becoming
reality. Since, as previously explained, those risks have a nature more speculative than
anything, this option seems simplistic and exaggerated.
Still, a middle ground does exist: governments and businesses can institute a
long-term practice of “remaining vigilant” about A.I. crossing boundaries toward self-
reliance and Singularity. However, this would rest on two impractical assumptions: that
private industry could arrive at commonly accepted definitions of “self-awareness” and
“Singularity”, and that governments could encourage or require transparency from firms
researching A.I., so the State could ensure the firms avoid projects that could trigger
self-awareness. Again, these two assumptions appear unreasonable – estimates of
when the Singularity might become reality vary wildly, and private business could always
and fairly easily conceal certain A.I. research initiatives. Because of this and, once
again, the realization that fears of self-awareness and Singularity appear overblown, this
option proves sub-optimal.
Justice Theory Moral Standard Analysis
The notions of distributive, compensatory, and retributive justice come into play
when considering these ethical questions. Do the benefits of A.I. accrue in proportion to
need? When A.I. causes harm to people – for instance, if an autonomous automobile
malfunctions and causes an accident – how do the victims get compensated? And in
that event, how much punishment should those responsible receive?
These questions evoke fewer alternative choices than the other moral theories.
A hands-off approach such as those laid out in prior sections of this analysis seems too
impractical to even suggest; this would essentially leave the market to self-regulate in
the short-term. This runs the risk of fomenting raucous public outcry in the event of A.I.
harming people, which could, in turn, swing public opinion drastically against the
technology and stifle future innovation, if firms see demand for A.I. collapsing.
Additionally, this path would answer the question of A.I. benefits aligning with need by
defining “need” as “ability to acquire”. This stands in opposition to the line of thought
encouraged by Catholic Social Teaching – the “preferential option for the poor”, which
defines those in need as the most vulnerable in society. For these reasons, I reject this
option.
The preferred option would, again, consist of a compromise between the binary
approaches: government should make crafting and enacting legislation to regulate A.I.
(and other emerging technologies, for that matter) a priority in the near future. The
legislative process would almost necessarily result in rules and standards that strike
somewhat of a balance between the producers and consumers of A.I.-infused products.
While this would almost certainly result in solutions that fail to perfectly observe
distributive, compensatory, and retributive justice, the simple presence of uncertain
future outcomes and imperfect information precludes these ideals from ever fully
becoming reality. From a Catholic Social Teaching perspective, this route, along with
advocacy in favor of the rights of consumers, would provide the most practical
implementation of the preferential option for the poor. Thus, I recommend this
alternative.
Care Theory Moral Standard Analysis
Lastly, when examining this case through the lens of Care Theory, two
relationships come into primary focus: that between A.I. producers (like Google,
Facebook, Amazon, and Baidu) and their customers, and that between government and
its citizens. Since customers
enter into purchase decisions with imperfect information, producers bear a responsibility
to provide reasonable and adequate (admittedly, these terms come with substantial
degrees of subjectivity) product safety. If the producer sacrifices product safety in the
name of profit, the relationship becomes strained and may even become irreconcilable.
If the producer enjoys overwhelming market power and leverages it to charge
unreasonable prices (again, this term admittedly comes with subjectivity), the
relationship with the customer may also suffer, since customers will presumably lose
the ability to meet their needs.
In simple terms, governments have responsibility for the safety and security of
their citizens. Thus, A.I. raises issues vital to this relationship. For instance, if A.I.
research receives public funding, government assumes a duty to maximize the
probability that the research will benefit the broad public – say, by only awarding the
funding for applications with broad social implications. Additionally but less directly,
government should carefully consider how to care for those displaced or otherwise
negatively affected by A.I. For a final example, applications of A.I. to security, whether
local or national, would likely create ethical questions regarding the tradeoff between
security and privacy. In order to fulfill its obligations in its relationship with citizens,
government should aim to strike a reasonable balance.
Like the other theories enumerated here, one can manage these ethical
questions via a range of options: binary decisions to always err on the side of caution –
e.g. incur added costs and accept reduced profitability to guarantee product safety in the
producer-customer relationship, or default toward the “privacy” side of the
privacy/security equation in the government-citizen relationship – or always err against
caution – to use the same examples, minimizing product safety-related expenses to
maximize profitability or always prioritizing security over privacy – or a compromise-
based, case-by-case approach that aims to strike a balance between the two.
Here, the binary approaches lack practicality: steadfast adherence to either
would seemingly inevitably lead to vocal disapproval and dissatisfaction from the losing
party in the relationship. Those losing parties would seek remedial action, probably
through the legal system, which would eventually result in the “balance of power” in the
relationships shifting toward the middle. Therefore, the compromise-grounded path
becomes the sensible one to pursue at the outset. It would likely entail fairly frequent
disagreements and tension in the relationships, but it remains the most viable long-term
solution.
Conclusion
After considering the ethical implications, yes, companies such as Google,
Facebook, Amazon, and Baidu should morally lead cutting-edge private-sector research
on artificial intelligence, provided the primary and indirect stakeholders – businesses,
consumers, and governments – adopt proper approaches for addressing those ethical
implications in a mutually beneficial fashion.

More Related Content

What's hot

Beyond the Gig Economy
Beyond the Gig EconomyBeyond the Gig Economy
Beyond the Gig EconomyJon Lieber
 
Future of work in government
Future of work in governmentFuture of work in government
Future of work in governmentTemitope Ausi
 
Governance of artificial intelligence
Governance of artificial intelligenceGovernance of artificial intelligence
Governance of artificial intelligenceAraz Taeihagh
 
Kenney & Zysman - The Rise of the Platform Economy (Spring 2016 IST)x
Kenney & Zysman - The Rise of the Platform Economy (Spring 2016 IST)xKenney & Zysman - The Rise of the Platform Economy (Spring 2016 IST)x
Kenney & Zysman - The Rise of the Platform Economy (Spring 2016 IST)xMartin Kenney
 
Math Will Rock Your World
Math Will Rock Your WorldMath Will Rock Your World
Math Will Rock Your Worldbharadwajh
 
Best practices in_business_incubation
Best practices in_business_incubationBest practices in_business_incubation
Best practices in_business_incubationNIABI
 
More than Magic - IBM Institute for Business Value
More than Magic - IBM Institute for Business Value More than Magic - IBM Institute for Business Value
More than Magic - IBM Institute for Business Value FiweSystems
 
Innovation climate survey2014
Innovation climate survey2014Innovation climate survey2014
Innovation climate survey2014Xavier Lepot
 
La trame business vccm english
La trame business vccm englishLa trame business vccm english
La trame business vccm englishRené MANDEL
 
Seizing opportunities with AI in the cognitive economy
Seizing opportunities with AI in the cognitive economySeizing opportunities with AI in the cognitive economy
Seizing opportunities with AI in the cognitive economybaghdad
 
Enterprise Mobility Transforming Public Service and Citizen Engagement
Enterprise Mobility Transforming Public Service and Citizen EngagementEnterprise Mobility Transforming Public Service and Citizen Engagement
Enterprise Mobility Transforming Public Service and Citizen EngagementSAP Asia Pacific
 
White Paper | Connected Government in a Connected World
White Paper | Connected Government in a Connected WorldWhite Paper | Connected Government in a Connected World
White Paper | Connected Government in a Connected WorldThe Microsoft Openness Network
 
Digital Strategy and Building Government as a Platform
Digital Strategy and Building Government as a PlatformDigital Strategy and Building Government as a Platform
Digital Strategy and Building Government as a PlatformCamden
 
Government as a platform: engaging the public with social media
Government as a platform: engaging the public with social mediaGovernment as a platform: engaging the public with social media
Government as a platform: engaging the public with social mediaPatrick McCormick
 
Creating Trustworthy AI: A Mozilla White Paper
Creating Trustworthy AI: A Mozilla White PaperCreating Trustworthy AI: A Mozilla White Paper
Creating Trustworthy AI: A Mozilla White PaperRebecca Ricks
 

What's hot (20)

Beyond the Gig Economy
Beyond the Gig EconomyBeyond the Gig Economy
Beyond the Gig Economy
 
Future of work in government
Future of work in governmentFuture of work in government
Future of work in government
 
MIT THE FUTURE OF WORK 201616-02-Work
MIT THE FUTURE OF WORK 201616-02-WorkMIT THE FUTURE OF WORK 201616-02-Work
MIT THE FUTURE OF WORK 201616-02-Work
 
Governance of artificial intelligence
Governance of artificial intelligenceGovernance of artificial intelligence
Governance of artificial intelligence
 
Kenney & Zysman - The Rise of the Platform Economy (Spring 2016 IST)x
Kenney & Zysman - The Rise of the Platform Economy (Spring 2016 IST)xKenney & Zysman - The Rise of the Platform Economy (Spring 2016 IST)x
Kenney & Zysman - The Rise of the Platform Economy (Spring 2016 IST)x
 
Math Will Rock Your World
Math Will Rock Your WorldMath Will Rock Your World
Math Will Rock Your World
 
M-reading by Business Strategy Review
M-reading by Business Strategy ReviewM-reading by Business Strategy Review
M-reading by Business Strategy Review
 
Rohan dev
Rohan devRohan dev
Rohan dev
 
Best practices in_business_incubation
Best practices in_business_incubationBest practices in_business_incubation
Best practices in_business_incubation
 
Startups embroiled in debate over ethics of facial recognition
Startups embroiled in debate over ethics of facial recognitionStartups embroiled in debate over ethics of facial recognition
Startups embroiled in debate over ethics of facial recognition
 
More than Magic - IBM Institute for Business Value
More than Magic - IBM Institute for Business Value More than Magic - IBM Institute for Business Value
More than Magic - IBM Institute for Business Value
 
Innovation climate survey2014
Innovation climate survey2014Innovation climate survey2014
Innovation climate survey2014
 
La trame business vccm english
La trame business vccm englishLa trame business vccm english
La trame business vccm english
 
Seizing opportunities with AI in the cognitive economy
Seizing opportunities with AI in the cognitive economySeizing opportunities with AI in the cognitive economy
Seizing opportunities with AI in the cognitive economy
 
Enterprise Mobility Transforming Public Service and Citizen Engagement
Enterprise Mobility Transforming Public Service and Citizen EngagementEnterprise Mobility Transforming Public Service and Citizen Engagement
Enterprise Mobility Transforming Public Service and Citizen Engagement
 
White Paper | Connected Government in a Connected World
White Paper | Connected Government in a Connected WorldWhite Paper | Connected Government in a Connected World
White Paper | Connected Government in a Connected World
 
Digital Strategy and Building Government as a Platform
Digital Strategy and Building Government as a PlatformDigital Strategy and Building Government as a Platform
Digital Strategy and Building Government as a Platform
 
Globalization and Complexity
Globalization and ComplexityGlobalization and Complexity
Globalization and Complexity
 
Government as a platform: engaging the public with social media
Government as a platform: engaging the public with social mediaGovernment as a platform: engaging the public with social media
Government as a platform: engaging the public with social media
 
Creating Trustworthy AI: A Mozilla White Paper
Creating Trustworthy AI: A Mozilla White PaperCreating Trustworthy AI: A Mozilla White Paper
Creating Trustworthy AI: A Mozilla White Paper
 

Similar to Ethics_Paper_Dalke

The AI Now Report The Social and Economic Implications of Artificial Intelli...
The AI Now Report  The Social and Economic Implications of Artificial Intelli...The AI Now Report  The Social and Economic Implications of Artificial Intelli...
The AI Now Report The Social and Economic Implications of Artificial Intelli...Willy Marroquin (WillyDevNET)
 
AI NOW REPORT 2018
AI NOW REPORT 2018AI NOW REPORT 2018
AI NOW REPORT 2018Peerasak C.
 
The FDA’s role in the approval and subsequent review of Vioxx, a.docx
The FDA’s role in the approval and subsequent review of Vioxx, a.docxThe FDA’s role in the approval and subsequent review of Vioxx, a.docx
The FDA’s role in the approval and subsequent review of Vioxx, a.docxmehek4
 
Twintangibles - IP & IA in the Social Media Age
Twintangibles - IP & IA in the Social Media AgeTwintangibles - IP & IA in the Social Media Age
Twintangibles - IP & IA in the Social Media Agetwintangibles
 
SME industry has been continuously developing from a new proto-ind.docx
SME industry has been continuously developing from a new proto-ind.docxSME industry has been continuously developing from a new proto-ind.docx
SME industry has been continuously developing from a new proto-ind.docxpbilly1
 
Please ignore the health care reform in two states. That has nothi.docx
Please ignore the health care reform in two states. That has nothi.docxPlease ignore the health care reform in two states. That has nothi.docx
Please ignore the health care reform in two states. That has nothi.docxstilliegeorgiana
 
An Enhanced Right to Open Data
An Enhanced Right to Open DataAn Enhanced Right to Open Data
An Enhanced Right to Open Datanoblex1
 
The Crisis of Self Sovereignty in The Age of Surveillance Capitalism
The Crisis of Self Sovereignty in The Age of Surveillance CapitalismThe Crisis of Self Sovereignty in The Age of Surveillance Capitalism
The Crisis of Self Sovereignty in The Age of Surveillance CapitalismJongseung Kim
 
Three big questions about AI in financial services
Three big questions about AI in financial servicesThree big questions about AI in financial services
Three big questions about AI in financial servicesWhite & Case
 
TECHNOLOGYVENTURES-SYNTHESISREPORT
TECHNOLOGYVENTURES-SYNTHESISREPORTTECHNOLOGYVENTURES-SYNTHESISREPORT
TECHNOLOGYVENTURES-SYNTHESISREPORTAngel Salazar
 
Healthcare Technology Predictions 2016
Healthcare Technology Predictions 2016Healthcare Technology Predictions 2016
Healthcare Technology Predictions 2016Damo Consulting Inc.
 
31611 1028 AMPowerSearch PrintPage 1 of 3httpfind.ga.docx
31611 1028 AMPowerSearch PrintPage 1 of 3httpfind.ga.docx31611 1028 AMPowerSearch PrintPage 1 of 3httpfind.ga.docx
31611 1028 AMPowerSearch PrintPage 1 of 3httpfind.ga.docxgilbertkpeters11344
 
Truth, Trust and The Future of Commerce
Truth, Trust and The Future of CommerceTruth, Trust and The Future of Commerce
Truth, Trust and The Future of Commercesparks & honey
 
Digital Authoritarianism: Implications, Ethics, and Safegaurds
Digital Authoritarianism: Implications, Ethics, and SafegaurdsDigital Authoritarianism: Implications, Ethics, and Safegaurds
Digital Authoritarianism: Implications, Ethics, and SafegaurdsAndy Aukerman
 
everis 2016 InsurTech study - executive summary
everis 2016 InsurTech study - executive summaryeveris 2016 InsurTech study - executive summary
everis 2016 InsurTech study - executive summaryDirk Croenen
 

Similar to Ethics_Paper_Dalke (20)

2017 12-10 13 d
2017 12-10 13 d2017 12-10 13 d
2017 12-10 13 d
 
Where is Our Government? Part II
Where is Our Government? Part IIWhere is Our Government? Part II
Where is Our Government? Part II
 
The AI Now Report The Social and Economic Implications of Artificial Intelli...
The AI Now Report  The Social and Economic Implications of Artificial Intelli...The AI Now Report  The Social and Economic Implications of Artificial Intelli...
The AI Now Report The Social and Economic Implications of Artificial Intelli...
 
AI NOW REPORT 2018
AI NOW REPORT 2018AI NOW REPORT 2018
AI NOW REPORT 2018
 
The FDA’s role in the approval and subsequent review of Vioxx, a.docx
The FDA’s role in the approval and subsequent review of Vioxx, a.docxThe FDA’s role in the approval and subsequent review of Vioxx, a.docx
The FDA’s role in the approval and subsequent review of Vioxx, a.docx
 
Twintangibles - IP & IA in the Social Media Age
Twintangibles - IP & IA in the Social Media AgeTwintangibles - IP & IA in the Social Media Age
Twintangibles - IP & IA in the Social Media Age
 
SME industry has been continuously developing from a new proto-ind.docx
SME industry has been continuously developing from a new proto-ind.docxSME industry has been continuously developing from a new proto-ind.docx
SME industry has been continuously developing from a new proto-ind.docx
 
Please ignore the health care reform in two states. That has nothi.docx
Please ignore the health care reform in two states. That has nothi.docxPlease ignore the health care reform in two states. That has nothi.docx
Please ignore the health care reform in two states. That has nothi.docx
 
An Enhanced Right to Open Data
An Enhanced Right to Open DataAn Enhanced Right to Open Data
An Enhanced Right to Open Data
 
The Crisis of Self Sovereignty in The Age of Surveillance Capitalism

Ethics_Paper_Dalke

Anthony Dalke

Case Study Analysis: “Rise of the Machines”, from The Economist, 05/2015

In this case study analysis paper, I shall address the question, “Should companies like Google, Facebook, Baidu, and Amazon morally lead cutting-edge private-sector research on artificial intelligence (A.I.)?” in the affirmative, arguing that mutually beneficial collaboration among private business, government, and society at large would best balance the benefits of A.I. against the ethical issues its growth poses.

Case Facts

We begin this discussion by defining key terminology interspersed in the Economist article. According to Merriam-Webster, “artificial intelligence” refers to “the branch of computer science dealing with simulation of intelligent behavior in computers”. Machine learning entails the application of mathematical or statistical formulas or techniques to derive patterns, classifications, or predictions from data sets. Neural networks represent a form of machine learning in which multiple layers of calculations build upon one another, similar to neurons in the human brain, to classify or predict data. And lastly, deep learning has emerged as a particularly advanced variety of neural network, incorporating even greater numbers of layers of calculations.

A.I. research has existed for many years – decades, in fact. After undergoing periods of growth and decline, recent years have seen the subject rise in prominence for two key reasons: exponentially increasing quantities of data produced by ever-more-ubiquitous connected devices (smartphones, sensors, wearables, tablets, etc.) and declining costs of processing power. As a result, businesses, governments, and researchers have identified a greater range of A.I. applications.
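The layered structure described above can be illustrated with a minimal sketch (not part of the original paper; the network shape and weights below are arbitrary, purely illustrative choices):

```python
import math

def layer(inputs, weights, biases):
    """One layer of a neural network: each unit computes a weighted
    sum of the inputs plus a bias, passed through a sigmoid nonlinearity."""
    outputs = []
    for w_row, b in zip(weights, biases):
        z = sum(w * x for w, x in zip(w_row, inputs)) + b
        outputs.append(1.0 / (1.0 + math.exp(-z)))  # sigmoid activation
    return outputs

# Illustrative network: 2 inputs -> 3 hidden units -> 1 output.
# "Deep learning" simply stacks many more such layers.
hidden = layer([0.5, -1.2],
               weights=[[0.4, 0.9], [-0.7, 0.2], [1.1, -0.5]],
               biases=[0.1, 0.0, -0.3])
prediction = layer(hidden,
                   weights=[[0.6, -1.0, 0.8]],
                   biases=[0.2])
```

Each layer's outputs become the next layer's inputs, which is the "calculations building upon one another" the definition refers to; in practice the weights are learned from data rather than fixed by hand.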
Another recent trend has seen private industry, especially technology companies like Google, Facebook, Baidu, and Amazon, surge to the cutting edge of A.I., developing capabilities such as facial, image, and voice recognition and text-based language translation. While these firms have cited potential future applications with broader social benefits and implications, like medical diagnoses, crime prevention, and Internet access for illiterate individuals, their usage of A.I. has to this point concentrated primarily on enhancing their core search engine optimization, social networking, and product marketing capabilities.

These dynamics give rise to three key ethical issues: the potential for A.I.-induced job displacement, the fear of A.I. self-awareness, and the socially sub-optimal allocation of research resources.

The primary stakeholders of this case consist of the aforementioned private businesses and consumers, as consumers, the typical users of the technologies, most directly feel the impacts of A.I. Governments and universities serve as indirect stakeholders: the former stands to benefit from increased economic growth and, therefore, tax revenue, but bears a responsibility for the impact of A.I. on its citizens, and the latter has suffered a version of “brain drain” as the previously cited businesses hire away researchers. Lastly, the individuals prone to job displacement, along with the religious institutions and think tanks evaluating the ethical implications of these technologies, act as secondary stakeholders.
Utilitarianism Moral Standard Analysis

The ethical issues of job displacement and resource allocation carry significant import for a utilitarian consideration of this case. Regarding the former, A.I. can reduce the need for, or even eliminate, numerous occupations, including but not limited to speech translators, financial fraud investigators, medical professionals, semi truck drivers, taxicab operators, and even manufacturing workers. This would exacerbate trends developed nations have witnessed in their transitions from industrial to information-centered economies. It would presumably also contribute to growing concerns about income inequality, even facilitating civil unrest in worst-case scenarios. While aggregate wealth may rise with the usage of A.I., the net overall benefit to society, especially when measured in terms of net employment, median income, or income distribution, remains an open question.

As previously explained, cutting-edge A.I. research has recently originated from companies like Google, Facebook, Baidu, and Amazon, chiefly for applications directly related to their core operations. While society derives intangible benefits from enhanced connectedness, greater access to information on the Internet, and more efficient and enjoyable shopping experiences, these functions clearly offer less social impact than healthcare, national security, and particle physics, examples of A.I. research applications that receive inferior levels of funding and brainpower. For example, one might question the additional benefit to society from leading A.I. experts researching more effective photo storage instead of facial recognition for crime detection. With this in mind, multiple decision alternatives arise.
On the binary side, companies, governments, and we as a society can simply adopt a “laissez-faire” approach, trusting that the inherent efficiency of the marketplace will result in wealth accumulating according to the laws of economics – that the marketplace will reward the firms that implement A.I. in the fashion most attuned to demand, with the accompanying wealth and profit allocated to new outlets for business investment or personal spending. This could, in turn, create demand for new products and services that the individuals who suffered job displacement could provide. This alternative would also require allowing market forces to determine the applications of A.I. By this line of reasoning, we would assume that A.I. research would begin to concentrate on more egalitarian outlets like healthcare and national security once the profit, whether tangible or intangible, starts to exceed the costs. For example, in the event of a national security scare, the public mood could drive up the intangible benefits of facial recognition to identify terrorist threats. This “either/or” approach would have a short-term nature, as the described decisions would take place within a short time horizon.

On the other side of the binary aisle, governments could assume a more interventionist position in the name of prioritizing the broader welfare of society. This would entail strict regulations protecting incumbent industries or firms at risk of “creative destruction” at the hands of A.I., or heavily taxing firms participating in A.I. research. After all, the Economist article clearly suggests A.I. may very well render certain careers obsolete. Of course, those firms could also voluntarily opt to refocus their A.I. research on more utilitarian applications. However, this seems impractical, as it would diminish the profitability of publicly traded firms under the watchful eye of Wall Street analysts. With respect to the employment of some of the brightest minds in A.I.
research in the previously mentioned companies, government could take a more active role in redirecting them to goals more in line with the public interest. For example, governments could recruit and hire those individuals and assign them to A.I. applications like crime prevention and national defense. This “either/or” approach would likewise have a short-term nature, as the described decisions would take place within a short time horizon.

A more moderate approach – in favor of which I argue – grounded in compromise would involve government crafting initiatives to invest the added tax revenue generated from A.I.-aided economic growth in retraining those suffering job displacement. Businesses could also increase charitable giving to anti-poverty and educational programs, in an effort to equip individuals with marketable skills and aid the most vulnerable. This approach amounts to a long-term, sustained endeavor that would necessitate consistent vigilance and monitoring. In spite of this, it stands as preferable, as it would harness the power of the market to realize growth in economic output and wealth while remediating the negative impacts of A.I.’s expansion. The government action outlined here would also help assuage and manage public opinion.

On the topic of resource allocation, innovation competitions boast the possibility of attracting talented researchers to tackle clearly defined problems and topics, without requiring government employment. For example, the Department of Defense could offer monetary prizes for researchers who leverage A.I. techniques to address challenges DoD personnel lack the training to tackle. Firms could assign entire teams to participate in these events, with the goal of bolstering reputations for innovation (a powerful branding tool) in addition to securing prize money.
This approach also maintains consistency with Catholic Social Teaching, which emphasizes the greater good and the dignity of the individual – by balancing economic growth against its effects on income distribution and net employment, we stand the greatest chance of achieving these challenging goals.

Human Rights Theory Moral Standard Analysis

An analysis of these ethical questions from the standpoint of Human Rights Theory revolves around the more dystopian doomsday scenarios predicted by some observers of A.I. These people caution about A.I.-infused machines becoming self-aware, negating humans’ ability to control them and raising the specter of competition or genuine conflict with them. Furthermore, many warn of the implications of the “Technological Singularity”, the moment when computers begin designing and creating more powerful versions of themselves, thus outpacing the capacity of humans to operate them. These scenarios would threaten damage to, and outright destruction of, wealth, property, food, clothing, and even life. Even less dire circumstances – say, the human-driven development of A.I.-powered replacements for human body parts – could endanger human rights. After all, if life-changing yet scarce or high-priced innovations become reality, debates about the distribution of these technologies could venture onto human rights grounds, with activists arguing that denial constitutes a violation of the fundamental right to life.

Here, we also observe alternatives ranging from binary to compromising options. In the case of the former, society can embrace a hands-off approach, treating the dire predictions as hyperbole. Indeed, the article states that researchers have no desire to “program intent” into their technologies. Unless this changes, computers should never
seek to “break free” of their human creators. Furthermore, researchers have never discovered examples of A.I. attempting to step outside the boundaries of its narrowly defined environment. These two facts designate this long-term-oriented option as the optimal path forward; if we see no evidence corroborating those vivid fears, we can conclude they stand as simply that: fear, not rational prediction.

Aside from this, governments can choose a different binary approach with a more short-term nature: imposing limits on A.I. research – penalties on private companies for attempting to further recreate the workings of the brain, public relations campaigns to generate public disapproval of the technology, and reductions in or elimination of any public funding of A.I. This would constitute a short-term course of action that would reduce or eliminate the risks posed by A.I. but also prevent its benefits from becoming reality. Since, as previously explained, those risks have a more speculative nature than anything, this option seems simplistic and exaggerated.

Still, a middle ground does exist: governments and businesses can institute a long-term practice of “remaining vigilant” about A.I. crossing boundaries toward self-awareness and Singularity. However, this would rest on two impractical assumptions: that private industry could arrive at commonly accepted definitions of “self-awareness” and “Singularity”, and that governments could encourage or require transparency from firms researching A.I., so the State could ensure the firms avoid projects that could trigger self-awareness. Again, these two assumptions appear unreasonable – estimates of when the Singularity might become reality vary wildly, and private business could always, and fairly easily, conceal certain A.I. research initiatives. Because of this and, once again, the realization that fears of self-awareness and Singularity appear overblown, this option proves sub-optimal.
Justice Theory Moral Standard Analysis

The notions of distributive, compensatory, and retributive justice come into play when considering these ethical questions. Do the benefits of A.I. accrue in proportion to need? When A.I. causes harm to people – for instance, if an autonomous automobile malfunctions and causes an accident – how do the victims get compensated? And in this event, how much punishment should those responsible receive?

These questions evoke fewer alternative choices than the other moral theories. A hands-off approach such as those laid out in prior sections of this analysis seems too impractical to even suggest; it would essentially leave the market to self-regulate in the short term. This runs the risk of fomenting raucous public outcry in the event of A.I. harming people, which could, in turn, swing public opinion drastically against the technology and stifle future innovation if firms see demand for A.I. collapsing. Additionally, this path would answer the question of whether A.I. benefits align with need by defining “need” as “ability to acquire”. This stands in opposition to the line of thought encouraged by Catholic Social Teaching – the “preferential option for the poor”, which defines those in need as the most vulnerable in society. For these reasons, I reject this option.

The preferred option would, again, consist of a compromise between the binary approaches: government should make crafting and enacting legislation to regulate A.I. (and other emerging technologies, for that matter) a priority in the near future. The legislative process would almost necessarily result in rules and standards that strike
somewhat of a balance between the producers and consumers of A.I.-infused products. While this would almost certainly result in solutions that fail to perfectly observe distributive, compensatory, and retributive justice, the simple presence of uncertain future outcomes and imperfect information reduces the probability of these ideals ever becoming reality. From a Catholic Social Teaching perspective, this route, along with advocacy in favor of the rights of consumers, would provide the most practical implementation of the preferential option for the poor. Thus, I recommend this alternative.

Care Theory Moral Standard Analysis

Lastly, when examining this case through the lens of Care Theory, two relationships come into primary focus: that between A.I. producers (like Google, Facebook, Amazon, and Baidu) and their customers, and that between government and its citizens. Since customers enter into purchase decisions with imperfect information, producers bear a responsibility to provide reasonable and adequate (admittedly, these terms come with substantial degrees of subjectivity) product safety. If the producer sacrifices product safety in the name of profit, the relationship becomes strained and may even become irreconcilable. If the producer enjoys overwhelming market power and leverages it to charge unreasonable prices (again, a term that admittedly comes with subjectivity), the relationship with the customer may also suffer, since the customer will presumably lose the ability to meet its demand.

In simple terms, governments bear responsibility for the safety and security of their citizens. Thus, A.I. raises issues vital to this relationship. For instance, if A.I. research receives public funding, government assumes a duty to maximize the probability that the research will benefit the broad public – say, by only awarding the funding to applications with broad social implications.
Additionally, but less directly, government should carefully consider how to care for those displaced or otherwise negatively affected by A.I. For a final example, applications of A.I. to security, whether local or national, would likely create ethical questions regarding the tradeoff between security and privacy. In order to fulfill its obligations in its relationship with citizens, government should aim to strike a reasonable balance.

Like the other theories enumerated here, one can manage these ethical questions via a range of options: binary decisions to always err on the side of caution – e.g., incur added costs and accept reduced profitability to guarantee product safety in the producer-customer relationship, or default toward the “privacy” side of the privacy/security equation in the government-citizen relationship – or to always err against caution – to use the same examples, minimizing product-safety-related expenses to maximize profitability, or always prioritizing security over privacy – or a compromise-based, case-by-case approach hoping to strike a balance between the two. Here, the binary approaches lack practicality: steadfast adherence to either would seemingly inevitably lead to vocal disapproval and dissatisfaction from the losing party in the relationship. Those losing parties would seek remedial action, probably through the legal system, which would eventually shift the “balance of power” in the relationships toward the middle. Therefore, the compromise-grounded path becomes the sensible one to pursue at the outset. It would likely entail fairly common disagreements and tension in the relationships, but it remains the most viable long-term solution.
Conclusion

After considering the ethical implications, yes, companies such as Google, Facebook, Amazon, and Baidu should morally lead cutting-edge private-sector research on artificial intelligence, provided the primary and indirect stakeholders – businesses, consumers, and governments – adopt proper approaches for addressing those ethical implications in a mutually beneficial fashion.