EXECUTIVE SUMMARY
At the core of the cascading scandals around AI in 2018 are questions of accountability: who is responsible when AI systems harm us? How do we understand these harms, and how do we
remedy them? Where are the points of intervention, and what additional research and regulation is needed to ensure those interventions are effective? Currently there are few answers to these questions, and the frameworks presently governing AI are not capable of ensuring accountability.
As the pervasiveness, complexity, and scale of these systems grow, the lack of meaningful accountability and oversight – including basic safeguards of responsibility, liability, and due
process – is an increasingly urgent concern.
Building on our 2016 and 2017 reports, the AI Now 2018 Report contends with this central problem and addresses the following key issues:
1. The growing accountability gap in AI, which favors those who create and deploy these technologies at the expense of those most affected
2. The use of AI to maximize and amplify surveillance, especially in conjunction with facial and affect recognition, increasing the potential for centralized control and oppression
3. Increasing government use of automated decision systems that directly impact individuals and communities without established accountability structures
4. Unregulated and unmonitored forms of AI experimentation on human populations
5. The limits of technological solutions to problems of fairness, bias, and discrimination
Within each topic, we identify emerging challenges and new research, and provide recommendations regarding AI development, deployment, and regulation. We offer practical
pathways informed by research so that policymakers, the public, and technologists can better understand and mitigate risks. Given that the AI Now Institute’s location and regional expertise are concentrated in the U.S., this report focuses primarily on the U.S. context, which is also where several of the world’s largest AI companies are based.
AI Governance and Ethics - Industry Standards (Ansgar Koene)
Presentation on the potential for ethics-based industry standards to serve as a vehicle for addressing socio-technical challenges arising from AI.
Presentation given at the 1st Austrian IFIP forum on "AI and future society".
ACS EMERGING & DEEP TECH WEBINAR: THE RISE OF AI AND DATA SCIENCE AND ITS IMP... (Kelvin Ross)
In recent years, Big Data, data science, and AI have accelerated to the point where technological systems pervade our everyday lives. All aspects of society, work, and industry are being transformed in this Fourth Industrial Revolution. Our personal data is now used to shape our searches, news feeds, and viewing recommendations. AI in healthcare is diagnosing disease and proposing medical interventions. Facial recognition is granting us access and monitoring our safety. Chatbots and automated agents are handling our requests and vetting our applications.
With the increasing power of data and analytics comes responsibility. The tech titans have gathered enormous power through the collection of our personal data. Recent failures have also highlighted how self-regulation has failed: our data can be weaponised against us, for example by reflecting inherent racial biases or by manipulating election outcomes. The community now expects government to regulate and to put appropriate governance and oversight structures in place.
In this talk Kelvin will explore the technological paradigm shift of AI and data science, review emerging ethical issues, and discuss regulatory and governance trends.
In this video from the Global Tech Jam 2018, Jerry Power from the USC Marshall School of Business presents: Global Tech Jam: I3 Intelligent IoT Integrator.
Watch the video: http://insidesmartcities.com/global-tech-jam-video-i3-intelligent-iot-integrator/
Learn more: http://i3.usc.edu
https://globaltechjam.com/2018-global-tech-jam-presentations/
and
http://insideSmartCities.com
Ethical Questions of Facial Recognition Technologies by Mika Nieminen (Mindtrek)
SAFETY AND SECURITY track - Tuesday 28th
"While facial recognition technology is utilised increasingly across the globe, there are extending debates on the ethical aspects and acceptability of facial recognition. Such issues include e.g. that facial recognition is not an accurate tech, it is creating step by step everywhere reaching “surveillance state”, there are challenges with individual privacy and data security, as well as it may have distorting effects on democratic processes. It is suggested, among other things, that facial recognition technology needs to be well regulated, system needs to be transparent and include “bias checks” as well as there needs to be an administrational procedure for correcting technological and social biases and faults in the system."
MIKA NIEMINEN, Principal Scientist, VTT, Technical Research Centre of Finland
Smart City Mindtrek 2020 – conference
28th-29th January
Tampere, Finland
www.mindtrek.org/2020/
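To make the quoted call for "bias checks" concrete, here is a minimal illustrative sketch (my own, not from Nieminen's talk) of one such check for a face matcher: comparing false-match rates across demographic groups on a labelled evaluation set. The scores, labels, and group tags below are hypothetical stand-ins for a real evaluation dataset.

```python
# Hypothetical "bias check" for a face matcher: compare false-match rates
# (impostor pairs accepted as matches) across demographic groups.
import numpy as np

def false_match_rate(scores, labels, threshold):
    """Fraction of impostor pairs (label 0) scored at or above the threshold."""
    impostor = scores[labels == 0]
    return float(np.mean(impostor >= threshold)) if impostor.size else 0.0

def bias_check(scores, labels, groups, threshold=0.8):
    """Per-group false-match rates and the worst-case gap between groups."""
    rates = {g: false_match_rate(scores[groups == g], labels[groups == g], threshold)
             for g in np.unique(groups)}
    return rates, max(rates.values()) - min(rates.values())

# Toy evaluation set: similarity scores per pair, 0 = impostor pair, 1 = genuine pair.
rng = np.random.default_rng(0)
scores = rng.uniform(0.0, 1.0, 1000)
labels = rng.integers(0, 2, 1000)
groups = rng.choice(["group_a", "group_b"], 1000)

rates, gap = bias_check(scores, labels, groups)
print(rates, f"worst-case FMR gap: {gap:.3f}")
```

An administrative procedure of the kind the quote calls for could, for instance, require recalibrating or withdrawing a system whose worst-case gap exceeds an agreed bound.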
Oxford Internet Institute 19 Sept 2019: Disinformation – Platform, publisher ... (Chris Marsden)
With the move to a more digital, mobile, and platform-dominated media environment, people increasingly find and access news and information via platforms like search engines and social media. These have empowered citizens in many ways and are important drivers of attention to established publishers, but they have also enabled the distribution of disinformation from a range of different actors. In a context where citizens are increasingly sceptical of platforms, publishers, and public authorities alike, what do we know about the scale and scope of disinformation problems, and what can different actors do to counter them?
https://www.scl.org/articles/10662-interoperability-an-answer-to-regulating-ai-and-social-media-platforms
Ethical Issues in Machine Learning Algorithms. (Part 1) (Vladimir Kanchev)
This presentation describes recent ethical issues related to AI and ML algorithms. Its focus is data and algorithmic bias, algorithmic interpretability, and how the GDPR relates to these issues.
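As a concrete illustration of the kind of algorithmic bias the presentation discusses, the following minimal sketch (my own example, not taken from the slides) computes the demographic parity difference: the gap in positive-prediction rates between two groups of a protected attribute.

```python
# Demographic parity difference: how much more often one group receives the
# positive decision than the other. 0.0 means parity; larger means more disparity.
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive-prediction rates between group 0 and group 1."""
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])  # binary decisions from some model
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])  # protected attribute per person
print(demographic_parity_difference(y_pred, group))  # 0.75 - 0.25 = 0.5
```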
What framework for a responsible use of AI in education? (OECD Berlin Centre)
Presentation by Stéphan Vincent-Lacrin at the event "Artificial Intelligence in Schools?" ("Künstliche Intelligenz an der Schule?"), hosted by the OECD Berlin Centre and the Konrad-Adenauer-Stiftung on 2 October 2020.
HUMAN RIGHTS IN THE AGE OF ARTIFICIAL INTELLIGENCE. This report is a product of Access Now. We thank lead author Lindsey Andersen for her significant contributions. If you have questions about this report or you would like more information, you can contact info@accessnow.org.
Government Web Application Security: Issues and Challenges - A Case of India (Editor IJCATR)
Public services offered by the government must be trustworthy; for that reason, the government needs to understand the various threats, vulnerabilities, and trends in order to protect citizen databases and the services offered. This paper studies the various acts, rules, policies, guidelines, and standards adopted by government departments for the design, development, and deployment of web-based applications, and cites various problems related to coding, manpower, and funding, taking India as a case. The study shows that the majority of government departments develop and audit web applications before hosting them on the public domain, but that most departments depend on private organizations to do so. This drawback arises because government departments lack certified or trained staff. Government departments therefore ought to train their staff, including administrators, in information security from time to time. This would improve internal protection and greatly reduce dependency on private organizations.
Offdata: a prosumer law agency to govern big data in the public interest (Chris Marsden)
Presentation to the St Petersburg International Legal Forum, 19 May 2016, Track Smart Society 4.5, "Information Security in the Digital Environment: Limits of Big Data Use". http://regulatingcode.blogspot.co.uk/2016/05/offdata-prosumer-law-agency-to-govern.html
Big Data can generate, through inference, new knowledge and perspectives. The paradigm that results from using Big Data creates new opportunities. Big Data has great influence at the governmental level and can positively affect society. These systems can be made more efficient by applying transparency and open-governance policies such as Open Data. After developing predictive models of target-audience behaviour, Big Data can be used to generate early warnings for various situations (see the sketch below). There is thus a positive feedback loop between research and practice, with rapid discoveries taken from practice.
DOI: 10.13140/RG.2.2.14677.17120
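The "predictive model, then early warning" pattern described above can be sketched as follows; the logistic-regression model, the synthetic indicator data, and the 0.9 alert threshold are all illustrative assumptions of mine, not anything specified in the presentation.

```python
# Sketch: fit a predictive model on historical indicators, then flag new
# observations whose predicted event probability crosses an alert threshold.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))  # historical behavioural indicators
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(int)

model = LogisticRegression().fit(X, y)

def early_warnings(model, X_new, threshold=0.9):
    """Indices of new observations whose predicted probability exceeds the threshold."""
    return np.flatnonzero(model.predict_proba(X_new)[:, 1] >= threshold)

X_new = rng.normal(size=(10, 3))  # incoming observations to monitor
print(early_warnings(model, X_new))
```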
This presentation looks at how AI works and how it is presently being used in education, and then outlines some concerns about how AI might be used in education in the future.
I argue that AI has a much greater part to play in education, particularly in making education more widely available in the developing world and in reducing the cost of education.
The talk then moves on to general ethical concerns about how AI is being used in society, taking the question of how we program autonomous vehicles as a case in point. I then outline five areas of concern about the use (and potential abuse) of AI in education, arguing that we need a much more informed debate before things go too far. With this in mind, I close with some suggestions for courses and reading that might help colleagues become better informed about the subject.
Globally, the extensive use of smartphones has led to an increase in the storage and transmission of enormous volumes of data that could potentially be used as digital evidence in a forensic investigation. Digital evidence can be difficult to extract from these devices given the many versions and models of smartphone on the market. Forensic analysis of smartphones can be carried out in many ways; however, prior knowledge of smartphone forensic tools is paramount to a successful investigation. In this paper, the authors outline the challenges, limitations, and reliability issues faced when using smartphone forensic tools and the accompanying forensic techniques. The paper is intended more to raise awareness of these challenges than to suggest best practices.
"Data Breaches & the Upcoming Data Protection Legal Framework: What’s the Buz...Cédric Laurant
Cédric Laurant: Presentation at the SecureWorld Web Conference: "Incident Response: Clean Up on Aisle Nine" (29 Nov. 2012)
Presentation can be downloaded at http://cedriclaurant.com/about/presentations/, http://blog.cedriclaurant.org and http://security-breaches.com.
Legal Risks of Operating in the World of Connected Technologies (Internet of ... (Quarles & Brady)
Program Overview: What Your Company Needs to Understand to Stay Ahead of the Competition
Companies are exponentially expanding their use and production of connected products and technologies. It is estimated that 22.5 billion IoT devices will be shipped globally in 2021. With that growth comes a litany of legal challenges. We will discuss the scope of the IoT landscape and address some of the critical legal areas for companies using or selling IoT products, including:
* Data privacy and security risks associated with the use of IoT devices
* The tension between engineering and marketing departments' desire to retain and mine IoT data, and the legal risks of accessing, aggregating, and storing that data
* Product liability and other legal issues arising from IoT devices
* The ever-changing landscape of industry-specific regulatory requirements
ASSESSING THE ADOPTION OF E-GOVERNMENT USING TAM MODEL: CASE OF EGYPT (IJMIT JOURNAL)
Electronic government (e-government) is known as an efficient means of improving government effectiveness and proficiency and as a vital facilitator of citizen-oriented services. Since their initiation over a decade ago, e-government services have been recognised as a vehicle for accessing online public services. Both governments and academic researchers understand the problem of low adoption of e-government services among citizens, a problem common to developing and developed countries alike. This paper investigates the determinants and factors necessary to enhance citizens' adoption of e-government services in developing countries, with particular focus on Egypt, by extending the Technology Acceptance Model (TAM) with a set of political, social, and design constructs developed from different strands of the research literature.
Ethics of Computing in Pharmaceutical Research (Ashwani Dhingra)
Computing ethics is a set of moral principles that regulate the use of computers. In pharmaceutical research, computers, computing technology, and the consequent information systems have produced ethical challenges and conflicts.
Artificial intelligence (AI) refers to a constellation of technologies, including machine learning, perception, reasoning, and natural language processing. While the field has been pursuing principles and applications for over 65 years, recent advances, uses, and the attendant public excitement have returned it to the spotlight. The impact of early AI systems is already being felt, bringing with it challenges and opportunities and laying the foundation on which future advances in AI will be integrated into social and economic domains. The potential wide-ranging impact makes it necessary to look carefully at the ways in which these technologies are being applied now, whom they are benefiting, and how they are structuring our social, economic, and interpersonal lives.
A REVIEW OF THE ETHICS OF ARTIFICIAL INTELLIGENCE AND ITS APPLICATIONS IN THE... (IJCI JOURNAL)
This study focuses on the ethics of Artificial Intelligence and its application in the United States. The paper highlights the impact AI has on every sector of the US economy and multiple facets of the technological space, and the resultant effect on entities spanning businesses, government, academia, and civil society. Ethical considerations are needed as these entities begin to depend on AI for delivering crucial tasks that immensely influence their operations, decision-making, and interactions with each other. The adoption of ethical principles, guidelines, and standards of work is therefore required throughout the entire process of AI development, deployment, and usage to ensure responsible and ethical AI practices. Our discussion explores eleven fundamental 'ethical principles' structured as overarching themes. These encompass Transparency, Justice, Fairness, Equity, Non-Maleficence, Responsibility, Accountability, Privacy, Beneficence, Freedom, Autonomy, Trust, Dignity, Sustainability, and Solidarity. These principles collectively serve as a guiding framework directing the ethical path for the responsible development, deployment, and utilization of artificial intelligence (AI) technologies across diverse sectors and entities within the United States. The paper also discusses the revolutionary impact of AI applications, such as machine learning, and explores various approaches used to implement AI ethics. This examination is crucial to addressing the growing concerns surrounding the inherent risks associated with the widespread use of artificial intelligence.
The AI Governance Market focuses on offering extensive frameworks and resources to facilitate the responsible progression, implementation, and oversight of AI systems. As AI continues to be integrated into various industries like finance, healthcare, and technology, there's a growing recognition of the ethical implications and possible biases inherent in AI algorithms. AI governance solutions encompass a range of approaches, including ethical standards, transparency protocols, and audit capabilities, all aimed at fostering confidence and accountability in the deployment of AI technologies.
* "Responsible AI Leadership: A Global Summit on Generative AI"
*April 2023 guide for experts and policymakers
* Developing and governing generative AI systems
* + 100 thought leaders and practitioners participated
* Recommendations for responsible development, open innovation & social progress
* 30 action-oriented recommendations aim
* Navigate AI complexities
Rethinking regulation in the age of AI (Lofred Madzou)
This is a presentation of the keynote that Lofred Madzou (AI Project Lead at the World Economic Forum) gave on October 14th at the Instituto Nacional de Defensa de la Competencia y la Propiedad Intelectual (INDECOPI) in Lima. It presents some of the most important policy challenges associated with the development of AI, along with means to address them.
Ethical Dimensions of Artificial Intelligence (AI) by Rinshad Choorappara
Explore the ethical landscape of Artificial Intelligence (AI) through our insightful PowerPoint presentation. Delve into crucial considerations that shape the responsible development and deployment of AI technologies. From privacy concerns and bias mitigation to transparency and accountability, this presentation covers the key ethical dimensions of AI. Gain a comprehensive understanding of the ethical challenges and solutions in the rapidly evolving world of artificial intelligence. Stay informed and empower your audience with the knowledge needed to navigate the ethical intricacies of AI responsibly.
Let us look at both the good and the bad effects of Artificial Intelligence and other emerging technologies.
Artificial intelligence and machine learning capabilities are growing at an unprecedented rate. These technologies have many widely beneficial applications, ranging from machine translation to medical image analysis. Countless more such applications are being developed and can be expected over the long term. Less attention has historically been paid to the ways in which artificial intelligence can be used maliciously. This report surveys the landscape of potential security threats from malicious uses of artificial intelligence technologies, and proposes ways to better forecast, prevent, and mitigate these threats. We analyze, but do not conclusively resolve, the question of what the long-term equilibrium between attackers and defenders will be. We focus instead on what sorts of attacks we are likely to see soon if adequate defenses are not developed.
THE TRANSFORMATION RISK-BENEFIT MODEL OF ARTIFICIAL INTELLIGENCE: BALANCING RI... (gerogepatton)
This paper summarizes the most cogent advantages and risks associated with Artificial Intelligence, drawn from an in-depth review of the literature. The authors then synthesize the salient risk-related models currently being used in AI, technology, and business-related scenarios. Next, in view of an updated context of AI, together with the theories and models reviewed and expanded constructs, the writers propose a new framework, "The Transformation Risk-Benefit Model of Artificial Intelligence", to address increasing fears about and levels of AI risk. Using the model's characteristics, the article emphasizes practical and innovative solutions where benefits outweigh risks, and presents three use cases, in healthcare, climate change/environment, and cyber security, to illustrate the unique interplay of principles, dimensions, and processes of this powerful AI transformational model.
Regulating Artificial Intelligence (AI)
Ethical Principles
Legal Frameworks
Transparency and Accountability
Risk Assessment and Mitigation
Data Governance and Privacy
Interdisciplinary Collaboration
International Cooperation
Democracy’s significance in the realm of AI development cannot be overstated. In an era marked by the rapid evolution of technology, AI stands as a transformative force with the potential to reshape societies, economies, and daily lives. As AI’s influence expands, it becomes increasingly essential to integrate democratic principles into its development.
Artificial Intelligence (AI)
Ethics
Transparency
Explainability
Privacy and Data Protection
Accountability and Responsibility
Robustness and Safety
Collaboration and Interdisciplinary Approaches
Bias Mitigation and Diversity
Global Standards and Regulation
AI NOW REPORT 2018
AI Now Report 2018
Meredith Whittaker, AI Now Institute, New York University, Google Open Research
Kate Crawford, AI Now Institute, New York University, Microsoft Research
Roel Dobbe, AI Now Institute, New York University
Genevieve Fried, AI Now Institute, New York University
Elizabeth Kaziunas, AI Now Institute, New York University
Varoon Mathur, AI Now Institute, New York University
Sarah Myers West, AI Now Institute, New York University
Rashida Richardson, AI Now Institute, New York University
Jason Schultz, AI Now Institute, New York University School of Law
Oscar Schwartz, AI Now Institute, New York University
With research assistance from Alex Campolo and Gretchen Krueger (AI Now Institute, New York
University)
DECEMBER 2018
CONTENTS
ABOUT THE AI NOW INSTITUTE
RECOMMENDATIONS
EXECUTIVE SUMMARY
INTRODUCTION
1. THE INTENSIFYING PROBLEM SPACE
1.1 AI is Amplifying Widespread Surveillance
The faulty science and dangerous history of affect recognition
Facial recognition amplifies civil rights concerns
1.2 The Risks of Automated Decision Systems in Government
1.3 Experimenting on Society: Who Bears the Burden?
2. EMERGING SOLUTIONS IN 2018
2.1 Bias Busting and Formulas for Fairness: the Limits of Technological “Fixes”
Broader approaches
2.2 Industry Applications: Toolkits and System Tweaks
2.3 Why Ethics is Not Enough
3. WHAT IS NEEDED NEXT
3.1 From Fairness to Justice
3.2 Infrastructural Thinking
3.3 Accounting for Hidden Labor in AI Systems
3.4 Deeper Interdisciplinarity
3.5 Race, Gender and Power in AI
3.6 Strategic Litigation and Policy Interventions
3.7 Research and Organizing: An Emergent Coalition
CONCLUSION
ENDNOTES
This work is licensed under a Creative Commons Attribution-NoDerivatives 4.0 International License
ABOUT THE AI NOW INSTITUTE
The AI Now Institute at New York University is an interdisciplinary research institute dedicated to
understanding the social implications of AI technologies. It is the first university research center
focused specifically on AI’s social significance. Founded and led by Kate Crawford and Meredith
Whittaker, AI Now is one of the few women-led AI institutes in the world.
AI Now works with a broad coalition of stakeholders, including academic researchers, industry,
civil society, policy makers, and affected communities, to identify and address issues raised by
the rapid introduction of AI across core social domains. AI Now produces interdisciplinary
research to help ensure that AI systems are accountable to the communities and contexts they
are meant to serve, and that they are applied in ways that promote justice and equity. The
Institute’s current research agenda focuses on four core areas: bias and inclusion, rights and
liberties, labor and automation, and safety and critical infrastructure.
Our most recent publications include:
● Litigating Algorithms, a major report assessing recent court cases focused on
government use of algorithms
● Anatomy of an AI System, a large-scale map and longform essay produced in partnership
with SHARE Lab, which investigates the human labor, data, and planetary resources
required to operate an Amazon Echo
● Algorithmic Impact Assessment (AIA) Report, which helps affected communities and
stakeholders assess the use of AI and algorithmic decision-making in public agencies
● Algorithmic Accountability Policy Toolkit, which is geared toward advocates interested
in understanding government use of algorithmic systems
We also host expert workshops and public events on a wide range of topics. Our workshop on
Immigration, Data, and Automation in the Trump Era, co-hosted with the Brennan Center for
Justice and the Center for Privacy and Technology at Georgetown Law, focused on the Trump
Administration’s use of data harvesting, predictive analytics, and machine learning to target
immigrant communities. The Data Genesis Working Group convenes experts from across
industry and academia to examine the mechanics of dataset provenance and maintenance. Our
roundtable on Machine Learning, Inequality and Bias, co-hosted in Berlin with the Robert Bosch
Academy, gathered researchers and policymakers from across Europe to address issues of bias,
discrimination, and fairness in machine learning and related technologies.
Our annual public symposium convenes leaders from academia, industry, government, and civil
society to examine the biggest challenges we face as AI moves into our everyday lives. The AI
Now 2018 Symposium addressed the intersection of AI ethics, organizing, and accountability,
examining the landmark events of the past year. Over 1,000 people registered for the event, which
was free and open to the public. Recordings of the program are available on our website.
More information is available at www.ainowinstitute.org.
RECOMMENDATIONS
1. Governments need to regulate AI by expanding the powers of sector-specific agencies to
oversee, audit, and monitor these technologies by domain. The implementation of AI
systems is expanding rapidly, without adequate governance, oversight, or accountability
regimes. Domains like health, education, criminal justice, and welfare all have their own
histories, regulatory frameworks, and hazards. However, a national AI safety body or general
AI standards and certification model will struggle to meet the sectoral expertise requirements
needed for nuanced regulation. We need a sector-specific approach that does not prioritize
the technology, but focuses on its application within a given domain. Useful examples of
sector-specific approaches include the United States Federal Aviation Administration and the
National Highway Traffic Safety Administration.
2. Facial recognition and affect recognition need stringent regulation to protect the public
interest. Such regulation should include national laws that require strong oversight, clear
limitations, and public transparency. Communities should have the right to reject the
application of these technologies in both public and private contexts. Mere public notice of
their use is not sufficient, and there should be a high threshold for any consent, given the
dangers of oppressive and continual mass surveillance. Affect recognition deserves particular
attention. Affect recognition is a subclass of facial recognition that claims to detect things
such as personality, inner feelings, mental health, and “worker engagement” based on images
or video of faces. These claims are not backed by robust scientific evidence, and are being
applied in unethical and irresponsible ways that often recall the pseudosciences of phrenology
and physiognomy. Linking affect recognition to hiring, access to insurance, education, and
policing creates deeply concerning risks, at both an individual and societal level.
3. The AI industry urgently needs new approaches to governance. As this report
demonstrates, internal governance structures at most technology companies are failing to
ensure accountability for AI systems. Government regulation is an important component,
but leading companies in the AI industry also need internal accountability structures that go
beyond ethics guidelines. This should include rank-and-file employee representation on the
board of directors, external ethics advisory boards, and the implementation of independent
monitoring and transparency efforts. Third party experts should be able to audit and publish
about key systems, and companies need to ensure that their AI infrastructures can be
understood from “nose to tail,” including their ultimate application and use.
4. AI companies should waive trade secrecy and other legal claims that stand in the way of
accountability in the public sector. Vendors and developers who create AI and automated
decision systems for use in government should agree to waive any trade secrecy or other
legal claim that inhibits full auditing and understanding of their software. Corporate secrecy
laws are a barrier to due process: they contribute to the “black box effect” rendering systems
opaque and unaccountable, making it hard to assess bias, contest decisions, or remedy
errors. Anyone procuring these technologies for use in the public sector should demand that
vendors waive these claims before entering into any agreements.
5. Technology companies should provide protections for conscientious objectors, employee
organizing, and ethical whistleblowers. Organizing and resistance by technology workers
has emerged as a force for accountability and ethical decision making. Technology
companies need to protect workers’ ability to organize, whistleblow, and make ethical choices
about what projects they work on. This should include clear policies accommodating and
protecting conscientious objectors, ensuring workers the right to know what they are working
on, and the ability to abstain from such work without retaliation or retribution. Workers raising
ethical concerns must also be protected, as should whistleblowing in the public interest.
6. Consumer protection agencies should apply “truth-in-advertising” laws to AI products and
services. The hype around AI is only growing, leading to widening gaps between marketing
promises and actual product performance. With these gaps come increasing risks to both
individuals and commercial customers, often with grave consequences. Much like other
products and services that have the potential to seriously impact or exploit populations, AI
vendors should be held to high standards for what they can promise, especially when the
scientific evidence to back these promises is inadequate and the longer-term consequences
are unknown.
7. Technology companies must go beyond the “pipeline model” and commit to addressing the
practices of exclusion and discrimination in their workplaces. Technology companies and
the AI field as a whole have focused on the “pipeline model,” looking to train and hire more
diverse employees. While this is important, it overlooks what happens once people are hired
into workplaces that exclude, harass, or systemically undervalue people on the basis of
gender, race, sexuality, or disability. Companies need to examine the deeper issues in their
workplaces, and the relationship between exclusionary cultures and the products they build,
which can produce tools that perpetuate bias and discrimination. This change in focus needs
to be accompanied by practical action, including a commitment to end pay and opportunity
inequity, along with transparency measures about hiring and retention.
8. Fairness, accountability, and transparency in AI require a detailed account of the “full stack
supply chain.” For meaningful accountability, we need to better understand and track the
component parts of an AI system and the full supply chain on which it relies: that means
accounting for the origins and use of training data, test data, models, application program
interfaces (APIs), and other infrastructural components over a product life cycle. We call this
accounting for the “full stack supply chain” of AI systems, and it is a necessary condition for a
more responsible form of auditing. The full stack supply chain also includes understanding
the true environmental and labor costs of AI systems. This incorporates energy use, the use of
labor in the developing world for content moderation and training data creation, and the
reliance on clickworkers to develop and maintain AI systems.
9. More funding and support are needed for litigation, labor organizing, and community
participation on AI accountability issues. The people most at risk of harm from AI systems
are often those least able to contest the outcomes. We need increased support for robust
mechanisms of legal redress and civic participation. This includes supporting public
advocates who represent those cut off from social services due to algorithmic decision
making, civil society organizations and labor organizers that support groups that are at risk of
job loss and exploitation, and community-based infrastructures that enable public
participation.
10. University AI programs should expand beyond computer science and engineering
disciplines. AI began as an interdisciplinary field, but over the decades has narrowed to
become a technical discipline. With the increasing application of AI systems to social
domains, it needs to expand its disciplinary orientation. That means centering forms of
expertise from the social and humanistic disciplines. AI efforts that genuinely wish to address
social implications cannot stay solely within computer science and engineering departments,
where faculty and students are not trained to research the social world. Expanding the
disciplinary orientation of AI research will ensure deeper attention to social contexts, and
more focus on potential hazards when these systems are applied to human populations.
EXECUTIVE SUMMARY
At the core of the cascading scandals around AI in 2018 are questions of accountability: who is
responsible when AI systems harm us? How do we understand these harms, and how do we
remedy them? Where are the points of intervention, and what additional research and regulation is
needed to ensure those interventions are effective? Currently there are few answers to these
questions, and the frameworks presently governing AI are not capable of ensuring accountability.
As the pervasiveness, complexity, and scale of these systems grow, the lack of meaningful
accountability and oversight – including basic safeguards of responsibility, liability, and due
process – is an increasingly urgent concern.
Building on our 2016 and 2017 reports, the AI Now 2018 Report contends with this central
problem and addresses the following key issues:
1. The growing accountability gap in AI, which favors those who create and deploy these
technologies at the expense of those most affected
2. The use of AI to maximize and amplify surveillance, especially in conjunction with facial
and affect recognition, increasing the potential for centralized control and oppression
3. Increasing government use of automated decision systems that directly impact
individuals and communities without established accountability structures
4. Unregulated and unmonitored forms of AI experimentation on human populations
5. The limits of technological solutions to problems of fairness, bias, and discrimination
Within each topic, we identify emerging challenges and new research, and provide
recommendations regarding AI development, deployment, and regulation. We offer practical
pathways informed by research so that policymakers, the public, and technologists can better
understand and mitigate risks. Given that the AI Now Institute’s location and regional expertise is
concentrated in the U.S., this report will focus primarily on the U.S. context, which is also where
several of the world’s largest AI companies are based.
The AI accountability gap is growing: The technology scandals of 2018 have shown that the gap
between those who develop and profit from AI—and those most likely to suffer the consequences
of its negative effects—is growing larger, not smaller. There are several reasons for this, including
a lack of government regulation, a highly concentrated AI sector, insufficient governance
structures within technology companies, power asymmetries between companies and the people
they serve, and a stark cultural divide between the engineering cohort responsible for technical
research, and the vastly diverse populations where AI systems are deployed. These gaps are
producing growing concern about bias, discrimination, due process, liability, and overall
responsibility for harm. This report emphasizes the urgent need for stronger, sector-specific
research and regulation.
AI is amplifying widespread surveillance: The role of AI in widespread surveillance has expanded
immensely in the U.S., China, and many other countries worldwide. This is seen in the growing use
of sensor networks, social media tracking, facial recognition, and affect recognition. These
expansions not only threaten individual privacy, but accelerate the automation of surveillance, and
thus its reach and pervasiveness. This presents new dangers, and magnifies many longstanding
concerns. The use of affect recognition, based on debunked pseudoscience, is also on the rise.
Affect recognition attempts to read inner emotions by a close analysis of the face and is
connected to spurious claims about people’s mood, mental health, level of engagement, and guilt
or innocence. This technology is already being used for discriminatory and unethical purposes,
often without people’s knowledge. Facial recognition technology poses its own dangers,
reinforcing skewed and potentially discriminatory practices, from criminal justice to education to
employment, and presents risks to human rights and civil liberties in multiple countries.
Governments are rapidly expanding the use of automated decision systems without adequate
protections for civil rights: Around the world, government agencies are procuring and deploying
automated decision systems (ADS) under the banners of efficiency and cost-savings. Yet many of
these systems are untested and poorly designed for their tasks, resulting in illegal and often
unconstitutional violations of individual rights. Worse, when they make errors and bad decisions,
the ability to question, contest, and remedy these is often difficult or impossible. Some agencies
are attempting to provide mechanisms for transparency, due process, and other basic rights, but
trade secrecy and similar laws threaten to prevent auditing and adequate testing of these
systems. Drawing from proactive agency efforts, and from recent strategic litigation, we outline
pathways for ADS accountability.
Rampant testing of AI systems “in the wild” on human populations: Silicon Valley is known for
its “move fast and break things” mentality, whereby companies are pushed to experiment with
new technologies quickly and without much regard for the impact of failures, including who bears
the risk. In the past year, we have seen a growing number of experiments deploying AI systems “in
the wild” without proper protocols for notice, consent, or accountability. Such experiments
continue, due in part to a lack of consequences for failure. When harms occur, it is often unclear
where or with whom the responsibility lies. Researching and assigning appropriate responsibility
and liability remains an urgent priority.
The limits of technological fixes to problems of fairness, bias, and discrimination: Much new
work has been done designing mathematical models for what should be considered “fair” when
machines calculate outcomes, aimed at avoiding discrimination. Yet, without a framework that
accounts for social and political contexts and histories, these mathematical formulas for fairness
will almost inevitably miss key factors, and can serve to paper over deeper problems in ways that
ultimately increase harm or ignore justice. Broadening perspectives and expanding research into
AI fairness and bias beyond the merely mathematical is critical to ensuring we are capable of
addressing the core issues and moving the focus from parity to justice.
The move to ethical principles: This year saw the emergence of numerous ethical principles and
guidelines for the creation and deployment of AI technologies, many in response to growing
concerns about AI’s social implications. But as studies show, these types of ethical commitments
have little measurable effect on software development practices if they are not directly tied to
structures of accountability and workplace practices. Further, these codes and guidelines are
rarely backed by enforcement, oversight, or consequences for deviation. Ethical codes can only
help close the AI accountability gap if they are truly built into the processes of AI development and
are backed by enforceable mechanisms of responsibility that are accountable to the public
interest.
The following report develops these themes in detail, reflecting on the latest academic research,
and outlines seven strategies for moving forward:
1. Expanding AI fairness research beyond a focus on mathematical parity and statistical
fairness toward issues of justice
2. Studying and tracking the full stack of infrastructure needed to create AI, including
accounting for material supply chains
3. Accounting for the many forms of labor required to create and maintain AI systems
4. Committing to deeper interdisciplinarity in AI
5. Analyzing race, gender, and power in AI
6. Developing new policy interventions and strategic litigation
7. Building coalitions between researchers, civil society, and organizers within the technology
sector
These approaches are designed to positively recast the AI field and address the growing power
imbalance that currently favors those who develop and profit from AI systems at the expense of
the populations most likely to be harmed.
INTRODUCTION
The Social Challenges of AI in 2018
The past year has seen accelerated integration of powerful artificial intelligence systems into core
social institutions, against a backdrop of rising inequality, political populism, and industry
scandals.1
There have been major movements from both inside and outside technology
companies pushing for greater accountability and justice. The AI Now 2018 Report focuses on
these themes and examines the gaps between AI ethics and meaningful accountability, and the
role of organizing and regulation.
In short, it has been a dramatic year in AI. In any normal year, Cambridge Analytica seeking to
manipulate national elections in the US and UK using social media data and algorithmic ad
targeting would have been the biggest story.2
But in 2018, it was just one of many scandals.
Facebook had a series of disasters, including a massive data breach in September,3
multiple class
action lawsuits for discrimination,4
accusations of inciting ethnic cleansing in Myanmar,5
potential
violations of the Fair Housing Act,6
and hosting masses of fake Russian accounts.7
Throughout
the year, the company’s executives were frequently summoned to testify, with Mark Zuckerberg
facing the US Senate in April and the European Parliament in May.8
Zuckerberg mentioned AI
technologies over 30 times in his Congressional testimony as the cure-all to the company’s
problems, particularly in the complex areas of censorship, fairness, and content moderation.9
But Facebook wasn’t the only one in crisis. News broke in March that Google was building AI
systems for the Department of Defense’s drone surveillance program, Project Maven.10
The news
kicked off an unprecedented wave of technology worker organizing and dissent across the
industry.11
In June, when the Trump administration introduced the family separation policy that
forcibly removed immigrant children from their parents, employees from Amazon, Salesforce, and
Microsoft all asked their companies to end contracts with U.S. Immigration and Customs
Enforcement (ICE).12
Less than a month later, it was revealed that ICE modified its own risk
assessment algorithm so that it could only produce one result: the system recommended “detain”
for 100% of immigrants in custody.13
Throughout the year, AI systems continued to be tested on live populations in high-stakes
domains, with some serious consequences. In March, autonomous cars killed drivers and
pedestrians.14
Then in May, a voice recognition system in the UK designed to detect immigration
fraud ended up cancelling thousands of visas and deporting people in error.15
Documents leaked
in July showed that IBM Watson was producing “unsafe and incorrect” cancer treatment
recommendations.16
And an investigation in September revealed that IBM was also working with
the New York City Police Department (NYPD) to build an “ethnicity detection” feature to search
faces based on race, using police camera footage of thousands of people in the streets of New
York taken without their knowledge or permission.17
This is just a sampling of an extraordinary series of incidents from 2018.18
The response has
included a growing wave of criticism, with demands for greater accountability from the
technology industry and the systems they build.19
In turn, some companies have made public
calls for the U.S. to regulate technologies like facial recognition.20
Others have published AI ethics
principles and increased efforts to produce technical fixes for issues of bias and discrimination in
AI systems. But many of these ethical and technical approaches define the problem space very
narrowly, neither contending with the historical or social context nor providing mechanisms for
public accountability, oversight, and due process. This makes it nearly impossible for the public to
validate that any of the current problems have, in fact, been addressed.
As numerous scholars have noted, one significant barrier to accountability is the culture of
industrial and legal secrecy that dominates AI development.21
Just as many AI technologies are
“black boxes”, so are the industrial cultures that create them.22
Many of the fundamental building
blocks required to understand AI systems and to ensure certain forms of accountability – from
training data, to data models, to the code dictating algorithmic functions, to implementation
guidelines and software, to the business decisions that directed design and development – are
rarely accessible to review, hidden by corporate secrecy laws.
The current accountability gap is also caused by the incentives driving the rapid pace of technical
AI research. The push to “innovate,” publish first, and present a novel addition to the technical
domain has created an accelerated cadence in the field of AI, and in technical disciplines more
broadly. This comes at the cost of considering empirical questions of context and use, or
substantively engaging with ethical concerns.23
Similarly, technology companies are driven by
pressures to “launch and iterate,” which assume complex social and political questions will be
handled by policy and legal departments, leaving developers and sales departments free from the
responsibility of considering the potential downsides. The “move fast and break things” culture
provides little incentive for ensuring meaningful public accountability or engaging the
communities most likely to experience harm.24
This is particularly problematic as the accelerated
application of AI systems in sensitive social and political domains presents risks to marginalized
communities.
The challenge to create better governance and greater accountability for AI poses particular
problems when such systems are woven into the fabric of government and public institutions.
The lack of transparency, notice, meaningful engagement, accountability, and oversight creates
serious structural barriers for due process and redress for unjust and discriminatory decisions.
In this year’s report, we assess many pressing issues facing us as AI tools are deployed further
into the institutions that govern everyday life. We focus on the biggest industry players, because
the number of companies able to create AI at scale is very small, while their power and reach is
global. We evaluate the current range of responses from industry, governments, researchers,
activists, and civil society at large. We suggest a series of substantive approaches and make ten
specific recommendations. Finally, we share the latest research and policy strategies that can
contribute to greater accountability, as well as a richer understanding of AI systems in a wider
social context.
1. THE INTENSIFYING PROBLEM SPACE
In identifying the most pressing social implications of AI this year, we look closely at the role of AI
in widespread surveillance in multiple countries around the world, and at the implications for
rights and liberties. In particular, we consider the increasing use of facial recognition, and a
subclass of facial recognition known as affect recognition, and assess the growing calls for
regulation. Next, we share our findings on the government use of automated decision systems,
and what questions this raises for fairness, transparency, and due process when such systems
are protected by trade secrecy and other laws that prevent auditing and close examination.25
Finally, we look at the practices of deploying experimental systems “in the wild,” testing them on
human populations. We analyze who has the most to gain, and who is at greatest risk of
experiencing harm.
1.1 AI is Amplifying Widespread Surveillance
This year, we have seen AI amplify large-scale surveillance through techniques that analyze video,
audio, images, and social media content across entire populations and identify and target
individuals and groups. While researchers and advocates have long warned about the dangers of
mass data collection and surveillance,26
AI raises the stakes in three areas: automation, scale of
analysis, and predictive capacity. Specifically, AI systems allow automation of surveillance
capabilities far beyond the limits of human review and hand-coded analytics. Thus, they can serve
to further centralize these capabilities in the hands of a small number of actors. These systems
also exponentially scale analysis and tracking across large quantities of data, attempting to make
connections and inferences that would have been difficult or impossible before their introduction.
Finally, they provide new predictive capabilities to make determinations about individual character
and risk profiles, raising the possibility of granular population controls.
China has offered several examples of alarming AI-enabled surveillance this year, which we know
about largely because the government openly acknowledges them. However, it’s important to
note that many of the same infrastructures already exist in the U.S. and elsewhere, often
produced and promoted by private companies whose marketing emphasizes beneficial use
cases. In the U.S. the use of these tools by law enforcement and government is rarely open to
public scrutiny, as we will review, and there is much we do not know. Such infrastructures and
capabilities could easily be turned to more surveillant ends in the U.S., without public disclosure
and oversight, depending on market incentives and political will.
In China, military and state-sanctioned automated surveillance technology is being deployed to
monitor large portions of the population, often targeting marginalized groups. Reports include
installation of facial recognition tools at the Hong Kong-Shenzhen border,27
using flocks of robotic
dove-like drones in five provinces across the country,28
and the widely reported social credit
monitoring system,29
each of which illustrates how AI-enhanced surveillance systems can be
mobilized as a means of far-reaching social control.30
The most oppressive use of these systems is reportedly occurring in the Xinjiang Autonomous
Region, described by The Economist as a “police state like no other.”31
Surveillance in this Uighur
ethnic minority area is pervasive, ranging from physical checkpoints and programs where Uighur
households are required to “adopt” Han Chinese officials into their family, to the widespread use of
surveillance cameras, spyware, Wi-Fi sniffers, and biometric data collection, sometimes by
stealth. Machine learning tools integrate these streams of data to generate extensive lists of
suspects for detention in re-education camps, built by the government to discipline the group.
Estimates of the number of people detained in these camps range from hundreds of thousands to
nearly one million.32
These infrastructures are not unique to China. Venezuela announced the adoption of a new smart
card ID known as the “carnet de patria,” which, by integrating government databases linked to
social programs, could enable the government to monitor citizens’ personal finances, medical
history, and voting activity.33
In the United States, we have seen similar efforts. The Pentagon has
funded research on AI-enabled social media surveillance to help predict large-scale population
behaviors,34
and the U.S. Immigration and Customs Enforcement (ICE) agency is using an
Investigative Case Management System developed by Palantir and powered by Amazon Web
Services in its deportation operations.35
The system integrates public data with information
purchased from private data brokers to create profiles of immigrants in order to aid the agency in
profiling, tracking, and deporting individuals.36
These examples show how AI systems increase
integration of surveillance technologies into data-driven models of social control and amplify the
power of such data, magnifying the stakes of misuse and raising urgent and important questions
as to how basic rights and liberties will be protected.
The faulty science and dangerous history of affect recognition
We are also seeing new risks emerging from unregulated facial recognition systems. These
systems facilitate the detection and recognition of individual faces in images or video, and can be
used in combination with other tools to conduct more sophisticated forms of surveillance, such
as automated lip-reading, offering the ability to observe and interpret speech from a distance.37
Among a host of AI-enabled surveillance and tracking techniques, facial recognition raises
particular civil liberties concerns. Because facial features are a very personal form of biometric
identification that is extremely difficult to change, it is hard to subvert or “opt out” of its operations.
And unlike other tracking tools, facial recognition seeks to use AI for much more than simply
recognizing faces. Once identified, a face can be linked with other forms of personal records and
identifiable data, such as credit score, social graph, or criminal record.
Affect recognition, a subset of facial recognition, aims to interpret faces to automatically detect
inner emotional states or even hidden intentions. This approach promises a type of emotional
weather forecasting: analyzing hundreds of thousands of images of faces, detecting
“micro-expressions,” and mapping these expressions to “true feelings.”38
This reactivates a long
tradition of physiognomy – a pseudoscience that claims facial features can reveal innate aspects
of our character or personality. Dating from ancient times, scientific interest in physiognomy grew
enormously in the nineteenth century, when it became a central method for scientific forms of
racism and discrimination.39
Although physiognomy fell out of favor following its association with
Nazi race science, researchers are worried about a reemergence of physiognomic ideas in affect
recognition applications.40
The idea that AI systems might be able to tell us what a student, a
customer, or a criminal suspect is really feeling or what type of person they intrinsically are is
proving attractive to both corporations and governments, even though the scientific justifications
for such claims are highly questionable, and the history of their discriminatory purposes
well-documented.
The case of affect detection reveals how machine learning systems can easily be used to
intensify forms of classification and discrimination, even when the basic foundations of these
theories remain controversial among psychologists. The scientist most closely associated with
AI-enabled affect detection is the psychologist Paul Ekman, who asserted that emotions can be
grouped into a small set of basic categories like anger, disgust, fear, happiness, sadness, and
surprise.41
Studying faces, according to Ekman, produces an objective reading of authentic
interior states—a direct window to the soul. Underlying his belief was the idea that emotions are
fixed and universal, identical across individuals, and clearly visible in observable biological
mechanisms regardless of cultural context. But Ekman’s work has been deeply criticized by
psychologists, anthropologists, and other researchers who have found his theories do not hold up
under sustained scrutiny.42
The psychologist Lisa Feldman Barrett and her colleagues have
argued that an understanding of emotions in terms of these rigid categories and simplistic
physiological causes is no longer tenable.43
Nonetheless, AI researchers have taken his work as
fact, and used it as a basis for automating emotion detection.44
Contextual, social, and cultural factors — how, where, and by whom such emotional signifiers are
expressed — play a larger role in emotional expression than was believed by Ekman and his peers.
In light of this new scientific understanding of emotion, any simplistic mapping of a facial
expression onto basic emotional categories through AI is likely to reproduce the errors of an
outdated scientific paradigm. It also raises troubling ethical questions about locating the arbiter of
someone’s “real” character and emotions outside of the individual, and the potential abuse of
power that can be justified based on these faulty claims. Psychiatrist Jamie Metzl documents a
recent cautionary example: a pattern in the 1960s of diagnosing Black people with schizophrenia
if they supported the civil rights movement.45
Affect detection combined with large-scale facial
recognition has the potential to magnify such political abuses of psychological profiling.
In the realm of education, some U.S. universities have considered using affect analysis software
on students.46
The University of St. Thomas, in Minnesota, looked at using a system based on
Microsoft’s facial recognition and affect detection tools to observe students in the classroom
using a webcam. The system predicts the students’ emotional state. An overview of student
sentiment is viewable by the teacher, who can then shift their teaching in a way that “ensures
student engagement,” as judged by the system. This raises serious questions on multiple levels:
what if the system, with a simplistic emotional model, simply cannot grasp more complex states?
How would a student contest a determination made by the system? What if different students are
seen as “happy” while others are “angry”—how should the teacher redirect the lesson? What are
the privacy implications of such a system, particularly given that, in the case of the pilot program,
there is no evidence that students were informed of its use on them?
Outside of the classroom, we are also seeing personal assistants, like Alexa and Siri, seeking to
pick up on the emotional undertones of human speech, with companies even going so far as to
patent methods of marketing based on detecting emotions, as well as mental and physical
health.47
The AI-enabled emotion measurement company Affectiva now promises it can promote
safer driving by monitoring “driver and occupant emotions, cognitive states, and reactions to the
driving experience...from face and voice.”48
Yet there is little evidence that any of these systems
actually work across different individuals, contexts, and cultures, or have any safeguards put in
place to mitigate concerns about privacy, bias, or discrimination in their operation. Furthermore,
as we have seen in the large literature on bias and fairness, classifications of this nature not only
have direct impacts on human lives, but also serve as data to train and influence other AI
systems. This raises the stakes for any use of affect recognition, further emphasizing why it
should be critically examined and its use severely restricted.
Facial recognition amplifies civil rights concerns
Concerns are intensifying that facial recognition increases racial discrimination and other biases
in the criminal justice system. Earlier this year, the American Civil Liberties Union (ACLU)
disclosed that both the Orlando Police Department and the Washington County Sheriff’s
department were using Amazon’s Rekognition system, which boasts that it can perform “real-time
face recognition across tens of millions of faces” and detect “up to 100 faces in challenging
crowded photos.”49
In Washington County, Amazon specifically worked with the Sheriff’s
department to create a mobile app that could scan faces and compare them against a database
of at least 300,000 mugshots.50
An Amazon representative recently revealed during a talk that
they have been considering applications where Orlando’s network of surveillance cameras could
be used in conjunction with facial recognition technology to find a “person of interest” wherever
they might be in the city.51
In addition to the privacy and mass surveillance concerns commonly raised, the use of facial
recognition in law enforcement has also intersected with concerns of racial and other biases.
Researchers at the ACLU and the University of California (U.C.) Berkeley tested Amazon’s
Rekognition tool by comparing the photos of sitting members in the United States Congress with
a database containing 25,000 photos of people who had been arrested. The results showed
significant levels of inaccuracy: Amazon’s Rekognition incorrectly identified 28 members of
Congress as people from the arrest database. Moreover, the false positives disproportionately
occurred among non-white members of Congress, with an error rate of nearly 40% compared to
only 5% for white members.52
Such results echo a string of findings that have demonstrated that
facial recognition technology is, on average, better at detecting light-skinned people than
dark-skinned people, and better at detecting men than women.53
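To make the arithmetic behind audits of this kind concrete, below is a minimal sketch, in Python, of how per-group false-match rates can be tallied. It is illustrative only: the probe records and group labels are hypothetical placeholders, not the ACLU's data or methodology, and a real audit would also need to account for the match-confidence threshold and the composition of the reference database.

from collections import Counter

# Hypothetical audit records for probe photos of people known NOT to appear
# in the reference (e.g., mugshot) database, so any reported match is a
# false positive: (probe_id, demographic_group, system_reported_match)
probes = [
    ("p1", "non-white", True),
    ("p2", "non-white", False),
    ("p3", "non-white", True),
    ("p4", "white", False),
    ("p5", "white", False),
    ("p6", "white", True),
]

totals = Counter(group for _, group, _ in probes)
false_matches = Counter(group for _, group, matched in probes if matched)

for group in sorted(totals):
    rate = false_matches[group] / totals[group]
    print(f"{group}: {false_matches[group]} of {totals[group]} probes falsely matched ({rate:.0%})")

Because every probe is known to be absent from the database, each reported match is an error; comparing the resulting per-group rates is what surfaces the kind of racial disparity the ACLU test reported.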
In its response to the ACLU, Amazon acknowledged that “the Rekognition results can be
significantly skewed by using a facial database that is not appropriately representative.”54
Given
the deep and historical racial biases in the criminal justice system, most law enforcement
databases are unlikely to be “appropriately representative.”55
Despite these serious flaws, ongoing
pressure from civil rights groups, and protests from Amazon employees over the potential for
misuse of these technologies, Amazon Web Services CEO Andrew Jassy recently told employees
that “we feel really great and really strongly about the value that Amazon Rekognition is providing
our customers of all sizes and all types of industries in law enforcement and out of law
enforcement.”56
Nor is Amazon alone in implementing facial recognition technologies in unaccountable ways.
Investigative journalists recently disclosed that IBM and the New York City Police Department
(NYPD) partnered to develop such a system that included “ethnicity search” as a custom feature,
trained on thousands of hours of NYPD surveillance footage.57
Use of facial recognition software
in the private sector has expanded as well.58
Major retailers and venues have already begun using
these technologies to detect shoplifters, monitor crowds, and even “scan for unhappy customers,”
using facial recognition systems instrumented with “affect detection” capabilities.59
These concerns are amplified by a lack of laws and regulations. There is currently no federal
legislation that seeks to provide standards, restrictions, requirements, or guidance regarding the
development or use of facial recognition technology. In fact, most existing federal legislation
looks to promote the use of facial recognition for surveillance, immigration enforcement,
employment verification, and domestic entry-exit systems.60
The laws that we do have are
piecemeal, and none specifically address facial recognition. Among these is the Biometric
Information Privacy Act, a 2008 Illinois law that sets forth stringent rules regarding the collection
of biometrics. While the law does not mention facial recognition, given that the technology was
not widely available in 2008, many of its requirements, such as obtaining consent, are reasonably
interpreted to apply.61
More recently, several municipalities and a local transit system have
adopted ordinances that seek to create greater transparency and oversight of data collection and
use requirements regarding the acquisition of surveillance technologies, which would include
facial recognition based on the expansive definition in these ordinances.62
Opposition to the use of facial recognition tools by government agencies is growing. Earlier this
year, AI Now joined the ACLU and over 30 other research and advocacy organizations calling on
Amazon to stop selling facial recognition software to government agencies after the ACLU
uncovered documents showing law enforcement use of Amazon’s Rekognition API.63
Members of
Congress are also pushing Amazon to provide more information.64
Some have gone further, calling for an outright ban. Scholars Woodrow Hartzog and Evan Selinger
argue that facial recognition technology is a “tool for oppression that’s perfectly suited for
governments to display unprecedented authoritarian control and an all-out privacy-eviscerating
machine,” necessitating extreme caution and diligence before being applied in our contemporary
digital ecosystem.65
Critiquing the Stanford “gaydar” study that claimed its deep neural network
was more accurate than humans at predicting sexuality from facial images,66
Frank Pasquale
wrote that “there are some scientific research programs best not pursued - and this might be one
of them.”67
Kade Crockford, Director of the Technology for Liberty Program at ACLU of Massachusetts, also
wrote in favor of a ban, stating that “artificial intelligence technologies like face recognition
systems fundamentally change the balance of power between the people and the
government...some technologies are so dangerous to that balance of power that they must be
rejected.”68
Microsoft President Brad Smith has called for government regulation of facial
recognition, while Rick Smith, CEO of law enforcement technology company Axon, recently stated
that the “accuracy thresholds” of facial recognition tools aren’t “where they need to be to be
making operational decisions.”69
The events of this year have strongly underscored the urgent need for stricter regulation of both
facial and affect recognition technologies. Such regulations should severely restrict use by both
the public and the private sector, and ensure that communities affected by these technologies are
the final arbiters of whether they are used at all. This is especially important in situations where
basic rights and liberties are at risk, requiring stringent oversight, audits, and transparency.
Linkages should not be permitted between private and government databases. At this point, given
the evidence in hand, policymakers should not be funding or furthering the deployment of these
systems in public spaces.
1.2 The Risks of Automated Decision Systems in Government
Over the past year, we have seen a substantial increase in the adoption of Automated Decision
Systems (ADS) across government domains, including criminal justice, child welfare, education,
and immigration. Often adopted under the theory that they will improve government efficiency or
cost-savings, ADS seek to aid or replace various decision-making processes and policy
determinations. However, because the underlying models are often proprietary and the systems
frequently untested before deployment, many community advocates have raised significant
concerns about lack of due process, accountability, community engagement, and auditing.70
Such was the case for Tammy Dobbs, who moved to Arkansas in 2008 and signed up for a state
disability program to help her with her cerebral palsy.71
Under the program, the state sent a
qualified nurse to assess Tammy to determine the number of caregiver hours she would need.
Because Tammy spent most of her waking hours in a wheelchair and had stiffness in her hands,
her initial assessment allocated 56 hours of home care per week. Fast forward to 2016, when the
state assessor arrived with a new ADS on her laptop. Using a proprietary algorithm, this system
calculated the number of hours Tammy would be allotted. Without any explanation or opportunity
for comment, discussion, or reassessment, the program allotted Tammy 32 hours per week, a
massive and sudden drop that Tammy had no chance to prepare for and that severely reduced
her quality of life.
Nor was Tammy’s situation exceptional. According to Legal Aid of Arkansas attorney Kevin De
Liban, hundreds of other individuals with disabilities also received dramatic reductions in hours, all
without any meaningful opportunity to understand or contest their allocations. Legal Aid
subsequently sued the State of Arkansas, eventually winning a ruling that the new algorithmic
allocation program was erroneous and unconstitutional. Yet by then, much of the damage to the
lives of those affected had been done.72
The Arkansas disability cases provide a concrete example of the substantial risks that occur
when governments use ADS in decisions that have immediate impacts on vulnerable populations.
While individual assessors may also suffer from bias or flawed logic, the impact of their
case-by-case decisions has nowhere near the magnitude or scale that a single flawed ADS can
have across an entire population.
The increased introduction of such systems comes at a time when, according to the World
Income Inequality Database, the United States has the highest income inequality rate of all
western countries.73
Moreover, Federal Reserve data shows wealth inequalities continue to grow,
and racial wealth disparities have more than tripled in the last 50 years, with current policies set to
exacerbate such problems.74
In 2018 alone, we have seen a U.S. executive order cutting funding
for social programs that serve the country’s poorest citizens,75
alongside a proposed federal
budget that will significantly reduce low-income and affordable housing,76
the implementation of
onerous work requirements for Medicaid,77
and a proposal to cut food assistance benefits for
low-income seniors and people with disabilities.78
In the context of such policies, agencies are under immense pressure to cut costs, and many are
looking to ADS as a means of automating hard decisions that have very real effects on those
most in need.79
As such, many ADS are often implemented with the goal of doing more
with less in the context of austerity policies and cost-cutting. They are frequently designed and
configured primarily to achieve these goals, with their ultimate effectiveness being evaluated
based on their ability to trim costs, often at the expense of the populations such tools are
ostensibly intended to serve.80
As researcher Virginia Eubanks argues, “What seems like an effort
to lower program barriers and remove human bias often has the opposite effect, blocking
hundreds of thousands of people from receiving the services they deserve.”81
When these problems arise, they are frequently difficult to remedy. Few ADS are designed or
implemented in ways that easily allow affected individuals to contest, mitigate, or fix adverse or
incorrect decisions. Additionally, human discretion and the ability to intervene or override a
system’s determination is often substantially limited or removed from case managers, social
workers, and others trained to understand the context and nuance of a particular person and
situation.82
These front-line workers become mere intermediaries, communicating inflexible
decisions made by automated systems, without the ability to alter them.
Unlike the civil servants who have historically been responsible for such decisions, many ADS
come from private vendors and are frequently implemented without thorough testing, review, or
auditing to ensure their fitness for a given domain.83
Nor are these systems typically built with any
explicit form of oversight or accountability. This makes discovery of problematic automated
outcomes difficult, especially since such errors and evidence of discrimination frequently
manifest as collective harms, only recognizable as a pattern across many individual cases.
Detecting such problems requires oversight and monitoring. It also requires access to data that is
often neither available to advocates and the public nor monitored by government agencies.
For example, the Houston Federation of Teachers sued the Houston Independent School District
for procuring a third-party ADS to use student test data to make teacher employment decisions,
including which teachers were promoted and which were terminated. It was revealed that no one
in the district – not a single employee – could explain or even replicate the determinations made
by the system, even though the district had access to all the underlying data.84
Teachers who
sought to contest the determinations were told that the “black box” system was simply to be
believed and could not be questioned. Even when the teachers brought a lawsuit, claiming
constitutional, civil rights, and labor law violations, the ADS vendor fought against providing any
access to how its system worked. As a result, the judge ruled that the use of this ADS in public
employee cases could run afoul of constitutional due process protections, especially when trade
secrecy blocked employees’ ability to understand how decisions were made. The case has
subsequently been settled, with the District agreeing to abandon the third-party ADS.
Similarly, in 2013, Los Angeles County adopted an ADS to assess imminent danger or harm to
children, and to predict the likelihood of a family being re-referred to the child welfare system
within 12 to 18 months. The County did not perform a review of the system or assess the efficacy
of using predictive analytics for child safety and welfare. It was only after the death of a child
whom the system failed to identify as at-risk that County leadership directed a review, which
raised serious questions regarding the system’s validity. The review specifically noted that the
system failed to provide a comprehensive picture of a given family, “but instead focus[ed] on a few
broad strokes without giving weight to important nuance.”85
Virginia Eubanks found similar
problems in her investigation of an ADS developed by the same private vendor for use in
Allegheny County, PA. This system produced biased outcomes because it significantly
oversampled poor children from working class communities, especially communities of color, in
effect subjecting poor parents and children to more frequent investigation.86
Even in the face of acknowledged issues of bias and the potential for error in high-stakes
domains, these systems are being rapidly adopted. The Ministry of Social Development in New
Zealand supported the use of a predictive ADS to identify children at risk of maltreatment,
despite their recognizing that the system raised “significant ethical concerns.” They defended this
on the grounds that the benefits “plausibly outweighed” the potential harms, which included
reconfiguring child welfare as a statistical issue.87
These cases not only highlight the need for greater transparency, oversight, and accountability in
the adoption, development, and implementation of ADS, but also the need for examination of the
limitations of these systems overall, and of the economic and policy factors that accompany the
push to apply such systems. Virginia Eubanks, who investigated Allegheny County’s use of an
ADS in child welfare, looked at this and a number of case studies to show how ADS are often
adopted to avoid or obfuscate broader structural and systemic problems in society – problems
that are often beyond the capacity of cash-strapped agencies to address meaningfully.88
Other automated systems have also been proposed as a strategy to combat pre-existing
problems within government systems. For years, criminal justice advocates and researchers have
pushed for the elimination of cash bail, which has been shown to disproportionately harm
individuals based on race and socioeconomic status while at the same time failing to enhance
public safety.89
In response, New Jersey and California recently passed legislation aimed at
addressing this concern. However, instead of simply ending cash bail, they replaced it with a
pretrial assessment system designed to algorithmically generate “risk” scores that claim to
predict whether a person should go free or be detained in jail while awaiting trial.90
The shift from policies such as cash bail to automated systems and risk assessment scoring is
still relatively new, and is proceeding even without substantial research examining the potential to
amplify discrimination within the criminal justice system. Yet there are some early indicators that
raise concern. New Jersey’s law went into effect in 2017, and while the state has experienced a
decline in its pretrial population, advocates have expressed worry that racial disparities in the risk
assessment system persist.91
Similarly, when California’s legislation passed earlier this year, many
of the criminal justice advocates who pushed for the end of cash bail, and supported an earlier
version of the bill, opposed its final version due to the risk assessment requirement.92
Education policy is also feeling the impact of automated decision systems. A University College
London professor is among those who argued for AI to replace standardized testing, suggesting
that UCL Knowledge Lab’s AIAssess can be “trusted...with the assessment of our children’s
knowledge and understanding,” and can serve to replace or augment more traditional testing.93
However, much like other forms of AI, there is a growing body of research that shows automated
essay scoring systems may encode bias against certain linguistic and ethnic groups in ways that
replicate patterns of marginalization.94
Unfair decisions based on automated scores assigned to
students from historically and systemically disadvantaged groups are likely to have profound
consequences on children’s lives, and to exacerbate existing disparities in access to employment
opportunities and resources.95
The implications of educational ADS go beyond testing to other areas, such as school
assignments and even transportation. The City of Boston was in the spotlight this year after two
failed efforts to address school equity via automated systems. First, the school district adopted a
geographically-driven school assignment algorithm, intended to provide students access to higher
quality schools closer to home. The city’s goal was to increase the racial and geographic
integration in the school district, but a report assessing the impact of the system determined that
it did the opposite: while it shortened student commutes, it ultimately reduced school
integration.96
Researchers noted that this was, in part, because it was impossible for the system
to meet its intended goal given the history and context within which it was being used. The
geographic distribution of quality schools in Boston was already inequitable, and the pre-existing
racial disparities that played a role in placement at these schools created complications that
could not be overcome by an algorithm.97
Following this, the Boston school district tried again to use an algorithmic system to reduce
inequity, this time designing it to reconfigure school start times – aiming to begin high school
later, and middle school earlier. This was done in an effort to improve student health and
performance based on a recognition of students’ circadian rhythms at different ages, and to
optimize use of school buses to produce cost savings. It also aimed to increase racial equity,
since students of color primarily attended schools with inconvenient start times compounded by
long bus rides. The city developed an ADS that optimized for these goals. However, it was never
implemented because of significant public backlash, which ultimately resulted in the resignation
of the superintendent.98
In this case, the design process failed to adequately recognize the needs of families, or include
them in defining and reviewing system goals. Under the proposed system, parents with children in
both high school and middle school would need to reconfigure their schedules for vastly different
start and end times, putting strain on those without this flexibility. The National Association for
the Advancement of Colored People (NAACP) and the Lawyers’ Committee for Civil Rights and
Economic Justice opposed the plan because of the school district’s failure to appreciate that
parents of color and lower-income parents often rely on jobs that lack work schedule flexibility
and may not be able to afford additional child care.99
These failed efforts demonstrate two important issues that policymakers must consider when
evaluating the use of these systems. First, unaddressed structural and systemic problems will
persist and will likely undermine the potential benefits of these systems if they are not addressed
prior to a system’s design and implementation. Second, robust and meaningful community
engagement is essential before a system is put in place and should be included in the process of
establishing a system’s goals and purpose.
In AI Now’s Algorithmic Impact Assessment (AIA) framework, community engagement is an
integral part of any ADS accountability process, both as part of the design stage as well as before,
during, and after implementation.100
When affected communities have the opportunity to assess
and potentially reject the use of systems that are not acceptable, and to call out fundamental
flaws in the system before it is put in place, the validity and legitimacy of the system are vastly
improved. Such engagement serves communities and government agencies: if parents of color
and lower-income parents in Boston were meaningfully engaged in assessing the goals of the
school start time algorithmic intervention, their concerns might have been accounted for in the
design of the system, saving the city time and resources, and providing a much-needed model of
oversight.
Above all, accountability in the government use of algorithmic systems is impossible when the
systems making recommendations are “black boxes.” When third-party vendors insist on trade
secrecy to keep their systems opaque, it makes any path to redress or appeal extremely
difficult.101
This is why vendors should waive trade secrecy and other legal claims that would
inhibit the ability to understand, audit, or test their systems for bias, error, or other issues. It is
important for both people in government and those who study the effects of these systems to
understand why automated recommendations are made, and to be able to trust their validity. It is
even more critical that those whose lives are negatively impacted by these systems be able to
contest and appeal adverse decisions.102
Governments should be cautious: while automated decision systems may promise short-term
cost savings and efficiencies, it is governments, not third-party vendors, who will ultimately be
held responsible for their failings. Without adequate transparency, accountability, and oversight,
these systems risk introducing and reinforcing unfair and arbitrary practices in critical
government determinations and policies.103
1.3 Experimenting on Society: Who Bears the Burden?
Over the last ten years, the funding and focus on technical AI research and development has
accelerated. But efforts at ensuring that these systems are safe and non-discriminatory have not
received the same resources or attention. Currently, there are few established methods for
measuring, validating, and monitoring the effects of AI systems “in the wild”. AI systems tasked
with significant decision making are effectively tested on live populations, often with little
oversight or a clear regulatory framework.
For example, in March 2018, a self-driving Uber was navigating the Phoenix suburbs and failed to
“see” a woman, hitting and killing her.104
Last March, Tesla confirmed that a second driver had
been killed in an accident in which the car’s autopilot technology was engaged.105
Neither
company suffered serious consequences, and in the case of Uber, the person minding the
autonomous vehicle was ultimately blamed, even though Uber had explicitly disabled the vehicle’s
system for automatically applying brakes in dangerous situations.106
Despite these fatal errors,
Alphabet Inc.’s Waymo recently announced plans for an “early rider program” in Phoenix.107
Residents can sign up to be Waymo test subjects, and be driven by autonomous vehicles in the process.
Many claim that the occasional autonomous vehicle fatality needs to be put in the context of the
existing ecosystem, in which many driving-related deaths happen without AI.108
However, because
regulations and liability regimes govern humans and machines differently, risks generated from
machine-human interactions do not cleanly fall into a discrete regulatory or accountability
category. Strong incentives for regulatory and jurisdictional arbitrage exist in this and many other
AI domains. For example, the fact that Phoenix serves as the site of Waymo and Uber testing is
not an accident. Early this year, Arizona, perhaps swayed by a promise of technology jobs and
capital, made official what the state had allowed in practice since 2015: fully autonomous vehicles
without anyone behind the wheel are permitted on public roads. This policy was put in place
without any of the regulatory scaffolding that would be required to contend with the complex
issues that are raised in terms of liability and accountability. In the words of the Phoenix New
Times: “Arizona has agreed to step aside and see how this technology develops. If something
goes wrong, well, there's no plan for that yet.”109
This regulatory accountability gap is clearly visible
in the Uber death case, apparently caused by a combination of corporate expedience (disabling
the automatic braking system) and backup driver distraction.110
While autonomous vehicles arguably present AI’s most straightforward non-military dangers to
human safety, other AI domains also raise serious concerns. For example, IBM’s Watson for
Oncology is already being tested in hospitals across the globe, assisting in patient diagnostics
and clinical care. Increasingly, its effectiveness, and the promises of IBM’s marketing, are being
questioned. Investigative reporters gained access to internal documents that paint a troubling
picture of IBM’s system, including its recommending “unsafe and incorrect cancer treatments.”
While this system was still in its trial phase, it raised serious concerns about the incentives driving
the rush to integrate such technology, and the lack of clinical validation and peer-reviewed
research attesting to IBM’s marketing claims of effectiveness.111
Such events have not slowed AI deployment in healthcare. Recently, the U.S. Food and Drug
Administration (FDA) issued a controversial decision to clear the new Apple Watch, which
features a built-in electrocardiogram (EKG) and the ability to notify a user of irregular heart
rhythm, as safe for consumers.112
Here, concerns that the FDA may be moving too quickly in an
attempt to keep up with the pace of innovation have joined with concerns around data privacy and
security.113
Similarly, DeepMind Health’s decision to move its Streams Application, a tool designed
to support decision-making by nurses and health practitioners, under the umbrella of Google,
caused some to worry that DeepMind’s promise not to share patient data would be
broken.114
Children and young adults are frequently subjects of such experiments. Earlier this year, it was
revealed that Pearson, a major AI-education vendor, inserted “social-psychological interventions”
into one of its commercial learning software programs to test how 9,000 students would respond.
They did this without the consent or knowledge of students, parents, or teachers.115
The company
then tracked whether students who received “growth-mindset” messages through the learning
software attempted and completed more problems than students who did not. This psychological
testing on unknowing populations, especially young people in the education system, raises
significant ethical and privacy concerns. It also highlights the growing influence of private
companies in purportedly public domains, and the lack of transparency and due process that
accompany the current practices of AI deployment and integration.
Here we see not only examples of the real harms that can come from biased and inaccurate AI
systems, but also evidence of the AI industry’s willingness to conduct early releases of experimental
tools on human populations. As Amazon recently responded when criticized for monetizing
people’s wedding and baby registries with deceptive advertising tactics, “we’re constantly
experimenting.”116
This is a repeated pattern when market dominance and profits are valued over
safety, transparency, and assurance. Without meaningful accountability frameworks, as well as
strong regulatory structures, this kind of unchecked experimentation will only expand in size and
scale, and the potential hazards will grow.
2. EMERGING SOLUTIONS IN 2018
2.1 Bias Busting and Formulas for Fairness: the Limits of
Technological “Fixes”
Over the past year, we have seen growing consensus that AI systems perpetuate and amplify
bias, and that computational methods are not inherently neutral and objective. This recognition
comes in the wake of a string of examples, including evidence of bias in algorithmic pretrial risk
assessments and hiring algorithms, and has been aided by the work of the Fairness,
Accountability, and Transparency in Machine Learning community.117
The community has been at
the center of an emerging body of academic research on AI-related bias and fairness, producing
insights into the nature of these issues, along with methods aimed at remediating bias. These
approaches are now being operationalized in industrial settings.
In the search for “algorithmic fairness”, many definitions of fairness, along with strategies to
achieve it, have been proposed over the past few years, primarily by the technical community.118
This work has informed the development of new algorithms and statistical techniques that aim to
diagnose and mitigate bias. The success of such techniques is generally measured against one or
another computational definition of fairness, based on a mathematical set of results. However,
the problems these techniques ultimately aim to remedy have deep social and historical roots,
some of which are more cleanly captured by discrete mathematical representations than others.
Below is a brief survey of some of the more prominent approaches to understanding and defining
issues involving algorithmic bias and fairness.
● Allocative harms describe the effects of AI systems that unfairly withhold services,
resources, or opportunities from some. Such harms have captured much of the attention
of those dedicated to building technical interventions that ensure fair AI systems, in part
because it is (theoretically) possible to quantify such harms and their remediation.119
However, we have seen less attention paid to fixing systems that amplify and reproduce
representational harms: the harm caused by systems that reproduce and amplify harmful
stereotypes, often doing so in ways that mirror assumptions used to justify discrimination
and inequality.
In a keynote at the 2017 Conference on Neural Information Processing Systems (NeurIPS), AI Now
cofounder Kate Crawford described the way in which historical patterns of discrimination
and classification, which often construct harmful representations of people based on
perceived differences, are reflected in the assumptions and data that inform AI systems,
often resulting in allocative harms.120
This perspective requires one to move beyond
locating biases in an algorithm or dataset, and to consider “the role of AI in harmful
representations of human identity,” and the way in which such harmful representations both
shape, and are shaped by, our social and cultural understandings of ourselves and each
other.121
● Observational fairness strategies attempt to diagnose and mitigate bias by considering a
dataset (either data used for training an AI model, or the input data processed by such a
model), and applying methods to the data aimed at detecting whether it encodes bias
against individuals or groups based on characteristics such as race, gender, or
socioeconomic standing. These characteristics are typically referred to as protected or
sensitive attributes. The majority of observational fairness approaches can be categorized
as being a form of either anti-classification, classification parity, or calibration, as
proposed by Sam Corbett-Davies and Sharad Goel.122
Observational fairness strategies
have increasingly emerged through efforts from the community to contend with the
limitations of technical fairness work and to provide entry points for other disciplines.123
● Anti-classification strategies declare a machine learning model to be fair if it does not
depend on protected attributes in the data set. For instance, this strategy considers a
pretrial risk assessment of two defendants who differ based on race or gender but are
identical in terms of their other personal information to be “fair” if they are assigned the
same risk. This strategy often requires omitting all protected attributes and their “proxies”
from the data set that is used to train a model (proxies being any attributes that are
correlated to protected attributes, such as ZIP code being correlated with race).124
● Classification parity declares a model fair when its predictive performance is equal across
groupings that are defined by protected attributes. For example, classification parity would
ensure that the percentage of people an algorithm turns down for a loan when they are
actually creditworthy (its “false negative” rate) is the same for both Black and white
populations. In practice, this strategy often results in decreasing the “accuracy” for certain
populations in order to match that of others.
● Calibration strategies look less at the data and more at the outcome once an AI system
has produced a decision or prediction. These approaches work to ensure that outcomes
do not depend on protected attributes. For example, in the case of pretrial risk
assessment, applying a calibration strategy would aim to make sure that among a pool of
defendants with a similar risk score, the proportion who actually do reoffend on release is
the same across different protected attributes, such as race. (A short sketch following this list shows how each of these criteria can be checked in code.)
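To make these three observational criteria concrete, the following minimal sketch (ours, using synthetic data; all variable names are illustrative) computes per-group error rates for classification parity and a simple calibration check on a toy binary risk score. Anti-classification, by contrast, is a property of a model’s inputs rather than its outputs, so the sketch notes it only in a closing comment.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 10_000
    group = rng.integers(0, 2, n)                              # protected attribute (0 or 1)
    y_true = rng.binomial(1, np.where(group == 1, 0.4, 0.3))   # outcomes, unequal base rates
    score = np.clip(0.5 * y_true + rng.normal(0.25, 0.2, n), 0, 1)  # toy risk score
    y_pred = (score > 0.5).astype(int)                         # decision derived from the score

    for g in (0, 1):
        m = group == g
        # Classification parity: compare error rates across groups.
        fnr = np.mean(y_pred[m & (y_true == 1)] == 0)          # false negative rate
        fpr = np.mean(y_pred[m & (y_true == 0)] == 1)          # false positive rate
        # Calibration: among cases assigned a high score, how often does the
        # outcome actually occur?
        ppv = float(np.mean(y_true[m & (score > 0.5)]))
        print(f"group={g}  FNR={fnr:.3f}  FPR={fpr:.3f}  P(outcome | high score)={ppv:.3f}")

    # Anti-classification would instead require that `group` (and its proxies)
    # not appear among the model's input features at all.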
Several scholars have identified limitations with these approaches to observational fairness. With
respect to anti-classification, some argue that there are important cases where protected
attributes—such as race or gender—should be included in data used to train and inform an AI
system in order to ensure equitable decisions.125
For example, Corbett-Davies and Goel discuss
the importance of including gender in pretrial risk assessment. As women reoffend less often
than men in many jurisdictions, gender-neutral risk assessments tend to overstate the recidivism
risk of women, “which can lead to unnecessarily harsh judicial decisions.” As a result, some
jurisdictions use gender-specific risk assessment tools. These cases counter a widespread view
that deleting enough information from a data set will eventually “debias” an AI system. Because
correlations between variables in a dataset almost always exist, removing every protected
attribute and proxy can leave very little usable information, degrading predictive performance
while still offering no way to measure potential harms after the fact.
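A minimal sketch (ours; the data is synthetic, the variable names hypothetical, and scikit-learn is assumed to be available) makes the proxy problem concrete: the protected attribute is excluded from the features, yet a correlated proxy lets a simple model recover it far above chance.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 5_000
    race = rng.integers(0, 2, n)                    # protected attribute (synthetic)
    zip_code = race * 0.8 + rng.normal(0, 0.3, n)   # proxy strongly correlated with race
    income = rng.normal(0, 1, n)                    # unrelated feature

    # "Race-blind" feature matrix: the protected attribute itself is omitted.
    X = np.column_stack([zip_code, income])
    model = LogisticRegression().fit(X, race)
    print("accuracy recovering race from race-blind features:",
          model.score(X, race))                     # ~0.9 vs. a 0.5 chance baseline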
Secondly, researchers have shown that different mathematical fairness criteria are mutually exclusive.
Hence, it is generally not possible, except in highly constrained cases, to simultaneously satisfy
both calibration and any form of classification parity.126
These “impossibility results” show how
each fairness strategy makes implicit assumptions about what is and is not fair. They also
highlight the inherent mathematical trade-offs facing those aiming to mitigate various forms of
bias based on one or another fairness definition. Ultimately, these findings serve to complicate the
broader policy debate focused on solving bias issues with mathematical fairness tools. What they
make clear is that solving complex policy issues related to bias and discrimination by
indiscriminately applying one or more fairness metrics is unlikely to be successful. This does not
mean that such metrics are not useful: observational criteria can help in understanding whether
datasets and AI systems meet various notions of fairness and bias, and can subsequently inform
a richer discussion about the goals one hopes to achieve when deploying AI systems in complex
social contexts.
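To make these “impossibility results” concrete, one well-known form (due to Chouldechova; the notation below is our transcription) relates a group’s error rates to its base rate when positive predictive value (PPV) is held equal across groups:

    \[
    \mathrm{FPR}_g = \frac{p_g}{1 - p_g} \cdot \frac{1 - \mathrm{PPV}}{\mathrm{PPV}} \cdot \bigl(1 - \mathrm{FNR}_g\bigr)
    \]

where $p_g$ is the base rate of the outcome in group $g$, and $\mathrm{FPR}_g$ and $\mathrm{FNR}_g$ are the group’s false positive and false negative rates. If base rates differ across groups, holding PPV equal (calibration) forces the error rates to differ, so calibration and classification parity cannot be satisfied at once.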
The proliferation of observational fairness methods also raises concerns over the potential to
provide a false sense of assurance. While researchers often have a nuanced sense of the
limitations of their tools, others who might implement them may ignore such limits when looking
for quick fixes. The idea that, once “treated” with such methods, AI systems are free of bias and
safe to use in sensitive domains can provide a dangerous sense of false security—one that relies
heavily on mathematical definitions of fairness without looking at the deeper social and historical
context. As legal scholar Frank Pasquale observes, “algorithms alone can’t meaningfully hold
other algorithms accountable.”127
While increased attention to the problems of fairness and bias in AI is a positive development,
some have expressed concern over a “mathematization of ethics.”128
As Shira Mitchell has argued:
“As statistical thinkers in the political sphere we should be aware of the hazards of
supplanting politics by an expert discourse. In general, every statistical intervention to
a conversation tends to raise the technical bar of entry, until it is reduced to a
conversation between technical experts…are we speaking statistics to power? Or are
we merely providing that power with new tools for the marginalization of unquantified
political concerns?”129
Such concerns are not new. Upcoming work by Hutchinson and Mitchell surveys over fifty years
of attempts to construct quantitative fairness definitions across multiple disciplines. Their work
recalls a period between 1964 and 1973 when researchers focused on defining fairness for
educational assessments in ways that echo the current AI fairness debate. Their efforts stalled
after they were unable to agree on “broad technical solutions to the issues involved in fairness.”
These precedents emphasize what the Fairness, Accountability and Transparency in Machine
Learning community has been discovering: without a “tight connection to real world impact,” the
added value of new fairness metrics and algorithms in the machine learning community could be
minimal.130
In order to arrive at more meaningful research on fairness and algorithmic bias, we
must continue to pair the expertise and perspectives of communities outside of technical
disciplines with those within.
Broader approaches
Dobbe et al. have drawn on the definition of bias proposed in the early value-sensitive design
(VSD) literature to propose a broader view of fairness.131
VSD, as theorized in the nineties by Batya
Friedman and Helen Nissenbaum, asserts that bias in computer systems pre-exists the system
itself.132
Such bias is reflected in the data that informs the systems and embedded in the
assumptions made during the construction of a computer system. This bias manifests during the
operation of the systems due to feedback loops and dissonance between the system and our
dynamic social and cultural contexts.133
The VSD approach is one way to bring a broader lens to
these issues, emphasizing the interests and perspectives of direct and indirect stakeholders
throughout the design process.
Another approach is a “social systems analysis” first described by Kate Crawford and Ryan Calo in
Nature.134
This is a method that combines quantitative and qualitative research methods by
forensically analyzing a technical system while also studying the technology once it is deployed in
social settings. It proposes that we engage with social impacts at every stage—conception,
design, deployment, and regulation of a technology, across the life cycle.
We have also seen increased focus on examining the provenance and construction of the data
used to train and inform AI systems. This data shapes AI systems’ “view of the world,” and an
understanding of how it is created and what it is meant to represent is essential to understanding
the limits of the systems that it informs.135
As an initial remedy to this problem, a group of
researchers led by Timnit Gebru proposed “Datasheets for Datasets,” a standardized form of
documentation meant to accompany datasets used to train and inform AI systems.136
A follow-up
paper looks at standardizing provenance for AI models.137
These approaches allow AI
practitioners, and those overseeing and assessing the applicability of AI within a given context,
to better understand whether the data that shapes a given model is appropriate and
representative, or whether it raises legal or ethical issues.
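As a rough illustration only (the field names below paraphrase the section headings of the “Datasheets for Datasets” proposal; this is not an official schema, and the example values are invented), such documentation might travel with a dataset in code:

    from dataclasses import dataclass

    @dataclass
    class Datasheet:
        motivation: str          # why and by whom the dataset was created
        composition: str         # what the instances are and represent
        collection_process: str  # how the data was acquired and sampled
        preprocessing: str       # cleaning, labeling, transformation steps
        uses: str                # intended and inappropriate uses
        distribution: str        # how, and on what terms, it is shared
        maintenance: str         # who maintains it and how errors are fixed

    sheet = Datasheet(
        motivation="Benchmark for pretrial risk research",
        composition="One row per case; demographic fields included",
        collection_process="County court records, 2010-2015",
        preprocessing="Names removed; charges mapped to categories",
        uses="Research only; not validated for operational decisions",
        distribution="Restricted access under a data-use agreement",
        maintenance="Annual review by the publishing agency",
    )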
Advances in bias-busting and fairness formulas are strong signs that the field of AI has accepted
that these concerns are real. However, the limits of narrow mathematical models will continue to
undermine these approaches until broader perspectives are included. Approaches to fairness and
bias must take into account both allocative and representational harms, and those that debate
the definitions of fairness and bias must recognize and give voice to the individuals and
communities most affected.138
Any formulation of fairness that excludes impacted populations
and the institutional context in which a system is deployed is too limited.
2.2 Industry Applications: Toolkits and System Tweaks
This year, we have also seen several technology companies operationalize fairness definitions,
metrics, and tools, with four of the biggest AI companies releasing bias mitigation tools.
IBM released the “AI Fairness 360” open-source tool kit, which includes nine different algorithms
and many other fairness metrics developed by researchers in the Fairness, Accountability and
Transparency in Machine Learning community. The toolkit is intended to be integrated into the
software development pipeline from early stages of data pre-processing, to the training process
itself, through the use of specific mathematical models that deploy bias mitigation strategies.139
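As a sketch of how such a toolkit is applied (based on AI Fairness 360’s documented interface at the time of writing; the toy data and group definitions are ours, and exact names may vary across versions):

    import pandas as pd
    from aif360.datasets import BinaryLabelDataset
    from aif360.metrics import BinaryLabelDatasetMetric
    from aif360.algorithms.preprocessing import Reweighing

    # Toy data: 'sex' is the protected attribute, 'y' the favorable label.
    df = pd.DataFrame({
        "sex":   [0, 0, 0, 0, 1, 1, 1, 1],
        "score": [1, 2, 2, 3, 2, 3, 3, 4],
        "y":     [0, 0, 0, 1, 0, 1, 1, 1],
    })
    data = BinaryLabelDataset(df=df, label_names=["y"],
                              protected_attribute_names=["sex"])
    priv, unpriv = [{"sex": 1}], [{"sex": 0}]

    # Measure bias before mitigation: difference in favorable-outcome rates.
    metric = BinaryLabelDatasetMetric(data, unprivileged_groups=unpriv,
                                      privileged_groups=priv)
    print("statistical parity difference:", metric.statistical_parity_difference())

    # One pre-processing mitigation: reweigh training instances so that group
    # membership and the favorable label are statistically independent.
    rw = Reweighing(unprivileged_groups=unpriv, privileged_groups=priv)
    reweighed = rw.fit_transform(data)
    print("instance weights:", reweighed.instance_weights)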
Google’s People + AI Research group (PAIR) released the open-source “What-If” tool, a dashboard
allowing researchers to visualize the effects of different bias mitigation strategies and metrics, as
well as a tool called “Facets” that supports decision-making around which fairness metric to
use.140
Microsoft released fairlearn.py, a Python package meant to help implement a binary
classifier subject to a developer’s intended fairness constraint.141
Facebook announced the
creation and testing of a tool called “Fairness Flow”, an internal tool for Facebook engineers that
incorporates many of the same algorithms to help identify bias in machine learning models.142
Even Accenture, a consulting firm, has developed internal software tools to help clients
understand and “essentially eliminate the bias in algorithms.”143
Industry standards bodies have also taken on fairness efforts in response to industry and public
sector requests for accountability assurances. The Institute of Electrical and Electronics
Engineers (IEEE) recently announced an Ethics Certification Program for Autonomous and
Intelligent Systems in the hopes of creating “marks” that can attest to the broader public that an
AI system is transparent, accountable, and fair.144
While this effort is new, and while IEEE has not
published the certification’s underlying methods, it is hard to see, given the complexity of these
issues, how settling on one certification standard across all contexts and all AI systems would be
possible—or ultimately reliable—in ensuring that systems are used in safe and ethical ways.
Similar concerns have arisen in other contexts, such as privacy certification programs.145
In both the rapid industrial adoption of academic fairness methods, and the rush to certification,
we see an eagerness to “solve” and “eliminate” problems of bias and fairness using familiar
approaches and skills that avoid the need for significant structural change, and which fail to
interrogate the complex social and historical factors at play. Combining “academically credible”
technical fairness fixes and certification check boxes runs the risk of instrumenting fairness in
ways that let industry say it has fixed these problems and may divert attention from examining
ongoing harms. It also relieves companies of the responsibility to explore more complex and
costly forms of review and remediation. Rather than relying on quick fixes, tools, and
certifications, issues of bias and fairness require deeper consideration and more robust
accountability frameworks, including strong disclaimers about how “automated fairness” cannot
be relied on to truly eliminate bias from AI systems.
2.3 Why Ethics is Not Enough
A top-level recommendation in the AI Now 2017 Report advised that “ethical codes meant to steer
the AI field should be accompanied by strong oversight and accountability mechanisms.”146
While
we have seen a rush to adopt such codes, in many instances offered as a means to address the
growing controversy surrounding the design and implementation of AI systems, we have not seen
strong oversight and accountability to backstop these ethical commitments.
After it was revealed that Google was working with the Pentagon on Project Maven—developing AI
systems for drone surveillance—the debate about the role of AI in weapons systems grew in
intensity. Project Maven generated significant protest among Google’s employees, who
successfully petitioned the company’s leadership to end their involvement with the program when
the current contract expired.147
By way of response, Google’s CEO Sundar Pichai released a public
set of seven “guiding principles” designed to ensure that the company’s work on AI will be socially
responsible.148
These ethical principles include the commitment to “be socially beneficial,” and to
“avoid creating or reinforcing unfair bias.” They also include a section titled, “AI applications we will
not pursue,” which includes “weapons and other technologies whose principal purpose or
implementation is to cause or directly facilitate injury to people”—a direct response to the
company’s decision not to renew its contract with the Department of Defense. But it is not clear to
the public who would oversee the implementation of the principles, and no ethics board has been
named.
Google was not alone. Other companies, including Microsoft, Facebook, and police body camera
maker Axon, also assembled ethics boards, advisors, and teams.149
In addition, technical
membership organizations moved to update several of their ethical codes. The IEEE reworked its
code of ethics to reflect the challenges of AI and autonomous systems, and researchers in the
Association for Computing Machinery (ACM) called for a restructuring of peer review processes,
requiring the authors of technical papers to consider the potential adverse uses of their work,
which is not a common practice.150
Universities including Harvard, NYU, Stanford, and MIT offered
new courses on ethics and ethical AI development practices aimed at identifying issues and
considering the ramifications of technological innovation before it is implemented at scale.151
The
University of Montreal launched a wide-ranging process to formulate a declaration for the
responsible development of AI that includes both expert summits and open public deliberations
for input from citizens.152
Such developments are encouraging, and it is noteworthy that those at the heart of AI
development have declared they are taking ethics seriously. Ethical initiatives help develop a
shared language with which to discuss and debate social and political concerns. They provide
developers, company employees, and other stakeholders a set of high-level value statements or
objectives against which actions can be later judged. They are also educational, often doing the
work of raising awareness of particular risks of AI both within a given institution, and externally,
amongst the broader concerned public.153
However, developing socially just and equitable AI systems will require more than ethical
language, however well-intentioned it may be. We see two classes of problems with this current
approach to ethics. The first has to do with enforcement and accountability. Ethical approaches in
industry implicitly ask that the public simply take corporations at their word when they say they
will guide their conduct in ethical ways. While the public may be able to compare a post hoc
decision made by a company to its guiding principles, this does not allow insight into decision
making, or the power to reverse or guide such a decision. In her analysis of Google’s AI Principles,
Lucy Suchman, a pioneering scholar of human computer interaction, argues that without “the
requisite bodies for deliberation, appeal, and redress” vague ethical principles like “don’t be evil” or
“do the right thing” are “vacuous.”154
This “trust us” form of corporate self-governance also has the potential to displace or forestall
more comprehensive and binding forms of governmental regulation. Ben Wagner of the Vienna
University of Economics and Business argues, “Unable or unwilling to properly provide regulatory
solutions, ethics is seen as the “easy” or “soft” option which can help structure and give meaning
to existing self-regulatory initiatives.”155
In other words, ethical codes may deflect criticism by
acknowledging that problems exist, without ceding any power to regulate or transform the way
technology is developed and applied. The fact that a former Facebook operations manager
claims, “We can’t trust Facebook to regulate itself,” should be taken into account when evaluating
ethical codes in industry.156
A second problem relates to the deeper assumptions and worldviews of the designers of ethical
codes in the technology industry. In response to the proliferation of corporate ethics initiatives,
Greene et al. undertook a systematic critical review of high-profile “vision statements for ethical
AI.”157
One of their findings was that these statements tend to adopt a technologically
deterministic worldview, one where ethical agency and decision making are delegated to experts,
“a narrow circle of who can or should adjudicate ethical concerns around AI/ML” on behalf of the
rest of us. These statements often assert that AI promises both great benefits and risks to a
universal humanity, without acknowledgement of more specific risks to marginalized populations.
Rather than asking fundamental ethical and political questions about whether AI systems should
be built, these documents implicitly frame technological progress as inevitable, calling for better
building.158
Empirical study of the use of these codes is only beginning, but preliminary results are not
promising. One recent study found that “explicitly instructing [engineers] to consider the ACM
code of ethics in their decision making had no observed effect when compared with a control
group.”159
However, these researchers did find that media or historical accounts of ethical
controversies in engineering, like Volkswagen’s Dieselgate, may prompt more reflective practice.
Perhaps the most revealing evidence of the limitations of these emerging ethical codes is how
corporations act after they formulate them. Among the list of applications Google promises not to
pursue as a part of its AI Principles are “technologies whose purpose contravenes widely
accepted principles of international law and human rights.”160
That was tested earlier this year
after investigative journalists revealed that Google was quietly developing a censored version of
its search engine (which relies extensively on AI capabilities) for the Chinese market, code-named
Dragonfly.161
Organizations condemned the project as a violation of human rights law, and as
such, a violation of Google’s AI principles. Google employees also organized against the effort.162
As of writing, the project has not been cancelled, nor has its continued development been
explained in light of the clear commitment in the company’s AI Principles, although Google’s CEO
has defended it as “exploratory.”163
There is an obvious need for accountability and oversight in the industry, and so far the move
toward ethics is not meeting this need. This is likely in part due to the market-driven incentives
working against industry-driven implementations: a drastic (if momentary) drop in Facebook and
Twitter’s share price occurred after they announced efforts to combat misinformation and
increase spending on security and privacy efforts.164
This is no excuse not to pursue a more ethically driven agenda, but it does suggest that we should
be wary of relying on companies to implement ethical practices voluntarily, since many of the
incentives governing these large, publicly traded technology corporations penalize ethical action.
For these mechanisms to serve as meaningful forms of accountability, external oversight and
transparency must be put in place, ensuring a system of checks and balances that complements
the cultivation of ethical norms and values within the engineering profession and technology
companies.
3. WHAT IS NEEDED NEXT
When we released our AI Now 2016 Report, fairness formulas, debiasing toolkits, and ethical
guidelines for AI were rare. The fact that they are commonplace today shows how far the field has
come. Yet much more needs to be done. Below, we outline seven strategies for future progress on
these issues.
3.1 From Fairness to Justice
Any debate about bias and fairness should approach issues of power and hierarchy, looking at
who is in a position to produce and profit from these systems, whose values are embedded in
these systems, who sets their “objective functions,” and which contexts they are intended to work
within.165
Echoing the call by Association for Computing Machinery (ACM) researchers for an
acknowledgement of “negative implications” as a requirement for peer review, much more
attention must be paid to the ways that AI can be used as a tool for exploitation and control.166
We
must also be cautious not to reframe political questions as technical concerns.167
When framed as technical “fixes,” debiasing solutions rarely allow for questions about the
appropriateness or efficacy of an AI system altogether, or for an interrogation of the institutional
context into which the “fixed” AI system will ultimately be applied. For example, a “debiased”
predictive algorithm that accurately forecasts where crime will occur, but that is being used by law
enforcement to harass and oppress communities of color, is still an essentially unfair system.168
To this end, our definitions of “fairness” must expand to encompass the structural, historical, and
political contexts in which an algorithmic system is deployed.
Furthermore, fairness is a term that can be easily co-opted: important questions such as “Fair to
whom? And in what context?” should always be asked. For example, making a facial recognition
system perform equally on people with light and dark skin may be a type of technical progress in
terms of parity, but if that technology is disproportionately used on people of color and
low-income communities, is it really “fair?” This is why definitions of fairness face a hard limit if
they remain purely contained within the technical domain: in short, “parity is not justice.”169