AI and Ethics and Accountability
Hiroshi Nakagawa
(RIKEN AIP)
Images in this file are licensed under Creative Commons via Microsoft PowerPoint.
2019/4/26
Contents
• Amplification vs. Replacement
• AI takes over jobs
• Misuse/Abuse of AI
• AI Ethics
• Transparency, accountability, trust
• AI weapon
• Flash Crash
Reference
• Ray Kurzweil: The Singularity Is Near, Loretta
Barrett Books, 2005
• Nick Bostrom: Superintelligence, Oxford
University Press, 2014
• James Barrat: Our Final Invention: Artificial
Intelligence and the End of the Human Era, 2013
• John Markoff: Machines of Loving Grace: The
Quest for Common Ground Between Humans and
Robots, 2015
• Thomas H. Davenport, Julia Kirby: Only Humans
Need Apply: Winners and Losers in the Age of
Smart Machines, Harper Business, 2016
Amplify vs. Replace
• Does Artificial Intelligence amplify human
competence?
– IA: Intelligent Assistance/ Intelligence Amplifier
• Does Artificial Intelligence replace humans?
– AI: Artificial Intelligence
• “IA vs. AI” is the basic theme of Markoff’s book Machines of
Loving Grace; Chapter 4 beautifully describes this view
and the history of AI.
Amplify vs. Replace
• IA vs. AI: this is a 60-year-old hostile, or
complementary, relation dating from the very start of
AI.
– When AI is booming, IA keeps a low profile, and vice versa.
– When AI technology gets stuck, researchers turn to
IA, as with Terry Winograd → Larry
Page and Sergey Brin → Google
– As this example shows, IA has provided us with far more
influential technologies and tools than AI has.
– Question: Is Deep Learning IA or AI? → IA
Is a human in the loop or out of the loop?
• A design principle parallel to “IA vs. AI” is whether “a human is in
the loop or out of the loop.”
• In the loop → IA
– A system is an extension of human abilities. A human being and
a system work collaboratively.
– A human does some task aided by an AI-based system.
– A human might not understand what is going on inside an AI system.
• Out of the loop → AI
– A system acts autonomously. A human only commands the system.
– A human is no longer an actual stakeholder.
– A human usually sits in a very comfortable position,
but cannot cope when something goes wrong.
Is a human in the loop or out of the loop?
• In the loop → IA
• Out of the loop → AI
• Generally speaking, the problem is to find criteria for
how much of a task an AI system should, or can, do on behalf of
human beings.
– The remainder, of course, is to be done by a human being.
• This is a traditional problem between machine and
human, but it becomes complicated in the era of AI.
• Leaving all decision-making to AI is too easy for
a human being, but ends up in a kind of addiction
and intellectual atrophy.
AI takes over human jobs
Where, what,
how???
Davenport says we can find new jobs
immune to AI invasion, but…
–Jobs aiming at higher quality than AI, say,
judgment without data
But must such work be done by a so-called genius?
Davenport says we can find new jobs
immune to AI invasion, but…
–Jobs AI cannot do, such as
human
intercommunication,
persuasion, etc.
But AI knows more, and more
precisely, about things, events, etc.
If NLP becomes great, AI will be
relied on more by humans…
Davenport says we can find new jobs
immune to AI invasion, but…
–Jobs of finding
connections between
business and technology
But AI can investigate the
linkage between
business and technology
more thoroughly
than a human.
Davenport says we can find new jobs
immune to AI invasion, but…
–Jobs where employing
a machine or AI is less economical; in other words, quite rare
and case-specific tasks.
But AI has a learning ability, so it can easily
cope with rare or special tasks!
Davenport says we can find new jobs
immune to AI invasion, but…
–Jobs of developing AI
However, AI itself will develop AI in
the near future!
Davenport says we can find new jobs
immune to AI invasion, but…
–Jobs of explaining
the results
generated by AI
or actions done by
AI.
Again, however,
this will only be
done by AI!
A human cannot read
even the relevant
papers; an AI
summarizes them.
Research jobs, which are thought to be most distant from this
crisis, are seemingly being taken over by AI.
Is writing papers in natural
language really needed?
Hopeless!
How to cope with this situation?
– Basic income…
→ no incentive,
unmotivated…
Besides, if we want a
hobby, it needs a certain
amount of money…
If a human job is completely replaced by AI and
a human being is outside of the job process:
“I forgot how to
do this job!”
Skills are lost
forever: deleted,
out of date,
etc.
However,
the shortage of workforce
will be relieved by AI
and robots.
One of the real problems is
Misuse/Abuse of AI
IEEE Ethically Aligned Design version 2
1. Executive Summary
2. General Principles
3. Embedding Values Into Autonomous
Intelligent Systems
4. Methodologies to Guide Ethical Research
and Design
5. Safety and Beneficence of Artificial General
Intelligence (AGI) and Artificial
Superintelligence (ASI)
6. Personal Data and Individual Access
Control
7. Reframing Autonomous Weapons Systems
8. Economics/Humanitarian Issues
9. Law
10. Affective Computing
11. Classical Ethics in Artificial Intelligence
12. Policy
13. Mixed Reality
14. Well-being
The final version was published in April 2019.
IEEE EAD (Final), April 2019
• 1. Human Rights
– A/IS shall be created and operated to respect, promote,
and protect internationally recognized human rights.
• 2. Well-being
– A/IS creators shall adopt increased human well-being
as a primary success criterion for development.
• 3. Data Agency
– A/IS creators shall empower individuals with the ability
to access and securely share their data, to maintain
people’s capacity to have control over their identity.
• 4. Effectiveness
– A/IS creators and operators shall provide evidence of
the effectiveness and fitness for purpose of A/IS.
IEEE EAD (Final), April 2019
• 5. Transparency
– The basis of a particular A/IS decision should always be
discoverable.
• 6. Accountability
– A/IS shall be created and operated to provide an
unambiguous rationale for all decisions made.
• 7. Awareness of Misuse
– A/IS creators shall guard against all potential misuses
and risks of A/IS in operation.
• 8. Competence
– A/IS creators shall specify and operators shall adhere to
the knowledge and skill required for safe and effective
operation.
It’s not me but AI says so!
→ A society without
freedom of speech
or even human
rights.
We need to design
a society in which
we have the right to
object to AI’s decisions.
→ GDPR Article 22
GDPR article 22:
Automated individual decision-making,
including profiling
• 1. The data subject shall have the right not to
be subject to a decision based solely on
automated processing, including profiling,
which produces legal effects concerning him
or her or similarly significantly affects him or
her.
IEEE EAD version 2:
How to cope with misuse/abuse of AI
– Find out misuse/abuse of AI
• AI should be equipped with a mechanism that
explains the reasoning path and what data were used to
reach the results
• Whistle-blowing against peculiar/strange behavior of AI
• Redress or rescue packages are to be legitimized
• Insurance is also needed
Implementation of AI Ethics
• Transparency, Explainability,
Understandability
• Accountability
• Trust
A single AI system is too complex and
is a black box → XAI
• XAI has become a big research topic in recent years, e.g., XAI 2017,
XAI 2018
– Methods that give meaning to internal variables via
combinations of input variables.
– These seem not to work for Deep Learning because of its
high dimensionality and complexity.
– Explanation is generated not by the AI itself but by a simple
surrogate such as a decision list, decision tree, etc.
– As for how to make the output an understandable
explanation for ordinary people, promising results have
not yet come out.
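The surrogate idea above can be sketched in a few lines of Python. A hypothetical black-box classifier (a stand-in for, say, a deep network) is probed on a grid of inputs, and a one-threshold decision stump is fitted to imitate its answers; the stump, unlike the black box, is a human-readable rule. The data, the black box, and the fidelity measure are all illustrative assumptions, not from this talk.

```python
# Minimal sketch: explain a black box by fitting a simple surrogate.
# The black box, the input grid, and the stump are all hypothetical.

def black_box(x):
    # Stand-in for an opaque model (e.g., a deep network).
    return 1 if 0.7 * x[0] + 0.3 * x[1] > 0.5 else 0

# Probe the black box on a grid of 2-D inputs.
samples = [(i / 10, j / 10) for i in range(11) for j in range(11)]
labels = [black_box(x) for x in samples]

# Fit the simplest surrogate: one threshold on one feature,
# chosen to maximize agreement (fidelity) with the black box.
best = None
for feat in (0, 1):
    for t in sorted({x[feat] for x in samples}):
        preds = [1 if x[feat] > t else 0 for x in samples]
        fidelity = sum(p == y for p, y in zip(preds, labels)) / len(labels)
        if best is None or fidelity > best[0]:
            best = (fidelity, feat, t)

fidelity, feat, t = best
print(f"surrogate rule: predict 1 if x[{feat}] > {t}")
print(f"fidelity to the black box: {fidelity:.3f}")
```

A human can read the single rule the stump produces, and the fidelity score says how faithfully it mimics the black box; decision-list and decision-tree explainers attempt the same for real models, and, as noted above, fidelity drops as dimensionality grows.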
Transparency and Accountability
• The IEEE EAD version 2 Law chapter says:
• We need to clarify who is responsible in case
of accidents.
• For this, we need Transparency and Accountability.
Transparency
• Disclose the following:
– Learning data for ML, and input data from actual use
of the AI application generated by ML
– Data flow and algorithm of the AI application.
• A conceptual data flow is OK
– Investor, founder and developer of the AI application
system
Misunderstood version of
Accountability
• The wrong one:
– Disclosing information via transparency with
a natural-language document for users of the AI
application system
– In Japan, the mistranslation as “responsibility to
explain (説明責任)” is badly affecting many
people’s attitudes towards accountability (Prof.
Ohya, Keio Univ.)
Accountability must be recognized as:
• Explaining the validity, fairness and legitimacy of
the results/outputs of AI in a manner that AI
application users, who are ordinary citizens, can
easily understand and accept.
• Clarifying who is responsible for the results
of AI application outputs.
• Responsibility implies compensation.
New Directions
• Technically speaking, we have to think not only
about a single AI but about groups of AIs.
• They have to have the ability to generate easily
understandable explanations for ordinary
people, though that is tough!
• Then how?
The direction of utilizing AI:
a recommendation
Towards TRUST
• Trust: making someone an authority based on
a historical accumulation of technological
advancement
• Licensing this authority by a public authority such
as a national government: e.g., medical doctors,
lawyers
• Compensation for accidents: when responsible
persons are not clearly identified, insurance
becomes the last resort.
Trustworthy AI (EU)
• Lawful, Ethical, Robust
• Requirements
1. Human agency and oversight
2. Technical robustness and safety
3. Privacy and data governance
4. Transparency
5. Diversity, non-discrimination and fairness
6. Societal and environmental well-being
7. Accountability
A single AI drone used as a weapon
• AI drones are operated from a remote
operating center, even thousands of km away
– Complexity of the battlefield
– The responsible person could be unclear because of
latency and the difficulty of recognizing the real
enemy.
A single AI drone used as a weapon
– It is tough to identify who are soldiers and who
are civilians.
– To solve this problem, every person’s data might
be gathered over a long period of time and analyzed
with big-data mining technologies to identify who
the enemies are.
– → Worse, but in any case, accountability is recognized
as a key factor.
A platoon of drones attacked a Russian military base in Syria last year.
Unpredictability of group AI behavior
• A platoon of autonomous AI drones
– If an attack happens unintentionally, with
human commanders set aside, it is unclear
who is responsible → an unintentional outbreak
of battle, even war!
– No accountability is a problem!
– The CCW (Convention on Certain Conventional
Weapons) is trying to ban this, as far as I know
Autonomous AI weapon
• Autonomous AI weapon: unjustified acts (mis-attack)
– Liability: AI weapon developer + commander
– Immunity: political decision
– Strict liability: AI weapon developer (wrong
design of attack checking)
• AI weapon as a controllable tool: unjustified
damage
– Liability: operator
– Immunity: international laws
Unpredictability of group AIs:
Flash crash
• Flash crash: a group of AI traders communicate with
each other via, e.g., stock prices as a common
language, and catastrophic results come out in
seconds
– Deals in microseconds
– Companies do not disclose AI traders’ algorithms
because of trade-secret policies
→ No accountability!
How to cope with this
• AI traders’ algorithms are still secret
• Observe the market from outside with another special
AI: an AI observer
• AI observers try to find unusual situations as early as
possible. Unusual-situation detection technologies are a
good research topic for AI
– Once detected, then stop
– Before detection, losses or gains are exempt from liability
– The problem is when the system stops.
→ The problem caused by AI should be solved by AI
An AI observer observes the behavior of a group of AIs
and tries to
detect unusual situations as early as possible.
We should make a scheme
under which we can trust this AI
observer!
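As a toy illustration, this scheme might look like the sketch below: an observer watches the price stream from outside, needing no access to the traders’ secret algorithms, and signals a stop at the first unusual tick. The rolling z-score detector, the window size, the 4-sigma threshold, and the price stream are all illustrative assumptions, not a real detection technology.

```python
# Minimal sketch of an AI observer: watch prices from outside and
# stop trading on the first unusual tick (all numbers are made up).
import statistics

def observe(prices, window=20, threshold=4.0):
    """Return the tick index at which trading should stop, or None."""
    for i in range(window, len(prices)):
        recent = prices[i - window:i]
        mu = statistics.mean(recent)
        sigma = statistics.pstdev(recent) or 1e-9  # avoid divide-by-zero
        if abs(prices[i] - mu) / sigma > threshold:
            return i  # unusual situation detected -> stop
    return None

# A calm drift followed by a sudden flash-crash-like drop.
stream = [100 + 0.01 * t for t in range(60)] + [80, 60, 40]
print("stop at tick:", observe(stream))
```

Because the observer only needs market data, not the proprietary algorithms, the trade-secret problem does not block it; the open questions named on the slide, i.e., when exactly to stop and who bears the pre-detection losses, are untouched by such a detector.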
Conclusion
• The combination of Transparency, Accountability
(including AI observers), Licensing, and
Compensation by insurance can make AI systems
based on machine-learning technologies
trusted by everyone, including ordinary
citizens.
• This is good for us ML and AI researchers and
developers.

AI Forum-2019_Nakagawa
