19. IEEE Ethically Aligned Design version 2
1. Executive Summary
2. General Principles
3. Embedding Values Into Autonomous Intelligent Systems
4. Methodologies to Guide Ethical Research and Design
5. Safety and Beneficence of Artificial General Intelligence (AGI) and Artificial Superintelligence (ASI)
6. Personal Data and Individual Access Control
7. Reframing Autonomous Weapons Systems
8. Economics/Humanitarian Issues
9. Law
10. Affective Computing
11. Classical Ethics in Artificial Intelligence
12. Policy
13. Mixed Reality
14. Well-being
The final version was published in April 2019.
20. IEEE EAD (Final), April 2019
• 1. Human Rights
– A/IS shall be created and operated to respect, promote, and protect internationally recognized human rights.
• 2. Well-being
– A/IS creators shall adopt increased human well-being as a primary success criterion for development.
• 3. Data Agency
– A/IS creators shall empower individuals with the ability to access and securely share their data, to maintain people’s capacity to have control over their identity.
• 4. Effectiveness
– A/IS creators and operators shall provide evidence of the effectiveness and fitness for purpose of A/IS.
28. IEEE EAD (Final), April 2019 (cont.)
• 5. Transparency
– The basis of a particular A/IS decision should always be discoverable.
• 6. Accountability
– A/IS shall be created and operated to provide an unambiguous rationale for all decisions made.
• 7. Awareness of Misuse
– A/IS creators shall guard against all potential misuses and risks of A/IS in operation.
• 8. Competence
– A/IS creators shall specify, and operators shall adhere to, the knowledge and skill required for safe and effective operation.
39. Trust
• We take the following definition from the literature: “Trust is viewed as:
• (1) a set of specific beliefs dealing with benevolence, competence, integrity, and predictability (trusting beliefs);
• (2) the willingness of one party to depend on another in a risky situation (trusting intention); or […]”
• While “trust” is usually not a property ascribed to machines, this document aims to stress the importance of being able to trust not only that AI systems are legally compliant, ethically adherent, and robust, but also that such trust can be ascribed to all people and processes involved in the AI system’s life cycle.
40. Trustworthy AI
• (1) lawful
– ensuring compliance with all applicable laws and regulations
• (2) ethical
– demonstrating respect for, and ensuring adherence to, ethical principles and values
• (3) robust
– from both a technical and a social perspective,
• since, even with good intentions, AI systems can cause unintentional harm. Trustworthy AI concerns not only the trustworthiness of the AI system itself but also comprises the trustworthiness of all processes and actors that are part of the system’s life cycle.
41. Trustworthy AI
• Lawful, Ethical, Robust
• Specific requirements:
1. Human agency and oversight
2. Technical robustness and safety
3. Privacy and data governance
4. Transparency
5. Diversity, non-discrimination and fairness
6. Societal and environmental well-being
7. Accountability
42. OECD Legal Instruments: Recommendation of the Council on Artificial Intelligence
• Adopted at the OECD Ministerial Council Meeting on May 22, 2019
• Not legally binding, but very likely to serve as a guideline for national legislation
– Example: the OECD Guidelines on the Protection of Privacy and Transborder Flows of Personal Data (1980) became the basis of national privacy protection laws
43. OECD, Recommendation of the Council on AI, OECD/LEGAL/0449, 2019/5/23
• Technical goals:
• inclusive growth, sustainable development and well-being
• human-centred values and fairness
• transparency and explainability
• robustness, security and safety
• accountability
• Policy goals:
• investing in AI research and development
• fostering a digital ecosystem for AI
• shaping an enabling policy environment for AI
• building human capacity and preparing for labour market transformation
• international co-operation for trustworthy AI