AI and Accountability
Hiroshi Nakagawa
(RIKEN AIP)
Images in this file are licensed under Creative Commons,
via Microsoft PowerPoint.
IEEE Ethically Aligned Design version 2
1. Executive Summary
2. General Principles
3. Embedding Values Into Autonomous
Intelligent Systems
4. Methodologies to Guide Ethical Research
and Design
5. Safety and Beneficence of Artificial General
Intelligence (AGI) and Artificial
Superintelligence (ASI)
6. Personal Data and Individual Access
Control
7. Reframing Autonomous Weapons Systems
8. Economics/Humanitarian Issues
9. Law
10. Affective Computing
11. Classical Ethics in Artificial Intelligence
12. Policy
13. Mixed Reality
14. Well-being
The final version was published
IEEE EAD (Final), April 2019
• 1. Human Rights
– A/IS shall be created and operated to respect, promote,
and protect internationally recognized human rights.
• 2. Well-being
– A/IS creators shall adopt increased human well-being
as a primary success criterion for development.
• 3. Data Agency
– A/IS creators shall empower individuals with the ability
to access and securely share their data, to maintain
people’s capacity to have control over their identity.
• 4. Effectiveness
– A/IS creators and operators shall provide evidence of
the effectiveness and fitness for purpose of A/IS.
IEEE EAD (Final), April 2019
• 5. Transparency
– The basis of a particular A/IS decision should always be
discoverable.
• 6. Accountability
– A/IS shall be created and operated to provide an
unambiguous rationale for all decisions made.
• 7. Awareness of Misuse
– A/IS creators shall guard against all potential misuses
and risks of A/IS in operation.
• 8. Competence
– A/IS creators shall specify and operators shall adhere to
the knowledge and skill required for safe and effective
operation.
One of the real problems:
Misuse/Abuse of AI
“It’s not me, AI says so!”  “Why me?”
A society without freedom of speech
or even human rights.
We need to design a society in which
we have the right to object to
AI’s decisions:
GDPR Article 22
GDPR article 22:
Automated individual decision-making,
including profiling
• 1. The data subject shall have the right not to
be subject to a decision based solely on
automated processing, including profiling,
which produces legal effects concerning him
or her or similarly significantly affects him or
her.
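As a concrete illustration, Article 22's requirement can be sketched in code: a decision with legal or similarly significant effect is never finalized solely by a model, and the data subject can object and trigger human review. All names here (`Decision`, `finalize`, `object_to`) are invented for this sketch; they are not part of any real compliance API.

```python
# Toy sketch of a GDPR Article 22 guard: significant decisions must
# not rest solely on automated processing, and the subject may object.
from dataclasses import dataclass, field

@dataclass
class Decision:
    subject: str
    outcome: str
    significant_effect: bool          # legal or similarly significant?
    decided_by: str = "model"
    objections: list = field(default_factory=list)

def finalize(decision: Decision) -> Decision:
    """Route significant automated decisions to a human reviewer."""
    if decision.significant_effect and decision.decided_by == "model":
        decision.decided_by = "human_reviewer"   # mandatory human in the loop
    return decision

def object_to(decision: Decision, reason: str) -> Decision:
    """The data subject's right to contest the decision."""
    decision.objections.append(reason)
    decision.decided_by = "human_reviewer"
    return decision

loan = finalize(Decision("subject-42", "rejected", significant_effect=True))
contested = object_to(loan, "income figure was outdated")
print(contested.decided_by)   # a human must confirm or overturn the outcome
```

The point of the sketch is the routing rule, not the data model: whenever an automated outcome would significantly affect a person, the system records a human as the deciding party.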
IEEE EAD version2
How to cope with misuse/abuse of AI
– Detect misuse/abuse of AI
 AI should be equipped with a mechanism that
explains the reasoning path and what data was used
to reach the result
 Whistle-blowing against peculiar/strange behavior of AI
 Redress or rescue packages are to be legitimized
 Insurance is also needed
Implementation of AI Ethics
Transparency Explainability
Understandability
Accountability
Trust
A single AI system is too complex and
is a black box  XAI
• XAI became a big research topic in recent years, e.g., XAI 2017
and XAI 2018
– Methods that give meaning to internal variables via
combinations of input variables.
– These seem not to work for deep learning because of its
high dimensionality and complexity.
– The explanation is generated not by the AI itself but by a
simple surrogate such as a decision list or decision tree.
– As for making the output an understandable
explanation for ordinary people, promising results have
not yet appeared.
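The surrogate idea in the bullets above can be sketched in code: fit a shallow decision tree to the predictions of a black-box model and measure its fidelity (how often the tree agrees with the black box). This is a minimal sketch on synthetic data using scikit-learn; it is not the method of any particular XAI paper.

```python
# Global surrogate explanation: approximate a black-box model
# with a shallow decision tree that humans can read.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=500, n_features=5, random_state=0)

black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Fit the surrogate on the black box's *predictions*, not the true labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how often the simple tree agrees with the black box.
fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"surrogate fidelity: {fidelity:.2f}")
print(export_text(surrogate))   # the human-readable explanation
```

A fidelity well below 1.0 signals that the simple surrogate is not a faithful explanation of the black box, which is exactly the difficulty with high-dimensional deep learning models.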
Transparency and Accountability
• The IEEE EAD version 2 Law chapter says:
• We need to clarify who is responsible in case
of accidents
• For this: Transparency and Accountability
Transparency
• Disclose the following:
Learning data for ML, and the input data in actual use
of the AI application generated by ML
Data flow and algorithms of the AI application
 A conceptual data flow is OK
Investors, founders and developers of the AI application
system
Misunderstood version of
Accountability
• The wrong one:
– Only disclosing information via transparency, with
natural-language documents, to users of the AI
application system
– In Japan, the mistranslation as “responsibility to
explain (説明責任)” is badly affecting many
people’s attitudes towards accountability (Prof.
Ohya, Keio Univ.)
Accountability must be recognized as:
• Explain the validity, fairness and legitimacy of
the results/outputs of AI in a manner that AI
application users, who are ordinary citizens, can
easily understand and accept.
Clarify who is responsible for the results
of AI application outputs.
Responsibility implies compensation.
New Directions
Technically speaking, we have to think not only
about single AIs but also about groups of AIs
They have to be able to generate easily
understandable explanations for ordinary
people. Tough!
Then how?
The direction of utilizing AI:
recommendation
Towards TRUST
Trust does not require a precise and
detailed proof of the AI’s outcome!
The direction of utilizing AI:
recommendation
Trust: making someone an authority, based on
the historical accumulation of technological
advancement
Licensing this authority by a public authority such
as the national government: e.g., medical doctors,
lawyers
Compensation for accidents: when the responsible
persons are not clearly identified, insurance
becomes the last resort.
Trustworthy AI (EU)
• Lawful, Ethical, Robust
• Requirements
1. Human agency and oversight
2. Technical robustness and safety
3. Privacy and data governance
4. Transparency
5. Diversity, non-discrimination and fairness
6. Societal and environmental well-being
7. Accountability
Single AI Drone used as a weapon
• AI drones are operated from a remote
operating center, even thousands of kilometers away
– Complexity of the battlefield
– The responsible person can be unclear because of
latency and the difficulty of recognizing the real
enemy.
Single AI Drone used as a weapon
– It is tough to identify who are soldiers and who
are civilians.
– To solve this problem, every person’s data might
be gathered over a long period of time and analyzed
with big-data-mining technologies to identify who
the enemies are.
–  Worse, but in any case, accountability is recognized
as a key factor.
A platoon of drones attacked a Russian military base in Syria last year.
Unpredictability of Group AI’s behavior
• A platoon of autonomous AI drones
– If an attack happens unintentionally, with
human commanders set aside, it is unclear
who is responsible  an unintentional outbreak
of battle, even war!
– No accountability is a problem!
– The CCW (Convention on Certain Conventional
Weapons) is trying to ban this, as far as I know
Autonomous AI weapon: who is liable?

                                     Liability immunity           Strict liability
AI's action: unjustified acts        AI weapon developer +        AI weapon developer (wrong
(mis-attack), unjustified damage     commander (a political       design of attack checking);
by an autonomous AI weapon           decision)                    international laws
AI weapon as a controllable tool                                  Operator
Unpredictability of group AIs:
Flash crash
• Flash crash: a group of AI traders communicate with
each other via, e.g., stock prices as a common
language, and catastrophic results come out in
seconds
– Deals happen in microseconds
– Companies do not disclose their AI traders’ algorithms
because of trade-secret policies
No accountability!
How to cope with this
• AI traders’ algorithms remain secret
• Observe the market from outside with another special
AI: an AI observer
• AI observers try to find unusual situations as early as
possible; unusual-situation detection technologies are a
good research topic for AI
– Once detected, stop trading
– Losses or gains before detection are exempt from liability
– The problem is when the system stops.
 Problems caused by AI should be solved by AI
An AI observer observes the behavior of a group of AIs
and tries to detect unusual situations as early as possible.
We should build a scheme under which we can trust this
AI observer!
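The observer idea can be sketched as follows. This hypothetical `MarketObserver` watches only the public price stream (not the traders' secret algorithms) and signals a halt when a price moves unusually far from recent history; the window size and z-score threshold are illustrative choices, not tuned parameters.

```python
# Hypothetical "AI observer": watches a price stream from outside
# and halts trading when it detects a flash-crash-like anomaly.
from collections import deque

class MarketObserver:
    def __init__(self, window=50, threshold=4.0):
        self.prices = deque(maxlen=window)
        self.threshold = threshold      # z-score that counts as "unusual"

    def observe(self, price):
        """Return True (halt!) if the new price is anomalous."""
        if len(self.prices) >= 10:      # need some history first
            mean = sum(self.prices) / len(self.prices)
            var = sum((p - mean) ** 2 for p in self.prices) / len(self.prices)
            std = var ** 0.5 or 1e-9
            if abs(price - mean) / std > self.threshold:
                return True             # unusual situation -> stop trading
        self.prices.append(price)
        return False

observer = MarketObserver()
normal = [100 + 0.1 * (i % 7) for i in range(60)]    # calm market
halts = [observer.observe(p) for p in normal]
crash_detected = observer.observe(60.0)              # sudden 40% drop
print("halt during calm market:", any(halts))
print("halt on crash:", crash_detected)
```

Losses incurred before the halt signal would fall under the liability exemption discussed above; the open problem remains what happens at the moment the system stops.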
Conclusion of Accountability Part
• The combination of transparency, accountability
(including AI observers), licensing, and
compensation by insurance makes AI systems
based on machine-learning technologies
trusted by everyone, including ordinary
citizens.
• This is good for us, ML and AI researchers and
developers.
Copyright and Intellectual Property
• The various copyrights of art, etc., created with
the assistance of AI
• The intellectual property rights created with
the assistance of AI
• Who is the holder of the copyright/intellectual
property?
• When two or more parties insist on their rights, what
role should AI play?
What is AI’s position when AI is involved in the task of
creating art products?
[Diagram: a loop.
1. AI generates a vast number of art-product candidates.
2. A filter, generated from past excellent art products, selects
the good candidates that are likely to be accepted by people.
3. End viewers or audiences see the art products made with AI;
the viewers’ ideas and sense are the key point of copyright.
4. New viewers’ reactions (their ideas and sense) feed back into
the filter.]
See things in this loop.
The typical contention
 In case of copyright violation, who should be
responsible for it?
• The creator who uses the AI?
• even when the AI is (to a limited degree) autonomous
• The autonomous AI itself?
• The developer of the autonomous AI?
• …..
What is AI’s position when AI is involved in the task of
creating intellectual products such as patents?
[Diagram: a loop.
1. AI generates a vast number of I.P. candidates, such as patent
candidates.
2. A filter, generated from past intellectual properties, rejects
candidates that are too similar to already existing I.P.s and
selects the good candidates likely to pass.
3. The authority decides whether or not to accept each candidate
as I.P., and users’ reactions (“used or not”) are observed; the
technical idea and sense are the key point of I.P.
4. These decisions and reactions feed back into the filter.]
See things in this loop.
Laws and Privacy Protection
Privacy of DNA
Extract DNA from the suspect’s discarded litter to infer his face
image, and make a poster of his face to arrest him.
DNA Privacy (cont.)
 Cosmetic surgery becomes a fad, to evade identification
by face image.
 A person who got cosmetic surgery is regarded as a bad guy.
 The national government collects and controls every
citizen’s DNA.
 Biologically DNA, informationally SNS history, is collected
and used to control all the people. Who are the targets?
 Many people say the EU’s regulation of personal data is
too strict; however, the above case forces us to think of
the importance of the private data of every individual.
Personal Data Protection
• Personal data is overwhelming on the internet.
• The concept of privacy has changed.
• The GDPR (General Data Protection Regulation) covers
the whole EU area, and even reaches outside the EU.
• In force since 2018/5/25
GDPR
• The GDPR’s position is that there is no
anonymization method under which
anonymized data can be freely transferred or
distributed without the consent of the data subject.
• Personal data can therefore be transferred to a third
party only when:
1. the purpose of use is permitted by the GDPR,
2. the vendor is accountable for the purpose and usage, and
3. the data subject consents.
MyData and GDPR art.20
MyData movement
• GDPR article 20
• The data subject shall have the right to receive the
personal data concerning him or her, which he or
she has provided to a controller,
• in a structured, commonly used and machine-
readable format
• and have the right to transmit those data to
another controller without hindrance from the
controller to which the personal data have been
provided.
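A minimal sketch of what Article 20 asks of a controller: export the data the subject provided in a structured, commonly used, machine-readable format (JSON here) so another controller can parse it without hindrance. The field names below are illustrative, not part of any standard.

```python
# Sketch of GDPR Article 20 data portability: bundle a data
# subject's records into a machine-readable JSON package.
import json

def export_personal_data(subject_id, records):
    """Bundle a data subject's provided records for portability."""
    package = {
        "subject_id": subject_id,   # illustrative field names
        "format": "json",
        "records": records,
    }
    return json.dumps(package, ensure_ascii=False, indent=2)

exported = export_personal_data(
    "subject-001",
    [{"type": "purchase", "item": "book", "date": "2019-04-01"}],
)
reimported = json.loads(exported)   # another controller can parse it
print(reimported["subject_id"])
```

The structured format is what makes the MyData ecosystem below workable: any receiving controller can reconstruct the records without a proprietary parser.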
MyData: Personal Data Eco-system:
personal data moves from GAFA to the data subject
[Diagram: today, personal data on employment, transportation,
purchases, the Web, power companies, medicine, government,
research, and banks flows to Google, Facebook, Apple, and MS,
who expose it through APIs for developers; in the MyData model,
the same data and APIs belong to the data subject instead.]
MyData 2016, 2017, 2018 (Helsinki);
MyData Japan 2017, 2018 (Tokyo)
PLR (Personal Life Repository)
[Diagram: PLR clouds (any online cloud storage can be used) hold
encrypted data. The data subject’s personal terminal runs apps
that exchange PLR data with local government, hospitals, medical
clinics, family or friends, broadcasting companies, schools,
transportation, banks, retailers, hotels, and the workplace.]
An application that lets a data subject permit other organizations
to use her/his personal data, without a mediator
(Prof. Hasida, U-Tokyo & RIKEN)
Personal Data Storage (PDS)
• Also called a Personal Data Store/Vault or Personal Data Cloud
[Diagram: personal data from many sources flows through a
mediator (via AI) to services that use personal data.]
• Auto upload
• Encrypt with a personal secret key
• Internet ID
• API-of-Me
• Usage log
• Tracing the route
• Unified data format
• Portability
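The "encrypt with a personal secret key" step can be sketched as follows. The XOR keystream below is a toy stand-in for real authenticated encryption (e.g., AES-GCM) and must not be used in practice; it only illustrates that the cloud stores ciphertext it cannot read, while the data subject alone holds the key.

```python
# Toy sketch of the PLR/PDS idea: encrypt personal data client-side
# before upload, so any cloud storage can hold it without reading it.
import hashlib
import secrets

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Derive a pseudo-random keystream from key + nonce (toy only)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    nonce = secrets.token_bytes(16)           # fresh nonce per record
    ks = keystream(key, nonce, len(plaintext))
    return nonce + bytes(a ^ b for a, b in zip(plaintext, ks))

def decrypt(key: bytes, blob: bytes) -> bytes:
    nonce, ciphertext = blob[:16], blob[16:]
    ks = keystream(key, nonce, len(ciphertext))
    return bytes(a ^ b for a, b in zip(ciphertext, ks))

personal_key = secrets.token_bytes(32)        # held only by the data subject
record = b'{"clinic": "visit", "date": "2019-04-01"}'
uploaded = encrypt(personal_key, record)      # what the cloud actually stores
print(decrypt(personal_key, uploaded) == record)
```

Because only ciphertext leaves the personal terminal, "any online cloud storage is OK to be used", as the PLR slide puts it.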
IEEE EAD:Personal Data and Individual
Access Control
• EAD version 2
• Not only privacy protection but also many ideas
about the design of AI systems that access
personal data
AI personal agent:
an AI agent covers life from pregnancy to the tomb
• Health care records before and after birth
• Citizenship given just after birth  government data
• School
• Moving
• Cross-border moving: immigration data
• Purchasing history
• IoT and wearables (telecommunications data)
• SNS, digital media
• Job training
• Commitment to society through jobs, volunteer work
• Contracts, insurance, finance
• Death (how to treat her/his digital heritage)
Regional Jurisdiction
• Laws are distinct country by country, but the right
to privacy protection (a basic human right) should
be secured across countries.
• A data subject should be able to access her/his own
personal data in a cross-border fashion.
Legally possible?
Under the GDPR, cross-border access is allowed within the
EU area, or in areas with adequate personal data
protection laws.
Agency and Control
• In order to define how widely an AI agent
may behave, personally identifiable information
(PII) should be explicitly defined.
• Collection and transfer of personal data
should comply with the fundamental policy of
the GDPR.
Transparency and Access
• A data subject should have the right to know how her/his
personal data is collected, used, stored and discarded.
• A UI by which a data subject can easily correct her/his
personal data is required.
• To implement these ideas, we employ AI technologies.
How to make consent
• We have to develop AI systems with which we can get
consent from data subjects who are not familiar with AI
• AI can easily re-obtain consent whenever the situation changes
• employee << employer
This power imbalance must be overcome: when consent is
given between two parties whose power is very imbalanced,
the consent is legally very doubtful.
How to make consent: the information-
weak person’s case
• The elderly, people suffering from dementia, infants,
and other information-weak people should be
kept an eye on.
• For this purpose, AI is a crucial technology.
The right to be forgotten
• Individuals demand that the search-engine company erase
pages that describe her/him
• AI assists in deciding whether the demanded pages are to be
erased or not. The AI might utilize the accumulated records of
past erase-or-not decisions.
• A good application of AI technologies.
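The idea that AI might learn from accumulated erase-or-not records can be sketched as a text classifier trained on past decisions. The requests, labels, and routing below are entirely invented for illustration; such a system could only assist a human and legal review, never replace it.

```python
# Hypothetical triage of right-to-be-forgotten requests, learned
# from past erase-or-not decisions (data invented for illustration).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

past_requests = [
    "old news article about a settled private debt",
    "report of a recent criminal conviction of a public figure",
    "embarrassing personal photo leaked without consent",
    "coverage of an ongoing fraud investigation",
]
past_decisions = ["erase", "keep", "erase", "keep"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(past_requests, past_decisions)

# The model only *assists*: its suggestion goes to a human reviewer.
new_request = ["decades-old article about a private person's debt"]
prediction = model.predict(new_request)[0]
print(prediction)
```

With only a handful of examples this is no more than a demonstration of the mechanism; real deployments would need far more data and careful fairness auditing.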
Easy to open up to, or not
• Observation of the usage of ELIZA, developed in 1966,
shows:
• People tend to talk about personal, even private, information
when they converse with AI, because the AI is not human
• The same phenomenon is expected when a person
converses with a robot, humanoid, or android that looks
very similar to a human being.
• If the robot extracts personal private information maliciously
and sends it to a malicious server….. quite scary
• If an elderly person’s companion-animal robot is
connected to the internet and contaminated by
malware,
• s/he lets her/his guard down and talks about
her/his financial assets,
• and then her/his assets can get stolen.
• “Digital rabies” (digital canine madness)
AI should have the ability of
privacy protection by design
– Can AI know what her/his privacy is?
– Then, the private information AI collects should be
protected with AI technologies from outside bad
guys (which might be AIs)
What counts as privacy depends on each individual and
each situation
When he is having an extra-marital affair,
hotels or restaurants might be private information
• Can AI decide which information is private?
• But if AI can do this, the AI is almost AGI
• In other words, the AI recognizes humans’ feelings,
good and bad intentions, and so on: scary!
Thank you for your attention