Keywords: Artificial Intelligence, Ethics of Artificial Intelligence, Ethics of Technology, Information Ethics, Computer Ethics, Freedom and Privacy in the AI Society, Responsibility of AI
In order to protect privacy, many technologies are used for various purposes. This slide gives an introductory overview of these technologies, organized by purpose, including private information retrieval, secure computation, pseudonymization, anonymization, and differential privacy.
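As a concrete illustration of the last technology in that list, differential privacy is commonly realized by adding calibrated noise to query results. The sketch below is my own illustrative example, not code from the slides: it implements the standard Laplace mechanism for a counting query, whose sensitivity is 1.

```python
import math
import random

def laplace_noise(scale: float) -> float:
    # Inverse-CDF sampling of Laplace(0, scale) noise.
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_count(values, predicate, epsilon: float) -> float:
    # A counting query has sensitivity 1: adding or removing one person
    # changes the count by at most 1, so Laplace noise with scale
    # 1/epsilon yields epsilon-differential privacy.
    true_count = sum(1 for v in values if predicate(v))
    return true_count + laplace_noise(1.0 / epsilon)

# Hypothetical data: ages of eight people; the true count of 50+ is 3.
ages = [34, 45, 29, 61, 52, 38, 70, 44]
noisy = dp_count(ages, lambda a: a >= 50, epsilon=1.0)
print(f"noisy count of people aged 50+: {noisy:.2f}")
```

Smaller epsilon means stronger privacy but noisier answers; the analyst sees only the perturbed count, never the exact one.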
Trust, Information, and Rule-Making (Trust in Digital Life Japan - Working Group #1) (Fumiko Kudoh)
"Trust, Information, and Rule-Making" is the material I presented at "Trust in Digital Life Japan - Working Group #1," held on March 6, 2018.
"Trust in Digital Life" (TDL) is a community for designing the next generation of "trust," run by a non-profit organization headquartered in Brussels, Belgium. In Japan, NEC has launched "Trust in Digital Life Japan" (TDL Japan) and, in coordination with the European TDL, is working to create a forum for discussion with multiple companies and researchers.
https://wired.jp/2017/12/13/trust-in-digital-life-japan/
https://trustindigitallife.eu/
201024 AI Koeln, Japanese version (Akemi Yokota)
This slide is for the symposium "Technical and Ethical Aspects of Artificial Intelligence in Japan and Germany." This is the original version in Japanese; at the symposium I will use a German version prepared with the help of the JKI center in Koeln.
This is the Japanese version of the slides "Designing a Legal System for a Society That Utilizes AI: The Situation in Japan and Future Prospects," presented at the Japan Cultural Institute in Cologne symposium "Aspects of Digitalization in Japan and Germany." The German version will be projected at the venue (Yokota will speak in Japanese, with simultaneous interpretation), so Japanese speakers should refer to this version.
These are the materials from when I spoke at the Japan Open Science Summit 2018 session "Considering License Conditions for Research Data: An Industry-Government-Academia Round Table."
--------------------------
■ Session details
The purpose of this session is to discuss license conditions that are useful and easy to understand for both publishers and users of research data. First, we introduce the results of a questionnaire survey conducted by the Subcommittee, which analyzed data publishers' needs and concerns. Then, after sharing licensing issues around corporate data, digital archives, and government data, we hold a discussion with panelists from industry, government, and academia.
Statement of purpose
Presentations
池内 有為 (Graduate School of Library, Information and Media Studies, University of Tsukuba)
・Current state and issues of licensing in research data publication: results of interview and questionnaire surveys
生貝 直人 (Associate Professor, Department of Policy Studies, Faculty of Economics, Toyo University)
・Digital archives and rights statements
龍澤 直樹 (Counsellor, National Strategy Office of Information and Communication Technology (IT), Cabinet Secretariat)
・The status of open data initiatives in government
上島 邦彦 (General Manager, Business Planning Department, Japan Data Exchange Inc.)
・Expectations for research data from the perspective of the data distribution market
Discussion
(Source: https://joss.rcos.nii.ac.jp/session/0618/?id=se_94)
--------------------------
This slide covers (1) AI and accountability, (2) AI ethics, and (3) privacy protection. Several AI ethics documents, such as the IEEE EAD, the EC HLEG Ethics Guidelines for Trustworthy AI, and the Social Principles of Human-Centric AI (Japan), focus on AI's transparency, accountability, and trustworthiness. We follow the discussions in these documents around topics (1), (2), and (3).
What is accountability of AI? We answer this question by clarifying the responsibility, explainability, and liability of AI with limited autonomy, using several real examples, both positive and negative.
We then move to the concept of "trust," which is not limited to a single AI system but extends to the behavior of groups of AI systems.
K-anonymization has been regarded as an effective method for making a bad actor indistinguishable among k people who share the same quasi-identifiers.
Unfortunately, it has a problematic side effect: defamation. Here, defamation means that the other k-1 innocent people come under suspicion of being the bad actor, because k-anonymization gives the bad actor and the innocent people identical quasi-identifiers. This slide presents a mathematical model of defamation and proposes an algorithm that minimizes its probability.
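A dataset satisfies k-anonymity when every combination of quasi-identifier values is shared by at least k records. The following sketch is an illustrative check I wrote for this summary; the records and function name are hypothetical, not from the slides.

```python
from collections import Counter

def is_k_anonymous(records, quasi_identifiers, k):
    # Group records by their quasi-identifier tuple and verify that
    # every group contains at least k records.
    groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return all(count >= k for count in groups.values())

# Hypothetical generalized records: age ranges and truncated ZIP codes
# are the quasi-identifiers; "disease" is the sensitive attribute.
records = [
    {"age": "30-39", "zip": "113*", "disease": "flu"},
    {"age": "30-39", "zip": "113*", "disease": "cancer"},
    {"age": "30-39", "zip": "113*", "disease": "flu"},
    {"age": "40-49", "zip": "150*", "disease": "asthma"},
    {"age": "40-49", "zip": "150*", "disease": "flu"},
    {"age": "40-49", "zip": "150*", "disease": "cancer"},
]
print(is_k_anonymous(records, ["age", "zip"], k=3))  # True
```

The guarantee cuts both ways: under a simple uniform-suspicion model, if one record in a size-k group belongs to a bad actor, each of the other k-1 innocent members is suspected with probability 1/k, which is the defamation effect the slide models.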
Social Effects of the Singularity: Pre-Singularity Era (Hiroshi Nakagawa)
Contents:
Stance of the scientific community on pre-Singularity problems
Amplification vs. Replacement
AI takes over jobs
Border line between amplification and replacement
Autonomous driving: the trolley problem
The right to be forgotten
Towards black box
Responsibility
Vulnerability of financial trading systems composed of many AI agent traders connected via the Internet
AI and weapon
Filter bubble phenomena
Analogy: Selfish gene
AI and privacy
The right to be forgotten, profiling, and Do Not Track
Feelings of friendliness toward androids
Self-consciousness and identity revisited
Privacy Protection Models and Defamation Caused by k-Anonymity (Hiroshi Nakagawa)
This slide introduces mathematical models of privacy protection. The models explained are 1) private information retrieval, 2) information retrieval with homomorphic encryption, 3) k-anonymity, 4) l-diversity, and finally 5) defamation caused by k-anonymity.
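For the first model in that list, a classic toy construction is two-server private information retrieval with XOR masks: the client sends each server a random-looking index set, and neither server alone learns which record is wanted. The sketch below is my own minimal illustration, not the construction from the slides, and assumes two non-colluding servers and fixed-length records.

```python
import secrets
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def pir_query(db_size: int, index: int):
    # Client: a uniformly random 0/1 mask for server 1, and the same
    # mask with the target index flipped for server 2. Each mask alone
    # is uniformly random, so neither server learns the target.
    mask1 = [secrets.randbits(1) for _ in range(db_size)]
    mask2 = mask1.copy()
    mask2[index] ^= 1
    return mask1, mask2

def pir_answer(db, mask):
    # Server: XOR together the records selected by the mask.
    selected = [rec for rec, bit in zip(db, mask) if bit]
    return reduce(xor_bytes, selected, bytes(len(db[0])))

db = [b"alpha", b"bravo", b"charl", b"delta"]  # fixed-length records
m1, m2 = pir_query(len(db), index=2)
record = xor_bytes(pir_answer(db, m1), pir_answer(db, m2))
print(record)  # b'charl'
```

Because the two masks differ only at the target index, XORing the two answers cancels every record except the one the client wanted.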
The amended Japanese Personal Information Protection Act (PIPA) passed the Diet in September 2015. It introduces de-identified information: data anonymized thoroughly enough that it cannot easily be re-identified. Such data may be used freely without the consent of the data subject. Note that pseudonymized data is not regarded as de-identified information. The border line between pseudonymization and anonymization is a critical issue, and I discuss this topic in this slide.
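The distinction matters in practice: a pseudonymized dataset can be re-linked by anyone holding the mapping key, which is why it falls short of de-identification. A minimal sketch, assuming a hypothetical controller-held HMAC key (the key, identifiers, and function name are my own illustrations):

```python
import hashlib
import hmac

# Hypothetical secret held by the data controller; whoever holds it
# can rebuild the identifier-to-pseudonym table, so the data remains
# personal data rather than de-identified information.
SECRET_KEY = b"example-key-held-by-the-controller"

def pseudonymize(identifier: str) -> str:
    # Keyed hash: the same person always maps to the same pseudonym,
    # so records stay linkable across the dataset.
    digest = hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256)
    return digest.hexdigest()[:16]

print(pseudonymize("taro.yamada@example.com"))
print(pseudonymize("taro.yamada@example.com"))  # same pseudonym: linkable
```

Anonymization, by contrast, is meant to destroy this link irreversibly, for example by generalizing or suppressing quasi-identifiers as in k-anonymization.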
IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems
• Personal Data and Individual Access Control
• Digital Personas
• Regional Jurisdiction
• Agency and Control
• Transparency and Access
• Symmetry
• Children’s Issues
• Appendices
• Working groups such as the following are drafting standards:
• IEEE P7002: Data Privacy Process
• IEEE P7004: Standard on Child and Student Data Governance
• IEEE P7005: Standard on Employer Data Governance
• IEEE P7006: Standard on Personal Data AI Agent Working Group
Digital Persona: Birth to Death
• Pre-birth to post life digital records (health data)
• Birth and the right to claim citizenship (government data)
• Enrolment in school (education data)
• Travel and services (transport data)
• Cross border access and visas (immigration data)
• Consumption of goods and services (consumer and loyalty data)
• Connected devices, IoT and wearables (telecommunications data)
• Social and news networks (media and content data)
• Professional training, internship and work (tax and employment data)
• Societal participation (online forums, voting and party affiliation data)
• Contracts, assets and accidents (insurance and legal data)
• Financial participation (banking and finance data)
• Death (digital inheritance data).
• Issue:
• How can AI interact with government authorities to facilitate law enforcement and intelligence collection while respecting the rule of law and transparency for users?

Background:
• Government mass surveillance has been a major issue since allegations of collaboration between technology firms and signals intelligence agencies such as the US National Security Agency and the UK Government Communications Headquarters were revealed. Further attempts to acquire personal data by law enforcement agencies such as the US Federal Bureau of Investigation have complicated settled legal methods of search and seizure. A major source of the problem is the current framework of data collection and storage, which puts corporate organizations in custody of personal data, detached from the generators of that information. Further complicating this concern is the legitimate interest that security services have in trying to deter and defeat criminal and national security threats.

Candidate Recommendations:
• Personal privacy AIs have the potential to change the data paradigm and put the generators of personal information at the centre of collection. This would return the security services' investigative methods to pre-Internet approaches, wherein individuals would be able to control their information while providing custody to corporate entities under defined and transparent policies. (Note: applications as described below could also be performed by an AI agent or Guardian as described above, and will be assessed for efficacy by the IEEE P7006 working group.)

Such a construct would mirror pre-Internet days, in which individuals would deposit information only in narrow circumstances such as banking, healthcare, or transactions.

The personal privacy AI agent would include root-level settings that would automatically provide data to authorities after they have satisfied sufficiently specific warrants, subpoenas, or other court-issued orders, unless authority has been vested in other agencies by local or national law. Further, since corporately held information would be used under the negotiated terms that the AI agent facilitates, authorities would not have access unless legal exceptions were satisfied. This would force authorities to avoid mass collection in favor of particularized efforts.
Symmetry and Consequences
• Issue:
• Could a person have a personalized privacy AI or algorithmic Agent or Guardian?
• Candidate Recommendations:
• Algorithmic guardian platforms should be developed for individuals to curate and share their personal data.
• Issue:
• Consent is vital to information exchange and innovation in the digital age. How can we redefine consent regarding personal data so it respects individual autonomy and dignity?
• Candidate Recommendations:
• The asymmetric power of institutions (including the public interest) over individuals should not force the use of personal data when alternatives are available, such as personal guardians, personal agents, law-enforcement-restricted registries, and other designs that do not depend on loss of agency. When loss of agency is required by technical expedience, transparency needs to be stressed in order to mitigate the asymmetric power relationship.
• Issue:
• Data that is shared easily or haphazardly can be used to make inferences that an individual may not wish to share.

Candidate Recommendation:
• The same AI/AS that parses and analyzes data should also help individuals understand how their personal information can be used. AI can provide granular-level consent in real time. Specific information must be provided at or near the point (or time) of initial data collection to give individuals the knowledge to gauge potential long-term privacy risks. When the user has direct contact with a system, data controllers, platform operators, and system designers must monitor for consequences. Positive, negative, and unpredictable impacts of accessing and collecting data should be made explicitly known to an individual to enable meaningful consent ahead of collection.
Agency and Control
• To determine the scope of an agent's work, a clearer definition of personally identifiable information (PII) is needed.
• Collection and transfer of personal data should rely on policies in keeping with the spirit of the GDPR.
• Most of the Western Hemisphere is expected to rely indirectly on GDPR compliance requirements to correct corporate policy contrary to consent ethics in the collection and transfer of personal data.
Individuals Should Have Access to Trusted Identity Verification
• AI that guarantees access to financial, government, telecommunications, and other services using a verified identity
• Individuals should have access to trusted identity verification services to validate, prove, and support the context-specific use of their identity. Regulated industries and sectors such as banking, government, and telecommunications should provide data verification services to citizens and consumers to give individuals the greatest usage and control.
Transparency and Access
• Issue:
• It is often difficult for users to determine what information a service provider collects about them and the timing of such aggregation/collection (at the time of installation, during usage, even when not in use, after deletion), and to correct, amend, or manage that information.

Candidate Recommendation:
• Service providers should ensure that personal data management tools are easy to find and use within their service interface.
How Consent Is Obtained
• Issue:
• Many AI/AS will collect data from individuals who do not have a direct relationship with, and are not interacting directly with, the system. How can meaningful consent be obtained in these situations?

Candidate Recommendations:
• Where the subject does not have a direct relationship with the system, consent should be dynamic and must not rely entirely on initial Terms of Service or other instructions provided by the data collector to someone other than the subject. We recommend that AI/AS be designed to interpret the data preferences, verbal or otherwise, of all users signalling limitations on collection and use, discussed further below.
• Issue:
• How do we make better user experience and consent education available to consumers as standard, so that they can express meaningful consent?

Candidate Recommendation:
• Tools, settings, and consumer education are increasingly available and can be used now to develop, apply, and enforce consumer consent.
• Provide "privacy offsets" as a business alternative to the exchange of personal data.
• Apply "consent" to further certify artificial intelligence in legal and ethical doctrine.
• Issue:
• In most corporate settings, employees have not given clear consent regarding how their personal information (including health and other data) is used by employers. Given the power differential between employees and employers, this is an area in need of clear best practice.

Candidate Recommendation:
• In the same way that companies conduct Privacy Impact Assessments of how individual data is used, companies need to create Employee Data Impact Assessments to deal with the specific nuances of corporate situations. It should be clear that no data is collected without the consent of the employee.
• Issue:
• People who are losing the ability to understand what kinds of processing IT services perform on their private data in server-side computers cannot meaningfully consent to online terms. The elderly and mentally impaired adults are especially vulnerable with respect to consent, with consequences for data privacy.

Candidate Recommendations:
• Researchers and developers of AI/AS have to take this issue of vulnerable people into account and work toward AI/AS that alleviates their situation, to prevent possible damage caused by misuse of their personal data.
• Build an AI advisory commission, comprising elder advocacy and mental health self-advocacy groups, to help developers produce tools and comprehension metrics that make consent applications meaningful and pragmatic.