SETON HALL | LAW
AI Ethical / Legal / Policy Principles
• Accountability
• Transparency and Explainability
• Fairness and Non-Discrimination
• Safety and Reliability
• Privacy
• Ownership and Intellectual Property
• Anonymization
• Lawful Basis
• Fairness
• Access, Information about Processing
• Erasure
• Profiling and Automated Decision-Making
• Controls
Anonymization
• Can information be “related” to a data subject?
• Problem of large data sets and automation
– Self-driving cars
– Genetic data
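The large-data-set problem can be made concrete with a linkage attack: two releases that are each “anonymized” on their own can be joined on shared quasi-identifiers to re-identify individuals. A minimal sketch, in which all names, records, and field choices are hypothetical illustrations:

```python
# Hypothetical illustration of a linkage (re-identification) attack:
# neither dataset alone names a person, but joining them on shared
# quasi-identifiers (ZIP code, birth date, sex) can re-identify.

# "Anonymized" medical records: direct identifiers removed.
medical = [
    {"zip": "07079", "dob": "1984-03-12", "sex": "F", "diagnosis": "asthma"},
    {"zip": "07102", "dob": "1990-07-01", "sex": "M", "diagnosis": "diabetes"},
]

# Public voter roll: names included, no health data.
voters = [
    {"name": "Jane Doe", "zip": "07079", "dob": "1984-03-12", "sex": "F"},
    {"name": "John Roe", "zip": "07102", "dob": "1990-07-01", "sex": "M"},
]

def reidentify(medical, voters):
    """Join the two releases on their shared quasi-identifiers."""
    key = lambda r: (r["zip"], r["dob"], r["sex"])
    names_by_key = {key(v): v["name"] for v in voters}
    return [(names_by_key.get(key(m)), m["diagnosis"]) for m in medical]

# Each resulting pair links a named person to a diagnosis, even though
# the medical release contained no names.
print(reidentify(medical, voters))
```

The point for the legal analysis: whether data can be “related” to a data subject depends not just on the dataset itself but on what other data exists to join against, which is why large data sets strain anonymization.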
Lawful Basis
• Consent and Explicit Consent
• Necessity or Other Reasons / Exceptions
Fairness
• Lack of human control
• Use of data for AI beyond the original expectations of the data subject
Access, Information About Processing
• What information does the data subject have a right to obtain?
– Probably at least the logic of the algorithm
Erasure
• What do erasure and deletion mean in AI applications?
– What about applications that use blockchain? (E.g., European Parliamentary Research Service, Blockchain and the General Data Protection Regulation (2019))
Profiling and Automated Decision-Making
• GDPR Article 4(4) (“profiling”): “[A]ny form of automated processing of personal data consisting of the use of personal data to evaluate certain personal aspects relating to a natural person, in particular to analyse or predict aspects concerning that natural person’s performance at work, economic situation, health, personal preferences, interests, reliability, behaviour, location or movements.”
Profiling and Automated Decision-Making
• GDPR Article 22 (automated decisions): “The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.”
Profiling and Automated Decision-Making
• If there is “human intervention” in the decision-making, Art. 22 will not apply
• Exceptions include necessity (e.g., for a contract) and explicit consent
• The controller must “implement suitable measures to safeguard the data subject’s rights and freedoms and legitimate interests”
– Unclear exactly what this means
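The Art. 22 analysis above can be sketched as a decision rule. This is a teaching simplification, not legal advice: in practice “meaningful human intervention” and “similarly significant effect” are contested legal questions, not boolean flags, and the function and parameter names are the author's illustrative choices.

```python
# Illustrative sketch of the GDPR Art. 22 analysis, radically
# simplified: each contested legal question is reduced to a flag.

def art22_applies(solely_automated: bool,
                  significant_effect: bool,
                  meaningful_human_intervention: bool) -> bool:
    """Art. 22 is triggered only by a solely automated decision with
    legal or similarly significant effects; meaningful human
    intervention takes the decision outside Art. 22."""
    return (solely_automated
            and significant_effect
            and not meaningful_human_intervention)

def exception_available(necessary_for_contract: bool,
                        authorized_by_law: bool,
                        explicit_consent: bool) -> bool:
    """Art. 22(2) exceptions; even where one applies, the controller
    must still implement suitable safeguards for the data subject."""
    return necessary_for_contract or authorized_by_law or explicit_consent

# A fully automated loan denial with no human review:
print(art22_applies(True, True, False))   # True: Art. 22 applies
# The same decision, but a human officer meaningfully reviews it:
print(art22_applies(True, True, True))    # False: outside Art. 22
```

The sketch also makes the structural point on the slide visible: inserting human intervention flips the result, which is why the adequacy of that intervention matters so much.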
Controls
• Privacy by Design?
• AI Ethics Boards? (AI Ethics by Design)
Proposed EU AI Regulation
• Would establish national authorities and an EU AI Board (Arts. 56, 63)
• Some “artificial intelligence practices” banned, e.g.:
– “an AI system that deploys subliminal techniques beyond a person’s consciousness in order to materially distort a person’s behaviour in a manner that causes or is likely to cause that person or another person physical or psychological harm” (Art. 5(1)(a))
– Social ratings used by public authorities leading to “detrimental treatment”
– Some law enforcement uses
Proposed EU AI Regulation
• Rules for AI systems deemed “high-risk,” e.g. requirements for training data, transparency, accessibility to users, and cybersecurity
– E.g. “measures to prevent and control for attacks trying to manipulate the training dataset (‘data poisoning’), inputs designed to cause the model to make a mistake (‘adversarial examples’), or model flaws” (Art. 15(4))
– Notification that the user is interacting with an AI system (Art. 52(1))
– Notices on “deep fakes” (Art. 52(3))
– Post-market monitoring and incident / malfunction reporting (Arts. 61-62)
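To make Art. 15(4)’s “data poisoning” concern concrete, here is one simple (and by itself insufficient) control: screening training data for injected extreme values before fitting a model. The z-score approach and the threshold of 2.0 are illustrative assumptions by the author, not anything the proposal prescribes; real controls would use more robust statistics and provenance checks.

```python
# Illustrative (not sufficient) data-poisoning control: flag training
# points that are statistical outliers on a single numeric feature.
# The z-score threshold of 2.0 is an arbitrary illustrative choice;
# note that an extreme point inflates the mean and standard deviation,
# so robust statistics (e.g. median/MAD) would detect poisoning better.

def mean(xs):
    return sum(xs) / len(xs)

def stdev(xs):
    m = mean(xs)
    return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5

def screen_outliers(samples, threshold=2.0):
    """Split samples into (kept, flagged) by z-score."""
    m, s = mean(samples), stdev(samples)
    kept = [x for x in samples if abs(x - m) <= threshold * s]
    flagged = [x for x in samples if abs(x - m) > threshold * s]
    return kept, flagged

# Mostly benign data with one injected extreme value:
data = [1.0, 1.1, 0.9, 1.2, 0.8, 1.0, 1.1, 0.9, 100.0]
kept, flagged = screen_outliers(data)
print(flagged)  # the injected point is flagged
```

The legal interest here is less the statistics than the compliance posture: a high-risk provider would need to document that some such measure exists and works.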
Thank You
Prof. David Opderbeck, Seton Hall University Law School