1. Emerging technologies: prioritising the citizen
Marcello Ienca, PhD
Chair of Ethics of AI & Neuroscience, TUM
Head of Intelligent Systems Group, EPFL
4. What is an Emerging Technology?
Emerging technologies are defined by five attributes (Rotolo et al. 2015):
• Radical novelty
• Fast growth
• Coherence (persistence over time)
• Prominent impact
• Uncertainty and ambiguity
07.09.2023
5. Brain surgery in Ancient India & the European Middle Ages
Brain-computer interface for robotic limb control.
Hochberg et al. 2012, Nature
8. The Argument for Neutrality
Some authors have maintained that technology is value-neutral, in the sense that technology is just a neutral means to an end and can accordingly be put to good or bad use (e.g., Pitt 2000). This view may have some plausibility insofar as technology is considered a bare physical structure.
9. The Argument for Value-Sensitivity
Technological development is a goal-oriented process. Technological artifacts therefore have certain functions by definition: they can be used for certain goals but not, or only with far more difficulty and less effectively, for other goals.
11. The Role of Goals
• What goals do I want to achieve with this technology?
• Am I creating functions that will help me achieve those goals?
• What unintended impact may this technology have?
12. Example: AI for predicting sexual orientation
Deep neural networks were used to extract features from 35,326 facial images, which were then entered into a logistic regression aimed at classifying sexual orientation. Given a single facial image, the classifier could correctly distinguish between gay and heterosexual men in 81% of cases, and in 74% of cases for women. Human judges achieved much lower accuracy: 61% for men and 54% for women.
Wang & Kosinski 2018, J Pers Soc Psychol
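The two-stage pipeline described on this slide (a deep network extracting features, followed by a logistic regression on those features) can be sketched in miniature. Everything below is invented for illustration: the "features" are random vectors standing in for face embeddings, and the logistic regression is fitted by plain gradient descent rather than the authors' actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented stand-ins for deep-network face embeddings: 500 samples of
# 10-dimensional feature vectors with a noisy linear class signal.
n, d = 500, 10
true_w = rng.normal(size=d)
X = rng.normal(size=(n, d))
y = (X @ true_w + rng.normal(scale=1.0, size=n) > 0).astype(float)

# Stage 2 of the pipeline: logistic regression, fitted here by plain
# gradient descent on the log-loss.
w = np.zeros(d)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))   # predicted probabilities
    w -= 0.1 * (X.T @ (p - y)) / n       # gradient step

accuracy = np.mean(((1.0 / (1.0 + np.exp(-(X @ w)))) > 0.5) == y)
print(f"training accuracy: {accuracy:.2f}")
```

The point of the sketch is how little the second stage needs: once rich features exist, a simple linear classifier suffices to extract sensitive predictions from them.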
14. AI
Typically, computer systems are deemed intelligent (hence called "intelligent agents" or "intelligent machines") when they have the ability to perceive their environment and take autonomous actions directed towards successfully achieving a goal.
Ienca & Vayena (2020), Report to the Council of Europe’s Ad Hoc Committee on AI
19. Informational privacy in the Age of AI & Big Data
AI/Big-Data-specific privacy issues:
• Surveillance: deployment of privacy-invasive surveillance strategies at a greater scale and magnitude than other technologies.
• Re-identification: identification of people who wish to remain anonymous.
• Inferential potential: inferring and generating sensitive information about people from their non-sensitive data.
• Data exploitation: ubiquitous data points and sensor-equipped autonomous systems generate and collect vast amounts of data without the knowledge or consent of subjects.
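The re-identification issue can be made concrete with a toy linkage attack in the style of Sweeney's classic voter-roll example: an "anonymized" table is joined to a public one on quasi-identifiers. All records and values below are invented.

```python
# Toy linkage attack. The "anonymized" health table has names removed but
# keeps quasi-identifiers; a public voter roll keeps names. All records
# are invented.
health = [  # (zip_code, birth_year, sex, diagnosis)
    ("8001", 1984, "F", "diabetes"),
    ("8001", 1990, "M", "asthma"),
    ("8002", 1984, "F", "hypertension"),
]
voters = [  # (name, zip_code, birth_year, sex)
    ("Alice Example", "8001", 1984, "F"),
    ("Bob Example", "8001", 1990, "M"),
]

# Join on the quasi-identifiers: any unique match re-identifies a record.
reidentified = {
    name: diagnosis
    for name, vz, vy, vs in voters
    for hz, hy, hs, diagnosis in health
    if (vz, vy, vs) == (hz, hy, hs)
}
print(reidentified)
```

Removing names was not enough: the combination of ZIP code, birth year, and sex is unique enough to restore two of the three identities.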
21. Inferential Potential
An algorithm was able to identify about 25 products that, when analyzed together, allowed it to assign each shopper a "pregnancy prediction" score. It could also estimate her due date to within a small window, so Target could send coupons timed to very specific stages of her pregnancy.
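A toy version of such a prediction score: each indicative product carries a weight, and a shopper's score is the sum of the weights of products in her basket. The products and weights below are invented; none of these numbers come from Target.

```python
# Invented per-product weights: positive weights mark products assumed to
# co-occur with pregnancy in historical purchase data, negative weights
# the opposite.
weights = {
    "unscented lotion": 2.0,
    "zinc supplement": 1.5,
    "cotton balls": 1.0,
    "coffee": -0.5,
}

def pregnancy_score(basket):
    # Sum the weights of the shopper's purchased products.
    return sum(weights.get(item, 0.0) for item in basket)

print(pregnancy_score({"unscented lotion", "zinc supplement"}))  # 3.5
print(pregnancy_score({"coffee"}))                               # -0.5
```

This is the core of inferential potential: every individual purchase is non-sensitive, yet their combination yields a sensitive prediction.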
25. Consent
"Ethical safeguards for research that intervenes in human lives were largely set up for medical and psychological studies, and are often written with definitions that exclude Internet research. In the United States, for example, unless data collected are both private and identifiable, informed consent is usually not deemed necessary, and research requires minimal, if any, oversight by an institutional review board. This would include data from Twitter, which are by default public. Models built on anonymized Facebook data would also tend to be exempt."
26. Facebook's Emotional Contagion
• Massive psychological experiment on 689,003 users.
• Manipulated their news feeds to assess the effects on their emotions.
• Results show that when user timelines were manipulated to reduce the positive expressions posted by others, "people produced fewer positive posts and more negative posts".
30. • Risk of spurious correlations (correlation ≠ causation)
• Uncertain clinical validity and reliability of diagnostic, preventative or therapeutic inferences generated from data mining
• Risk of (human) bias in the datasets
Hypothesis-free pattern identification and other data mining techniques may successfully complement and enhance, but not reliably replace, conventional scientific methods.
31. Algorithmic Bias & Algorithmic Discrimination
• Algorithmic bias: data used to teach a machine learning system reflect implicit values or attitudes of the humans involved in data collection, selection, and use.
• Algorithmic discrimination: due to algorithmic bias, machine learning systems produce outcomes that result in unjust or prejudicial treatment of different categories of people.
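The step from bias to discrimination is often audited by comparing outcome rates across groups. Below is a minimal sketch with invented decisions, using the "four-fifths" screening heuristic as an illustrative threshold (not a legal standard for any particular jurisdiction).

```python
# Invented loan decisions tagged by group membership.
decisions = [  # (group, approved)
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

def selection_rate(group):
    # Fraction of positive outcomes within one group.
    outcomes = [approved for g, approved in decisions if g == group]
    return sum(outcomes) / len(outcomes)

# "Four-fifths rule": flag the system if one group's selection rate is
# below 80% of the other group's rate.
ratio = selection_rate("B") / selection_rate("A")
print(f"A: {selection_rate('A'):.2f}, B: {selection_rate('B'):.2f}, ratio: {ratio:.2f}")
print("flagged" if ratio < 0.8 else "ok")
```

Such rate comparisons capture only one narrow notion of fairness; they detect unequal outcomes, not their cause in the training data.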
36. Microsoft's Tay Chatbot
• Tay was an AI chatbot originally released by Microsoft Corporation via Twitter on March 23, 2016.
• The bot was created by Microsoft's Technology and Research and Bing divisions, and named "Tay" as an acronym for "thinking about you".
• Tay was designed to mimic the language patterns of a 19-year-old American girl and to learn from interacting with human users of Twitter.
37. Microsoft's Tay Chatbot
• Tay caused controversy when the bot began to post inflammatory and offensive tweets through its Twitter account, causing Microsoft to shut down the service less than 24 hours after its launch.
• According to Microsoft, this was caused by trolls who "attacked" the service, as the bot made replies based on its interactions with people on Twitter.
40. The Blackbox Problem of AI
AI conundrum: the most capable AI technologies (e.g., deep neural networks) are notoriously the most opaque, offering few clues as to how they arrive at their conclusions.
41. Unboxing the Black Box
Ensure that "algorithms are not merely efficient, but transparent and fair" by rendering them "more amenable to ex post and ex ante inspection" (Goodman & Flaxman, 2016).
42. Explainable AI
Explainable AI (XAI), also called Interpretable AI or Explainable Machine Learning (XML), is artificial intelligence (AI) whose decisions or predictions can be understood by humans. It contrasts with the "black box" concept in machine learning, where even a system's designers cannot explain why the AI arrived at a specific decision.
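One widely used model-agnostic XAI technique is permutation importance: shuffle one input feature at a time and measure how much the model's accuracy drops. A minimal sketch with synthetic data and a hypothetical fixed linear scorer standing in for a trained model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data: 400 samples, 3 features; the label depends strongly on
# feature 0, weakly on feature 2, and not at all on feature 1.
n, d = 400, 3
X = rng.normal(size=(n, d))
y = (2.0 * X[:, 0] + 0.1 * X[:, 2] > 0).astype(int)

def predict(M):
    # Hypothetical fixed "model": the same linear rule that generated y.
    return (2.0 * M[:, 0] + 0.1 * M[:, 2] > 0).astype(int)

base_acc = np.mean(predict(X) == y)  # 1.0 by construction here

drops = []
for j in range(d):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])  # destroy feature j only
    drops.append(base_acc - np.mean(predict(Xp) == y))
    print(f"feature {j}: accuracy drop {drops[j]:.2f}")
```

Shuffling the dominant feature collapses accuracy, while shuffling the unused one changes nothing, so the drops reveal which inputs the model actually relies on, without opening the model itself.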
44. Systematic review & analysis of 84 AI ethics guidelines published until 4/23/2019
Documents issued by:
• Public sector (~31%): 26 documents from governmental organisations & IGOs
• Private sector (~27%): 23 documents from companies & private-sector alliances
• The remainder: academic/research institutions, NPOs, professional associations/scientific societies, etc.
45. Systematic review & analysis of 116 AI ethics guidelines published until February 2021
[Bar chart: number of documents by type of issuer (Governmental, Private Sector, Academia, NGOs)]
[Bar chart: variation over time in the publication of soft-law documents on AI, 2011–2019]
48. Regulating AI
• "Contemporary AI systems are now becoming human-competitive at general tasks" and thereby "can pose profound risks to society and humanity."
• The current "level of planning and management" of AI is insufficient.
49. Regulating AI
• Balancing risks against benefits
• Loss of control due to prohibitionism
• Questionable enforceability
• Democratic accountability
• Shifts in public discourse
50. Predictive uncertainty
"Machines will be capable, within twenty years, of doing any work a man can do." (Herbert Simon, 1965)
"Anyone who looked for a source of power in the transformation of the atoms was talking moonshine." (Ernest Rutherford, 1933)
51. Adaptive Governance
Stepwise learning under conditions of acknowledged uncertainty, with initial limits to use, iterative steps of proactive ethical design, evidence-based data collection and normative evaluation. A middle path between laissez-faire and a moratorium.
Oye 2012, IRGC; Ienca 2018, Ethics and Informatics
52. Proactive Ethical Design
[Diagram: alignment with human needs and ethical values via patient-centered design, clinical validation & translation, and ethical assessment, feeding technology development (devices, apps, platforms) and social & technological innovation.]
Ienca et al. 2018, J Neuroeng Rehabil
Here is another example of innovation: the innovation of weapon systems. Is this innovation? Yes, of course. Each of these weapons is new and innovative compared to the previous ones, and each of them works better, technically speaking. But is this evolution something good or something right?
How should health data be protected?
Who should have access to data about your health? What information may legitimately be gleaned from the data?
Who should own and control the data? And what does that mean?
What is fair data processing? (avoidance of discrimination)
Now replace face recognition with something like skin cancer detection! This would undermine the universal right to healthcare.