Agricultural Robotics: ethical and social questions
1. Alexei Grinbaum
CEA-Saclay/Larsim
CERNA ethics committee
CNPE du numérique et de l'IA
IEEE Global Initiative on Ethics
Sanofi Advisory Bioethics Council
AGRICULTURAL ROBOTICS AND AI: ETHICAL ISSUES
2. 10 DECEMBER 2019
ETHICS OF AI, BY THEME
• Autonomy
• Imitation of life
• Nature/artefact status
• Emotional and affective interaction
Sources:
https://tjournal.ru/news/128023-na-ferme-v-moskovskoy-oblasti-protestirovali-vr-ochki-dlya-korov-v-nih-zhivotnye-stali-menshe-trevozhitsya
ABC Science Youtube channel; AFP
3. ETHICS OF AI: AGENTS, VALUES
Values: human benefit, usefulness for nature, loyalty to user/habitat, responsibility, traceability, interpretability (XAI), security, privacy, fairness, transparency, social justice.
Actors: researcher, programmer, designer, trainer, producer, operator, user.
cerna-ethics-allistene.org
4. BLACK BOXES AND DEMOCRACY
Barack Obama: "If your argument is that […] we can and should create black boxes, that I think does not strike the kind of balance we have lived with for 200, 300 years…"
Cédric Villani: "Being able to 'open the black boxes' is a democratic issue." (translated from French)
The "black box" problem often leads to a lack of trust in the AI system. (US Department of Defense; Pega.com)
5. French national debate on nanotechnology, 2009-2010
“Complexity of the nanoworld is a concern…
There is no doubt that nanotechnologies
represent progress and even hope…”
6. Nanotechnology and risk in France (2017)
• 33%: One shouldn't introduce nanotechnological innovations in society until all risks for humans are known
• 30%: Some risk from nanotechnological innovation is acceptable if benefits exceed risks
• 37%: I don't know, I'm not sufficiently informed
7.
“The cows like to be on their own and travel through the robot themselves.”
Holloway, Bear, Wilkinson, Agric Hum Values (2014) 31:185–199; https://www.fwi.co.uk/livestock/robot-milker-demand-to-surge-say-farm-lenders
SYMBOLIC EFFECTS IN AGRICULTURAL ROBOTICS
8. SYMBOLIC EFFECTS IN AGRICULTURAL ROBOTICS
Evolving animals and plants:
• increased welfare
• curious to interact with machines
• free from human company
• constrained by emotionless robots
• being served or being controlled
• reversible or irreversible change

Evolving humans:
• learning and adaptation
• loss of skill
• new routine
• new lifestyle choices
• increased flexibility
• more time
• easier to find young successors

Evolving technology:
• faster
• more powerful
• more precise
• easier to control
• less natural
• making non-human mistakes

Relational:
• redefined proximity
• increased alienation
• more utilitarian
• still responsible?
10. PMO: "If there is a debate about GMOs, nuclear power or nanotechnology, it is because individuals or groups have expressed, without permission, their political opposition to these political projects." (translated from French)
http://www.slate.fr/france/83845/les-activistes-anti-techno-de-pmo-nous-expliquent-leur-strategie
12. SOCIETAL ISSUES IN RESPONSIBLE INNOVATION
Legal: Who is the owner? Who is liable?
Distributive justice: Who benefits? Who doesn't?
Biosecurity: Misuse. Dual use.
Biosafety: Sanitary and environmental risk.
Governance and uncertainty in complex systems.
SYMBOLIC EFFECTS
14. IMAGE RECOGNITION – FACE RECOGNITION
The neural network determines properties, their importance, and the connections between them. It "formulates" about 80 parameters describing facial information. The meaning of many of these parameters is unknown even to NTECH itself.
https://meduza.io/feature/2016/07/07/konets-chastnoy-zhizni
"As a private person I value security more than privacy," says Kukharenko of NTECH.
Next NTECH release: recognition of ethnic origin
15. FACE RECOGNITION, DISEASE RECOGNITION
The neural network determines properties, their importance, and the connections between them. It "formulates" about 80 parameters describing facial information; the meaning of many of these parameters is unknown even to NTECH itself.
An artificial intelligence system using transfer learning techniques was developed. It effectively classified images for macular degeneration and diabetic retinopathy, and also accurately distinguished bacterial and viral pneumonia on chest X-rays.
Sources: https://meduza.io/feature/2016/07/07/konets-chastnoy-zhizni ; https://ntechlab.com ; Kermany et al., DOI: https://doi.org/10.1016/j.cell.2018.02.010
16. WHY EXPLANATION?
• Predict the system’s behavior? Not really.
• Understand how the system might break down.
• Provide the user with a sense of what’s going on.
• Establish trust.
• Create a basis for social communication.
17. SPECIFICATION PROBLEM
“Lu et al. discovered that leaves usually have diseases or holes.
Holes in leaves will be identified as the background instead of leaf area.”
Lü, C.; Ren, H.; Zhang, Y.; Shen, Y. Leaf area measurement based on image processing. In Proceedings of the 2010 International Conference on Measuring Technology and Mechatronics Automation, Changsha, China, 13–14 March 2010; Volume 2, pp. 580–582.
Azlah, M.A.F.; Chua, L.S.; Rahmad, F.R.; Abdullah, F.I.; Wan Alwi, S.R. Review on Techniques for Plant Leaf Classification and Recognition. Computers 2019, 8, 77.
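The specification problem in the quote can be reproduced in a few lines: naive pixel counting treats holes in a diseased leaf as background, while a flood fill from the image border recovers the intended leaf area. A minimal sketch on a synthetic binary mask, not the cited authors' method:

```python
import numpy as np
from collections import deque

# Binary leaf mask: 1 = leaf pixel, 0 = background. The leaf contains a
# 2x2 "hole" (e.g. disease damage), which naive pixel counting treats
# as background, underestimating the leaf area.
mask = np.zeros((12, 12), dtype=int)
mask[1:11, 1:11] = 1        # a 10x10 leaf
mask[5:7, 5:7] = 0          # a hole inside the leaf

naive_area = int(mask.sum())    # the hole's 4 pixels are lost

def filled_area(m):
    """Count leaf pixels including interior holes, by flood-filling the
    true (outer) background inward from the image border."""
    h, w = m.shape
    outside = np.zeros(m.shape, dtype=bool)
    q = deque()
    for i in range(h):
        for j in range(w):
            if (i in (0, h - 1) or j in (0, w - 1)) and m[i, j] == 0:
                outside[i, j] = True
                q.append((i, j))
    while q:
        i, j = q.popleft()
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < h and 0 <= nj < w \
                    and m[ni, nj] == 0 and not outside[ni, nj]:
                outside[ni, nj] = True
                q.append((ni, nj))
    # Everything not reachable from the border is leaf, holes included.
    return int((~outside).sum())
```

Whether holes should count as leaf area is precisely the specification decision the slide is about; the code only makes the two readings explicit.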
18. INSTABILITY OF MACHINE LEARNING
Ian J. Goodfellow, Jonathon Shlens & Christian Szegedy, Explaining and harnessing adversarial examples, ICLR 2015, arXiv:1412.6572
Ch. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, R. Fergus, Intriguing properties of neural networks, arXiv:1312.6199
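The instability Goodfellow et al. describe can be shown on even the smallest "network". A minimal sketch of their Fast Gradient Sign Method applied to a toy logistic classifier, with made-up weights and input:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, w, b, y, eps):
    """Fast Gradient Sign Method: move the input a small step in the
    direction of the sign of the loss gradient with respect to x."""
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w            # dL/dx for binary cross-entropy
    return x + eps * np.sign(grad_x)

# A toy "classifier" that confidently assigns x to class 1.
w = np.array([2.0, -1.0, 0.5])
b = 0.0
x = np.array([1.0, -1.0, 1.0])      # w @ x + b = 3.5

p_clean = sigmoid(w @ x + b)        # confident class 1
x_adv = fgsm(x, w, b, y=1.0, eps=1.5)
p_adv = sigmoid(w @ x_adv + b)      # the label flips
```

A bounded, coordinate-wise perturbation is enough to flip the decision; in deep networks the same mechanism produces perturbations invisible to the human eye.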
19. FACTORS LEADING TO ETHICAL PROBLEMS
• Specification problem
• Training bias
• Learning without understanding
• Instability of learning
• Verification impossible, benchmarking difficult
www.salemstatelog.com
VALUE CONFLICTS
• Double mimetism: machines imitate humans, human users imitate machines
• Relational constitution of social status
• Frontier of opacity
• A functional individual, not a subject of law
20. EU GUIDELINES (APRIL 2019)
From high-level values to operational guidelines: a lifecycle approach
21. PILOT ACTIONS FOR OPERATIONALIZING AI ETHICS
22. ROBUSTNESS AND SAFETY
• Resilience to attack and security
• Fallback plan and general safety
• Accuracy
• Reliability and reproducibility

Did you put in place a strategy to monitor and test if the AI system is meeting the goals, purposes and intended applications?
• Did you test whether specific contexts or particular conditions need to be taken into account to ensure reproducibility?
• Did you put in place verification methods to measure and ensure different aspects of the system's reliability and reproducibility?
• Did you put in place processes to describe when an AI system fails in certain types of settings?
• Did you clearly document and operationalise these processes for the testing and verification of the reliability of AI systems?
• Did you establish mechanisms of communication to assure (end-)users of the system's reliability?
23. ROBUSTNESS AND SAFETY
• Respect for privacy and data protection
• Quality and integrity of data
• Access to data
• Unfair bias avoidance
• Accessibility and universal design
• Stakeholder participation
• Auditability
• Minimizing and reporting negative impact
• Documenting trade-offs
• Ability to redress
• Traceability
• Explainability
• Communication
• Sustainable and environmentally friendly AI
• Social impact
• Society and democracy
24. Filippino Lippi, National Gallery, Washington
“And Tobias went out to wash his feet, and behold a monstrous fish came up to devour him. And he being afraid of him, cried out with a loud voice, saying: Lord, he cometh upon me. And the angel said to him: Take him by the gill, and draw him to thee. And when he had done so, he drew him out upon the land, and he began to pant before his feet. Then the angel said to him: Take out the entrails of the fish, and lay up his heart, and his gall, and his liver for thee: for these are necessary for useful medicines.” (Tobit 6:2-5)