The EU ‘AI ACT’: a “risk-based”
legislation for robotic surgery
Workshop at ICRA 2024
Autonomy in Robotics Surgery: State of the art,
technical and regulatory challenges for clinical
application
1
DIPARTIMENTO
DI SCIENZE
GIURIDICHE
Prof. Federico Costantini
Department of Legal Sciences, University of Udine, IT
Summary
2
(1) The legal framework of the «AI ACT»: building the EU «Digital
Single Market»
(2) A quick overview of the «AI ACT»: «risk-based» legislation and
governance set-up
(3) The «AI ACT» and Robotics surgery: challenges and
opportunities
(4) Conclusions (?) / Recommendations (!)
(1) The legal framework of
the «AI ACT»
Building the EU «Digital Single Market»
3
Timeline «digital single market»

Beginning 2000
- Dir. 95/46/EC data protection
- Dir. 1999/93/EC electronic signature
- Dir. 2000/31/EC electronic commerce
- Dir. 2001/29/EC digital copyright
- Dir. 2002/58/EC privacy in electronic communications

2014-2019
- Reg. (EU) 910/2014 «eIDAS» e-signatures
- Dir. (EU) 2016/1148 «NIS» network and information security
- «Privacy Package»: Reg. (EU) 2016/679 «GDPR»; Dir. (EU) 2016/680 «criminal system»; Dir. (EU) 2016/681 «PNR»
- Proposal Reg. «e-Privacy» COM(2017)10
- Reg. (EU) 2018/1807 «non-personal data»
- Dir. (EU) 2019/790 copyright in the Digital Single Market
- Reg. (EU) 2019/881 «Cybersecurity Act» (information security)
- COM(2020) 823 final proposal «NIS 2.0»
- P9_TA(2021)0144 online terrorist content spreading
- Dir. (EU) 2022/2557 Critical Entities Resilience
- COM/2020/593 final «Crypto-assets»
- Reg. (EU) 2022/2554 «DORA»

Digital Services Package 15/12/20
- COM(2020) 825 final proposal Digital Services Act -> Reg. (EU) 2022/2065
- COM(2020) 842 final proposal Digital Markets Act -> Reg. (EU) 2022/1925

Artificial Intelligence 20/10/20
- P9_TA(2020)0275 Ethics and AI -> proposal Artificial Intelligence Act COM/2021/206 -> Reg. (EU) 2024/?
- P9_TA(2020)0276 Civil liability and AI (proposal Reg.)
- P9_TA(2020)0277 Intellectual property and AI (Res.)
- P9_TA(2020)0274 Digital Services and Fundamental Rights (Res.)
- P9_TA(2021)0009 «Killer Robots» 20/1/2021

EU Data Strategy
- «Data Governance Act» Reg. (EU) 2022/868
- «Data Act» Reg. (EU) 2023/2854
- COM/2022/197 European Health Data Space
- Reg. (EU) 2023/1230 «Machinery Regulation», replacing Dir. 2006/42/EC «Machinery Directive»
Key points of (almost) 30 years of EU legislation
5
1. EU territorial enlargement: expansion of the EU «digital single market»
2. Different legal tools, Directives -> Regulations: strengthening legislation. Directives set the objectives, which Member States implement; Regulations impose obligations directly on States, companies and citizens.
3. Increasing cooperation of Internet providers with public authorities, e.g. cybersecurity, copyright
4. Technological innovation: 2006 social networks, 2020 «co-bots», 2023 LLMs
5. Governance: fundamental rights / economic growth / institutional stability
6. Complexity of sources of law, e.g. data protection: guidelines, Codes of Conduct, ISO standards
(2) A quick overview of the
«AI ACT»
«risk-based» legislation and governance set-up
6
7
https://www.europarl.europa.eu/RegistreWeb/search/simpleSearchHome.htm?relations=NUTA%23%23T9-0138%2F2024&sortAndOrder=DATE_DOCU_DESC
https://www.europarl.europa.eu/doceo/document/TA-9-2024-0138-FNL-COR01_EN.pdf
https://artificialintelligenceact.eu/
Definition of «AI system» (art. 3 n. 1)
«An AI system means
- a machine-based system
- designed to operate with varying levels of autonomy and
- that may exhibit adaptiveness after deployment and
- that, for explicit or implicit objectives, infers, from the input it
receives, how to generate outputs such as
- predictions, content, recommendations, or
- decisions
- that can influence physical or virtual environments».
Why the «AI ACT» is called «risk-based legislation»
11
- Unacceptable risks -> prohibited «practices» (Art. 5)
- High risks -> strict obligations: requirements and obligations
- Limited risks -> transparency obligations (Art. 50)
- Minimal risks -> no specific requirements
- GP-AI models («hidden» tier) -> transparency; GP-AI models with «systemic risks» («hidden» tier)
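The tiered logic above can be sketched as a simple lookup, purely as an illustration: the enum and dictionary below are invented for this sketch, not AI Act terminology.

```python
from enum import Enum

# Illustrative sketch (not AI Act text): the four risk tiers and the
# regulatory consequence the Act attaches to each.
class RiskTier(Enum):
    UNACCEPTABLE = 1   # Art. 5: prohibited practices
    HIGH = 2           # Chapter III: requirements + strict obligations
    LIMITED = 3        # Art. 50: transparency obligations
    MINIMAL = 4        # no specific requirements

CONSEQUENCE = {
    RiskTier.UNACCEPTABLE: "prohibited (Art. 5)",
    RiskTier.HIGH: "strict obligations (Chapter III)",
    RiskTier.LIMITED: "transparency (Art. 50)",
    RiskTier.MINIMAL: "no specific requirements",
}
```

The point of the pyramid is that the consequence depends only on the tier, not on the technology: classification does the legal work.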
List of 8 prohibited «AI practices» (art. 5 «AI ACT»)
12
(a) subliminal techniques beyond a person’s consciousness or purposefully manipulative or deceptive techniques
[…]
(b) exploits any of the vulnerabilities of a natural person or a specific group of persons […]
(c) evaluation or classification of natural persons or groups of persons […] with the social score leading to
detriment […]
(i) social context […]
(ii) social behaviour […]
(d) assess or predict the risk of a natural person committing a criminal offence, based solely on profiling […]
(e) create or expand facial recognition databases through the untargeted scraping of facial images from the
internet or CCTV footage […]
(f) infer emotions of a natural person in the areas of workplace and education institutions […]
(g) the use of biometric categorisation systems that categorise individually natural persons based on their
biometric data to deduce or infer […]
(h) use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purposes of
law enforcement, unless […] -> EXCEPTIONS (search of victims, dangerous criminals, terrorists)
Unacceptable risks
Chapter III - Section 2: Requirements for High-Risk AI
System
13
Article 8: Compliance with the Requirements
Article 9: Risk Management System
Article 10: Data and Data Governance
Article 11: Technical Documentation
Article 12: Record-Keeping
Article 13: Transparency and Provision of Information to Deployers
Article 14: Human Oversight
Article 15: Accuracy, Robustness and Cybersecurity
High Risks
Chapter III - Section 3: Obligations of Providers and
Deployers of High-Risk AI Systems and Other Parties
14
Article 16: Obligations of Providers of High-Risk AI Systems
Article 17: Quality Management System
Article 18: Documentation Keeping
Article 19: Automatically Generated Logs
Article 20: Corrective Actions and Duty of Information
Article 21: Cooperation with Competent Authorities
Article 22: Authorised Representatives of providers of high-risk AI systems
Article 23: Obligations of Importers
Article 24: Obligations of Distributors
Article 25: Responsibilities Along the AI Value Chain
Article 26: Obligations of Deployers of High-Risk AI Systems
Article 27: Fundamental Rights Impact Assessment for High-Risk AI Systems
High Risks
Chapter IV: Transparency Obligations for Providers
and Deployers of Certain AI Systems and GPAI Models
15
«intended to interact directly with natural persons» (art. 50 par. 1)
-> the natural persons concerned are informed that they are interacting with an AI system
«generating synthetic audio, image, video or text content» (art. 50 par. 2)
-> the outputs of the AI system are marked in a machine-readable format and detectable as artificially generated or manipulated
«emotion recognition system or a biometric categorisation system» (art. 50 par. 3)
-> inform exposed persons
-> process according to GDPR […]
«AI system that generates or manipulates image, audio or video content constituting a deep fake» (art. 50 par. 4)
-> disclose that the content has been artificially generated or manipulated
Limited Risks
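As an illustration of the marking duty in art. 50 par. 2, a generator might attach provenance metadata to its outputs. The function and schema below are invented for this sketch; production systems would rely on provenance standards such as C2PA rather than an ad-hoc record.

```python
# Hypothetical illustration of "machine-readable marking" of synthetic
# content (art. 50 par. 2). Function name and schema are invented.
def mark_as_ai_generated(content: bytes, generator: str) -> dict:
    return {
        "ai_generated": True,        # detectable as artificially generated
        "generator": generator,      # which system produced the content
        "content_length": len(content),
    }
```

The legal requirement is technology-neutral: what matters is that the marking is machine-readable and survives with the content.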
(decentralised) Governance
19
- Governance (Chapter VII)
- European AI Office
- European Artificial Intelligence Board (one representative per Member State)
- National Competent Authorities
- Certification mechanism (binding for high-risk AI / voluntary for others)
- Database for high-risk systems (Chapter VIII)
- Market monitoring (Chapter IX) -> competition!
Commission Decision of 24 January 2024 establishing the European
Artificial Intelligence Office
https://eur-lex.europa.eu/eli/C/2024/1459/oj
(3) The «AI ACT» and Robotics
surgery: challenges and opportunities
Challenge = high risk classification / opportunities = «regulatory sandboxes»
20
Classification of robotic surgery according to the «AI ACT» as «High risk
AI System»
21
When is an AI system a «high-risk
AI system»?
Three alternatives:
(1) the AI system is itself a certain product -> ANNEX I
(2) the AI system is a safety component of
a certain product -> ANNEX I
(3) the AI system is included in a special list
of applications (e.g. law
enforcement, migration) -> ANNEX III (not relevant here)
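The three alternatives can be sketched as a single boolean check. This is a simplification of the classification rule (which, for alternatives (1) and (2), also conditions high-risk status on third-party conformity assessment under the Annex I legislation); the parameter names are invented.

```python
# Hypothetical sketch of the high-risk classification logic described
# above; parameter names are invented, not AI Act terminology.
def is_high_risk(is_annex_i_product: bool,
                 is_safety_component: bool,
                 in_annex_iii: bool) -> bool:
    # (1) the AI system is itself a product covered by Annex I legislation,
    # (2) it is a safety component of such a product, or
    # (3) it falls under an Annex III use case.
    return is_annex_i_product or is_safety_component or in_annex_iii
```

A surgical robot, as a medical device under Reg. (EU) 2017/745 (listed in Annex I), would satisfy the first branch.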
Classification of robotic surgery according to the «AI ACT» as «High risk
AI System»
22
List of ANNEX I (extract)
1. Directive 2006/42/EC of the European Parliament and of the Council of 17 May
2006 on machinery, and amending Directive 95/16/EC (OJ L 157, 9.6.2006, p. 24)
[as repealed by the Machinery Regulation];
[…]
11. Regulation (EU) 2017/745 of the European Parliament and of the Council of 5
April 2017 on medical devices, amending Directive 2001/83/EC, Regulation (EC)
No 178/2002 and Regulation (EC) No 1223/2009 and repealing Council Directives
90/385/EEC and 93/42/EEC (OJ L 117, 5.5.2017, p. 1);
12. Regulation (EU) 2017/746 of the European Parliament and of the Council of 5
April 2017 on in vitro diagnostic medical devices and repealing Directive
98/79/EC and Commission Decision 2010/227/EU (OJ L 117, 5.5.2017, p. 176).
Robotic surgery therefore seems to qualify as a «high-risk» AI system
Requirements for «High-Risk AI Systems»
23
Human Oversight (art. 14)
1. High-risk AI systems shall be designed and developed in such a way,
including with appropriate human-machine interface tools, that they can
be effectively overseen by natural persons during the period in which they
are in use.
2. Human oversight shall aim to prevent or minimise the risks to health,
safety or fundamental rights that may emerge when a high-risk AI system is
used
- in accordance with its intended purpose or
- under conditions of reasonably foreseeable misuse, in particular where
such risks persist despite the application of other requirements set out
in this Section.
[…]
Requirements for «High-Risk AI Systems»
24
Human Oversight (art. 14)
[…]
3. The oversight measures shall be
- commensurate with
- the risks,
- level of autonomy and
- context of use of the high-risk AI system,
- and shall be ensured through either one or both of the following types of measures:
- (a) measures identified and built, when technically feasible, into the high-risk AI system by the provider
before it is placed on the market or put into service;
- (b) measures
- identified by the provider before placing the high-risk AI system on the market or putting it into service and
- that are appropriate to be implemented by the deployer.
[…]
Requirements for «High-Risk AI Systems»
25
Human Oversight (art. 14)
[…]
4. For the purpose of implementing paragraphs 1, 2 and 3, the high-risk AI system shall be
provided to the deployer in such a way that natural persons to whom human oversight is
assigned are enabled, as appropriate and proportionate:
(a) to properly understand the relevant capacities and limitations of the high-risk AI system
and be able to duly monitor its operation, including in view of detecting and addressing
anomalies, dysfunctions and unexpected performance;
(b) to remain aware of the possible tendency of automatically relying or over-relying on the
output produced by a high-risk AI system (automation bias), in particular for high-risk AI systems
used to provide information or recommendations for decisions to be taken by natural persons;
(c) to correctly interpret the high-risk AI system’s output, taking into account, for example, the
interpretation tools and methods available;
(d) to decide, in any particular situation, not to use the high-risk AI system or to otherwise
disregard, override or reverse the output of the high-risk AI system;
(e) to intervene in the operation of the high-risk AI system or interrupt the system through a
‘stop’ button or a similar procedure that allows the system to come to a halt in a safe state.
«Human»
training
Procedures
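Purely as a sketch, the oversight capabilities of art. 14 par. 4 (c)-(e) might surface in a surgical-robot control layer along the following lines. The class and method names are invented, not prescribed by the Act; they only illustrate how legal requirements translate into design constraints.

```python
# Hypothetical mapping of Art. 14(4)(c)-(e) onto a control interface.
# All names are invented for illustration.
class SupervisedController:
    def __init__(self):
        self.halted = False
        self.last_output = None

    def propose(self, action: str) -> str:
        # (c) the system's output is exposed so the overseer can interpret it
        self.last_output = action
        return action

    def override(self, action: str) -> None:
        # (d) the overseer disregards, overrides or reverses the output
        self.last_output = action

    def emergency_stop(self) -> None:
        # (e) 'stop' button: bring the system to a halt in a safe state
        self.halted = True
```

Note that points (a) and (b) of the same paragraph (understanding the system's limits, resisting automation bias) are training obligations rather than interface features, which is why the slide pairs «human» training with procedures.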
Art. 27: «Fundamental rights impact assessment for high risk AI systems»
(only for those which are included in ANNEX III list, but interesting)
(a) a description of the deployer’s processes in which the high-risk AI
system will be used in line with its intended purpose;
(b) a description of the period of time within which, and the frequency
with which, each high-risk AI system is intended to be used;
(c) the categories of natural persons and groups likely to be affected by
its use in the specific context;
(d) the specific risks of harm likely to have an impact on the categories
of natural persons or groups of persons identified pursuant to point (c) of
this paragraph, taking into account the information given by the provider
pursuant to Article 13;
(e) a description of the implementation of human oversight measures,
according to the instructions for use;
(f) the measures to be taken in the case of the materialisation of those
risks, including the arrangements for internal governance and complaint
mechanisms.
Design
Documentation
«Human»
training
«Regulatory sandbox» as a worldwide «megatrend» and EU legislative
practice
27
- «secure processing
environment», art. 2 no.
(20) «Data Governance
Act», Reg. (EU) 2022/868
- «regulatory sandbox»,
«Artificial Intelligence
Act» COM/2021/206
final
- Cryptocurrencies and
fintech
https://www.bancaditalia.it/focus/sandbox/?dotcache=refresh
https://ec.europa.eu/digital-building-blocks/sites/display/EBSI/Sandbox+Project
Definition art. 3 «AI ACT»
28
(55) ‘AI regulatory sandbox’ means
- a controlled framework
- set up by a competent authority
- which offers providers or prospective providers of AI systems the
possibility to develop, train, validate and test, where appropriate
in real-world conditions,
- an innovative AI system,
- pursuant to a sandbox plan
- for a limited time
- under regulatory supervision;
Definitions art. 3 «AI ACT»
29
(57) ‘testing in real-world conditions’ means
- the temporary testing of an AI system for its intended purpose
- in real-world conditions outside a laboratory or otherwise simulated
environment,
- with a view to
- gathering reliable and robust data and
- to assessing and verifying the conformity of the AI system with the requirements
of this Regulation
- and it does not qualify as placing the AI system on the market or
putting it into service within the meaning of this Regulation, provided
that all the conditions laid down in Article 57 or 60 are fulfilled;
Chapter VI: Measures in Support of Innovation
30
Article 57: AI Regulatory Sandboxes
Article 58: Detailed arrangements for and functioning of AI regulatory
sandboxes
Article 59: Further Processing of Personal Data for Developing Certain AI
Systems in the Public Interest in the AI Regulatory Sandbox
Article 60: Testing of High-Risk AI Systems in Real World Conditions Outside
AI Regulatory Sandboxes
Article 61: Informed consent to participate in testing in real world conditions
outside AI regulatory sandboxes
Article 62: Measures for Providers and Deployers, in Particular SMEs,
Including Start-Ups
Article 63: Derogations for specific operators
Art. 57 «AI ACT»
31
- At least one 'regulatory sandbox' in each Member State (also
allowed at a regional or local level).
- Sufficient resources must be allocated to competent authorities.
- Risks must be identified and mitigated; sandboxes can be
activated only for limited periods.
- Documentation must be published.
- A single European interface must be created with all relevant
information.
- AI operators remain liable for damages but are not
subject to administrative fines.
Art. 60 and 61 «AI ACT»
32
- «Real-world testing» is allowed, but special safeguards must be
adopted (planning, documentation, monitoring).
- Testing may last at most six months.
- Vulnerable subjects must be protected.
- Information must be provided to, and consent collected from,
the individuals involved (except in the case of 'law
enforcement’).
- It must be possible to revoke consent and exit the test area.
- Incident reporting must be ensured.
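These conditions read naturally as a compliance checklist. The sketch below is illustrative only; the field names are invented, and the real Articles contain further conditions not modelled here.

```python
from dataclasses import dataclass

# Hypothetical checklist for real-world testing safeguards;
# field names are invented for illustration.
@dataclass
class RealWorldTestPlan:
    duration_months: int       # at most six months
    documented: bool           # planning and documentation in place
    monitored: bool            # ongoing monitoring
    informed_consent: bool     # information given, consent collected
    consent_revocable: bool    # subjects can revoke and exit the test
    incident_reporting: bool   # incident reporting ensured

    def conditions_met(self) -> bool:
        return (self.duration_months <= 6
                and self.documented
                and self.monitored
                and self.informed_consent
                and self.consent_revocable
                and self.incident_reporting)
```

A plan failing any single safeguard (e.g. a twelve-month duration) would not qualify.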
Conclusions (?) /
Recommendations (!)
33
Conclusions (?) / Recommendations (!)
34
- Complex legal framework: not only the «AI ACT», but also the
«Machinery Regulation» and the GDPR (and many others)
- The «digital society» is a matter of compliance: audit and
certification are the surrogate for punishments and fines
- Could a «fully autonomous robotic surgeon» be legal?
- Regulatory sandboxes: innovation as a drive offered by the
legislator
- Research on robotics is not only a technological matter:
Bioethics and the «Oviedo Convention»?
Many thanks
35
federico.costantini@uniud.it

More Related Content

Similar to The EU ‘AI ACT’: a “risk-based” legislation for robotic surgery

Infosec Law It Web (March 2006)
Infosec Law It Web (March 2006)Infosec Law It Web (March 2006)
Infosec Law It Web (March 2006)
Lance Michalson
 
CYBER Liability and CYBER Security (nov 21, 2014)(final)
CYBER Liability and CYBER Security (nov 21, 2014)(final)CYBER Liability and CYBER Security (nov 21, 2014)(final)
CYBER Liability and CYBER Security (nov 21, 2014)(final)
Melanie Kamilah Williams
 
Internet of Things (IoT) - Hafedh Alyahmadi - May 29, 2015.pdf
Internet of Things (IoT) - Hafedh Alyahmadi - May 29, 2015.pdfInternet of Things (IoT) - Hafedh Alyahmadi - May 29, 2015.pdf
Internet of Things (IoT) - Hafedh Alyahmadi - May 29, 2015.pdf
ImXaib
 
Ethics of security and surveillance technologies opinion 28
Ethics of security and surveillance technologies opinion 28Ethics of security and surveillance technologies opinion 28
Ethics of security and surveillance technologies opinion 28
Karlos Svoboda
 

Similar to The EU ‘AI ACT’: a “risk-based” legislation for robotic surgery (20)

Looking beyond 2020 IEEE – 13th System of Systems Engineering Conference - So...
Looking beyond 2020 IEEE – 13th System of Systems Engineering Conference - So...Looking beyond 2020 IEEE – 13th System of Systems Engineering Conference - So...
Looking beyond 2020 IEEE – 13th System of Systems Engineering Conference - So...
 
Journée thématique "Évaluation d’Impact sur la Vie Privée des Applications RFID"
Journée thématique "Évaluation d’Impact sur la Vie Privée des Applications RFID"Journée thématique "Évaluation d’Impact sur la Vie Privée des Applications RFID"
Journée thématique "Évaluation d’Impact sur la Vie Privée des Applications RFID"
 
EU data protection issues in IoT
EU data protection issues in IoTEU data protection issues in IoT
EU data protection issues in IoT
 
16190734.ppt
16190734.ppt16190734.ppt
16190734.ppt
 
Ethical and Legal Considerations for Trustworthy AI
Ethical and Legal Considerations for Trustworthy AIEthical and Legal Considerations for Trustworthy AI
Ethical and Legal Considerations for Trustworthy AI
 
Call for Research Articles - 7th International Conference on Computer Science...
Call for Research Articles - 7th International Conference on Computer Science...Call for Research Articles - 7th International Conference on Computer Science...
Call for Research Articles - 7th International Conference on Computer Science...
 
Call for Papers - 7th International Conference on Computer Science and Inform...
Call for Papers - 7th International Conference on Computer Science and Inform...Call for Papers - 7th International Conference on Computer Science and Inform...
Call for Papers - 7th International Conference on Computer Science and Inform...
 
Infosec Law It Web (March 2006)
Infosec Law It Web (March 2006)Infosec Law It Web (March 2006)
Infosec Law It Web (March 2006)
 
Towards the Internet of Things
Towards the Internet of ThingsTowards the Internet of Things
Towards the Internet of Things
 
CYBER Liability and CYBER Security (nov 21, 2014)(final)
CYBER Liability and CYBER Security (nov 21, 2014)(final)CYBER Liability and CYBER Security (nov 21, 2014)(final)
CYBER Liability and CYBER Security (nov 21, 2014)(final)
 
Nicolas Petit 27 september 18 - Hard Questions of Law and AI
Nicolas Petit 27 september 18 - Hard Questions of Law and AINicolas Petit 27 september 18 - Hard Questions of Law and AI
Nicolas Petit 27 september 18 - Hard Questions of Law and AI
 
Ethical hacking, the way to get product & solution confidence and trust in an...
Ethical hacking, the way to get product & solution confidence and trust in an...Ethical hacking, the way to get product & solution confidence and trust in an...
Ethical hacking, the way to get product & solution confidence and trust in an...
 
Data science chicago_(public)
Data science chicago_(public)Data science chicago_(public)
Data science chicago_(public)
 
Kees stuurman
Kees stuurmanKees stuurman
Kees stuurman
 
Internet of Things (IoT) - Hafedh Alyahmadi - May 29, 2015.pdf
Internet of Things (IoT) - Hafedh Alyahmadi - May 29, 2015.pdfInternet of Things (IoT) - Hafedh Alyahmadi - May 29, 2015.pdf
Internet of Things (IoT) - Hafedh Alyahmadi - May 29, 2015.pdf
 
Artificial intelligence, its application and development prospects in the con...
Artificial intelligence, its application and development prospects in the con...Artificial intelligence, its application and development prospects in the con...
Artificial intelligence, its application and development prospects in the con...
 
I4ADA the hague summitCAHAI
I4ADA the hague summitCAHAII4ADA the hague summitCAHAI
I4ADA the hague summitCAHAI
 
IoT + AI + Big Data Integration Strategy Insights from Patents 3Q 2016
IoT + AI + Big Data Integration Strategy Insights from Patents 3Q 2016IoT + AI + Big Data Integration Strategy Insights from Patents 3Q 2016
IoT + AI + Big Data Integration Strategy Insights from Patents 3Q 2016
 
10th International Conference on Cybernetics & Informatics (CYBI 2023)
10th International Conference on Cybernetics & Informatics (CYBI 2023)10th International Conference on Cybernetics & Informatics (CYBI 2023)
10th International Conference on Cybernetics & Informatics (CYBI 2023)
 
Ethics of security and surveillance technologies opinion 28
Ethics of security and surveillance technologies opinion 28Ethics of security and surveillance technologies opinion 28
Ethics of security and surveillance technologies opinion 28
 

More from Federico Costantini

More from Federico Costantini (20)

L’ETICA COME “DESIGN” NELL’INTELLIGENZA ARTIFICIALE
L’ETICA COME “DESIGN” NELL’INTELLIGENZA ARTIFICIALEL’ETICA COME “DESIGN” NELL’INTELLIGENZA ARTIFICIALE
L’ETICA COME “DESIGN” NELL’INTELLIGENZA ARTIFICIALE
 
Digital transformation: Smart Working, sicurezza e dati personali
Digital transformation: Smart Working, sicurezza e dati personaliDigital transformation: Smart Working, sicurezza e dati personali
Digital transformation: Smart Working, sicurezza e dati personali
 
COVID19 vs GDPR: the case of “Immuni” Italian app
COVID19 vs GDPR: the case of “Immuni” Italian appCOVID19 vs GDPR: the case of “Immuni” Italian app
COVID19 vs GDPR: the case of “Immuni” Italian app
 
20191004 Gamification PA
20191004 Gamification PA20191004 Gamification PA
20191004 Gamification PA
 
COST Action CA16222 on Autonomous and Connected Transport –How block chain co...
COST Action CA16222 on Autonomous and Connected Transport –How block chain co...COST Action CA16222 on Autonomous and Connected Transport –How block chain co...
COST Action CA16222 on Autonomous and Connected Transport –How block chain co...
 
20181012 Intelligenza artificiale e soggezione all'azione amministrativa: il ...
20181012 Intelligenza artificiale e soggezione all'azione amministrativa: il ...20181012 Intelligenza artificiale e soggezione all'azione amministrativa: il ...
20181012 Intelligenza artificiale e soggezione all'azione amministrativa: il ...
 
20180327 Intelligenza artificiale e “computabilità giuridica” tra diritto civ...
20180327 Intelligenza artificiale e “computabilità giuridica” tra diritto civ...20180327 Intelligenza artificiale e “computabilità giuridica” tra diritto civ...
20180327 Intelligenza artificiale e “computabilità giuridica” tra diritto civ...
 
20180914 “Inaction is not an option”. Informazione, diritto e società nella p...
20180914 “Inaction is not an option”. Informazione, diritto e società nella p...20180914 “Inaction is not an option”. Informazione, diritto e società nella p...
20180914 “Inaction is not an option”. Informazione, diritto e società nella p...
 
20180220 PROFILI GIURIDICI DELLA SICUREZZA INFORMATICA NELL’INDUSTRIA 4.0
20180220 PROFILI GIURIDICI DELLA SICUREZZA INFORMATICA NELL’INDUSTRIA 4.020180220 PROFILI GIURIDICI DELLA SICUREZZA INFORMATICA NELL’INDUSTRIA 4.0
20180220 PROFILI GIURIDICI DELLA SICUREZZA INFORMATICA NELL’INDUSTRIA 4.0
 
20171031 Cosa vuol dire «essere avvocato» oggi? Il giurista tra «complessità ...
20171031 Cosa vuol dire «essere avvocato» oggi? Il giurista tra «complessità ...20171031 Cosa vuol dire «essere avvocato» oggi? Il giurista tra «complessità ...
20171031 Cosa vuol dire «essere avvocato» oggi? Il giurista tra «complessità ...
 
20170928 A (very short) introduction
20170928 A (very short) introduction20170928 A (very short) introduction
20170928 A (very short) introduction
 
20170927 Introduzione ai problemi concernenti prova come “informazione” e “in...
20170927 Introduzione ai problemi concernenti prova come “informazione” e “in...20170927 Introduzione ai problemi concernenti prova come “informazione” e “in...
20170927 Introduzione ai problemi concernenti prova come “informazione” e “in...
 
Social network, social profiling, predictive policing. Current issues and fut...
Social network, social profiling, predictive policing. Current issues and fut...Social network, social profiling, predictive policing. Current issues and fut...
Social network, social profiling, predictive policing. Current issues and fut...
 
Collecting Evidence in the «Information Society»: Theoretical Background, Cur...
Collecting Evidence in the «Information Society»: Theoretical Background, Cur...Collecting Evidence in the «Information Society»: Theoretical Background, Cur...
Collecting Evidence in the «Information Society»: Theoretical Background, Cur...
 
"Società dell'Informazione", organizzazione del lavoro e "Risorse Umane"
"Società dell'Informazione", organizzazione del lavoro e "Risorse Umane""Società dell'Informazione", organizzazione del lavoro e "Risorse Umane"
"Società dell'Informazione", organizzazione del lavoro e "Risorse Umane"
 
Problemi inerenti la “sicurezza” negli “autonomous vehicles”
Problemi inerenti la “sicurezza” negli “autonomous vehicles”Problemi inerenti la “sicurezza” negli “autonomous vehicles”
Problemi inerenti la “sicurezza” negli “autonomous vehicles”
 
Introduzione generale ai problemi della prova digitale
Introduzione generale ai problemi della prova digitaleIntroduzione generale ai problemi della prova digitale
Introduzione generale ai problemi della prova digitale
 
«Information Society» and MaaS in the European Union: current issues and futu...
«Information Society» and MaaS in the European Union: current issues and futu...«Information Society» and MaaS in the European Union: current issues and futu...
«Information Society» and MaaS in the European Union: current issues and futu...
 
POSTER: "When an algorithm decides «who has to die». Security concerns in “A...
POSTER: "When an algorithm decides «who has to die».  Security concerns in “A...POSTER: "When an algorithm decides «who has to die».  Security concerns in “A...
POSTER: "When an algorithm decides «who has to die». Security concerns in “A...
 
Società dell’Informazione e “diritto artificiale”. Il problema del “controll...
Società dell’Informazione e “diritto artificiale”.  Il problema del “controll...Società dell’Informazione e “diritto artificiale”.  Il problema del “controll...
Società dell’Informazione e “diritto artificiale”. Il problema del “controll...
 

Recently uploaded

Integrated Mother and Neonate Childwood Illness Health Care
Integrated Mother and Neonate Childwood Illness  Health CareIntegrated Mother and Neonate Childwood Illness  Health Care
Integrated Mother and Neonate Childwood Illness Health Care
ASKatoch1
 
ASSISTING WITH THE USE OF URINAL BY ANUSHRI SRIVASTAVA.pptx
ASSISTING WITH THE USE OF URINAL BY ANUSHRI SRIVASTAVA.pptxASSISTING WITH THE USE OF URINAL BY ANUSHRI SRIVASTAVA.pptx
ASSISTING WITH THE USE OF URINAL BY ANUSHRI SRIVASTAVA.pptx
AnushriSrivastav
 
ASSISTING WITH THE USE OF BED PAN BY ANUSHRI SRIVASTAVA.pptx
ASSISTING WITH THE USE OF BED PAN BY ANUSHRI SRIVASTAVA.pptxASSISTING WITH THE USE OF BED PAN BY ANUSHRI SRIVASTAVA.pptx
ASSISTING WITH THE USE OF BED PAN BY ANUSHRI SRIVASTAVA.pptx
AnushriSrivastav
 
Benefits of Dentulu's Salivary Testing.pptx
Benefits of Dentulu's Salivary Testing.pptxBenefits of Dentulu's Salivary Testing.pptx
Benefits of Dentulu's Salivary Testing.pptx
Dentulu Inc
 

Recently uploaded (20)

Best Way 30-Days Keto Meal Plan For Diet
Best Way 30-Days Keto Meal Plan For DietBest Way 30-Days Keto Meal Plan For Diet
Best Way 30-Days Keto Meal Plan For Diet
 
Virtual Health Platforms_ Revolutionizing Patient Care.pdf
Virtual Health Platforms_ Revolutionizing Patient Care.pdfVirtual Health Platforms_ Revolutionizing Patient Care.pdf
Virtual Health Platforms_ Revolutionizing Patient Care.pdf
 
Dr. Gaurav Gangwani: Leading Interventional Radiologist in Mumbai, India
Dr. Gaurav Gangwani: Leading Interventional Radiologist in Mumbai, IndiaDr. Gaurav Gangwani: Leading Interventional Radiologist in Mumbai, India
Dr. Gaurav Gangwani: Leading Interventional Radiologist in Mumbai, India
 
Importance of Diet on Dental Health.docx
Importance of Diet on Dental Health.docxImportance of Diet on Dental Health.docx
Importance of Diet on Dental Health.docx
 
Notify ME 89O1183OO2 #cALL# #gIRLS# In Chhattisgarh By Chhattisgarh #ℂall #gI...
Notify ME 89O1183OO2 #cALL# #gIRLS# In Chhattisgarh By Chhattisgarh #ℂall #gI...Notify ME 89O1183OO2 #cALL# #gIRLS# In Chhattisgarh By Chhattisgarh #ℂall #gI...
Notify ME 89O1183OO2 #cALL# #gIRLS# In Chhattisgarh By Chhattisgarh #ℂall #gI...
 
Integrated Mother and Neonate Childwood Illness Health Care
Integrated Mother and Neonate Childwood Illness  Health CareIntegrated Mother and Neonate Childwood Illness  Health Care
Integrated Mother and Neonate Childwood Illness Health Care
 

The EU ‘AI ACT’: a “risk-based” legislation for robotic surgery

  • 1. The EU ‘AI ACT’: a “risk-based” legislation for robotic surgery Workshop at ICRA 2024 Autonomy in Robotics Surgery: State of the art, technical and regulatory challenges for clinical application 1 DIPARTIMENTO DI SCIENZE GIURIDICHE Prof. Federico Costantini Department of Legal Sciences, University of Udine, IT
  • 2. Summary 2 (1) The legal framework of the «AI ACT»: building the EU «Digital Single Market» (2) A quick overview of the «AI ACT»: «risk-based» legislation and governance set-up (3) The «AI ACT» and Robotics surgery: challenges and opportunities (4) Conclusions (?) / Recommendations (!)
  • 3. (1) The legal framework of the «AI ACT» Building the EU «Digital Single Market» 3
  • 4. Reg. (UE) 910/2014 “eIDAS” E-signatures Dir. (UE) 2016/1148 «N.I.S.» network information security «Privacy Package» Reg. (UE) 2016/679 “GDPR” Dir. (UE) 2016/680 «criminal system» Dir. (UE) 2016/681 «PNR» Proposal of Reg. «E-Privacy» COM(2017)10 Reg. (UE) 2018/1807 «non-personal data» Dir. (UE) 2019/790 copyright in the Digital Market Digital Service Package 15/12/20 COM(2020) 825 final Proposal Digital Service Act COM(2020) 842 final Proposal Digital Markets Act Reg. (UE) 2019/881 Cybersecurity Act Information security COM(2020) 823 final Proposal NIS 2.0 Dir. (EU) 2022/2557 Critical Entities Resilience P9_TA(2021)0144 online terrorism content spreading Beginning 2000 Dir. (UE) 95/46/CE data protection Dir. (UE) 2002/58/CE privacy in electronic communications Dir. (UE) 1999/93/CE electronic signature Dir. (UE) 2000/31/CE electronic commerce Dir. (UE) 2001/29/CE digital copyright EU Data Strategy “Data Governance Act” (Regulation (EU) 2022/868) «Data Act» (Regulation (EU) 2023/2854) Timeline «digital single market» Reg. (UE) 2023/1230 «Machinery Regulation» COM/2020/593 final “Crypto-assets” Reg. (UE) 2022/2065 Reg. (UE) 2022/1925 Reg. (UE) 2024/? Dir. 2006/42/CE «Machinery Directive» COM/2022/197 European Health Data Space DIRECTIVE REGULATION Artificial Intelligence 20/10/20 P9_TA(2020)0275 Ethics and AI (proposal Artificial Intelligence Law COM/2021/206) P9_TA(2020)0276 Civil liability AI (Proposal Reg.) P9_TA(2020)0277 Intellectual property AI (Res.) P9_TA(2020)0274 Digital Services and Fundamental Rights (Res.) P9_TA(2021)0009, «Killer Robots» 20/1/2021 Reg. (EU) 2022/2554 “DORA”
  • 5. Key points of (almost) 30 years of EU legislation 5 1.- EU territorial enlargement EU «digital single market» expansion 2.- different legal tools Directives -> Regulations: strengthening legislation Directives (EU gives the objectives, Member States follow) -> Regulations (obligations for States, companies, citizens) 3.- Increasing cooperation of Internet providers with public authorities e.g. cybersecurity, copyright 4.- technological innovation 2006: social networks, 2020 «co-bots», 2023 LLM 5.- Governance Fundamental rights / economic growth / institutional stability 6.- Complexity of sources of law e.g. data protection: guidelines, Codes of Conduct, ISO standards
  • 6. (2) A quick overview of the «AI ACT» «risk-based» legislation and governance set-up 6
  • 8. Definition of «AI system» (art. 3 n. 1) «An AI system means - a machine-based system - designed to operate with varying levels of autonomy and - that may exhibit adaptiveness after deployment and - that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as - predictions, content, recommendations, or - decisions - that can influence physical or virtual environments».
  • 9. Why the «AI ACT» is called «risk-based legislation» 11 Unacceptable risks: prohibited «practices» (Art. 5). High risks: strict obligations and requirements. Limited risks: transparency obligations (Art. 50). Minimal risks: no specific requirements. GP-AI models (a «hidden» tier): transparency obligations, with additional obligations for models posing «systemic risks».
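The tiered structure above can be sketched as a toy lookup. The tier names and their consequences follow the slide; the function and dictionary are purely illustrative, not a legal classification tool:

```python
# Toy sketch of the AI Act's four risk tiers and the main legal consequence
# attached to each (illustrative only; the actual rules are in Art. 5,
# Chapter III, and Art. 50 of the Regulation).

RISK_TIERS = {
    "unacceptable": "prohibited practice (Art. 5)",
    "high": "strict requirements and obligations (Chapter III)",
    "limited": "transparency obligations (Art. 50)",
    "minimal": "no specific requirements",
}

def obligation_for(tier: str) -> str:
    """Return the main legal consequence attached to a risk tier."""
    try:
        return RISK_TIERS[tier]
    except KeyError:
        raise ValueError(f"unknown risk tier: {tier!r}")

print(obligation_for("high"))  # strict requirements and obligations (Chapter III)
```

Note that GP-AI models sit outside this pyramid: they carry their own transparency duties regardless of tier, which is why the slide marks them as «hidden».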
  • 10. List of 8 prohibited «AI practices» (art. 5 «AI ACT») 12 (a) subliminal techniques beyond a person’s consciousness or purposefully manipulative or deceptive technique […] (b) exploits any of the vulnerabilities of a natural person or a specific group of persons […] (c) evaluation or classification of natural persons or groups of persons […] with the social score leading to detriment […] (i) social context […] (ii) social behaviour […] (d) assess or predict the risk of a natural person committing a criminal offence, based solely on the profiling […][] (e) create or expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage […] (f) infer emotions of a natural person in the areas of workplace and education institutions […] (g) the use of biometric categorisation systems that categorise individually natural persons based on their biometric data to deduce or infer […] (h) use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purposes of law enforcement, unless […] -> EXCEPTIONS (search of victims, dangerous criminals, terrorists) Unacceptable risks
  • 11. Chapter III - Section 2: Requirements for High-Risk AI System 13 Article 8: Compliance with the Requirements Article 9: Risk Management System Article 10: Data and Data Governance Article 11: Technical Documentation Article 12: Record-Keeping Article 13: Transparency and Provision of Information to Deployers Article 14: Human Oversight Article 15: Accuracy, Robustness and Cybersecurity High Risks
  • 12. Chapter III - Section 3: Obligations of Providers and Deployers of High-Risk AI Systems and Other Parties 14 Article 16: Obligations of Providers of High-Risk AI Systems Article 17: Quality Management System Article 18: Documentation Keeping Article 19: Automatically Generated Logs Article 20: Corrective Actions and Duty of Information Article 21: Cooperation with Competent Authorities Article 22: Authorised Representatives of providers of high-risk AI systems Article 23: Obligations of Importers Article 24: Obligations of Distributors Article 25: Responsibilities Along the AI Value Chain Article 26: Obligations of Deployers of High-Risk AI Systems Article 27: Fundamental Rights Impact Assessment for High-Risk AI Systems High Risks
  • 13. Chapter IV: Transparency Obligations for Providers and Deployers of Certain AI Systems and GPAI Models 15 «intended to interact directly with natural persons» (art. 50 par.1)  the natural persons concerned are informed that they are interacting with an AI system «generating synthetic audio, image, video or text content» (art. 50 par. 2)  the outputs of the AI system are marked in a machine-readable format and detectable as artificially generated or manipulated «emotion recognition system or a biometric categorisation system» (art. 50 par. 3)  Inform exposed persons  Process according to GDPR […] «AI system that generates or manipulates image, audio or video content constituting a deep fake» (art. 50 par. 4)  disclose that the content has been artificially generated or manipulated Limited Risks
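Art. 50(2) requires synthetic outputs to be "marked in a machine-readable format and detectable as artificially generated". A minimal sketch of what such a marker could look like, assuming a simple JSON provenance envelope — the field names here are invented for illustration; real deployments would rely on an established provenance standard (e.g. C2PA) rather than this schema:

```python
import json

def mark_as_ai_generated(content: str, system_name: str) -> str:
    """Wrap generated content in a machine-readable provenance envelope.
    Hypothetical schema: Art. 50(2) mandates the marking, not this format."""
    envelope = {
        "content": content,
        "provenance": {
            "ai_generated": True,          # detectable as artificial (Art. 50(2))
            "generating_system": system_name,
        },
    }
    return json.dumps(envelope)

def is_marked(serialized: str) -> bool:
    """Check whether a payload carries the AI-generated marker."""
    try:
        return json.loads(serialized)["provenance"]["ai_generated"] is True
    except (ValueError, KeyError, TypeError):
        return False
```

The point of the machine-readable requirement is exactly the second function: downstream software, not only humans, must be able to detect the marking.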
  • 14. (decentralised) Governance 19 - Governance (Chapter VII) - European AI Office - European Artificial Intelligence Board (1 representative per Member State) - National Competent Authorities - Certification mechanism (binding for high risk AI / voluntary for others) - Database for high risk systems (Chapter VIII) - Market monitoring (Chapter IX) –> competition! Commission Decision of 24 January 2024 establishing the European Artificial Intelligence Office https://eur-lex.europa.eu/eli/C/2024/1459/oj
  • 15. (3) The «AI ACT» and Robotics surgery: challenges and opportunities Challenge = high risk classification / opportunities = «regulatory sandboxes» 20
  • 16. Classification of robotic surgery according to the «AI ACT» as «High risk AI System» 21 When is an AI system a «high risk AI system»? Three alternatives (1) The AI is a certain product (2) The AI is a safety component of a certain product (3) The AI is included in a special list of applications (e.g. law enforcement, migration) ANNEX I ANNEX III (not relevant)
  • 17. Classification of robotic surgery according to the «AI ACT» as «High risk AI System» 22 List of ANNEX I (extract) 1. Directive 2006/42/EC of the European Parliament and of the Council of 17 May 2006 on machinery, and amending Directive 95/16/EC (OJ L 157, 9.6.2006, p. 24) [as repealed by the Machinery Regulation]; […] 11. Regulation (EU) 2017/745 of the European Parliament and of the Council of 5 April 2017 on medical devices, amending Directive 2001/83/EC, Regulation (EC) No 178/2002 and Regulation (EC) No 1223/2009 and repealing Council Directives 90/385/EEC and 93/42/EEC (OJ L 117, 5.5.2017, p. 1); 12. Regulation (EU) 2017/746 of the European Parliament and of the Council of 5 April 2017 on in vitro diagnostic medical devices and repealing Directive 98/79/EC and Commission Decision 2010/227/EU (OJ L 117, 5.5.2017, p. 176). Robotic surgery seems to fall into «high risk» AI systems
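The three alternatives on the previous slide reduce to a simple disjunction. This is a deliberately simplified sketch — the function name and boolean flags are illustrative, and the real test in Art. 6 AI Act includes further conditions and carve-outs:

```python
def is_high_risk(is_annex_i_product: bool,
                 is_safety_component_of_annex_i_product: bool,
                 listed_in_annex_iii: bool) -> bool:
    """An AI system is high-risk if any of the three alternatives holds
    (simplified; see Art. 6 AI Act for the full conditions)."""
    return (is_annex_i_product
            or is_safety_component_of_annex_i_product
            or listed_in_annex_iii)

# A surgical robot regulated as a medical device under Reg. (EU) 2017/745
# (Annex I, point 11) would typically satisfy the first or second alternative:
surgical_robot_high_risk = is_high_risk(True, False, False)
```

This is why the slide concludes that robotic surgery "seems to fall into" the high-risk category: the Annex I route via the Medical Devices Regulation applies, and Annex III need not be invoked at all.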
  • 18. Requirement of «High Risk AI System» 23 Human Oversight (art. 14) 1. High-risk AI systems shall be designed and developed in such a way, including with appropriate human-machine interface tools, that they can be effectively overseen by natural persons during the period in which they are in use. 2. Human oversight shall aim to prevent or minimise the risks to health, safety or fundamental rights that may emerge when a high-risk AI system is used - in accordance with its intended purpose or - under conditions of reasonably foreseeable misuse, in particular where such risks persist despite the application of other requirements set out in this Section. […]
  • 19. Requirement of «High Risk AI System» 24 Human Oversight (art. 14) […] 3. The oversight measures shall be - commensurate with - the risks, - level of autonomy and - context of use of the high-risk AI system, - and shall be ensured through either one or both of the following types of measures: - (a) measures identified and built, when technically feasible, into the high-risk AI system by the provider before it is placed on the market or put into service; - (b) measures - identified by the provider before placing the high-risk AI system on the market or putting it into service and - that are appropriate to be implemented by the deployer. […]
  • 20. Requirement of «High Risk AI System» 25 Human Oversight (art. 14) […] 4. For the purpose of implementing paragraphs 1, 2 and 3, the high-risk AI system shall be provided to the deployer in such a way that natural persons to whom human oversight is assigned are enabled, as appropriate and proportionate: (a) to properly understand the relevant capacities and limitations of the high-risk AI system and be able to duly monitor its operation, including in view of detecting and addressing anomalies, dysfunctions and unexpected performance; (b) to remain aware of the possible tendency of automatically relying or over-relying on the output produced by a high-risk AI system (automation bias), in particular for high-risk AI systems used to provide information or recommendations for decisions to be taken by natural persons; (c) to correctly interpret the high-risk AI system’s output, taking into account, for example, the interpretation tools and methods available; (d) to decide, in any particular situation, not to use the high-risk AI system or to otherwise disregard, override or reverse the output of the high-risk AI system; (e) to intervene in the operation of the high-risk AI system or interrupt the system through a ‘stop’ button or a similar procedure that allows the system to come to a halt in a safe state. «Human» training Procedures
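Points (d) and (e) of Art. 14(4) translate naturally into two interface capabilities: overriding the system's output and halting it in a safe state. A minimal sketch of such an oversight interface, with invented class and method names (a toy model, not a certified design):

```python
class OverseenAISystem:
    """Toy model of the Art. 14(4) oversight capabilities:
    override the output (point d) and stop into a safe state (point e)."""

    def __init__(self):
        self.state = "RUNNING"
        self.last_output = None

    def produce_output(self, output):
        if self.state != "RUNNING":
            raise RuntimeError("system is halted")
        self.last_output = output
        return output

    def override_output(self, human_decision):
        """Art. 14(4)(d): the overseer may disregard or reverse the output."""
        self.last_output = human_decision
        return human_decision

    def stop(self):
        """Art. 14(4)(e): a 'stop' procedure bringing the system to a safe halt."""
        self.state = "SAFE_HALT"

robot = OverseenAISystem()
robot.produce_output("suggested incision path A")
robot.override_output("surgeon-selected path B")
robot.stop()
```

In a surgical context, "halt in a safe state" is the hard part: stopping mid-procedure must itself be safe, which is why Art. 14(3) requires oversight measures commensurate with the context of use.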
  • 21. Art. 27: «Fundamental rights impact assessment for high risk AI systems» (only for those which are included in ANNEX III list, but interesting) (a) a description of the deployer’s processes in which the high-risk AI system will be used in line with its intended purpose; (b) a description of the period of time within which, and the frequency with which, each high-risk AI system is intended to be used; (c) the categories of natural persons and groups likely to be affected by its use in the specific context; (d) the specific risks of harm likely to have an impact on the categories of natural persons or groups of persons identified pursuant to point (c) of this paragraph, taking into account the information given by the provider pursuant to Article 13; (e) a description of the implementation of human oversight measures, according to the instructions for use; (f) the measures to be taken in the case of the materialisation of those risks, including the arrangements for internal governance and complaint mechanisms. Design Documentation «Human» training
  • 22. «Regulatory sandbox» as a worldwide «megatrend» and EU legislative practice 27 - «secure processing environment», art. 2 no. (20) "Data Governance Act", Reg. EU 2022/868 - «regulatory sandbox» "Artificial Intelligence Act" COM/2021/206 final - Cryptocurrencies and fintech https://www.bancaditalia.it/focus/sandbox/?dotcache=refresh https://ec.europa.eu/digital-building-blocks/sites/display/EBSI/Sandbox+Project
  • 23. Definition art. 3 «AI ACT» 28 (55) ‘AI regulatory sandbox’ means - a controlled framework - set up by a competent authority - which offers providers or prospective providers of AI systems the possibility to develop, train, validate and test, where appropriate in real-world conditions, - an innovative AI system, - pursuant to a sandbox plan - for a limited time - under regulatory supervision;
  • 24. Definitions art. 3 «AI ACT» 29 (57) ‘testing in real-world conditions’ means - the temporary testing of an AI system for its intended purpose - in real-world conditions outside a laboratory or otherwise simulated environment, - with a view to - gathering reliable and robust data and - to assessing and verifying the conformity of the AI system with the requirements of this Regulation - and it does not qualify as placing the AI system on the market or putting it into service within the meaning of this Regulation, provided that all the conditions laid down in Article 57 or 60 are fulfilled;
  • 25. Chapter VI: Measures in Support of Innovation 30 Article 57: AI Regulatory Sandboxes Article 58: Detailed arrangements for and functioning of AI regulatory sandboxes Article 59: Further Processing of Personal Data for Developing Certain AI Systems in the Public Interest in the AI Regulatory Sandbox Article 60: Testing of High-Risk AI Systems in Real World Conditions Outside AI Regulatory Sandboxes Article 61: Informed consent to participate in testing in real world conditions outside AI regulatory sandboxes Article 62: Measures for Providers and Deployers, in Particular SMEs, Including Start-Ups Article 63: Derogations for specific operators
  • 26. Art. 57 «AI ACT» 31 - at least one 'regulatory sandbox' for each Member State (but allowed at a regional or local level). - Sufficient resources must be allocated to competent authorities. - Risks must be identified, and sandboxes may operate only for limited periods. - Documentation must be published. - Risks must be mitigated. - A single European interface must be created with all relevant information. - AI operators remain responsible for damages but are not subject to administrative penalties.
  • 27. Art. 60 and 61 «AI ACT» 32 - For «real world testing», but special safeguards must be adopted (planning, documentation, monitoring). - The testing time must be a maximum of 6 months. - Vulnerable subjects must be protected. - Information must be provided and consent must be collected from the involved individuals (except in the case of 'law enforcement’). - It must be possible to revoke consent and exit the test area. - Incident reporting must be ensured
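The two central safeguards on this slide — the six-month testing window (Art. 60) and consent that can be revoked at any time (Art. 61) — can be sketched as simple compliance checks. The class and its names are invented for illustration; only the constraints come from the slide:

```python
from datetime import date, timedelta

MAX_TESTING_DAYS = 183  # "maximum of 6 months" (Art. 60), approximated in days

class RealWorldTest:
    """Toy tracker for Art. 60/61 safeguards: testing window, revocable consent."""

    def __init__(self, start: date):
        self.start = start
        self.consents: set[str] = set()

    def within_time_limit(self, today: date) -> bool:
        """Art. 60: testing must not exceed the six-month window."""
        return (today - self.start) <= timedelta(days=MAX_TESTING_DAYS)

    def record_consent(self, subject_id: str):
        """Art. 61: informed consent must be collected before involvement."""
        self.consents.add(subject_id)

    def revoke_consent(self, subject_id: str):
        """Art. 61: consent can be revoked at any time, ending involvement."""
        self.consents.discard(subject_id)

    def may_involve(self, subject_id: str, today: date) -> bool:
        """A subject may be involved only with live consent, inside the window."""
        return subject_id in self.consents and self.within_time_limit(today)
```

The sketch makes the structure of the safeguards visible: both conditions must hold simultaneously, so revoking consent or exceeding the window each independently halts lawful involvement.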
  • 29. Conclusions (?) / Recommendations (!) 34 - Complex legal framework: not only the «AI ACT», but also the «Machinery Regulation» and the GDPR (and many others) - The «digital society» is a matter of compliance: audit and certification are the surrogate of punishments and fines - Could a «fully autonomous robotic surgeon» be legal? - Regulatory sandboxes: innovation as a drive offered by the legislator - Research on robotics is not only a technological matter: Bioethics and the «Oviedo Convention»?