The long-awaited European Union “Artificial Intelligence Act” was recently approved (13 March 2024). Even though it has not yet been published – for this reason we may still refer to it as COM(2021)206 – and despite the fact that it will become fully applicable only two years after its publication, it has drawn the attention of the international community of AI experts, as it is the first piece of legislation worldwide regulating such technologies. This contribution presents the “AI ACT” with a focus on its most relevant features regarding robotic surgery. After a short overview of its background – a very complex legal framework built by the EU over the last 25 years – I will offer a summary of its provisions, which result from the “risk-based” approach adopted by the EU legislator. Then I will address “high risk” AI systems, analysing the obligations that not only manufacturers but also providers will need to fulfil, and highlighting those which are most challenging in the sector of robotic surgery. Finally, I will offer a few concluding remarks, concerns and recommendations.
The EU ‘AI ACT’: a “risk-based” legislation for robotic surgery
1. The EU ‘AI ACT’: a “risk-based”
legislation for robotic surgery
Workshop at ICRA 2024
Autonomy in Robotics Surgery: State of the art,
technical and regulatory challenges for clinical
application
Prof. Federico Costantini
Department of Legal Sciences, University of Udine, IT
2. Summary
(1) The legal framework of the «AI ACT»: building the EU «Digital
Single Market»
(2) A quick overview of the «AI ACT»: «risk-based» legislation and
governance set-up
(3) The «AI ACT» and Robotics surgery: challenges and
opportunities
(4) Conclusions (?) / Recommendations (!)
3. (1) The legal framework of
the «AI ACT»
Building the EU «Digital Single Market»
4. Timeline of the «digital single market»

Beginning 2000
- Dir. 95/46/EC data protection
- Dir. 1999/93/EC electronic signature
- Dir. 2000/31/EC electronic commerce
- Dir. 2001/29/EC digital copyright
- Dir. 2002/58/EC privacy in electronic communications

«Privacy Package»
- Reg. (EU) 2016/679 «GDPR»
- Dir. (EU) 2016/680 «criminal matters»
- Dir. (EU) 2016/681 «PNR»
- Proposal Reg. «e-Privacy» COM(2017)10

Information security
- Dir. (EU) 2016/1148 «NIS» network and information security
- Reg. (EU) 2019/881 Cybersecurity Act
- COM(2020) 823 final Proposal «NIS 2.0»
- Dir. (EU) 2022/2557 Critical Entities Resilience
- Reg. (EU) 2022/2554 «DORA»

Artificial Intelligence 20/10/20
- P9_TA(2020)0275 Ethics and AI (proposal Artificial Intelligence Act COM/2021/206)
- P9_TA(2020)0276 Civil liability and AI (proposal Reg.)
- P9_TA(2020)0277 Intellectual property and AI (Res.)
- P9_TA(2020)0274 Digital Services and Fundamental Rights (Res.)
- P9_TA(2021)0009 «Killer Robots» 20/1/2021
- Reg. (EU) 2024/? «AI Act»

Digital Services Package 15/12/20
- COM(2020) 825 final Proposal Digital Services Act -> Reg. (EU) 2022/2065
- COM(2020) 842 final Proposal Digital Markets Act -> Reg. (EU) 2022/1925

EU Data Strategy
- «Data Governance Act» Reg. (EU) 2022/868
- «Data Act» Reg. (EU) 2023/2854
- COM/2022/197 European Health Data Space

Other
- Reg. (EU) 910/2014 «eIDAS» e-signatures
- Reg. (EU) 2018/1807 non-personal data
- Dir. (EU) 2019/790 copyright in the Digital Single Market
- P9_TA(2021)0144 online terrorist content spreading
- COM/2020/593 final «Crypto-assets»
- Dir. 2006/42/EC «Machinery Directive» -> Reg. (EU) 2023/1230 «Machinery Regulation»
5. Key points of (almost) 30 years of EU legislation
1.- EU territorial enlargement -> expansion of the EU «digital single market»
2.- Different legal tools, from Directives to Regulations: strengthening legislation (Directives: the EU sets the objectives, Member States implement; Regulations: direct obligations for States, companies and citizens)
3.- Increasing cooperation between Internet providers and public authorities (e.g. cybersecurity, copyright)
4.- Technological innovation: 2006 social networks, 2020 «co-bots», 2023 LLMs
5.- Governance: fundamental rights / economic growth / institutional stability
6.- Complexity of the sources of law (e.g. data protection: guidelines, Codes of Conduct, ISO standards)
6. (2) A quick overview of the
«AI ACT»
«risk-based» legislation and governance set-up
8. Definition of «AI system» (art. 3 n. 1)
«An AI system means
- a machine-based system
- designed to operate with varying levels of autonomy and
- that may exhibit adaptiveness after deployment and
- that, for explicit or implicit objectives, infers, from the input it
receives, how to generate outputs such as
- predictions, content, recommendations, or
- decisions
- that can influence physical or virtual environments».
9. Why the «AI ACT» is called «risk-based legislation»
- Unacceptable risks -> prohibited «practices» (Art. 5)
- High risks -> requirements and strict obligations
- Limited risks -> transparency obligations (Art. 50)
- Minimal risks -> no specific requirements
- (hidden layers of the pyramid) GP-AI models -> transparency; GP-AI models with «systemic risks» -> additional obligations
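The tiered logic above can be sketched in a few lines of code. This is an illustrative sketch only: the tier names and the consequence strings are paraphrased from the pyramid on this slide, not quoted from the Act.

```python
# Illustrative sketch: the four risk tiers of the «risk-based» approach and
# the regulatory consequence attached to each (simplified paraphrase).

RISK_TIERS = {
    "unacceptable": "prohibited practices (Art. 5)",
    "high": "requirements and strict obligations",
    "limited": "transparency obligations (Art. 50)",
    "minimal": "no specific requirements",
}

def consequence(tier: str) -> str:
    """Return the regulatory consequence attached to a risk tier."""
    if tier not in RISK_TIERS:
        raise ValueError(f"unknown risk tier: {tier!r}")
    return RISK_TIERS[tier]

print(consequence("high"))  # requirements and strict obligations
```

The point of the sketch is that the legal consequence follows mechanically from the classification: all the interpretive work lies in assigning the tier, not in deriving the obligations.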
10. List of 8 prohibited «AI practices» (art. 5 «AI ACT»)
(a) subliminal techniques beyond a person’s consciousness or purposefully manipulative or deceptive techniques
[…]
(b) exploits any of the vulnerabilities of a natural person or a specific group of persons […]
(c) evaluation or classification of natural persons or groups of persons […] with the social score leading to
detriment […]
(i) social context […]
(ii) social behaviour […]
(d) assess or predict the risk of a natural person committing a criminal offence, based solely on the profiling […][]
(e) create or expand facial recognition databases through the untargeted scraping of facial images from the
internet or CCTV footage […]
(f) infer emotions of a natural person in the areas of workplace and education institutions […]
(g) the use of biometric categorisation systems that categorise individually natural persons based on their
biometric data to deduce or infer […]
(h) use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purposes of
law enforcement, unless […] -> EXCEPTIONS (search of victims, dangerous criminals, terrorists)
Unacceptable risks
11. Chapter III - Section 2: Requirements for High-Risk AI Systems
Article 8: Compliance with the Requirements
Article 9: Risk Management System
Article 10: Data and Data Governance
Article 11: Technical Documentation
Article 12: Record-Keeping
Article 13: Transparency and Provision of Information to Deployers
Article 14: Human Oversight
Article 15: Accuracy, Robustness and Cybersecurity
High Risks
12. Chapter III - Section 3: Obligations of Providers and
Deployers of High-Risk AI Systems and Other Parties
Article 16: Obligations of Providers of High-Risk AI Systems
Article 17: Quality Management System
Article 18: Documentation Keeping
Article 19: Automatically Generated Logs
Article 20: Corrective Actions and Duty of Information
Article 21: Cooperation with Competent Authorities
Article 22: Authorised Representatives of providers of high-risk AI systems
Article 23: Obligations of Importers
Article 24: Obligations of Distributors
Article 25: Responsibilities Along the AI Value Chain
Article 26: Obligations of Deployers of High-Risk AI Systems
Article 27: Fundamental Rights Impact Assessment for High-Risk AI Systems
High Risks
13. Chapter IV: Transparency Obligations for Providers
and Deployers of Certain AI Systems and GPAI Models
«intended to interact directly with natural persons» (art. 50 par.1)
the natural persons concerned are informed that they are interacting with an AI system
«generating synthetic audio, image, video or text content» (art. 50 par. 2)
the outputs of the AI system are marked in a machine-readable format and detectable as artificially generated or
manipulated
«emotion recognition system or a biometric categorisation system» (art. 50 par. 3)
Inform exposed persons
Process according to GDPR […]
«AI system that generates or manipulates image, audio or video content constituting a deep fake» (art. 50 par. 4)
disclose that the content has been artificially generated or manipulated
Limited Risks
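Art. 50 par. 2 requires that synthetic outputs be marked «in a machine-readable format», but the Act does not prescribe a concrete scheme. The JSON provenance record below is a purely hypothetical marker format, sketched only to make the idea of machine-readable marking concrete.

```python
# Hypothetical marker scheme (not prescribed by the AI Act): a JSON
# provenance record attached to AI-generated content so that the output
# is detectable as artificially generated by automated tools.
import json

def mark_as_synthetic(content: bytes, generator: str) -> str:
    """Build a machine-readable provenance record for AI-generated content."""
    record = {
        "ai_generated": True,           # detectable as artificially generated
        "generator": generator,         # which system produced the content
        "content_length": len(content),
    }
    return json.dumps(record)

marker = mark_as_synthetic(b"<synthetic image bytes>", "example-model")
print(marker)
```

In practice such a record would travel with the content (e.g. as embedded metadata), so that downstream software can detect the marking without human inspection.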
14. (decentralised) Governance
- Governance (Chapter VII)
- European AI Office
- European Artificial Intelligence Board (one representative per Member State)
- National competent authorities
- Certification mechanism (binding for high-risk AI / voluntary for others)
- Database for high-risk systems (Chapter VIII)
- Market monitoring (Chapter IX) -> competition!
Commission Decision of 24 January 2024 establishing the European Artificial Intelligence Office
https://eur-lex.europa.eu/eli/C/2024/1459/oj
15. (3) The «AI ACT» and Robotics
surgery: challenges and opportunities
Challenge = high-risk classification / opportunity = «regulatory sandboxes»
16. Classification of robotic surgery according to the «AI ACT» as «High risk
AI System»
When is an AI system a «high-risk AI system»?
Three alternatives
(1) The AI is a certain product
(2) The AI is a safety component of
a certain product
(3) The AI is included in a special list
of applications (e.g. law
enforcement, migration)
ANNEX I (alternatives 1 and 2)
ANNEX III (alternative 3 - not relevant here)
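The three alternatives form a simple disjunction, which can be sketched as follows. This is a hedged illustration: the data structures and names are hypothetical, and the Annex I / Annex III sets below are tiny extracts, not the full lists.

```python
# Hedged sketch of the classification logic on this slide: an AI system is
# «high risk» if (1) it is itself a product covered by Annex I legislation,
# (2) it is a safety component of such a product, or (3) it falls into the
# Annex III list of applications. Sets are illustrative extracts only.
from typing import Optional

ANNEX_I_PRODUCTS = {"machinery", "medical device", "in vitro diagnostic device"}
ANNEX_III_AREAS = {"law enforcement", "migration"}  # extract only

def is_high_risk(product_type: Optional[str],
                 safety_component_of: Optional[str],
                 application_area: Optional[str]) -> bool:
    """Return True when any of the three alternatives applies."""
    return (
        product_type in ANNEX_I_PRODUCTS
        or safety_component_of in ANNEX_I_PRODUCTS
        or application_area in ANNEX_III_AREAS
    )

# A surgical robot is a medical device under Reg. (EU) 2017/745
# (Annex I, no. 11 on the next slide):
print(is_high_risk("medical device", None, None))  # True
```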
17. Classification of robotic surgery according to the «AI ACT» as «High risk
AI System»
List of ANNEX I (extract)
1. Directive 2006/42/EC of the European Parliament and of the Council of 17 May
2006 on machinery, and amending Directive 95/16/EC (OJ L 157, 9.6.2006, p. 24)
[as repealed by the Machinery Regulation];
[…]
11. Regulation (EU) 2017/745 of the European Parliament and of the Council of 5
April 2017 on medical devices, amending Directive 2001/83/EC, Regulation (EC)
No 178/2002 and Regulation (EC) No 1223/2009 and repealing Council Directives
90/385/EEC and 93/42/EEC (OJ L 117, 5.5.2017, p. 1);
12. Regulation (EU) 2017/746 of the European Parliament and of the Council of 5
April 2017 on in vitro diagnostic medical devices and repealing Directive
98/79/EC and Commission Decision 2010/227/EU (OJ L 117, 5.5.2017, p. 176).
Robotic surgery therefore seems to fall within the «high risk» AI system category
18. Requirements for «High Risk AI Systems»
Human Oversight (art. 14)
1. High-risk AI systems shall be designed and developed in such a way,
including with appropriate human-machine interface tools, that they can
be effectively overseen by natural persons during the period in which they
are in use.
2. Human oversight shall aim to prevent or minimise the risks to health,
safety or fundamental rights that may emerge when a high-risk AI system is
used
- in accordance with its intended purpose or
- under conditions of reasonably foreseeable misuse, in particular where
such risks persist despite the application of other requirements set out
in this Section.
[…]
19. Requirements for «High Risk AI Systems»
Human Oversight (art. 14)
[…]
3. The oversight measures shall be
- commensurate with
- the risks,
- level of autonomy and
- context of use of the high-risk AI system,
- and shall be ensured through either one or both of the following types of measures:
- (a) measures identified and built, when technically feasible, into the high-risk AI system by the provider
before it is placed on the market or put into service;
- (b) measures
- identified by the provider before placing the high-risk AI system on the market or putting it into service and
- that are appropriate to be implemented by the deployer.
[…]
20. Requirements for «High Risk AI Systems»
Human Oversight (art. 14)
[…]
4. For the purpose of implementing paragraphs 1, 2 and 3, the high-risk AI system shall be
provided to the deployer in such a way that natural persons to whom human oversight is
assigned are enabled, as appropriate and proportionate:
(a) to properly understand the relevant capacities and limitations of the high-risk AI system
and be able to duly monitor its operation, including in view of detecting and addressing
anomalies, dysfunctions and unexpected performance;
(b) to remain aware of the possible tendency of automatically relying or over-relying on the
output produced by a high-risk AI system (automation bias), in particular for high-risk AI systems
used to provide information or recommendations for decisions to be taken by natural persons;
(c) to correctly interpret the high-risk AI system’s output, taking into account, for example, the
interpretation tools and methods available;
(d) to decide, in any particular situation, not to use the high-risk AI system or to otherwise
disregard, override or reverse the output of the high-risk AI system;
(e) to intervene in the operation of the high-risk AI system or interrupt the system through a
‘stop’ button or a similar procedure that allows the system to come to a halt in a safe state.
«Human» training
Procedures
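Art. 14(4)(e) requires a ‘stop’ button or similar procedure that halts the system «in a safe state». The controller below is entirely hypothetical; it only illustrates the design idea of an always-available human interrupt that first reaches a known safe configuration and then blocks any further actuation.

```python
# Hedged sketch of Art. 14(4)(e): a hypothetical controller with a human
# oversight interrupt. Once stopped, the system refuses further actuation.
from enum import Enum

class State(Enum):
    RUNNING = "running"
    SAFE_HALT = "safe_halt"

class OverseenSystem:
    def __init__(self) -> None:
        self.state = State.RUNNING
        self.actuation_steps = 0

    def step(self) -> None:
        """One control cycle; does nothing once halted."""
        if self.state is State.SAFE_HALT:
            return
        self.actuation_steps += 1  # ... normal operation ...

    def stop(self) -> None:
        """Oversight interrupt: reach a safe configuration, then halt."""
        # ... e.g. retract instruments, release actuators ...
        self.state = State.SAFE_HALT

robot = OverseenSystem()
robot.step()
robot.stop()
robot.step()  # ignored: the system stays in the safe state
print(robot.state.name, robot.actuation_steps)  # SAFE_HALT 1
```

The design point is that the halted state is absorbing: no code path resumes actuation without an explicit, human-initiated restart.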
21. Art. 27: «Fundamental rights impact assessment for high risk AI systems»
(only for systems included in the ANNEX III list, but still of interest)
(a) a description of the deployer’s processes in which the high-risk AI
system will be used in line with its intended purpose;
(b) a description of the period of time within which, and the frequency
with which, each high-risk AI system is intended to be used;
(c) the categories of natural persons and groups likely to be affected by
its use in the specific context;
(d) the specific risks of harm likely to have an impact on the categories
of natural persons or groups of persons identified pursuant to point (c) of
this paragraph, taking into account the information given by the provider
pursuant to Article 13;
(e) a description of the implementation of human oversight measures,
according to the instructions for use;
(f) the measures to be taken in the case of the materialisation of those
risks, including the arrangements for internal governance and complaint
mechanisms.
Design
Documentation
«Human» training
22. «Regulatory sandbox» as a worldwide «megatrend» and EU legislative
practice
- «secure processing environment», art. 2 no. (20) «Data Governance Act», Reg. (EU) 2022/868
- «regulatory sandbox», «Artificial Intelligence Act» COM/2021/206 final
- Cryptocurrencies and fintech
https://www.bancaditalia.it/focus/sandbox/?dotcache=refresh
https://ec.europa.eu/digital-building-blocks/sites/display/EBSI/Sandbox+Project
23. Definition art. 3 «AI ACT»
(55) ‘AI regulatory sandbox’ means
- a controlled framework
- set up by a competent authority
- which offers providers or prospective providers of AI systems the
possibility to develop, train, validate and test, where appropriate
in real-world conditions,
- an innovative AI system,
- pursuant to a sandbox plan
- for a limited time
- under regulatory supervision;
24. Definitions art. 3 «AI ACT»
(57) ‘testing in real-world conditions’ means
- the temporary testing of an AI system for its intended purpose
- in real-world conditions outside a laboratory or otherwise simulated
environment,
- with a view to
- gathering reliable and robust data and
- to assessing and verifying the conformity of the AI system with the requirements
of this Regulation
- and it does not qualify as placing the AI system on the market or
putting it into service within the meaning of this Regulation, provided
that all the conditions laid down in Article 57 or 60 are fulfilled;
25. Chapter VI: Measures in Support of Innovation
Article 57: AI Regulatory Sandboxes
Article 58: Detailed arrangements for and functioning of AI regulatory
sandboxes
Article 59: Further Processing of Personal Data for Developing Certain AI
Systems in the Public Interest in the AI Regulatory Sandbox
Article 60: Testing of High-Risk AI Systems in Real World Conditions Outside
AI Regulatory Sandboxes
Article 61: Informed consent to participate in testing in real world conditions
outside AI regulatory sandboxes
Article 62: Measures for Providers and Deployers, in Particular SMEs,
Including Start-Ups
Article 63: Derogations for specific operators
26. Art. 57 «AI ACT»
- At least one «regulatory sandbox» per Member State (also allowed at regional or local level).
- Sufficient resources must be allocated to the competent authorities.
- Risks must be identified and mitigated; sandboxes may be activated only for limited periods.
- Documentation must be published.
- A single European interface must be created with all the relevant information.
- AI operators remain liable for damages but are not subject to administrative fines.
27. Art. 60 and 61 «AI ACT»
- «Real world testing» is allowed, but special safeguards must be adopted (planning, documentation, monitoring).
- Testing may last a maximum of 6 months.
- Vulnerable subjects must be protected.
- Information must be provided to, and consent collected from, the individuals involved (except in the case of ‘law enforcement’).
- It must be possible to revoke consent and exit the test area.
- Incident reporting must be ensured.
29. Conclusions (?) / Recommendations (!)
- Complex legal framework: not only the «AI ACT», but also the «Machinery Regulation» and the GDPR (and many others)
- The «digital society» is a matter of compliance: audit and certification are the surrogate of punishments and fines
- Could a «fully autonomous robotic surgeon» be legal?
- Regulatory sandboxes: innovation as a driver offered by the legislator
- Research on robotics is not only a technological matter: bioethics and the «Oviedo Convention»?