Ethical and responsible implementation
of AI
Practical approach in
universities
© 2024 Logicalis 2
• Unacceptable risk
• Violation of EU
fundamental rights and
values
• Prohibited
• High-risk
• Potential to cause
significant harm (health,
safety or fundamental
rights)
• Regulated
• Limited-risk
• Risk of impersonation,
manipulation or deceit
• Humans informed
• Minimal-risk
• All other, deployed without
restrictions
• August ‘24: EU AI Act came
into force
• August ‘26: Most obligations
take effect
• 37 million EUR or up to 7%
of annual turnover
• 15 million EUR or up to 3%
of annual turnover
• (GPAI) Providers: the ones
that develop the model
• Deployers: Integrators of a
specific app or use case.
• Distributors: resellers
• Importers: if the AI comes
from outside de EU
• Authorized
representatives: legally
represent the provider in
the EU
• Downstream Users:
individual or
organizationss that
operate it without
significantly modifiying it.
Involved parties Risk categories Penalties
Timeline
Key facts
EU AI Act
© 2024 Logicalis 3
Roles in Practice
Providers create the AI system and are
primarily responsible for its compliance at the
design and development stages.
Deployers adapt AI systems for specific use
cases and must ensure compliance when the
system is applied in high-risk areas.
Distributors ensure that compliant systems
are marketed appropriately.
Users monitor the system's performance and
use it responsibly.
Provider: OpenAI, as the developer of GPT
models, would be responsible for ensuring the
general-purpose model complies with the EU
AI Act.
Deployer: A healthcare company integrating
OpenAI’s GPT model into a medical diagnosis
tool must assess compliance in that high-risk
use case.
Distributor: A software reseller offering the
healthcare tool to hospitals must ensure the
system is compliant and properly labeled.
User: The hospital using the AI-powered
diagnosis tool must monitor its use and report
incidents.
© 2024 Logicalis 4
The AI EU Act approaches GPAI regulation with a focus on shared responsibility,
transparency, and risk management. It sets clear obligations for both GPAI providers and
their downstream users, ensuring safe and ethical use of these versatile models across
industries.
Key Areas for Education The implementation of AI systems in education is
crucial to foster digital skills and critical thinking,
allowing active participation in society. AI in
education, especially for assessments and
admissions, is classified as high-risk due to its
significant impact on people's educational and
professional lives. Poor design and use can be
intrusive, violate rights and perpetuate
discrimination, affecting access to education and
equal opportunities.
It is essential to guarantee its ethical and
responsible development.
© 2024 Logicalis 5
3. Education and vocational training:
(a) AI systems intended to be used to determine access or admission or to
assign natural persons to educational and vocational training institutions at
all levels;
(b) AI systems intended to be used to evaluate learning outcomes, including
when those outcomes are used to steer the learning process of natural
persons in educational and vocational training institutions at all levels;
(c) AI systems intended to be used for the purpose of assessing the
appropriate level of education that an individual will receive or will be able to
access, in the context of or within educational and vocational training
institutions at all levels;
(d) AI systems intended to be used for monitoring and detecting prohibited
behaviour of students during tests in the context of or within educational
and vocational training institutions at all levels.
Key Areas for Education
© 2024 Logicalis 6
1.Compliance with Requirements (Article 8):
Systems must comply with established standards
to guarantee security and rights.
2.Risk Management System (Article 9):
Implement a continuous risk identification and
mitigation system.
• Physical and mental safety of users; potential
discrimination, violation of privacy and bias in
decisions; cybersecurity; operational errors;
transparency.
3.Data and Governance (Article 10): Ensure the
quality, relevance and proper handling of data.
4.Technical Documentation (Article 11): Provide
detailed documentation about the system.
5.Record Keeping (Article 12): Maintain records
for traceability.
6.Transparency (Article 13): Provide clear
information to users.
7.Human Supervision (Article 14): Include human
oversight to correct bugs.
8.Accuracy, Robustness and Cybersecurity
(Article 15): Ensure system reliability and safety.
Key Areas of the EU AI Act
Impacting LLMs (GPAI)
The EU AI Act emphasizes transparency, safety, and
accountability for AI systems, especially high-risk
systems. It mandates provisions for:
1.Risk Management – Identifying and mitigating
risks associated with AI misuse or harm.
2.Transparency – Ensuring users understand AI
systems and their outputs.
3.Content Moderation and Safety – Guarding
against harmful or illegal content.
4.Data Privacy – Ensuring compliance with GDPR
and other EU-specific privacy laws.
5.Accountability and Compliance – Supporting a
robust framework for adhering to AI regulations.
© 2024 Logicalis 7
General
Purpose AI
Risk and Safety
Measures
• User Input Attack
Mitigation: Malicious
prompting. Ej. Prompt
Shields
• Groundedness Detection:
Avoid Hallucinations. Ej RAG
• IP infringement: avoid the
usage of copyrighted
material. Ej Curated
training.
• Emerging Harmful
Content Patterns: Quickly
block specific emerging
scenarios. Ej.Safety filters
• Content scanning: sexual
content, violence, hate, and
self-harm. Ej. Guardails
Accountability and
Monitoring
Frameworks
• AI-Assisted Evaluations:
adversarial simulations
• Abusive User Detection:
detecting potentially
abusive users in real-time
• Message Framework:
templates for consistent,
safety-focused
communication
Compliance and
Privacy for EU
Markets
• EU Data Boundary
• EU-based reviewers
• Compliance Certifications:
ENS High Level
• Legal Copyright
Commitment
• Compliance tracking and
management
© 2024 Logicalis 8
Key Areas of the EU AI Act
Impacting LLMs (GPAI)
The EU AI Act emphasizes transparency, safety, and
accountability for AI systems, especially high-risk
systems. It mandates provisions for:
1.Safety and Moderation: Advanced APIs and
content filters help manage harmful content risks
effectively.
2.Transparency: Features like groundedness
detection and system messages ensure clear and
explainable outputs.
3.Privacy and Data Sovereignty: Regional and
industry-specific compliance ensures adherence to
GDPR and other EU standards.
4.Accountability: Risk monitoring, copyright
commitment, and compliance certifications
underscore a robust framework for responsible AI
use.
© 2024 Logicalis 9
MICROSOFT
Data, privacy, and
security for Azure
OpenAI Service
ions (outputs), your embeddings, and your training data:
mers.
AI models.
r improve Azure OpenAI Service foundation models.
icrosoft or 3rd party products or services without your permission or instruction.
models are available exclusively for your use.
erated by Microsoft as an Azure service; Microsoft hosts the OpenAI models in Microsoft’s Azure environment and the Serv
Human reviewers assessing potential abuse can
access prompts and completions data only when
that data has been flagged by the abuse monitoring
system. The human reviewers are authorized
Microsoft employees who access the data via point
wise queries using request IDs, Secure Access
Workstations (SAWs), and Just-In-Time (JIT) request
approval granted by team managers. For Azure
OpenAI Service deployed in the European Economic
Area, the authorized Microsoft employees are
located in the European Economic Area.
© 2024 Logicalis 10
MICROSOFT
Data protection for
Amazon Bedrock
AWS Region where Amazon Bedrock is available, there is one such deployment account per model provider.
team. Model providers don't have any access to those accounts. After delivery of a model from a model provider to AWS, A
on't have access to Amazon Bedrock logs or to customer prompts and completions.
The AWS shared responsibility model applies to data
protection in Amazon Bedrock. As described in this
model, AWS is responsible for protecting the global
infrastructure that runs all of the AWS Cloud. You are
responsible for maintaining control over your
content that is hosted on this infrastructure. You are
also responsible for the security configuration and
management tasks for the AWS services that you
use. .
© 2024 Logicalis 11
OWASP Top
10 for LLM
Applications
© 2024 Logicalis 12
https://owasp.org/www-project-top-10-for-large-language-model-
applications/
Thanks!
Alberto Robles
AI Tech Lead
linkedin.com/in/fcoalberto

Implementació ètica i responsable de la IA a universitats (un enfocament pràctic)

  • 1.
    Ethical and responsibleimplementation of AI Practical approach in universities
  • 2.
  • 3.
    • Unacceptable risk •Violation of EU fundamental rights and values • Prohibited • High-risk • Potential to cause significant harm (health, safety or fundamental rights) • Regulated • Limited-risk • Risk of impersonation, manipulation or deceit • Humans informed • Minimal-risk • All other, deployed without restrictions • August ‘24: EU AI Act came into force • August ‘26: Most obligations take effect • 37 million EUR or up to 7% of annual turnover • 15 million EUR or up to 3% of annual turnover • (GPAI) Providers: the ones that develop the model • Deployers: Integrators of a specific app or use case. • Distributors: resellers • Importers: if the AI comes from outside de EU • Authorized representatives: legally represent the provider in the EU • Downstream Users: individual or organizationss that operate it without significantly modifiying it. Involved parties Risk categories Penalties Timeline Key facts EU AI Act © 2024 Logicalis 3
  • 4.
    Roles in Practice Providerscreate the AI system and are primarily responsible for its compliance at the design and development stages. Deployers adapt AI systems for specific use cases and must ensure compliance when the system is applied in high-risk areas. Distributors ensure that compliant systems are marketed appropriately. Users monitor the system's performance and use it responsibly. Provider: OpenAI, as the developer of GPT models, would be responsible for ensuring the general-purpose model complies with the EU AI Act. Deployer: A healthcare company integrating OpenAI’s GPT model into a medical diagnosis tool must assess compliance in that high-risk use case. Distributor: A software reseller offering the healthcare tool to hospitals must ensure the system is compliant and properly labeled. User: The hospital using the AI-powered diagnosis tool must monitor its use and report incidents. © 2024 Logicalis 4 The AI EU Act approaches GPAI regulation with a focus on shared responsibility, transparency, and risk management. It sets clear obligations for both GPAI providers and their downstream users, ensuring safe and ethical use of these versatile models across industries.
  • 5.
    Key Areas forEducation The implementation of AI systems in education is crucial to foster digital skills and critical thinking, allowing active participation in society. AI in education, especially for assessments and admissions, is classified as high-risk due to its significant impact on people's educational and professional lives. Poor design and use can be intrusive, violate rights and perpetuate discrimination, affecting access to education and equal opportunities. It is essential to guarantee its ethical and responsible development. © 2024 Logicalis 5 3. Education and vocational training: (a) AI systems intended to be used to determine access or admission or to assign natural persons to educational and vocational training institutions at all levels; (b) AI systems intended to be used to evaluate learning outcomes, including when those outcomes are used to steer the learning process of natural persons in educational and vocational training institutions at all levels; (c) AI systems intended to be used for the purpose of assessing the appropriate level of education that an individual will receive or will be able to access, in the context of or within educational and vocational training institutions at all levels; (d) AI systems intended to be used for monitoring and detecting prohibited behaviour of students during tests in the context of or within educational and vocational training institutions at all levels.
  • 6.
    Key Areas forEducation © 2024 Logicalis 6 1.Compliance with Requirements (Article 8): Systems must comply with established standards to guarantee security and rights. 2.Risk Management System (Article 9): Implement a continuous risk identification and mitigation system. • Physical and mental safety of users; potential discrimination, violation of privacy and bias in decisions; cybersecurity; operational errors; transparency. 3.Data and Governance (Article 10): Ensure the quality, relevance and proper handling of data. 4.Technical Documentation (Article 11): Provide detailed documentation about the system. 5.Record Keeping (Article 12): Maintain records for traceability. 6.Transparency (Article 13): Provide clear information to users. 7.Human Supervision (Article 14): Include human oversight to correct bugs. 8.Accuracy, Robustness and Cybersecurity (Article 15): Ensure system reliability and safety.
  • 7.
    Key Areas ofthe EU AI Act Impacting LLMs (GPAI) The EU AI Act emphasizes transparency, safety, and accountability for AI systems, especially high-risk systems. It mandates provisions for: 1.Risk Management – Identifying and mitigating risks associated with AI misuse or harm. 2.Transparency – Ensuring users understand AI systems and their outputs. 3.Content Moderation and Safety – Guarding against harmful or illegal content. 4.Data Privacy – Ensuring compliance with GDPR and other EU-specific privacy laws. 5.Accountability and Compliance – Supporting a robust framework for adhering to AI regulations. © 2024 Logicalis 7
  • 8.
    General Purpose AI Risk andSafety Measures • User Input Attack Mitigation: Malicious prompting. Ej. Prompt Shields • Groundedness Detection: Avoid Hallucinations. Ej RAG • IP infringement: avoid the usage of copyrighted material. Ej Curated training. • Emerging Harmful Content Patterns: Quickly block specific emerging scenarios. Ej.Safety filters • Content scanning: sexual content, violence, hate, and self-harm. Ej. Guardails Accountability and Monitoring Frameworks • AI-Assisted Evaluations: adversarial simulations • Abusive User Detection: detecting potentially abusive users in real-time • Message Framework: templates for consistent, safety-focused communication Compliance and Privacy for EU Markets • EU Data Boundary • EU-based reviewers • Compliance Certifications: ENS High Level • Legal Copyright Commitment • Compliance tracking and management © 2024 Logicalis 8
  • 9.
    Key Areas ofthe EU AI Act Impacting LLMs (GPAI) The EU AI Act emphasizes transparency, safety, and accountability for AI systems, especially high-risk systems. It mandates provisions for: 1.Safety and Moderation: Advanced APIs and content filters help manage harmful content risks effectively. 2.Transparency: Features like groundedness detection and system messages ensure clear and explainable outputs. 3.Privacy and Data Sovereignty: Regional and industry-specific compliance ensures adherence to GDPR and other EU standards. 4.Accountability: Risk monitoring, copyright commitment, and compliance certifications underscore a robust framework for responsible AI use. © 2024 Logicalis 9
  • 10.
    MICROSOFT Data, privacy, and securityfor Azure OpenAI Service ions (outputs), your embeddings, and your training data: mers. AI models. r improve Azure OpenAI Service foundation models. icrosoft or 3rd party products or services without your permission or instruction. models are available exclusively for your use. erated by Microsoft as an Azure service; Microsoft hosts the OpenAI models in Microsoft’s Azure environment and the Serv Human reviewers assessing potential abuse can access prompts and completions data only when that data has been flagged by the abuse monitoring system. The human reviewers are authorized Microsoft employees who access the data via point wise queries using request IDs, Secure Access Workstations (SAWs), and Just-In-Time (JIT) request approval granted by team managers. For Azure OpenAI Service deployed in the European Economic Area, the authorized Microsoft employees are located in the European Economic Area. © 2024 Logicalis 10
  • 11.
    MICROSOFT Data protection for AmazonBedrock AWS Region where Amazon Bedrock is available, there is one such deployment account per model provider. team. Model providers don't have any access to those accounts. After delivery of a model from a model provider to AWS, A on't have access to Amazon Bedrock logs or to customer prompts and completions. The AWS shared responsibility model applies to data protection in Amazon Bedrock. As described in this model, AWS is responsible for protecting the global infrastructure that runs all of the AWS Cloud. You are responsible for maintaining control over your content that is hosted on this infrastructure. You are also responsible for the security configuration and management tasks for the AWS services that you use. . © 2024 Logicalis 11
  • 12.
    OWASP Top 10 forLLM Applications © 2024 Logicalis 12 https://owasp.org/www-project-top-10-for-large-language-model- applications/
  • 13.
    Thanks! Alberto Robles AI TechLead linkedin.com/in/fcoalberto