Ethical considerations in generative AI are vital for research integrity. Human accountability is emphasized, and interdisciplinary panels are recommended to assess biases comprehensively. Generative AI models, versions, and responses should be documented thoroughly, with open models promoted for transparency. Non-research-related applications of generative AI are flagged as high-risk, demanding attention to ethics and integrity. Criteria are proposed to distinguish low from high integrity risks, so that mitigation actions can be tailored to the risk level. Researchers must report their countermeasures, and agreement is sought on which AI models are acceptable for scientific work, excluding outdated or biased models.
2. Gen AI Overview
ChatGPT and similar models have had a significant impact in various domains:
Field | Impact | Benefits | Risks & Ethical Challenges
Education at all levels | Significant impact on education | Enhanced educational experiences and improved knowledge access | Concerns about information quality and accuracy
Academic research | Faster and more accurate data analysis | Advancements in research | Potential biases and discrimination in training data
Arts and the media | Creative applications, content generation | Innovative content creation | Potential biases in generated content
Software engineering | Improved development processes | Streamlined software engineering | Ethical considerations regarding human labor displacement
Health and medicine | Enhanced diagnostics, personalized healthcare | Improved medical outcomes | Privacy and security concerns with personal data usage
Legal and ethical considerations | Attention to legal and ethical issues, standards, control | Ensuring responsible AI usage | Legal and regulatory challenges in AI deployment
3. AI and Cybernetics
• Markov models: probabilistic models that capture dependencies between events.
• Neural network models: computational models inspired by the human brain.
• Generative AI combines the concepts of Markov and neural network models: it leverages the probabilistic nature of Markov models and the learning ability of neural network models to generate new data or content.
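The probabilistic dependency between events that Markov models capture can be illustrated with a minimal sketch: a first-order Markov chain over words, where the next word is sampled only from the words observed to follow the current one. The corpus, function names, and seed are illustrative assumptions, not part of the source material.

```python
import random
from collections import defaultdict

def build_chain(words):
    """Map each word to the list of words observed to follow it."""
    chain = defaultdict(list)
    for cur, nxt in zip(words, words[1:]):
        chain[cur].append(nxt)
    return chain

def generate(chain, start, length, seed=None):
    """Walk the chain: sample each next word from the successors of the current word."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        successors = chain.get(out[-1])
        if not successors:  # dead end: no observed successor
            break
        out.append(rng.choice(successors))
    return " ".join(out)

corpus = "the cat sat on the mat and the dog sat on the rug".split()
chain = build_chain(corpus)
print(generate(chain, "the", 8, seed=0))
```

Generative AI replaces the simple lookup table above with a trained neural network, but the generation loop — repeatedly sampling the next token from a learned distribution — follows the same probabilistic idea.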
4. Ethics
Etymologically, the English word “ethics” (ethica in Latin) can be traced back to the ancient Greek noun, ἔθος (ethos),
which denotes a “habit” or “custom.”
Ethics is a practical discipline concerned with human action and its aim of being morally good.
• Moral Object: Substance or end inherent to the chosen action. It involves evaluating the inherent goodness or ethical
implications of the action being taken.
• Intention: Reason or purpose behind the act. The intention reflects the moral stance and desired outcomes of the
individual or entity involved.
• Circumstances: Concrete conditions and anticipated consequences.
AI-Powered Customer Service Chatbot
• Moral Object: Enhancing customer support and satisfaction through an AI chatbot.
• Intention: Streamlining customer service operations, reducing response times, and providing 24/7 assistance.
• Circumstances: Ensuring accurate responses, user privacy, and managing potential biases or errors.
In this example, a company implements an AI-powered chatbot to enhance customer support. The intention is to streamline
operations and provide convenient assistance. The company must address accuracy, privacy, and potential biases to ensure
ethical implementation.
7. SAP AI Guidelines
SAP’s guiding principles for Artificial Intelligence:
• We are driven by our values.
• We design for people.
• We enable business beyond bias.
• We strive for transparency and integrity in all that we do.
• We uphold quality and safety standards.
• We place data protection and privacy at our core.
• We engage with the wider societal challenges of artificial intelligence.
8. Use of AI in Software Engineering: Risks
Ethical Aspect | Risks | Mitigation
Ethical Responsibility | Unaddressed ethical concerns in LLMs and generative AI | Raise team awareness and actively address ethical concerns in software and CPS development
Privacy Risks | Compromised privacy and potential deepfake creation | Ensure lawful, ethical, and transparent data handling
Fairness and Bias | Biased outputs due to lack of ethical AI practices | Implement ethical AI practices, ensuring unbiased training sets and robust software engineering
Trust Concerns | Unreliable outputs undermining trust in software and CPSs | Emphasize transparency and verification in generating outputs
Intellectual Property | Risks related to third-party code inclusion and data enrichment | Protect intellectual property through careful management of training sets
Management Responsibilities | Lack of acknowledgment and oversight of generative AI impacts in leadership roles | Acknowledge changes in leadership roles and develop explicit oversight for generative AI
Note: CPS stands for Cyber-Physical Systems — systems that combine computational algorithms and physical processes.
Gartner predicts 50% of software engineering leader roles will require oversight of generative AI.
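One concrete way to act on the fairness and bias mitigation above is to measure outcome rates across groups before deployment. The sketch below computes a simple demographic-parity gap; the group names, predictions, and the 0.2 tolerance are illustrative assumptions, not a prescribed standard or any specific library's API.

```python
def positive_rate(outcomes):
    """Fraction of positive (1) outcomes in a list of 0/1 predictions."""
    return sum(outcomes) / len(outcomes) if outcomes else 0.0

def demographic_parity_gap(predictions_by_group):
    """Largest difference in positive-outcome rate between any two groups."""
    rates = [positive_rate(v) for v in predictions_by_group.values()]
    return max(rates) - min(rates)

# Hypothetical model predictions (1 = favorable outcome), split by group.
preds = {
    "group_a": [1, 1, 0, 1, 0, 1],  # 4/6 positive
    "group_b": [0, 1, 0, 0, 1, 0],  # 2/6 positive
}
gap = demographic_parity_gap(preds)
print(f"parity gap: {gap:.2f}")
if gap > 0.2:  # illustrative tolerance; real thresholds need domain review
    print("warning: outcome rates differ substantially across groups")
```

Demographic parity is only one of several fairness criteria; a check like this flags candidates for review rather than proving a model unbiased.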
9. Use of AI in Medicine: Risks
AI Ethics Issue | Challenges/Risks | Mitigation
Bias and Fairness | Inadvertent bias in AI (race, sex, insurance) | Implement measures to identify and address bias in AI algorithms
Justice and Fairness | Ensuring justice and fairness in healthcare | Enforce stringent regulation and provide adequate training for physicians using AI
Privacy and Confidentiality | Compromised privacy and confidentiality in responses | Develop ethical safeguards and standards for privacy in AI-generated responses
Autonomy and Informed Consent | Issues related to autonomy for patients and medical staff | Establish guidelines respecting autonomy and ensuring informed consent for AI integration
Transparency and Explainability | Lack of transparency and explainability in AI decision-making | Implement standards for transparent and explainable AI systems in healthcare
Safety, Security, and Public Trust | Risks of mistaken diagnoses, security breaches, and compromised trust | Ensure comprehensive training and robust security measures, and prioritize equity and transparency
10. Use of AI in Academic Research: Risks
Topic | Recommendations/Concerns | Mitigation
Accountability | Emphasize human accountability for scientific practices | Enforce accountability in all phases of generative AI research
Interdisciplinary Reviewers | Ensure diverse perspectives with interdisciplinary ethics panels | Employ varied reviewers to assess biases and inaccuracies comprehensively
Documentation and Reporting | Document generative AI models, versions, and responses thoroughly | Promote open models to enhance transparency and mitigate inaccuracies
High-Risk Applications | Treat non-research-related applications of generative AI as high-risk | Address ethics and integrity issues in applications using generative AI
Criteria for Integrity Risks | Develop criteria for distinguishing low and high integrity risks | Determine tailored mitigation actions based on varying risk levels
Countermeasures Reporting | Require researchers to report countermeasures against risks in generative AI | Ensure ethics review applications cover and address risks comprehensively
11. Resources
• Academic paper: The Ethics of Artificial Intelligence in the Era of Generative AI
• Academic paper: Science in the Era of ChatGPT, Large Language Models and Generative AI: Challenges for Research Ethics and How to Respond
• SAP AI Ethics Guidelines