Professional
Ethics
Dr. Dina Saif Ragab
Table of contents
01 Background
02 The FAST Track Principles (Fairness, Accountability, Sustainability, and Transparency)
03 Data fairness
Background
01
The SUM Values are intended to guide ethical
thinking in AI projects but don't directly
address the processes of AI development. To
make ethics more actionable, it helps to
understand why AI ethics is essential. Marvin
Minsky described AI as making computers
perform tasks that would need human
intelligence, highlighting the need for ethical
frameworks as AI takes on more complex,
human-like roles. The emergence of AI ethics
focuses on responsible design and use as
technology advances.
"Bridging the Ethical Gap: Accountability and
Responsibility in AI Systems"
Humans are held responsible for their
judgments, decisions, and fairness
when using intelligent systems.
The systems themselves, however, are
not morally accountable, which opens an
accountability gap when their use causes harm.
To address this, frameworks for AI
ethics are being developed, focusing on
principles like fairness, accountability,
sustainability, and transparency. These
principles aim to bridge the gap
between the smart agency of machines
and their inherent lack of moral
responsibility.
The FAST Track
Principles
02
➔ The FAST Track Principles
(Fairness, Accountability, Sustainability, and
Transparency) are essential pillars that guide
teams in developing responsible, ethical, and
socially beneficial AI systems.
These foundational principles ensure a holistic
approach that addresses critical ethical
considerations at every phase of a project, from
ideation to deployment. Here's a detailed look at
each principle, highlighting their importance and
application in real-world scenarios:
The FAST Track Principles

Fairness: AI systems should process social or demographic data equitably, without discriminatory bias. The design should ensure equitable outcomes and avoid disproportionate impacts on any group.

Accountability: AI systems should be built with accountability in mind, enabling end-to-end traceability and review. This includes responsible design, clear implementation, and active monitoring protocols.

Sustainability: The development and deployment of AI should consider its long-term impact on society and the environment. This principle promotes the responsible use of resources, robustness, and overall system resilience.

Transparency: AI systems should communicate clearly with stakeholders, explaining their functioning, purpose, and potential impacts. Transparency is key to public trust and acceptance.
Getting to Know FAST Principles in AI
• FAST: Fairness, Accountability, Sustainability, Transparency.
• These four guiding principles may not always connect in a straightforward way.
• Accountability
  – We all need to take responsibility for creating AI.
  – This ensures that every step of the process is traceable and clear.
• Transparency
  – We want AI decisions to be easy to understand and explain.
  – It’s important that everyone affected knows how AI impacts them.
Fairness and Sustainability in AI
• FAST: Fairness, Accountability, Sustainability, Transparency.
• These four guiding principles may not always connect in a straightforward way.
• Fairness
  – AI should treat everyone equitably and with respect.
  – We aim to avoid harm and discrimination for all.
• Sustainability
  – AI should be safe, ethical, and work for the good of future generations.
  – Let’s support positive changes for both society and our planet!
Summary: FAST Track Principles
● The principles of transparency and accountability provide the procedural mechanisms and
means through which AI systems can be justified and by which their producers and
implementers can be held responsible. Fairness and sustainability concern the design,
implementation, and outcomes of these systems, establishing the normative criteria for
such governing constraints.
● These four principles are all deeply interrelated, but they are not equal.
● There is an important thing to keep in mind before we delve into the details of the FAST Track
principles:
1) Transparency
2) Accountability
3) Fairness
● These three are also data protection principles. Where algorithmic processing involves
personal data, complying with them is not simply a matter of ethics or good practice but a
legal requirement, enshrined in the General Data Protection Regulation (GDPR) and the
Data Protection Act 2018 (DPA 2018).
Fairness in AI System Design and Deployment

Challenges with Data-Driven Technologies
● AI models rely on historical data, which may carry inherent biases.
● Data may contain social and historical patterns that reinforce cultural biases.
● There's no single solution to completely eliminate discrimination in AI systems.

Human Influence on AI Systems
● AI systems may appear neutral, but they are influenced by the decisions of those who design them.
● Designers’ backgrounds and biases impact AI models.
● Biases can enter at any stage: data collection, problem formulation, model building, or deployment.
Approaches to Fairness in AI

Importance of Fairness-Aware Design
- Combines non-technical self-assessment with technical controls and evaluations.
- Aims to achieve fair, ethical, and equitable outcomes for stakeholders.
- Ensures AI systems treat all parties fairly.

Principle of Discriminatory Non-Harm
- A minimum threshold required to achieve fairness in AI systems.
- Guides developers to avoid harm from biased or discriminatory outcomes.
Principle of Discriminatory Non-Harm
Fundamental Fairness Principles for AI Systems
Data Fairness Design Fairness Outcome Fairness Implementation Fairness
requires that the data used
in training and testing is
comprehensive, accurate,
and represents the full
diversity of the population it
will affect. If the dataset is
not representative ,the AI
could develop biased
models.
To build the AI model so that it
doesn’t contain any biased or
morally questionable features.
Designers need to avoid
including certain variables (like
race, gender, or socioeconomic
status) unless they are genuinely
relevant and justifiable. For
instance, a loan approval AI
shouldn’t include factors that
unfairly disadvantage certain
groups without a valid reason.
Outcome fairness is about
the real-world impact of the
AI system. After deployment,
it’s essential to evaluate if the
AI system’s decisions have a
fair and positive effect on
people’s lives. For instance, if
a healthcare AI model favors
certain groups over others in
terms of treatment
suggestions, this would signal
an outcome disparity.
Implementation fairness focuses
on the responsibilities of those
deploying the AI systems. Proper
training is crucial for the users of AI
models (such as employees or
decision-makers) to understand
how to use these tools
impartially and ethically.
For instance, in hiring, this means
HR professionals should interpret
AI recommendations with an
understanding of any possible
biases, so the tool is applied justly.
Goal: Prevent AI systems from causing unfair or biased impacts on individuals or communities.
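Outcome fairness can also be probed quantitatively after deployment. As an illustration only (the slides do not prescribe a specific metric), a minimal sketch of a demographic-parity check: compare the rate of favourable decisions across groups and report the largest gap. The function name and inputs are assumptions for this example.

```python
def demographic_parity_gap(decisions, groups):
    """Return (max gap, per-group rates) for positive-decision rates.

    decisions: list of 0/1 outcomes (1 = favourable decision).
    groups:    list of group labels, aligned with decisions.
    """
    counts = {}
    for d, g in zip(decisions, groups):
        total, positives = counts.get(g, (0, 0))
        counts[g] = (total + 1, positives + d)
    per_group = {g: p / t for g, (t, p) in counts.items()}
    return max(per_group.values()) - min(per_group.values()), per_group

# Example: group "a" is approved 75% of the time, group "b" only 25%.
gap, rates = demographic_parity_gap(
    [1, 1, 1, 0, 1, 0, 0, 0],
    ["a", "a", "a", "a", "b", "b", "b", "b"],
)
```

A large gap does not prove discrimination by itself, but it flags an outcome disparity worth investigating, in the sense described above.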
Summary: Representativeness
● Sampling bias can lead to the underrepresentation or overrepresentation of disadvantaged
or legally protected groups, which can disadvantage vulnerable stakeholders in model
outcomes. To mitigate this, domain expertise is essential to ensure that the data sample
accurately reflects the target population. Technical teams should, when possible, provide
solutions to address and correct any representational biases in the sampling.
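One simple technical aid for the representativeness check described above: compare each group's share in the sample against its expected share in the target population and flag deviations beyond a tolerance. This is a hypothetical sketch; the group labels, shares, and tolerance are illustrative assumptions, and domain experts would set the real expected shares.

```python
from collections import Counter

def representation_report(sample_groups, population_shares, tolerance=0.05):
    """Flag groups whose sample share deviates from the population share.

    sample_groups:     iterable of group labels drawn from the dataset.
    population_shares: dict mapping group -> expected population share.
    tolerance:         allowed absolute deviation before flagging.
    Returns {group: (observed_share, expected_share, flagged)}.
    """
    counts = Counter(sample_groups)
    n = sum(counts.values())
    report = {}
    for group, expected in population_shares.items():
        observed = counts.get(group, 0) / n
        report[group] = (observed, expected, abs(observed - expected) > tolerance)
    return report

# Example: group "y" makes up half the population but 10% of the sample.
report = representation_report(["x"] * 9 + ["y"], {"x": 0.5, "y": 0.5})
```

A flagged group signals the kind of sampling bias discussed above; correcting it (e.g., by re-sampling or re-weighting) is then a task for the technical team.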
Summary: Fit-for-purpose and sufficiency
● In data collection, it is essential to determine if the dataset is large enough to meet the
project’s goals, as data sufficiency impacts the accuracy and fairness of model outputs. A
dataset that lacks sufficient depth may fail to represent important attributes of the
population, leading to potentially biased outcomes. Technical and policy experts should
work together to assess whether the data volume is adequate and suitable for the AI
system’s intended purpose.
Summary: Source integrity and measurement accuracy
● Bias mitigation starts effectively at the data extraction and collection stage, where both
sources and measurement tools may introduce discrimination into the dataset. Including
biased human judgments in training data can replicate this bias in system outputs. Ensuring
non-discriminatory outcomes requires verifying that data sources are reliable, neutral, and
that collection methods are sound to achieve accuracy and reliability in results.
Summary: Timeliness and Recency
● Outdated data in datasets can impact the generalizability of a model, as shifts in data
distribution due to changing social dynamics may introduce bias. To avoid discriminatory
outcomes, it’s essential to assess the timeliness and recency of all data elements in the
dataset.
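The timeliness assessment above can be operationalised with a simple staleness audit: flag records whose collection timestamp is older than a cutoff. The field name and the one-year cutoff are assumptions for illustration; the right recency window depends on how fast the underlying data distribution shifts.

```python
from datetime import datetime, timedelta

def stale_records(records, field="collected_at", max_age_days=365, now=None):
    """Return the indices of records older than max_age_days.

    records: list of dicts, each carrying a datetime under `field`.
    now:     reference time (defaults to the current time).
    """
    now = now or datetime.now()
    cutoff = now - timedelta(days=max_age_days)
    return [i for i, rec in enumerate(records) if rec[field] < cutoff]
```

Flagged records are candidates for refreshing or exclusion before retraining, so that distribution shift does not quietly re-introduce bias.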
Data Relevance and Best Practices
Data Relevance & Domain Knowledge:
● Select appropriate data sources for reliable, unbiased AI.
● Leverage domain knowledge for choosing relevant inputs.
● Collaborate with domain experts for optimal data
selection.
Dataset Factsheet for Responsible Data Management:
● Create a Dataset Factsheet at the alpha stage.
● Track data quality, bias mitigation, and auditability.
● Record key aspects: data origin, pre-processing, security,
and team insights on representativeness and integrity.
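One lightweight way to realise the Dataset Factsheet idea is a small structured record kept alongside the data. The field names and example values below are illustrative assumptions, not a standard schema:

```python
from dataclasses import dataclass, field, asdict

@dataclass
class DatasetFactsheet:
    """Records provenance and fairness-relevant facts about a dataset."""
    name: str
    origin: str                       # where and how the data was obtained
    collected: str                    # collection period
    preprocessing: list = field(default_factory=list)   # cleaning steps applied
    security: str = "unspecified"     # storage and access controls
    representativeness_notes: str = ""  # team insights on coverage and gaps
    known_biases: list = field(default_factory=list)

# Hypothetical example of filling in the factsheet at the alpha stage.
sheet = DatasetFactsheet(
    name="loan-applications-v1",
    origin="internal CRM export",
    collected="2022-2024",
)
sheet.preprocessing.append("dropped rows with missing income")
sheet.known_biases.append("urban applicants overrepresented")
```

Because the factsheet is plain data (`asdict(sheet)` turns it into a dictionary), it can be versioned with the dataset and reviewed during audits, supporting the traceability that the accountability principle calls for.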
Do you have any questions?
Thanks!
