The Ethics of AI in Digital
Ecosystems
Maryam Washik
Department of Informatics
University of Oslo
Digital Ecosystems
Networks of diverse entities connected through different levels of collaboration and unique complementarities, without the presence of hierarchical controls.
Artificial Intelligence (AI)
John McCarthy (1956): “the science and engineering of making intelligent machines.”
The simulation of human intelligence processes by
machines, especially computer systems.
“Systems that extend human capability by sensing, comprehending, acting and learning.” (Daugherty and Wilson, 2018, p. 3)
Major areas/subfields: Machine Learning, Natural Language Processing, Expert Systems, Robotics, Computer Vision, Speech Recognition, Reinforcement Learning…
From search engines and call-center chatbots to AI-enabled humanoid robots, a whole range of artificial intelligence products and services is used in digital platforms/ecosystems today.
Examples of AI-driven Digital Ecosystems
Image: Ova Digital Ecosystems
How AI Works in
Digital Ecosystems
Ecosystem-based business models require and generate
large volumes of data.
Data comes from all participants in the ecosystem, and AI is often required to make sense of the data for productivity and better efficiency.
A feedback loop called the virtuous cycle of AI-driven growth: it begins with a business problem, followed by data collection, training an AI model, serving users, gaining more users, gaining more data, improving the AI model…
Data drives growth, growth generates more data, data leads
to better AI models, AI drives further growth, positive
feedback loop results in exponential growth.
AI improvement comes from observing and learning
diligently. It needs to continuously evolve to keep up with
the growth.
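The dynamics of this loop can be made concrete with a toy simulation. The sketch below is purely illustrative: the growth rates and the link between user base, data volume, and model quality are invented assumptions, not measurements from any real ecosystem.

```python
# Toy simulation of the virtuous cycle: more users -> more data ->
# better model -> more users. Every parameter here is invented.
users, data, quality = 1_000, 10_000, 0.50

for month in range(1, 13):
    data += users * 20                        # each user contributes ~20 data points
    quality += (1 - quality) * 0.03           # retraining closes 3% of the remaining quality gap
    users = int(users * (1 + quality * 0.2))  # better quality attracts more users
    print(f"month {month:2d}: users={users:,} data={data:,} quality={quality:.3f}")
# Each variable feeds the next, which is why the loop tends toward the
# exponential growth described above.
```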
Roles of AI in an Ecosystem
Network Effects and Growth: drive network effects in digital ecosystems by enhancing user experiences, increasing engagement, and attracting more users, which in turn amplifies the value of the ecosystem for stakeholders involved. YouTube employs AI for personalized video recommendations, which increase user engagement. This, in turn, attracts more users to the platform. The network effect, driven by AI, amplifies the value of YouTube for both creators and viewers.
Personalization: personalize digital experiences based on user behaviour and preferences.
E.g., Amazon recommends products based on purchase history.
Security Enhancement: strengthen digital security by detecting and preventing cyber threats.
E.g., Google's AI algorithms scan incoming emails for signs of phishing, such as suspicious
links, sender inconsistencies, or mismatched domain names.
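As a hedged illustration of the kinds of signals listed here, a toy rule-based check might look like the sketch below. This is not Google's system (its filters are proprietary, learned models rather than hand-written rules); the function name, rules, and example message are all invented for illustration.

```python
import re

def phishing_signals(sender: str, reply_to: str, body: str) -> list[str]:
    """Toy heuristics for the signs named above: suspicious links, sender
    inconsistencies, mismatched domains. Real filters use learned models."""
    signals = []
    sender_domain = sender.rsplit("@", 1)[-1].lower()
    if reply_to and reply_to.rsplit("@", 1)[-1].lower() != sender_domain:
        signals.append("reply-to domain differs from sender domain")
    # Links whose visible text names one domain but whose target is another.
    for href, text in re.findall(r'href="https?://([^/"]+)[^"]*"[^>]*>([^<]+)<', body):
        if "." in text and text.strip().lower() not in href.lower():
            signals.append(f"link text '{text.strip()}' does not match target '{href}'")
    if re.search(r"https?://\d{1,3}(?:\.\d{1,3}){3}", body):
        signals.append("raw IP address used as a link target")
    return signals

# A message that claims to come from a bank but links elsewhere:
print(phishing_signals(
    sender="support@example-bank.com",
    reply_to="help@mailbox-relay.net",
    body='<a href="http://198.51.100.7/login">example-bank.com</a>',
))
```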
Roles of AI in an Ecosystem
Decision Optimization: AI uses data analysis to provide insights for better decision-making in digital ecosystems. E.g., Uber employs AI for pricing decisions, adjusting ride fares in real time based on factors like demand, traffic, and weather conditions to balance supply and demand (a toy sketch follows this list).
Task Automation: AI automates tasks in digital ecosystems, freeing up humans for more
strategic work. E.g., AI chatbots working round-the-clock in customer support, handling
routine inquiries and resolving issues.
Innovation in Products and Services: AI drives innovation by enabling the creation of new digital products and services. E.g., autonomous vehicles that navigate roads without human intervention, redefining the future of transport and mobility services.
Advanced Analytics: AI conducts advanced analytics on large datasets to identify hidden patterns and generate insights. E.g., AI analyses user data for businesses and generates insights on preferences, behaviours, and purchase trends for targeted marketing.
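To make the Decision Optimization example concrete, here is a minimal sketch of a surge-pricing rule of the kind described for Uber. The formula, function name, and all coefficients are invented assumptions; Uber's actual pricing models are proprietary.

```python
def surge_fare(base_fare: float, riders_waiting: int, drivers_free: int,
               traffic_factor: float = 1.0, bad_weather: bool = False) -> float:
    """Toy dynamic-pricing rule: fares rise when demand outstrips supply.
    Invented for illustration; not a real platform's pricing model."""
    demand_ratio = riders_waiting / max(drivers_free, 1)
    multiplier = min(max(demand_ratio, 1.0), 3.0)  # never below 1x, capped at 3x
    if bad_weather:
        multiplier *= 1.2                          # weather further tightens supply
    return round(base_fare * multiplier * traffic_factor, 2)

# High demand, few drivers, rain: the fare rises to ration scarce supply.
print(surge_fare(base_fare=10.0, riders_waiting=120, drivers_free=50,
                 traffic_factor=1.1, bad_weather=True))  # -> 31.68
```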
AI has been a game-changer for many platforms and ecosystems, but not without its challenges.
Even with exciting promises and many possibilities,
there are important ethical problems we need to pay
attention to.
As we give AI more important tasks, we have to
think about what's right, fair, and good in how it
works.
What is ethics?
AI Ethics
Ethics is a set of moral principles which help us discern
between right and wrong.
Since the middle of the 2010s, a visible discourse on the ethics of artificial intelligence (AI) has developed.
AI ethics is a multidisciplinary field that studies how to optimize AI's beneficial impact while reducing risks and adverse outcomes.
It provides guidance for the development of practical interventions.
The most prominent AI ethics issues are also human rights issues (privacy, equality and non-discrimination).
We study AI ethics to ensure that AI is developed and used in a way that benefits society, reduces unfavourable outcomes, and aligns with human values.
What is it about AI that gives rise to
ethical issues in digital ecosystems?
Information
AI systems are trained on large amounts of data, which can include sensitive information about people, such as their personal data, health data, or financial data.
This information can be used to invade their privacy and compromise their safety.
E.g., AI systems in navigation or mapping applications collect location data. If not well protected, this data can be used to track people’s movements, compromising their privacy and safety.
Investopedia: Ellen Lindner
Extrapolation
Being trained on a certain range of data and being able to predict on a different range of data.
AI systems can extrapolate from the data they are
trained on to make predictions about the future.
Predictions can be inaccurate or biased, which
can lead to ethical problems.
For example, an AI medical system might over-diagnose a patient with a serious disease, making the patient go through unnecessary treatment and bills.
Image: towardsdatascience.com
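A minimal synthetic demonstration of this failure mode (a toy regression, not the medical system above): a flexible model fitted only on inputs between 0 and 1 looks accurate there but produces wild predictions outside that range.

```python
import numpy as np

# Fit a flexible model on x in [0, 1], then query it outside that range.
rng = np.random.default_rng(0)
x_train = rng.uniform(0.0, 1.0, 50)                 # training inputs: [0, 1] only
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.1, 50)

coeffs = np.polyfit(x_train, y_train, deg=9)        # degree-9 polynomial fit

for x in (0.5, 1.5, 3.0):                           # inside vs. outside the range
    pred, true = np.polyval(coeffs, x), np.sin(2 * np.pi * x)
    print(f"x={x}: predicted {pred:+.2f}, true {true:+.2f}")
# Near x=0.5 the prediction tracks the truth; at 1.5 and 3.0 it diverges badly.
```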
Automation
AI systems can automate tasks and decision-
making that were previously done by humans.
This can lead to job losses and can also create
ethical problems like bias and discrimination.
For example, automated decision-making systems might reflect society’s biases and stereotypes. An AI system trained on data about past criminal convictions could be used to decide who is likely to commit a crime in the future. This could lead to inaccurate judgments and people being discriminated against or even denied opportunities.
AI makes decisions based on what it is fed.
Garbage in, garbage out.
Images: propublica.com
Imitation
AI systems can be designed to
imitate human behaviour. This can be
used for good, such as in the
development of virtual assistants that
can provide companionship or
emotional support. However, it can
also be used inappropriately.
For example, deepfakes can be used
to spread misinformation or to
damage someone's reputation.
Deepfakes are AI-altered media that
convincingly impersonate people in
photos, videos, or audio.
Ethical Challenges of AI
Prominent cases of algorithmic biases: Amazon's recommendation algorithm has been
criticized for its bias towards recommending products that are more expensive or popular,
rather than products that are more relevant to the user's needs.
Backlashes against privacy breaches: China's social credit system is a system that uses AI to
track and monitor citizens' behaviour. The system uses this data to assign each citizen a
score, which can then be used to determine their access to jobs, housing, and other services.
The social credit system has been criticized for its potential to violate people's privacy rights
and to create a dystopian society.
Use of data to create manipulative deepfakes: Deepfakes are photos, videos or audio recordings that have been manipulated using AI to make it look or sound like someone is saying or doing something they never actually said or did. Deepfakes can be used to spread misinformation, damage someone's reputation, or even blackmail them.
Harm/offenses caused: job losses, loss of life, discrimination.
Ethical Challenges of AI
Control: AI is increasingly making split-second decisions. What are the chances of humans
being involved in the decision-making process? E.g., autonomous cars and high-frequency
trading.
Power balance: Monopoly by huge platforms like Facebook, Amazon and Google.
Ownership: Who owns it? Who has the intellectual property rights? Who should be held
responsible? Who should get paid?
Environmental Impact: Power-hungry infrastructure used for training AI. The computing power required to train AI increases dramatically (doubling every 3.4 months since 2012). Training a single large AI model can produce about 626,000 pounds of carbon dioxide, equivalent to the emissions from 300 round-trip flights between New York and San Francisco (a quick arithmetic check follows this list).
Humanity: How does AI impact our feeling of being human? What will our contribution be? How will it affect human dignity and flourishing?
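A quick arithmetic check on the two figures in the Environmental Impact point, using only the slide's own numbers (dividing the carbon total by the flight count just recovers the implied per-flight figure):

```python
# Doubling every 3.4 months implies this annual growth factor in compute:
print(f"~{2 ** (12 / 3.4):.1f}x per year")               # ~11.5x

# 626,000 lbs of CO2 spread over 300 NY-SF round trips implies:
print(f"~{626_000 / 300:,.0f} lbs CO2 per round trip")   # ~2,087 lbs per trip
```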
Are there any standardised steps?
How do we think about or approach such ethical issues/dilemmas?
Markkula Center for Applied Ethics | Bard | ChatGPT
Identify the ethical issues | Identify the ethical dilemma | Define the problem
Get the facts | Gather information | Gather information
Evaluate the alternative actions | Identify the ethical principles | Identify options
Choose an option for action and test it | Weigh the options | Evaluate consequences
Implement your decision and reflect on the outcome | Make a decision | Apply ethical frameworks
- | Reflect on the decision | Make a decision
- | - | Implement the decision
- | - | Monitor, review, learn and improve
There is no absolute or standard ethical
framework or steps that work for every
ethical issue.
On what basis can we decide between right
and wrong, good and bad, when working
with AI?
There are theories and lenses that can be
used to guide ethical decision-making.
How Should We Decide?
Ethical Theories/
Lenses/Perspectives
What makes an action ethically better or worse than an
alternative action?
Ethical theories provide us with a framework for thinking
about moral problems, and ethical lenses help us to see
ethical issues from different perspectives.
Ethical theories are more general, while ethical lenses are more specific.
• Identify the relevant moral values and principles that are at stake in a situation.
• Consider the potential consequences of our actions for different people and groups.
• Develop a rationale for our decisions that is consistent with our moral values.
Consequentialism
Greatest good
The right action is the one that produces the greatest good for the greatest number of people. (An Introduction to the Principles of Morals and Legislation by Jeremy Bentham, 1789)
Whether or not an action is right or wrong
depends solely on the consequences of that
action.
Example: Imagine a social media platform that
utilizes AI algorithms by using personal data to
curate users' newsfeeds. Consequentialism would
evaluate the rightness or wrongness of this
algorithm based on its consequences for the
greatest number of users.
Consequentialism
Maximizing value/well-being for the greatest number of people (the common good).
Consequentialism defines value/utility (e.g., happiness, justice) and asserts that an action is morally right if it maximizes this value. The specifics of consequentialist theories vary based on their understanding of value.
Who benefits from these consequences?
Is it just me? Is it everyone but me? Is it all of humanity?
The standard answer is that an action is morally right if its consequences are more favourable than unfavourable for everyone who can benefit from what is valuable, such as pleasure and well-being, including the person taking the action.
Utilitarianism (Jeremy Bentham, John Stuart Mill): a form of consequentialism with a specific account of value. Pleasure is good, pain is bad. Thus, we should maximize pleasure and minimize pain.
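The maximizing structure of consequentialism can be expressed as a tiny decision rule. The sketch below is deliberately crude (real utilities are contested and hard to quantify, which is itself a standard objection to the theory); the actions and all utility numbers are invented, echoing the newsfeed example on the previous slide.

```python
# Crude consequentialist decision rule: pick the action whose summed
# utility across all affected parties is highest. Utilities are invented.
actions = {
    "personalize feeds aggressively":  {"platform": +8, "engaged users": +3, "privacy-conscious users": -6},
    "personalize with opt-in consent": {"platform": +4, "engaged users": +2, "privacy-conscious users": +1},
    "no personalization":              {"platform": -2, "engaged users": -1, "privacy-conscious users": +2},
}

def total_utility(effects: dict[str, int]) -> int:
    return sum(effects.values())  # every affected party counts equally

for name, effects in actions.items():
    print(f"{name}: total utility {total_utility(effects):+d}")
best = max(actions, key=lambda a: total_utility(actions[a]))
print("consequentialist choice:", best)  # -> "personalize with opt-in consent"
```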
Deontology
Immanuel Kant - 18th century
Non-consequentialism: some kinds of actions are wrong in themselves, not just because they produce bad consequences (e.g., killing, torture).
Whether an action is right or wrong does not depend, fully or even partially, on its consequences.
Moral rules and duties: The right action is the
one that conforms to moral rules or duties,
regardless of the consequences.
Example: When a person chooses to tell the
truth because they believe that it is their duty to
be honest, even if they know that it will hurt
someone's feelings.
Deontology
Imagine a situation where a large technology company is developing an AI chatbot for customer support. On a deontological view, the chatbot should provide honest information, even if it could potentially harm the company's interests or reputation.
An action can be considered right even if it produces bad consequences.
Rights and duties: Individuals have rights, and these rights correspond to duties.
Deontology typically takes our negative duties to be stricter than our positive duties.
The intention of a person can help determine whether or not an act is permissible. If two actions have the same consequence, one may be permissible and the other not, depending on the intention.
Virtue Ethics
Virtuous character.
Virtue ethics: the right action is the one that a virtuous
person would perform. (Nicomachean Ethics by Aristotle)
Emphasis on Character: Unlike the ethical theories that
prioritize consequences or actions, virtue ethics places a
strong emphasis on the moral character of an individual. It
suggests that a morally good person will naturally make
morally good choices.
E.g., a virtuous person might choose to be brave in the face of danger, even if it puts them at risk, because they believe that courage is an important moral virtue.
When faced with a decision to report a security breach that could potentially harm her company's reputation, Sarah chooses to disclose the breach to affected customers, even though it may have negative consequences, because the decision aligns with her virtues of honesty and transparency.
Feminist Ethics
Gender-based approach (e.g., Carol Gilligan, Nel Noddings, and Virginia Held)
A branch of ethics that centers on the experiences and
perspectives of women.
It is concerned with addressing and challenging gender-based
injustices and inequalities.
While feminist ethics may incorporate virtue ethics elements, it
primarily focuses on issues related to gender, power, and
social justice.
Feminist ethics calls for a shift in societal norms and
organizational practices to reduce gender-based injustices in
the workplace, recognizing the importance of addressing
power imbalances and promoting social justice.
Example: addressing gender-related biases in AI-driven hiring processes to contribute to a fairer and more equitable job market, promoting gender equality and social justice.
Care Ethics
Caring relationships (Noddings, 1984)
Emphasizes the importance of caring
relationships and responsibilities in moral
decision-making.
It highlights the role of empathy, compassion,
and care in addressing moral dilemmas.
Care ethicists stress the significance of considering the well-being of individuals in one's care.
For example, a nurse facing a moral dilemma
involving a terminally ill patient might consider
the patient's emotional and psychological well-
being alongside medical treatment.
Viewing AI from Ethical Perspectives/Lenses
The Utilitarian Lens (The consequences of actions) would ask whether AI systems produce the greatest good for the greatest number of people. E.g., evaluating whether an AI-powered healthcare platform optimizes treatment options for the best overall health outcomes.
The Common Good Lens (The well-being of the community as a whole) would ask whether AI systems are used in a way that benefits society as a whole, not just individuals or specific groups. E.g., assessing whether an AI-driven educational platform provides accessible and equitable learning opportunities to all students, thereby enhancing the educational well-being of the entire community, not just select groups.
The Rights Lens (The protection of individual rights) would ask whether AI systems respect rights such as the right to privacy, the right to freedom of speech, and the right to non-discrimination. E.g., examining whether an AI-powered surveillance system respects individuals' right to privacy by securely handling personal data and limiting surveillance to lawful and necessary purposes.
The Justice Lens (Fairness and equality) would ask whether AI systems are used in a way that is fair to all people, regardless of their race, gender, religion, or other factors. E.g., whether an AI-based hiring platform provides equal opportunities and eliminates biases, ensuring that all job applicants are treated fairly regardless of demographic backgrounds such as race, gender, or religion.
Markkula Center for Applied Ethics
Viewing AI from Ethical Perspectives/Lenses
The Virtue Lens (The character of the people who develop and use AI systems) would ask whether these people are acting in a virtuous way, such as being honest, fair, and transparent. E.g., ensuring that the team responsible for developing AI-powered financial tools acts with integrity and transparency, embodying virtues such as honesty and responsibility in their work.
The Feminist Lens (Gender perspective) emphasizes the recognition and correction of gender-based injustices and inequalities, as well as the promotion of women's autonomy and well-being. E.g., whether AI technologies in reproductive healthcare respect women's autonomy and reproductive rights, and whether they address and rectify historical biases and gender disparities in healthcare access and treatment.
The Care Lens (The relationships between people and AI systems) would ask whether AI systems are designed and used in a way that respects and protects these relationships. E.g., whether AI-powered care robots for the elderly are designed and operated in a manner that provides companionship and emotional support and maintains a sense of dignity for the elderly.
Markkula Center for Applied Ethics
Thinking Exercise
During a healthcare crisis with limited ventilators, AI is employed to allocate ventilators to patients with breathing difficulties. One approach prioritizes patients with the highest likelihood of recovery, and another approach follows a "first come, first served" method.
Which approach might be considered by which ethical theory, and why?
Do you see any other approaches?
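To ground the exercise, here is a minimal sketch of the two allocation policies as code. The patient data and recovery scores are invented, and a real triage model would be far more complex; whether either rule is defensible is exactly what the exercise asks you to question.

```python
# Two toy allocation policies for the exercise above. Patients are
# (name, arrival_order, predicted_recovery_likelihood); all values invented.
patients = [("A", 1, 0.35), ("B", 2, 0.90), ("C", 3, 0.55), ("D", 4, 0.80)]
ventilators = 2

by_recovery = sorted(patients, key=lambda p: p[2], reverse=True)[:ventilators]
by_arrival  = sorted(patients, key=lambda p: p[1])[:ventilators]

print("highest likelihood of recovery:", [p[0] for p in by_recovery])  # ['B', 'D']
print("first come, first served:     ", [p[0] for p in by_arrival])    # ['A', 'B']
# A utilitarian reading favours the first policy (more expected recoveries);
# a justice/rights reading may favour the second (equal treatment of arrivals).
```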
Next: Group Discussions
Group Discussions
YouTube uses AI to recommend videos to its users. The recommendation algorithm is designed to keep users engaged on the platform by suggesting videos that they might be interested in. The algorithm works by tracking the videos that users watch, the videos that they like, and the videos that they engage with. It also takes into account the videos that are popular among other users who have similar interests.
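For intuition before the discussion, here is a minimal sketch of the similarity-based recommendation the scenario describes: a toy user-based collaborative filter. The watch histories are invented, and this is not YouTube's actual (proprietary) algorithm.

```python
# Toy user-based collaborative filtering: recommend videos that users
# with similar watch histories engaged with. All data is invented.
histories = {
    "you":   {"cooking101", "chess_tips"},
    "user2": {"cooking101", "chess_tips", "knife_skills"},
    "user3": {"cat_videos", "chess_tips", "speedruns"},
}

def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b)   # overlap as a fraction of the combined set

me = histories["you"]
scores: dict[str, float] = {}
for user, seen in histories.items():
    if user == "you":
        continue
    sim = jaccard(me, seen)          # how similar this user's taste is to yours
    for video in seen - me:          # only consider videos "you" haven't watched
        scores[video] = scores.get(video, 0.0) + sim

ranked = sorted(scores, key=scores.get, reverse=True)
print(ranked)  # 'knife_skills' ranks first: it comes from the most similar user
```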
Consider potential ethical implications of this scenario for different stakeholders involved.
Develop arguments for the identified ethical implications from different ethical lenses.
Lens/Perspective | Main Focus of Approach
Common Good Lens | Well-being of the community as a whole
Utilitarian Lens | Pleasure is good, pain is bad
Rights Lens | Protection of individual rights
Justice Lens | Fairness and equality
Virtue Lens | Character of the individual
Feminist Lens | Gender, power, and social justice
Care Ethics Lens | Relationships between parties
