Companies that prioritize the responsible implementation of AI have better business outcomes than those that don't, even among the AI leaders. Why, then, aren't more organizations prioritizing responsible AI (rAI)? The answer is a knowledge gap, compounded by a confusing matrix of tools, standards, pending regulations, and frameworks. Over the last decade, Dr Dobrin has been a leader and champion for the responsible implementation of AI and has developed a successful formula, similar to the one Pixar has developed for movies. Dr Dobrin also walks through what good looks like and common pitfalls of AI implementation in the real world.
[DSC Europe 22] A Story Spine for Responsible AI - Seth Dobrin
Seth Dobrin, PhD, President of the Responsible AI Institute and Founder of Qantm AI
Hello everyone. I'm Seth Dobrin, and I must say that I am really thrilled to be here with you in person.
Today I’d like to explore an approach to the implementation and operationalization of responsible AI at scale.
The EU white paper put it best, "AI should work for people and be a force for good in society."
For AI to do good it must be responsibly implemented and trusted. Trust is essential to human beings: it encapsulates the aspects of humanity that define a responsible attitude, one that is inclusive and fair. Without trust, relationships stall and transactions fail.
Trust and responsibility are vital aspects of our online and offline life, without which normal operations would come to a grinding halt.
As technological advances continue at pace, one of the forces behind innovation is the application of artificial intelligence (AI).
But AI has not had an easy ride, with issues originating from a lack of trust due to poor design. As a result, ethical issues have blighted the image of AI, with concerns ranging from using AI to manipulate behavior to inherent racial and sex bias.
Even in this context, AI adoption is growing: the Morningstar Global AI Adoption Index 2022 found that 77% of organizations are currently using, or exploring the use of, AI in their business; however, issues around ethics have crept in, making ethical governance and oversight essential for continued uptake and acceptance by consumers and citizens.
Research shows that an ethical stance is now on the minds of business leaders: A 2021 business survey from PwC found that 57% of companies under the category of ‘AI leader’ plan to ensure that AI is compliant with applicable regulations during 2022. Perhaps more surprisingly, only 1% of AI Leaders have “No plans to address AI responsibility issues.” This last statistic reflects the fact that trustworthy and responsible AI is expected.
The PwC report also highlights that most organizations have not yet taken key steps to ensure their AI is trustworthy and responsible; these steps would include actions to show reduced bias (74%), tracking performance variations and model drift (68%), and making sure they can explain AI-powered decisions (61%).
Evidence demonstrating that ethical oversight in the application of AI is essential is found in the McKinsey State of AI in 2021 report, which states:
“Organizations seeing the highest returns from AI engage in risk-mitigation practices more often than others.”
Furthermore, a recent study from MIT Sloan and BCG demonstrated that among AI leaders, those prioritizing responsible implementation of AI see higher business impact than those that don't. Specifically, they cite better products, better brand reputation, and higher levels of innovation.
Ethical oversight that reduces risk is driven by AI transparency. This is heralding a new era in AI, one where trustworthiness is a fundamental part of AI-based systems. Trust is empowering innovation, with the result that a new generation of responsible AI is dovetailing the digital with the natural world to benefit humanity. However, responsible AI (rAI) involves rigorous training and testing of data as well as risk mitigation through careful measurement of model bias and accuracy. Responsible AI will drive further adoption of AI-enabled systems, but it starts with human-centric design.
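To make "careful measurement of model bias and accuracy" concrete, here is a minimal Python sketch of two such measurements: plain accuracy, and a demographic-parity gap (the difference in positive-prediction rates between groups). The metric choice and the toy data are my own illustration, not a prescribed rAI standard.

```python
def accuracy(y_true, y_pred):
    """Fraction of predictions that match the labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def demographic_parity_gap(y_pred, groups):
    """Absolute spread in positive-prediction rates across groups.
    0.0 means every group receives positive outcomes at the same rate."""
    rates = {}
    for g in set(groups):
        idx = [i for i, gg in enumerate(groups) if gg == g]
        rates[g] = sum(y_pred[i] for i in idx) / len(idx)
    vals = sorted(rates.values())
    return vals[-1] - vals[0]

# Toy example: binary decisions for applicants from two groups
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

print(accuracy(y_true, y_pred))                # 0.75
print(demographic_parity_gap(y_pred, groups))  # 0.0
```

Tracking both numbers over time also covers the "performance variations and model drift" step from the PwC list: a rising gap or falling accuracy is the drift signal.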
Good AI design centers on people
“The only way to ensure a collaboration between AI and humans is to put humans at the center of the design, development, and monitoring of AI systems.” - Seth Dobrin, 2022
The human species excels in technological innovation. From paleolithic stone tools to present-day emerging technologies, including AI and Web3, humans have placed technology central to our lives, sometimes making lives better, and sometimes not so much.
AI is a tool developed to add intelligence to computational tasks. However, human behavior is often left out of an equation that requires careful balance.
As such, technology design has often been at odds with the needs of an individual or a group. This is an acute issue in technologies driven by artificial intelligence. The results are sub-optimal at best without a design process centered on human-machine collaboration.
An excellent example of human-machine collaboration is the work of Refik Anadol, a media and digital artist. His 2021 collection was created using an artificial intelligence model trained on the public metadata of the collection of the Museum of Modern Art in New York.
The resulting artworks are beautiful and ephemeral, touching the very soul of anyone viewing the works. Through his work, Refik manages to transcend the technical and black box nature of machine learning and generative algorithms into a true collaboration between machine and human.
Unfortunately, this human-machine collaboration vision has been challenging to achieve because of the adverse effects that ungoverned AI systems can have on humans. The potential benefits of intelligent machines have been mitigated by bias and unethical use of AI-enabled systems.
AI has come under the watch of organizations across the globe. The untrustworthy potential of AI-enabled systems has been heavily criticized as evidence for misuse and abuse has appeared. Examples from the Responsible AI Institute (RAII) include LGBTQ couples being 73% more likely to be denied a mortgage than heterosexual couples; a Florida county sheriff’s office combining academic data with highly sensitive health department data to label specific children as possible "criminals." Further examples of what happens when you don’t use a human-centric design approach include:
Discrimination in the workplace is as old as work itself. However, being discriminated against by an algorithm is a new level of insult. A recent example of AI-enabled discrimination in action was the case of three employees of Estee Lauder. The make-up artists were given video interviews to reapply for positions; an algorithm assessed the interviews. The three subsequently received a redundancy notice based on the results of the automated judgment. All three received an out-of-court settlement for unfair dismissal. (Source)
Bias-by-algorithm is an increasing concern in healthcare as modern medicine embraces AI. A 2019 paper by Panch et al., "Artificial intelligence and algorithmic bias: implications for health systems," explores the effect of bias "when the application of an algorithm compounds existing inequities in socioeconomic status, race, ethnic background, religion, gender, disability or sexual orientation to amplify them and adversely impact inequities in health systems." The paper concludes that societal bias will manifest in healthcare systems utilizing algorithms. The paper also points out the importance of 'Explainability' and the use of design input from people, notably, "clinical expertise to propose relevant counterfactuals for the context in which the algorithm is being developed." (Source)
The Uber Eats courier app has come under fire for being racist and not recognizing faces during a facial ID check of drivers. Employees were sacked, or their accounts were frozen, because they failed the facial recognition check during identity verification. Racial bias in algorithms has become a persistent problem in the industry. This type of facial bias has far-reaching effects as many identity and FinTech apps increasingly use facial recognition for Know Your Customer (KYC) checks to create a digital identity.
Examples of algorithmic racism and sexism are pervasive. For example, research from Johns Hopkins University, the Georgia Institute of Technology, and the University of Washington by Hundt et al. on "Robots Enact Malignant Stereotypes" clearly shows that "autonomous racist, sexist, and scientifically-discredited physiognomic behavior is already encoded into Robots with AI."
The paper's authors make an urgent call for collaboration: a "Call to Justice, imploring the Robotics, AI, and AI Ethics communities to collaborate in addressing racist, sexist, and other harmful culture or behavior relating to learning agents, robots, and other systems." (Source 1, Source 2)
For more examples, check out the Responsible AI Institute (RAII) heatmap, which captures issues in the use of AI as they occur.
AI needs data, and if these data include inherent bias, then AI amplifies this bias. The AI is ignorant of this unless it is told the bias exists. Collaboration between AI and humans is only achievable when humans are at the center of designing, developing, and monitoring AI systems. The human-centric design captures the unique requirements of individuals and ensures that the output is inclusive, non-biased, accurate, and responsible.
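One way to put humans at the center of that monitoring is to audit the training data itself before any model is fit. The sketch below is hypothetical: the field names (`group`, `approved`) and the report shape are my own illustration, but a large gap between groups in either share-of-data or positive-label rate is exactly the kind of inherited bias described above.

```python
def representation_report(records, group_key, label_key):
    """For each demographic group, report its share of the data and its
    positive-label rate. Large gaps between groups signal that a model
    trained on this data may inherit and amplify the imbalance."""
    total = len(records)
    report = {}
    for group in {r[group_key] for r in records}:
        rows = [r for r in records if r[group_key] == group]
        positives = sum(1 for r in rows if r[label_key] == 1)
        report[group] = {
            "share_of_data": len(rows) / total,
            "positive_rate": positives / len(rows),
        }
    return report

# Hypothetical loan-application records (field names are illustrative)
data = [
    {"group": "x", "approved": 1}, {"group": "x", "approved": 1},
    {"group": "x", "approved": 1}, {"group": "x", "approved": 0},
    {"group": "y", "approved": 0}, {"group": "y", "approved": 1},
]
print(representation_report(data, "group", "approved"))
```

A report like this makes the human review step concrete: someone must look at the numbers and decide whether the skew is acceptable before training proceeds.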
Governance and regulations for responsible AI
Regulations augment human-centric design. As such, regulations can be used to promote responsible AI and, in turn, help AI-enabled system adoption. The regulatory landscape for responsible and trustworthy AI is fluid, but some examples of works in progress include:
Canada: the Government of Canada introduced the Artificial Intelligence and Data Act (AIDA); an article by the RAII explains it further.
USA: Guidance for Regulation of Artificial Intelligence Applications, Executive Order (E.O.) 13859 - Maintaining American Leadership in Artificial Intelligence.
UK: a pro-innovation approach to regulating AI.
China: Internet Information Service Algorithmic Recommendation Management Provisions.
EU: Regulatory framework proposal on artificial intelligence; EU’s first ever legal framework on AI.
The business of trustworthy AI
“Despite the obvious commercial benefits of this technology (trustworthy AI) commerce still often fails to create systems that truly empower and augment people in a responsible manner.” - Seth Dobrin, 2022
AI-enabled technologies need to be backed by sound business use cases and models empowered by responsible and trustworthy AI. This alignment of the planets builds a solid base for AI uptake now and into the future.
But how do you do this at scale in organizations? Let's look elsewhere for inspiration...
A tried and tested process formula from another industry helps kickstart the creation of a repeatable and reliable process for responsible AI.
The entertainment industry, specifically Pixar, is a case in point.
Pixar is arguably one of the greatest storytellers of our generation, as evidenced by the company's accolades, including 16 Academy Awards. Pixar uses a formula to create the films we know and love today.
This formula is called a story spine, and it follows this process:
Once upon a time there was,
Every day,
One day,
Because of that,
Because of that,
Until finally.
For example
Once upon a time there was a fish, Marlin, and his son Nemo.
Every day, Marlin warns Nemo about the dangers of the ocean.
One day, like all kids, Nemo ignores his father.
Because of that, Nemo ends up in a fish tank.
Because of that, Marlin sets off on a journey to find Nemo.
Until finally, Marlin finds Nemo and brings him home safely.
The same storyline-type formula can help in the design and development of responsible AI systems:
Ensure explainable and interpretable AI systems
Measure bias and fairness of AI systems
Validate system operations for AI systems
Augment robustness, security, and safety of AI systems
Deliver accountability of AI systems
Enable consumer protection where AI systems are used
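The six steps above can be encoded as a simple, repeatable checklist gate that a system must pass before release. This is only a sketch: the check names mirror the steps, but the metric keys and threshold values are placeholder assumptions, not values from any standard.

```python
# Each step of the spine becomes a named check over a metrics dict.
# Keys and thresholds below are illustrative placeholders.
RESPONSIBLE_AI_SPINE = [
    ("explainability",      lambda m: m.get("explanation_coverage", 0) >= 0.9),
    ("bias_fairness",       lambda m: m.get("parity_gap", 1.0) <= 0.1),
    ("validated_ops",       lambda m: m.get("ops_validated", False)),
    ("robustness",          lambda m: m.get("security_review", False)),
    ("accountability",      lambda m: m.get("owner") is not None),
    ("consumer_protection", lambda m: m.get("user_recourse", False)),
]

def review(metrics):
    """Return the steps a system fails; an empty list means it passes
    every step of the spine."""
    return [name for name, check in RESPONSIBLE_AI_SPINE if not check(metrics)]

system = {
    "explanation_coverage": 0.95,
    "parity_gap": 0.04,
    "ops_validated": True,
    "security_review": True,
    "owner": "ml-governance-team",
    "user_recourse": False,
}
print(review(system))  # ['consumer_protection']
```

Running the same gate for every system is what turns the story spine into a repeatable process rather than a one-off review.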
5 mistakes businesses make in the application of AI
Learning from mistakes is how Pixar came up with its winning formula. Similarly, the AI industry must learn from its mistakes. After many years in the industry, implementing AI systems across hundreds of companies, five critical mishaps stand out:
No strategic planning: AI Strategy is an afterthought and not tied to the core business strategy.
Business value unmapped: KPIs are not measuring business value.
Design as an afterthought: Design is not considered from the onset.
Trust is an afterthought: trust is left out of the design remit.
No human-centric approach: Human impact is not considered.
To encourage the uptake of AI-enabled systems the AI community must deliver certain pieces of the puzzle:
Education: in the Pixar example, language is an essential component. The equivalent in AI system design is education. The AI landscape is technically complex and filled with poor analogies. Business leaders and decision-makers must be able to access community experts who can decipher the industry's terminology. The industry must make efforts to make AI education accessible to all.
Organizational maturity assessments (OMA): an OMA is essential for understanding the maturity of a data-driven business process. A standardized assessment process for AI-enabled systems would help a business to improve its AI maturity over time.
Regulatory harmonization: a ‘regulatory tracker’ is needed to harmonize the AI industry. This would allow a business to understand the latest regulatory updates and how they apply to AI-enabled systems.
Audit: AI system assessments are needed to provide clarity and feedback on system design and implementation. A scheme for third-party audits that validates alignment to standards and regulations is a must-have.
Inclusion and diversity are watchwords of the twenty-first century. This is a mature stance taken by societies that have failed to ensure that technology is equivalent for everyone. Our expectations of diversity and inclusion must be reflected in our technologies for them to be accepted and advanced. By building AI-enabled systems that act responsibly and can be trusted, we are using technology to better society. This is not a bold unsubstantiated statement. Responsible and trustworthy AI can ensure that AI harms are mitigated, that better patient outcomes are for all, not just some, and that AI-enabled decisions are accurate. People need to trust technology, and the proof is in the pudding. By using a human-centric design approach to AI-enablement, systems will be empowered through trust and ultimately drive better business decisions.
It comes down to this: AI systems can have one of two impacts on society: AI can propagate existing structural issues such as bias and inequality, or AI can be used to mitigate these and other issues.
If you want to create AI systems that truly generate real, tangible value for your organization, I encourage you to follow this story arc for building responsible AI systems, developed by an independent non-profit that is the de facto framework and scheme for aligning to global standards and regulations. If you want to join the cause, become an individual member of the Responsible AI Institute today, and then see how Qantm AI can help you from there.