The Future is in Responsible Generative AI
1. The Future is in Responsible Generative AI
Saeed Aldhaheri, Ph.D.
Director, Center for Futures Studies, University of Dubai
President, Robotics and Automation Society
Ecosystems 2030, 2-4 May 2023, A Coruña, Spain
4. Potential of Generative AI
• What is Generative AI:
• Models (e.g., LLMs) that generate text, images, code, video, music, etc.
• Disruptive in nature
• Becoming powerful platforms
• Potential to be used in almost any industry
• Benefits:
• Augmenting human capabilities → improved efficiency and productivity
• Democratizing AI, creativity and imagination
• Accelerating R&D
• Easy to consume and customize
• A spark for creativity and creator economy
• Places premium on creativity
• Uses
• Generating art, writing articles and software, generating ideas, task automation, chatbots, disrupting search
• Consumer sentiment, marketing, finance, health care, customer service, NLP-based data analytics, education, law
Examples: Stable Diffusion, ERNIE
5. Potential of Generative AI
Macroeconomic effects:
“Broad adoption of AI has the potential for major macroeconomic
effects” – Goldman Sachs report
Boosting Global GDP:
Generative AI can boost annual global GDP by 7% ($7 trillion)
in the next 10 years
Productivity growth:
40% of all working hours across industries could be impacted by
LLMs such as ChatGPT. Accenture report, 2023
Integrating NLP tools into the workforce could add 1.5% annual
growth to US labor productivity over the next 10 years
6. Question?
How can we minimize Generative AI Risks
and address its ethical concerns?
9. UNESCO call to implement its Global Ethical Framework
“The world needs stronger ethical rules for artificial
intelligence: this is the challenge of our time.
UNESCO’s Recommendation on the ethics of AI sets
the appropriate normative framework. Our Member
States all endorsed this Recommendation in November
2021. It is high time to implement the strategies and
regulations at national level. We have to walk the talk
and ensure we deliver on the Recommendation’s
objectives.”
- Audrey Azoulay, UNESCO's Director-General
Policy areas
1- Ethical impact assessment
2- Ethical governance
3- Data policy
4- Development and international cooperation
10. Current AI Tech Industry Approach
• Generative AI is a wild west now
• AI ethics in the back seat
• “Researchers building AI outnumber those focused on safety by 30-to-1 ratio”
- Center for Humane Technology
• Moving fast while breaking things
• “It’s important *NOT* to ‘move fast and break things’ for tech as important as AI,”
- Demis Hassabis, DeepMind Founder
• Launching problematic AI products and labeling them as “experiments”
• Ethics teams at tech firms operate in an unsupportive atmosphere
• Public trust in generative AI is decreasing
• Industry self-regulation is not sufficient
11. With new capabilities comes new risks!
Ethical Risks of Generative AI
• Safety and generating harmful content
• Bias
• Fake news and disinformation - “confident failure”
• “Hallucination” – making things up
• Privacy and data protection - Leaks
• Intellectual property & copyright infringement
• Liability & responsibility
• Societal values
• Unemployment and workforce displacement
• Unwanted acceleration: AI tech race → decline in safety
• Emergent behavior
• The unpredictability!
“sometimes writes plausible-sounding but incorrect or
nonsensical answers”. OpenAI
12. How to build responsible generative AI?
• Ethics by design and responsible AI by design
• actionable AI ethics principles
• Translate principles into effective governance
• development, deployment, and use
• Respect creators’ choice and control
• Technical tools for Responsible AI
• Support AI ethics team and encourage responsible practices
• Develop new methods for risk assessment
• Data-related risks: safe and inclusive data-set
• Model-related risks: guardrails or NSFW classifiers to filter the model's
harmful/toxic output
• Testing and transparency
• Third-party auditing and red teams
• Human feedback mechanisms
• Compliance
• Establishing responsible culture
• Responsible AI must be CEO-led
• Development of mature responsible AI capabilities
• Addressing public discomfort around AI
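The guardrail idea on this slide can be sketched as a thin wrapper that screens both the prompt and the completion with a safety classifier before anything reaches the user. This is a minimal illustration, not any vendor's API: the keyword blocklist stands in for a real trained NSFW/toxicity classifier, and all names here are illustrative assumptions.

```python
# Minimal guardrail sketch: screen the prompt and the completion
# with a safety classifier before returning output to the user.
# The blocklist-based scorer is a stand-in for a trained classifier.

BLOCKLIST = {"how to build a weapon"}   # toy list of disallowed phrases
THRESHOLD = 0.5                         # block when score >= threshold
REFUSAL = "I can't help with that."

def toxicity_score(text: str) -> float:
    """Stand-in classifier: 1.0 if any blocklisted phrase appears."""
    lowered = text.lower()
    return 1.0 if any(p in lowered for p in BLOCKLIST) else 0.0

def guarded_generate(model, prompt: str) -> str:
    """Run `model` only if both prompt and output pass the guardrail."""
    if toxicity_score(prompt) >= THRESHOLD:
        return REFUSAL                  # refuse unsafe prompts outright
    output = model(prompt)
    if toxicity_score(output) >= THRESHOLD:
        return REFUSAL                  # suppress unsafe completions
    return output

# Example with a stubbed "model":
safe_model = lambda p: "Here is a short poem about spring."
print(guarded_generate(safe_model, "write a poem"))
print(guarded_generate(safe_model, "tell me how to build a weapon"))
```

In practice the same pattern is combined with human feedback mechanisms and red-teaming, since a fixed classifier will always have blind spots.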
13. How to build responsible generative AI?
“To be responsible by design, organizations
need to move from a reactive compliance
strategy to the proactive development of
mature Responsible AI capabilities through a
framework that includes principles and
governance; risk, policy and control;
technology and enablers and culture and
training.”
14. Regulations are necessary to enforce responsible AI
• Debate between regulating and not regulating
• No effective global effort to regulate AI
• Country specific efforts exist
• Big tech is lobbying the EU not to regulate general-purpose AI (GPAI)
• Governments need to build capability and capacity
• EU AI Act (AIA)
• Risk-based approach in relation to use cases
• UK AI regulation policy
• “light touch” approach
• Sector specific-approach
• 6 cross-sectoral AI governance principles
• US Chamber of Commerce calls for AI regulation
• 5 pillars for AI regulation
• Google recommendations for regulating AI
• China released rules for Generative AI
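The EU AI Act's risk-based approach mentioned above can be illustrated as a lookup from use case to risk tier to obligations. The four tier names follow the Act; the example use cases and the one-line obligation summaries are simplified assumptions for illustration, not legal advice.

```python
# Illustration of the EU AI Act's risk-based approach: map a use case
# to one of the Act's four risk tiers, then to a simplified summary
# of the obligations that tier carries.

RISK_TIERS = {
    "unacceptable": "prohibited outright",
    "high": "conformity assessment, registration, human oversight",
    "limited": "transparency obligations (disclose that AI is used)",
    "minimal": "no additional obligations",
}

USE_CASE_TIER = {
    "social scoring by public authorities": "unacceptable",
    "CV screening for hiring": "high",
    "customer-service chatbot": "limited",
    "spam filtering": "minimal",
}

def obligations(use_case: str) -> str:
    """Look up a use case's risk tier and obligations (default: minimal)."""
    tier = USE_CASE_TIER.get(use_case, "minimal")
    return f"{tier}: {RISK_TIERS[tier]}"

print(obligations("CV screening for hiring"))
# → high: conformity assessment, registration, human oversight
```

The point of the tiered design is that regulatory burden scales with the risk of the use case rather than with the underlying technology.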
15. If we use, to achieve our purposes, a mechanical
agency with whose operation we cannot
interfere effectively … we had better be quite
sure that the purpose put into the machine is the
purpose which we really desire.
- Norbert Wiener