The document discusses recent achievements in AI such as improvements in speech recognition and image captioning. It then addresses the widespread use of AI and potential benefits as well as concerns regarding issues like data bias, model reliability, misuse of AI systems, and adversarial AI. The document argues that addressing these technical issues and social implications will help maximize the benefits of AI.
Algorithmic Bias: Challenges and Opportunities for AI in Healthcare – Gregory Nelson
Gregory S. Nelson, VP, Analytics and Strategy – Vidant Health | Adjunct Faculty Duke University
The promise of AI is quickly becoming a reality for a number of industries, including healthcare. For example, we have seen early successes in augmenting clinical intelligence for diagnostic imaging and in the early detection of pneumonia and sepsis. But what happens when the algorithms are biased? In this presentation, we will outline a framework for AI governance and discuss ways in which we can address algorithmic bias in machine learning.
Objective 1: Illustrate the issues of bias in AI through examples specific to healthcare.
Objective 2: Summarize the growing body of work in the legal, regulatory, and ethical oversight of AI models and the implications for healthcare.
Objective 3: Outline steps that we can take to establish an AI governance strategy for our organizations.
Introduction to the ethics of machine learning – Daniel Wilson
A brief introduction to the domain that is variously described as the ethics of machine learning, data science ethics, AI ethics and the ethics of big data. (Delivered as a guest lecture for COMPSCI 361 at the University of Auckland on May 29, 2019)
Responsible AI in Industry (Tutorials at AAAI 2021, FAccT 2021, and WWW 2021) – Krishnaram Kenthapadi
[Video available at https://sites.google.com/view/ResponsibleAITutorial]
Artificial Intelligence is increasingly being used in decisions and processes that are critical for individuals, businesses, and society, especially in areas such as hiring, lending, criminal justice, healthcare, and education. Recent ethical challenges and undesirable outcomes associated with AI systems have highlighted the need for regulations, best practices, and practical tools to help data scientists and ML developers build AI systems that are secure, privacy-preserving, transparent, explainable, fair, and accountable – to avoid unintended and potentially harmful consequences and compliance challenges.
In this tutorial, we will present an overview of responsible AI, highlighting model explainability, fairness, and privacy in AI, key regulations/laws, and techniques/tools for providing understanding around AI/ML systems. Then, we will focus on the application of explainability, fairness assessment/unfairness mitigation, and privacy techniques in industry, wherein we present practical challenges/guidelines for using such techniques effectively and lessons learned from deploying models for several web-scale machine learning and data mining applications. We will present case studies across different companies, spanning many industries and application domains. Finally, based on our experiences in industry, we will identify open problems and research directions for the AI community.
This collection of slides is meant as a starting point and tutorial for those who want to understand AI ethics, and in particular the challenges around bias and fairness. Furthermore, I have also included studies on how we as humans perceive AI's influence in our private as well as working lives.
How do we protect the privacy of users when building large-scale AI-based systems? How do we develop machine-learned models and systems taking fairness, accountability, and transparency into account? With the ongoing explosive growth of AI/ML models and systems, these are some of the ethical, legal, and technical challenges encountered by researchers and practitioners alike. In this talk, we will first motivate the need for adopting a "fairness and privacy by design" approach when developing AI/ML models and systems for different consumer and enterprise applications. We will then focus on the application of fairness-aware machine learning and privacy-preserving data mining techniques in practice, by presenting case studies spanning different LinkedIn applications (such as fairness-aware talent search ranking, privacy-preserving analytics, and LinkedIn Salary privacy & security design), and conclude with the key takeaways and open challenges.
Technology for everyone - AI ethics and Bias – Marion Mulder
Slides from my talk at #ToonTechTalks on 27 September 2018.
We all see the great potential AI is bringing us. But is it really bringing it to everyone? How are we ensuring under-represented groups are included and vulnerable people are protected? What do we do when our technology is unintentionally biased and discriminates against certain groups? And what if the data and the AI are correct, but the side effect is that some groups are put at risk? These are all questions we need to think about when we are advancing technology for the benefit of humanity.
Sharing what I've learned from my work in diversity and digital, and from following great minds in this field such as Joanna Bryson, Virginia Dignum, Rumman Chowdhury, Juriaan van Diggelen, Valerie Frissen, Catelijne Muller, and many more.
The impact of AI on society keeps getting bigger and bigger – and it is not all good. We as data scientists have to really put in the work to avoid ending up in ML hell.
This presentation was given at the Dutch Data Science Week.
Data Con LA 2020
Description
More and more organizations are embracing AI technology, infusing it into their products and services to differentiate themselves from their competitors. AI is being utilized in some sensitive areas of human life. In this session, let's look at some of the principles governing the adoption of AI in a responsible manner. Why are companies accelerating their adoption of AI?
Organizations are increasingly accelerating their adoption of AI to differentiate their products and services in the market. We have seen the outcomes of this digital transformation in the areas of optimizing operations, engaging customers, empowering employees, and transforming products and services.
*List some of the sensitive use cases where AI is being applied
*Why is governing AI important, and what are those principles?
*How is Microsoft approaching it?
Speaker
Suresh Paulraj, Microsoft, Principal Cloud Solution Architect Data & AI
Nick Schmidt of BLDS, LLC presented to the Maryland AI meetup, June 4, 2019 (https://www.meetup.com/Maryland-AI). Nick discusses ideas of fairness and how they apply to machine learning. He explores recent academic work on identifying and mitigating bias, and how his work in lending and employment can be applied to other industries. Nick explains how to measure whether an algorithm is fair and also demonstrates the techniques that model builders can use to ameliorate bias when it is found.
Invited talk on fairness in AI systems at the 2nd Workshop on Interactive Natural Language Technology for Explainable AI co-located with the International Conference on Natural Language Generation, 18/12/2020.
Contemporary AI engenders hopes and fears – hopes of harnessing AI for productivity growth and innovation – fears of mass unemployment and conflict between humankind and an artificial super-intelligence. Before we let AI drive our hopes and fears, we need to understand what it is and what it is not. Then we need to understand how to implement AI in an ethical and responsible manner. Only then can we harness the power of AI to our benefit.
An introductory presentation on Explainable AI, setting out its main motivations and importance. We briefly describe the main techniques available as of March 2020 and share many references to allow the reader to continue their studies.
An Introduction to XAI! Towards Trusting Your ML Models! – Mansour Saffar
Machine learning (ML) is currently disrupting almost every industry and is being used as the core component in many systems. The decisions made by these systems may have a great impact on society and specific individuals and thus the decision-making process has to be clear and explainable so humans can trust it. Explainable AI (XAI) is a rather new field in ML in which researchers try to develop models that are able to explain the decision-making process behind ML models. In this talk, we'll learn about the fundamentals of XAI and discuss why we need to start to integrate XAI with our ML models!
Presented at the Edmonton Data Science Meetup on October 2nd, 2019. Learn more: https://youtu.be/gEkPXOsDt_w
As artificial intelligence (AI) continues to advance and become more integrated into our daily lives, it has become increasingly important to consider the ethical implications of this technology. AI has the potential to transform many industries and improve our lives in numerous ways, but it also raises important ethical questions.
In this presentation, the ethical concerns surrounding AI are explored and discussed, with a focus on the need for ethical guidelines to be developed for AI development and use. We will examine issues such as privacy, bias, transparency, accountability, and the impact on jobs and society as a whole.
Through this exploration, we will consider the various perspectives on these issues and weigh the benefits and drawbacks of different ethical approaches to AI. We will also examine some of the current efforts being made to address these concerns, including the development of ethical frameworks and best practices.
The most important goal of this presentation is to disseminate a deeper understanding of the ethical considerations surrounding AI and the need for ethical guidelines to ensure that this technology is developed and used in a way that benefits all of us while respecting our values and principles.
Talk on Algorithmic Bias given at York University (Canada) on March 11, 2019. This is a shorter version of an interactive workshop presented at University of Minnesota, Duluth in Feb 2019.
Lecture on ethical issues taught as part of Heriot-Watt's course on Conversational Agents (2021). Topics covered:
- General Research Ethics with Human Subjects
- Bias and fairness in Machine Learning
- Specific Issues for ConvAI
Designing AI for Humanity at dmi:Design Leadership Conference in Boston – Carol Smith
As design leaders we must enable our teams with the skills and knowledge to take on the new and exciting opportunities that building powerful AI systems brings. Dynamic systems require transparency regarding data provenance, bias, training methods, and more, to gain users' trust. Carol will cover these topics and challenge us as design leaders to represent our fellow humans by provoking conversations regarding critical ethical and safety needs.
Presented at dmi:Design Leadership Conference in Boston in October 2018.
Keynote presentation on policy approaches to socio-technical causes of algorithmic bias at the Bias in Information, Algorithms and Systems workshop at the iConference on 25 March 2018.
AI Ethics and Implications for Developing Societies – Ishaku Gayus Bwala
The presentation delves into several remarkable breakthroughs that artificial intelligence (AI) has achieved across diverse domains, including healthcare, medicine, gaming, and the creative arts. These breakthroughs underscore the profound impact that AI is making on these sectors, revolutionizing the way we approach medical diagnostics, enhancing interactive gaming experiences, and even inspiring new forms of artistic expression.
Alongside these accomplishments, the presentation also shines a spotlight on the potent ethical challenges that AI presents. As AI systems become increasingly integrated into our daily lives, questions about privacy, bias, accountability, and transparency become ever more pressing. The implications of AI's unethical use are far-reaching, potentially affecting not only individuals but society as a whole.
Towards the end of the presentation, a compelling visual representation is shared: a chart that maps out the progress of 70 countries in developing policies, strategies, and regulations for the ethical use of AI. This chart provides a global perspective on how different nations are addressing the ethical considerations surrounding AI. It highlights the varying degrees of preparedness and commitment among countries to ensure that AI technologies are harnessed for the greater good while safeguarding against misuse, with developing countries lagging far behind.
By exploring these AI breakthroughs and ethical dilemmas, and by examining the global landscape of AI governance, the presentation offers a comprehensive view of AI's transformative potential and the collective responsibility we share in guiding its ethical evolution.
A Glimpse Into the Future of Data Science - What's Next for AI, Big Data & Ma... – Pangea.ai
We are living in the era of "the fourth industrial revolution". How did we get here? Read this presentation to explore current application trends in Artificial Intelligence (AI), the Internet of Things (IoT), Big Data, and Machine Learning (ML) technology, and to discover the future implications of big data in our lives.
Read the original article here: https://www.pangea.ai/data-science-resources/future-of-data-science/
Work with a data science expert at Pangea: https://www.pangea.ai/
20240104 HICSS Panel on AI and Legal Ethical 20240103 v7.pptx – ISSIP
20240103 HICSS Panel
Ethical and legal implications raised by Generative AI and Augmented Reality in the workplace.
Souren Paul - https://www.linkedin.com/in/souren-paul-a3bbaa5/
Event: https://kmeducationhub.de/hawaii-international-conference-on-system-sciences-hicss/
Trusted, Transparent and Fair AI using Open Source – Animesh Singh
Fairness, robustness, and explainability in AI are some of the key cornerstones of trustworthy AI. Through its open source projects, IBM and IBM Research bring together the developer, data science and research community to accelerate the pace of innovation and instrument trust into AI.
Ethical Dimensions of Artificial Intelligence (AI) – Rinshad Choorappara
Explore the ethical landscape of Artificial Intelligence (AI) through our insightful PowerPoint presentation. Delve into crucial considerations that shape the responsible development and deployment of AI technologies. From privacy concerns and bias mitigation to transparency and accountability, this presentation covers the key ethical dimensions of AI. Gain a comprehensive understanding of the ethical challenges and solutions in the rapidly evolving world of artificial intelligence. Stay informed and empower your audience with the knowledge needed to navigate the ethical intricacies of AI responsibly.
Let us look at the good and bad effects of Artificial Intelligence and other emerging technologies!
The Ethics of Artificial Intelligence in Digital Ecosystems – washikmaryam
The ethics of AI go beyond just the technology itself. When we consider AI within the complex web of digital platforms and services (the digital ecosystem), new ethical concerns arise.
A big focus is on how AI decisions can be biased, reflecting the data it's trained on and potentially leading to discrimination. We also need to be mindful of privacy issues and how AI might be used to manipulate users.
To ensure ethical AI in digital ecosystems, we need to consider these potential pitfalls during development and use frameworks to make responsible choices. This includes reflecting on the decision-making process and how AI can be used for good.
Ethical AI: Establish an AI/ML Governance framework addressing Reproducibility, Explainability, Bias & Accountability for Enterprise AI use-cases.
Presentation on “Open Source Enterprise AI/ML Governance” at Linux Foundation’s Open Compliance Summit, Dec 2020 (https://events.linuxfoundation.org/open-compliance-summit/)
Full article: https://towardsdatascience.com/ethical-ai-its-implications-for-enterprise-ai-use-cases-and-governance-81602078f5db
AI in Action: Real World Use Cases by Anitaraj – AnitaRaj43
The presentation was given at "Web3 Fusion: Embracing AI and Beyond", which is more than a conference; it is a journey into the heart of digital transformation.
The conference provided a platform where the future of technology meets practical application. This three-day hybrid event, set in the heart of innovation, served as a gateway to the latest trends and transformative discussions in AI, Blockchain, IoT, AR/VR, and their collective impact on the information space.
How do we train AI to be Ethical and Unbiased?
1. HOW DO WE TRAIN AI TO BE ETHICAL AND UNBIASED?
MARK BORG
AI MALTA SUMMIT – 13 JULY 2018
2. RECENT ACHIEVEMENTS IN AI
[Chart: Improvements in word error rate over time on the Switchboard conversational speech recognition benchmark. Credit: Awni Hannun]
[Chart: Automated Speech Recognition results. Credit: Business Insider/Yu Han]
3. RECENT ACHIEVEMENTS IN AI
Image Captioning
[Image: generated caption "#1 A woman holding a camera in a crowd." Credit: H. Fang et al. (2015), "From Captions to Visual Concepts and Back"]
4. RECENT ACHIEVEMENTS IN AI
AlphaGo Zero (Credit: DeepMind)
0 days – AlphaGo Zero has no prior knowledge of the game and only the basic rules as an input.
3 days – AlphaGo Zero surpasses the abilities of AlphaGo Lee, the version that beat world champion Lee Sedol in 4 out of 5 games in 2016.
21 days – AlphaGo Zero reaches the level of AlphaGo Master, the version that defeated 60 top professionals online and world champion Ke Jie in 3 out of 3 games in 2017.
40 days – AlphaGo Zero surpasses all other versions of AlphaGo and, arguably, becomes the best Go player in the world. It does this entirely from self-play, with no human intervention and using no historical data.
5. WIDESPREAD USE OF AI
• AI has now wide and deep societal influences, permeating every sphere of our lives
• No longer single applications operating in standalone mode
• ML Pipelines, more complex AI systems, operating at Internet Scale
• AI as a Service (AIaaS), Machine Learning as a Service (MLaaS)
• Running “under the hood”, as well as in “human-facing technology”
• High-stake applications, sometimes involving life-and-death decisions
➢ AI-enabled Future
➢ Benefits and Implications
6. BENEFITS AND CONCERNS OF AI
• What if an AI algorithm could predict death better than doctors?
• The “dying algorithm” (NY Times)
• Stanford's AI Predicts Death for Better End-of-Life Care (IEEE Spectrum)
• What are the benefits and implications of such a system?
7. CONCERNS
• A predictive policing algorithm unfairly targeted certain neighbourhoods – Chicago, 2013/2014
  • Idea: to stop crime before it occurs
  • Unintended consequences due to systematic bias in the data used by these systems
  • Saunders et al. (2016), "Predictions put into practice: a quasi-experimental evaluation of Chicago's predictive policing project"
• COMPAS assesses a defendant's risk of re-offending
  • used for bail determination by judges
  • Issues of reliability and racial bias
  • Dressel & Farid (2018), "The Accuracy, Fairness, and Limits of Predicting Recidivism"
(Credit: ProPublica)
8. CONCERNS
• YouTube recommender system
  • The algorithm appears to have concluded that people are drawn to content that is more extreme than what they started with — or to incendiary content in general
  • Accusations that YouTube is acting as a "radicalisation agent"
• Recommendations drive 70% of YouTube's viewing time (~200 million recommendations per day); YouTube topped a cumulative 1 billion hours of video per day in 2017
(Credit: Covington)
10. CONCERNS
• Ethical and moral issues
• Self-driving cars
The Trolley Problem (Philippa Foot, 1967)
(Credit: Waymo)
11. LONG-TERM CONCERNS
• AGI, superintelligence, existential threat, the need for Benevolent AI
• The Sorcerer's Apprentice problem (Credit: Disney)
• Eliezer Yudkowsky: the Paperclip Maximiser scenario
"If a machine can think, it might think more intelligently than we do, and then where should we be? … This new danger … is certainly something which can give us anxiety." – Alan Turing, 1951
12. IMPLICATIONS & CONSEQUENCES OF AI
• To maximise the benefits of AI (saving lives, raising the quality of life, …), we also need to address its issues and consequences
  • the "rough edges of AI" – Eric Horvitz (Microsoft Research)
• Robustness, Ethics, Benevolent AI
• Short-term implications (need solving now)
• Longer-term implications (prepare the groundwork…)
• Spans multiple fields: engineering, cognitive science, philosophy, etc.
13. Conferences and standards addressing these issues:
• AIES
• ICAILEP – Conference on Artificial Intelligence: Law, Ethics, and Policy
• IEEE P7008 – Standard for Ethically Driven Nudging for Robotic, Intelligent & Autonomous Systems
• IEEE P7009 – Standard for Fail-Safe Design of Autonomous & Semi-Autonomous Systems
• IEEE P7010 – Wellbeing Metrics Standard for Ethical Artificial Intelligence & Autonomous Systems
14. IMPLICATIONS & CONSEQUENCES OF AI
[Diagram: the landscape of related areas – AI Safety, Robust AI, Benevolent AI, Beneficial AI, Value Alignment, AI Ethics, Roboethics, Machine Ethics, Adversarial AI, AI Transparency – arranged along an axis of increasing complexity, from ANI (Artificial Narrow Intelligence) through AGI (Artificial General Intelligence) to ASI (Artificial Super Intelligence)]
16. AI SAFETY
• Data Bias (Algorithmic Bias)
• Fairness
• AI Robustness & Reliability
• AI Transparency
17. DATA BIAS
• Algorithmic bias is NOT model bias (the bias-variance trade-off, a generalisation problem)
• Algorithmic bias (or data bias) will always be present; we need to minimise its impact
• E.g. predictive policing algorithms:
  • Police-recorded datasets suffer from systematic bias:
    • not a complete census
    • not a representative random sample
  • Crime databases do not measure crime; they measure some complex interaction between criminality, policing strategy, and community-police relationships
18. DATA BIAS
• Data bias is prevalent throughout the whole field of AI
• Unintentional bias vs. intentional bias
• Addressing data bias has particular significance in ML pipelines, complex AI systems, AIaaS, etc.
• E.g. Howard (2017), "Addressing Bias in Machine Learning Algorithms: A Pilot Study on Emotion Recognition for Intelligent Systems"
  • the system did not perform well for children
  • the original training dataset had few such cases
19. DATA BIAS
• Unintentional self-created bias ("poisoning your own data")
  • E.g. Google Flu Trends began suggesting flu-related queries to people who did not have the flu, and thus began corrupting its own dataset by seeding it with excess flu-related queries, creating a feedback loop
• Despite good intentions, biased data can lead to a far worse result
  • E.g. beauty.ai, a startup that organised the world's first AI-driven beauty contest in 2016
  • The concept was to remove the social biases of human judges
  • Problem: the image samples used to train the algorithms weren't balanced in terms of race and ethnicity
  • the so-called 'white guy problem'
20. DATA BIAS
• Naive application of algorithms to everyday problems could amplify structural discrimination and reproduce biases present in the data
• Detecting (automatically?) such bias and addressing it is quite difficult, since AI is data-driven
(Credit: Buolamwini & Gebru)
• Some very recent work on two fronts:
  • More balanced datasets, e.g. the Pilot Parliaments Benchmark, a new facial image dataset released in February 2018
    • Buolamwini & Gebru (2018), "Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification"
  • Measuring bias and fairness (a code sketch follows below):
    • Shaikh et al. (2017), "An End-to-End Machine Learning Pipeline that Ensures Fairness Policies"
    • Srivastava & Rossi (2018), "Towards Composable Bias Rating of AI Services"
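To make the "measuring bias and fairness" thread concrete, here is a minimal sketch (my own toy example, not taken from the cited papers) of two common group-fairness checks – demographic parity difference and equal opportunity difference – in plain NumPy; all names and data are illustrative:

```python
import numpy as np

def demographic_parity_diff(y_pred, group):
    """Difference in positive-prediction rates between groups 0 and 1."""
    return y_pred[group == 0].mean() - y_pred[group == 1].mean()

def equal_opportunity_diff(y_true, y_pred, group):
    """Difference in true-positive rates between groups 0 and 1."""
    tpr = lambda g: y_pred[(group == g) & (y_true == 1)].mean()
    return tpr(0) - tpr(1)

# Toy data: a model whose positive rate depends on the protected attribute.
rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=1000)                        # protected attribute
y_true = rng.integers(0, 2, size=1000)                       # ground truth
y_pred = (rng.random(1000) < 0.4 + 0.2 * group).astype(int)  # biased model

print("demographic parity diff:", demographic_parity_diff(y_pred, group))
print("equal opportunity diff:", equal_opportunity_diff(y_true, y_pred, group))
```

Values near zero on such metrics are necessary but not sufficient for fairness; as the next slide notes, proxies such as neighbourhood can reintroduce bias that group-level metrics may miss.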
21. DATA BIAS
• Creating more balanced (heterogeneous) datasets
  • One solution would be to create shared and regulated databases that are in the possession of no single entity, thus preventing any party from unilaterally manipulating the data in their own favour
  • Public datasets curated to be bias-free
• One concern is that even when machine-learning systems are programmed to be blind to race or gender, for example, they may use other signals in the data, such as the location of a person's home, as a proxy for them
  • E.g. in the COMPAS bail system, geographic neighbourhood is highly correlated with ethnicity, so the system can still suffer from racial discrimination
22. AI ROBUSTNESS & RELIABILITY
• Making AI systems more robust, so that they work as intended, without failing or getting misused
• Reliable prediction of performance
• Avoiding overconfidence in AI systems
  • How much does the system know about what it does not know?
  • Models can make strong predictions that are simply inaccurate
  • Classification label accuracy + ROC curve
  • Learning to predict confidence (see the sketch after this slide)
• Current statistical models "tend to assume that the data that they'll see in the future will look a lot like the data they've seen in the past"
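The "learning to predict confidence" bullet, in its simplest form, is a classifier that abstains when unsure. A minimal sketch (my own illustration, not from the talk) that thresholds softmax confidence; note that softmax scores are often miscalibrated, which is precisely the overconfidence problem described above:

```python
import numpy as np

def softmax(logits):
    z = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def predict_with_rejection(logits, threshold=0.9):
    """Return the predicted class, or -1 (abstain / defer to a human)
    when the maximum softmax probability falls below the threshold."""
    probs = softmax(logits)
    return np.where(probs.max(axis=1) >= threshold,
                    probs.argmax(axis=1), -1)

logits = np.array([[4.0, 0.1, 0.2],    # confident -> class 0
                   [0.9, 1.0, 1.1]])   # uncertain -> abstain
print(predict_with_rejection(logits))  # [ 0 -1]
```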
23. AI ROBUSTNESS & RELIABILITY
• Blind spots of algorithms (Eric Horvitz, Microsoft Research)
• The "unknown unknowns" (Tom Dietterich, Oregon State University)
  • AI algorithms for learning and acting safely in the presence of unknown unknowns
• Learning about the blind spots of algorithms:
  • Lakkaraju (2016), "Identifying Unknown Unknowns in the Open World: Representations and Policies for Guided Exploration"
  • Ramakrishnan (2018), "Discovering Blind Spots in Reinforcement Learning"
• Human supervision
  • human correction to prevent AI failure
24. AI ROBUSTNESS & RELIABILITY
• Watch out for anomalies
  • Robust anomaly detection (see the sketch below)
  • The BugID system by Tom Dietterich: a system that learns when there is another, unknown class out there
    • automated counting of freshwater macro-invertebrates
    • trained on 29 insect classes, with detection of novel classes
• Monitoring the performance of the system, especially for self-learning systems or local-based learning
  • E.g. Microsoft's chatbot Tay
  • research into adding a "reflection layer" to systems (introspection?)
• Fail-safe designs
  • e.g. the autopilot of self-driving cars disengaging suddenly
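One concrete form of "robust anomaly detection" as a safety net: flag inputs that look nothing like the training data before trusting the classifier on them. A sketch using scikit-learn's IsolationForest (an illustrative stand-in for systems like BugID, not their actual implementation):

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
X_train = rng.normal(0.0, 1.0, size=(500, 4))    # features of "known" classes

detector = IsolationForest(contamination=0.01, random_state=0).fit(X_train)

X_new = np.vstack([rng.normal(0.0, 1.0, size=(3, 4)),    # familiar inputs
                   rng.normal(8.0, 1.0, size=(2, 4))])   # novel inputs

# predict() returns +1 for inliers and -1 for outliers / novel inputs
for flag in detector.predict(X_new):
    print("route to classifier" if flag == 1 else "flag for human review")
```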
25. MISUSE OF AI
• Privacy challenges
• Exclusion – denying services
• Persuasion, and manipulation of attention / behaviour / beliefs
• Harms
• Hacking of AI systems
• Adversarial AI
26. MISUSE OF AI
• Tay, Microsoft's chatbot ("The more you talk the smarter Tay gets!")
  • March 2016, suspended after 16 hours
  • Tay's conversation extended to racist, inflammatory and political statements
  • A main problem was Tay's "repeat after me" feature
  • Intentional misuse of AI (a coordinated attack)
  • Neff and Nagy (2016), "Talking to Bots: Symbiotic Agency and the Case of Tay"
(Credit: Microsoft)
27. MISUSE OF AI
• Harnessing AI to increase attention & engagement for a particular application or service
  • large-scale personalised targeting
• Persuasion, and manipulation of attention / behaviour / beliefs
  • auto-generated Twitter feeds that persuade a user to click on links
  • data-driven behaviour change
• Intentional / unintentional
28. YOUTUBE RECOMMENDER SYSTEM
• The recommender system's goal is to maximise attention and engagement via personalised targeting
• Eric Horvitz (Microsoft Research) calls this "Adversarial Attacks on Attention" (Credit: Eric Horvitz)
• Recommendations drive 70% of YouTube's viewing time (~200 million recommendations per day); YouTube topped a cumulative 1 billion hours of video per day in 2017
[Diagram: recommendation system architecture demonstrating the "funnel", where candidate videos are retrieved and ranked before presenting only a few to the user. Covington et al. (2016), "Deep Neural Networks for YouTube Recommendations"; a toy sketch of this two-stage funnel follows below]
29. YOUTUBE RECOMMENDER SYSTEM
• Its algorithm seems to have concluded that people are drawn to content that is more extreme than what they started with — or to incendiary content in general
  • a bias toward extreme/divisive/inflammatory/fringe/sensational content
• WSJ investigation (Feb 2018): the system amplifies human bias and fake news, and isolates users in "filter bubbles"
• AlgoTransparency.org
• Zeynep Tufekci (sociologist, Univ. of North Carolina) calls YouTube the "Great Radicaliser":
  • AI exploiting a natural human desire to "look behind the curtain", to dig deeper into something that engages us. As we click and click, we are carried along by the exciting sensation of uncovering more secrets and deeper truths. YouTube leads viewers down a rabbit hole of extremism, while Google racks up the ad sales.
30. YOUTUBE RECOMMENDER SYSTEM
• But is the algorithm really to blame?
  • The main issue is one of scale
  • Also simplified human behaviour modelling: watching more nuanced content, or videos that diverge from the established viewing pattern, can be rooted out as noise, thus contributing to a simplification and generalisation of interests towards the more extreme ends of a spectrum, instead of complex content catering to views which are harder to define
• Possible solutions?
  • YouTube has been applying changes to its algorithm
  • Improved human behaviour models
  • Changes to the exploration-exploitation strategy adopted by the recommender system
  • Value policies, encoding a notion of "time well spent"
31. ADVERSARIAL AI
• Goodfellow et al. (2015), "Explaining and Harnessing Adversarial Examples" (the FGSM sketch below shows the core idea)
• Szegedy (2013), "Traversing the manifold to find blind spots in the input space"
• DNNs can be easily fooled by adversaries
• No need for hand-crafting the adversarial attack: one can exploit AI to perform the attack – one AI deceiving another AI
[Figure: "panda" (57.7% confidence) + adversarial noise (exaggerated) → "gibbon" (99.3% confidence)]
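The panda-to-gibbon example was produced with the fast gradient sign method (FGSM) of Goodfellow et al.: perturb the input by epsilon in the direction of the sign of the loss gradient. A minimal PyTorch sketch (`model`, `image` and `label` are placeholders for a pretrained classifier and a correctly preprocessed input, which are not provided here):

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.007):
    """FGSM: x_adv = x + epsilon * sign(grad_x loss(model(x), y))."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    x_adv = image + epsilon * image.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()   # keep pixels in the valid range

# Usage sketch: `model` in eval mode, `image` a (1, 3, H, W) tensor in [0, 1],
# `label` the true class index.
# adv = fgsm_attack(model, image, label)
# print(model(image).argmax(1), model(adv).argmax(1))  # may now disagree
```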
32. ADVERSARIAL AI
• Adversarial systems subtly alter normal inputs such that humans doing the same task can easily recognise what the intended input is, but the AI is misled into giving a predictable and very different false output
• Performed by stealth (humans won't spot the difference)
• Potential attacks: adversarial examples can be printed out on standard paper, then photographed with a standard smartphone, and they will still fool AI systems
  • Kurakin et al. (2017), "Adversarial examples in the physical world"
(Credit: Biggio & Roli)
33. ADVERSARIAL AI
• The famous 3D-printed turtle that fooled Google's AI
  • Athalye et al. (2017), "Synthesizing Robust Adversarial Examples"
• Adversarial attacks without perturbing the whole image
  • Sharif et al. (2016), "Accessorize to a Crime: Real and Stealthy Attacks on State-of-the-Art Face Recognition" (Credit: Sharif et al., 2016)
  • Impersonation attacks
  • Invisibility attacks
34. ADVERSARIAL AI
• Audio adversarial attacks
  • Carlini and Wagner (2018), "Audio Adversarial Examples: Targeted Attacks on Speech-to-Text"
  • Given any speech audio, one can produce another that is 99.9% similar to the original but contains any text one wants
  • Fools DeepSpeech with a 100% success rate
(Credit: IBM)
35. ADVERSARIAL AI
• Not limited to deep neural networks
• Papernot et al. (2016), "Transferability in machine learning: from phenomena to black-box attacks using adversarial samples"
  • DNNs, logistic regression, support vector machines, decision trees, nearest-neighbour classifiers, ensembles – all are vulnerable to adversarial AI!
  • Any machine learning classifier can be tricked into giving incorrect predictions, and with a little bit of work, one can get them to give pretty much any result one wants
37. ADVERSARIAL AI
• White-box adversarial attack: the attacker has access to the defended AI model's scores and gradients
[Diagram: a score-based attack uses the defended model's gradient to turn "panda" (57.7%) into "gibbon" (100%)]
38. ADVERSARIAL AI
• Black-box adversarial attack: the attacker has no access to the defended AI model's scores and gradients (a toy transfer attack is sketched below)
[Diagram: the attacker either trains a substitute AI model and crafts adversarial examples against it (transfer-based attack), or probes the defended model's output decisions directly (decision-based attack), again turning "panda" into "gibbon"]
39. DEFENDING AGAINST ADVERSARIAL AI
Some countermeasures and defensive techniques:
• Smoothing and hiding the gradients
• Randomisation techniques: image compression, image blurring, random image resizing (sketched below), employing dropout in neural networks
• Defensive distillation
• Use of ensembles
• Evaluate the model's adversarial resilience (metrics are available)
• Pre-emptive hardening of AI models; enhance robustness to tampering
Libraries:
• IBM Adversarial Robustness Toolbox (ART): https://github.com/IBM/adversarial-robustness-toolbox
• Cleverhans library: https://github.com/openai/cleverhans
• DeepFool: https://github.com/LTS4/DeepFool
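As one concrete instance of the "random image resizing" defence listed above, here is a sketch in the spirit of input-randomisation defences (my own illustration, not the API of any of the toolboxes linked above): each input is resized to a random size and padded back at a random offset, so the attacker cannot anticipate the exact gradients:

```python
import random
import torch.nn.functional as F

def random_resize_pad(x, out_size=224):
    """Randomisation defence: resize to a random smaller size, then pad
    back to out_size at a random offset. Re-sampled on every call, so the
    gradients seen by an attacker differ from those used at inference."""
    new = random.randint(int(0.8 * out_size), out_size)
    x = F.interpolate(x, size=(new, new), mode="bilinear",
                      align_corners=False)
    left = random.randint(0, out_size - new)
    top = random.randint(0, out_size - new)
    return F.pad(x, (left, out_size - new - left,
                     top, out_size - new - top))

# Usage sketch, with a placeholder classifier `model`:
# logits = model(random_resize_pad(image))   # image: (1, 3, 224, 224)
```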
40. ADVERSARIAL AI – MODEL INVERSION ATTACKS
• Fredrikson et al. (2015), "Model Inversion Attacks that Exploit Confidence Information and Basic Countermeasures"
• Violating the privacy of subjects in the training set (a minimal version of the attack loop is sketched below)
[Diagram: an adversarial AI repeatedly queries a defended face-recognition model and uses its confidence scores (e.g. 2.3% → 70%) to reconstruct a recognisable image of "Tom", a subject from the training set]
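At its core, the attack is gradient ascent on the input: starting from a blank image, the attacker repeatedly nudges pixels to increase the model's confidence for the target person until a recognisable face emerges. A hedged PyTorch sketch of that loop (a simplification of Fredrikson et al.'s procedure; `model` and the class index are placeholders):

```python
import torch

def invert_class(model, target_class, shape=(1, 1, 64, 64),
                 steps=500, lr=0.1):
    """Model-inversion sketch: gradient-ascend an input image so that the
    model's confidence for target_class grows, recovering features of the
    training subjects behind that class."""
    x = torch.zeros(shape, requires_grad=True)
    opt = torch.optim.SGD([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        confidence = torch.softmax(model(x), dim=1)[0, target_class]
        (-confidence).backward()          # minimise the negative confidence
        opt.step()
        with torch.no_grad():
            x.clamp_(0.0, 1.0)            # stay a valid image
    return x.detach()

# Usage sketch: `model` is a face classifier; 7 is "Tom"'s class index.
# reconstruction = invert_class(model, target_class=7)
```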
41. GENERATIVE ADVERSARIAL NETWORKS (GANS)
• Leveraging adversarial AI to make a generative model, consisting of two neural networks competing with each other (see the training-loop sketch below)
• The discriminator tries to distinguish genuine data from forgeries created by the generator
• The generator turns random noise into imitations of the data, in an attempt to fool the discriminator
[Diagram: random noise → Generator → fake samples; real and fake samples → Discriminator → "real"/"fake" decision]
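The two-network game translates almost line-for-line into code. A minimal GAN training loop on toy 1-D data (my own illustrative sketch in PyTorch):

```python
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))  # generator
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))  # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0        # genuine data ~ N(3, 0.5)
    noise = torch.randn(64, 8)

    # Discriminator: label real samples 1 and generator forgeries 0.
    fake = G(noise).detach()
    opt_d.zero_grad()
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    loss_d.backward()
    opt_d.step()

    # Generator: try to make the discriminator call its forgeries real.
    opt_g.zero_grad()
    loss_g = bce(D(G(noise)), torch.ones(64, 1))
    loss_g.backward()
    opt_g.step()

print(G(torch.randn(1000, 8)).mean().item())     # should approach 3.0
```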
42. AI ETHICS & VALUE ALIGNMENT
▪ Ethics – comprehending "right" from "wrong", and behaving in the right way
▪ Value Alignment – ensuring that the goals, behaviours, values and ethics of autonomous AI systems align with those of humans
• Codification of ethics
  • Values, utility functions
• Teaching AI to be ethical
  • Reinforcement Learning
  • Inverse Reinforcement Learning and beyond
43. CODIFICATION OF ETHICS
• Rule-based ethics (deontological ethics)
  • Isaac Asimov's "Three Laws of Robotics" (1942), and similar sets of rules
• Challenges:
  • Too rigid
  • Asimov's literature addresses many of these issues: conflicts between the three laws, conflicts within a law itself, conflicting orders, etc.
  • How to codify the rules? How to program the notion of "harm"?
  • Often human ethics and values are implicit, and the process of elicitation is very challenging
44. CODIFICATION OF ETHICS
• Pre-programming ethical rules:
• Impossible to program for every scenario
• Fail to address uncertainty and randomness
• Fail to address ambiguous cases, ethical and moral dilemmas
• Rules on their own not enough
• Must be accompanied by very strong accountability mechanisms
• Need moral conflict resolution mechanism
• Values and ethics are dependent on the socio-cultural context
• Difficult to standardise
• Need to account for changes in the values of society, shifts in beliefs, attitudes, etc.
45. CODIFICATION OF ETHICS
• Rule-based ethics example:
• Specifically & explicitly programme ethical values into self-driving cars to prioritise the protection of human life above all else
• In the event of an unavoidable accident, the car should be “prohibited to offset victims against one another”
• A car must not choose whether to kill a person based on individual features, when a fatal crash is inescapable
[Figure: the Trolley Problem]
Credit: BMVI (www.bmvi.de)
46. VALUES, UTILITY FUNCTIONS
• Ethics as Utility Functions
• Any system or person who acts or gives advice is using some value system of what is important and what is not
• Utility-based Agent
• Agent’s Actions
• Agent’s Beliefs
• Agent’s Preferences
• The agent chooses actions based on their outcomes
• Outcomes are what the agent has preference on
• Preferences → Utility → Utility Function
• A policy specifies what an agent should do under all contingencies
• An agent wants to find an optimal policy – one that maximises its expected utility (see the sketch below)
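As a tiny illustration of these ingredients, the sketch below wires together beliefs (a probability distribution over the outcomes of each action), preferences (a utility function over outcomes) and action selection by maximum expected utility; `actions`, `outcomes` and `utility` are hypothetical names.

def expected_utility(action, outcomes, utility):
    # Beliefs: outcomes(action) yields (outcome, probability) pairs.
    # Preferences: utility(outcome) scores how desirable an outcome is.
    return sum(p * utility(o) for o, p in outcomes(action))

def choose_action(actions, outcomes, utility):
    # A utility-based agent picks the action with maximal expected utility.
    return max(actions, key=lambda a: expected_utility(a, outcomes, utility))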
47. TEACHING AI TO BE ETHICAL
• Teaching AI ethics, social rules and norms
• Adopt a “blank slate” approach
• Similar to how a human child learns ethics from those around them
• Basic values are learnt, and the AI will, in time, be able to apply those principles in unforeseen scenarios
• What machine learning method to use?
Credit: GoodAI
48. TEACHING AI TO BE ETHICAL
• Reinforcement Learning
• Has shown promise in learning policies that can solve complex problems
• An agent explores its environment, performing action after action and receiving rewards and punishments according to the reward function (i.e. utility function)
• As it repeats this, the agent gradually learns to perform the right actions in the right states so as to maximise its reward
• Cumulative reward (return) = total sum of the actions’ rewards over time, where future rewards are discounted (treated as less valuable than present rewards)
• When learning ethics, the reward function will reward/punish the agent depending on the choice of action performed, whether “right” or “wrong” – see the sketch below
[Diagram: an environment model and reward function(s) feed into reinforcement learning, which produces reward-maximising behaviour]
Kose (2017), “Ethical Artificial Intelligence – An Open Question”
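A minimal tabular Q-learning sketch of this loop; the environment interface (reset, actions, step) is a hypothetical one whose reward function scores each action as ethically “right” (positive reward) or “wrong” (negative reward), and gamma < 1 implements the discounting of future rewards.

import random
from collections import defaultdict

def q_learning(env, episodes=1000, alpha=0.1, gamma=0.9, epsilon=0.1):
    Q = defaultdict(float)
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            acts = env.actions(state)
            if random.random() < epsilon:
                action = random.choice(acts)                     # explore
            else:
                action = max(acts, key=lambda a: Q[(state, a)])  # exploit
            next_state, reward, done = env.step(state, action)
            best_next = 0.0 if done else max(Q[(next_state, a)]
                                             for a in env.actions(next_state))
            # Move Q towards the discounted sum of future rewards.
            Q[(state, action)] += alpha * (reward + gamma * best_next
                                           - Q[(state, action)])
            state = next_state
    return Q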
49. TEACHING AI TO BE ETHICAL
• Reinforcement Learning (RL) challenges:
• Difficulty in setting up ethical scenarios in the environment model of RL
• It may take a very long time until the agent manages to fully cover all ethical scenarios, ambiguous cases, etc.
• Potential solution:
• Using stories as a way of short-circuiting the reinforcement learning process
• Employ more complex stories as time goes by
• Riedl et al. (2016), “Using Stories to Teach Human Values to Artificial Agents”
[Diagram: as before – environment model and reward function(s) feed into reinforcement learning, producing reward-maximising behaviour]
Kose (2017), “Ethical Artificial Intelligence – An Open Question”
50. TEACHING AI TO BE ETHICAL
• Another solution:
• Curriculum-based approach to improve the learning process
• The learning process in humans and animals is enhanced when scenarios are not randomly presented, but organised in a meaningful order – gradual exposure to an increasing number of concepts, and to more complex ones
• For teaching ethics, simpler scenarios are presented before more complex and ambiguous cases – see the sketch below
• GoodAI’s “School for AI” project is employing a curriculum-based approach to enhance the teaching of ethics via reinforcement learning
• www.goodai.com/school-for-ai
• Bengio et al. (2009), “Curriculum Learning”
• Weinshall et al. (2018), “Curriculum Learning by Transfer Learning: Theory and Experiments with Deep Networks”
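A curriculum can be imposed on the RL setup above simply by controlling the order in which scenarios are presented; the sketch assumes a hypothetical agent.train_on method and a difficulty scoring function.

def curriculum_train(agent, scenarios, difficulty):
    # Present simple, unambiguous scenarios first and ethically
    # ambiguous dilemmas last, rather than sampling at random.
    for scenario in sorted(scenarios, key=difficulty):
        agent.train_on(scenario)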
51. TEACHING AI TO BE ETHICAL
• Crowd-Sourcing Ethics and Morality
• Crowdsourced stories simplify the labour-intensive process of authoring stories by hand
• Can capture consensus for ambiguous and moral dilemmas (“wisdom of the crowds”)
• Example:
• An AI agent is given several hundred stories about stealing versus not stealing, explores different actions in a reinforcement learning setting, and learns the consequences and optimal policy based on the rewards/punishments given. (Mark Riedl, Georgia Tech)
52. TEACHING AI TO BE ETHICAL
• MIT’s “Moral Machine”:
• Crowdsourcing to help self-driving cars make better moral decisions in cases of moral dilemmas (variations of the Trolley Problem)
• http://moralmachine.mit.edu
53. TEACHING AI TO BE ETHICAL
• Reinforcement Learning (RL) requires the manual specification of the reward function
• “Reward engineering” is hard (especially for ethics)
• May be susceptible to “reward hacking” by the AI agent
In RL, the reward function is specified by the user, and then the agent does the acting. What if the agent could instead watch someone else do the acting, and try to come up with the reward function by itself?
[Diagram: as before, with the reward function(s) explicitly marked “provided by the user”]
54. TEACHING AI TO BE ETHICAL
• Inverse Reinforcement Learning (IRL)
• IRL is able to learn the underlying reward function (what is ethical?) from expert demonstrations (humans solving ethical problems)
• IRL is also called “imitation-based learning”
• Learn from watching good behaviour – see the sketch below
[Diagram: in RL, the reward function(s) are provided by the user and reinforcement learning produces reward-maximising behaviour; in IRL, the user instead provides observed behaviour, the reward function is learnt from it, and reinforcement learning then produces reward-maximising behaviour]
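A toy sketch of one IRL update under the common assumption of a linear reward, reward(s) = w · features(s): nudge the weights so that the expert’s demonstrated behaviour scores higher than the current policy’s. In a full method one would re-solve the RL problem for the updated reward and resample the policy trajectories between updates; all names are illustrative.

import numpy as np

def irl_update(w, expert_trajs, policy_trajs, features, lr=0.05):
    # Average feature counts of a set of trajectories (lists of states).
    def mean_features(trajs):
        return np.mean([np.sum([features(s) for s in t], axis=0)
                        for t in trajs], axis=0)
    # Shift the reward weights towards the expert's feature expectations.
    return w + lr * (mean_features(expert_trajs) - mean_features(policy_trajs))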
55. TEACHING AI TO BE ETHICAL
• Inverse Reinforcement Learning (IRL)
• Very promising results for AI ethics (value alignment)
• No need to explicitly model rules or the reward function
• Recent works advocating IRL:
• Russell et al. (2016), “Research Priorities for Robust and Beneficial Artificial Intelligence”
• Abel (2016), “Reinforcement Learning as a Framework for Ethical Decision Making”
• Challenges of IRL:
• Interpretability of the auto-learnt reward function
• Human bias can creep into the observed behaviour
• Difficulty of making the learnt ethics domain-independent
• Arnold (2017), “Value Alignment or Misalignment – What Will Keep Systems Accountable?”
56. BEYOND IRL…
• Cooperative IRL
• What if we reward both the “good behaviour” of the AI while it learns ethics, and the “good teaching” of the human?
• Cooperation between AI and humans to accomplish a shared goal – value alignment
• Generative Adversarial Networks (GANs)
• Hadfield-Menell et al. (2016), “Cooperative Inverse Reinforcement Learning”
57. BEYOND IRL…
• Harnessing Counterfactuals
• Counterfactuals are the “imagination” rung on the ladder of causation (Pearl)
• As perfect knowledge of the world is unavailable, counterfactuals allow one to revise one’s belief system rather than relying solely on past (data-driven) experience
• It is also through counterfactuals that one ultimately enters into social appraisals of blame and praise
• Counterfactual reasoning might prove to be one of the key technologies needed both for the advancement of AI itself on the trajectory towards AGI, and for aligning the values of machines as closely as possible with our own, to achieve benevolent AI
59. BENEVOLENT AI
[Diagram: value alignment as the overlap between “our values” and “AI values” – the mutually beneficial values]
“Everything we love about civilisation is a product of intelligence, so amplifying our human intelligence with artificial intelligence has the potential of helping civilisation flourish like never before – as long as we manage to keep the technology beneficial.”
– Max Tegmark, Cosmologist & President of the Future of Life Institute