The document summarizes a presentation about bias in AI systems. It discusses understanding bias by examining how human biases enter AI systems through data and algorithms. It also covers approaches for mitigating bias, including pre-processing the data, changing the learning algorithm, and post-processing models. As an example, it describes changing decision tree algorithms to incorporate fairness metrics when selecting attributes for splits. The overall goal is to deal with bias at different stages of AI system development and deployment.
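As an illustration of the decision-tree example above, one published approach (Kamiran and Calders' discrimination-aware trees) scores a candidate split by the information gain on the class label minus the information gain on the protected attribute. The following is a minimal sketch of that criterion; the data and function names are illustrative, not taken from the presentation:

```python
# Sketch of a fairness-aware split criterion for decision trees:
# reward information gain on the class label, penalize gain on the
# protected attribute, so splits that mostly separate demographic
# groups rather than classes score poorly.
import math
from collections import Counter

def entropy(values):
    """Shannon entropy of a sequence of discrete values."""
    n = len(values)
    if n == 0:
        return 0.0
    return -sum((c / n) * math.log2(c / n) for c in Counter(values).values())

def info_gain(values, split_mask):
    """Entropy reduction achieved by partitioning `values` by `split_mask`."""
    left = [v for v, m in zip(values, split_mask) if m]
    right = [v for v, m in zip(values, split_mask) if not m]
    n = len(values)
    return (entropy(values)
            - len(left) / n * entropy(left)
            - len(right) / n * entropy(right))

def fair_split_score(labels, protected, split_mask):
    """Gain on the class label minus gain on the protected attribute."""
    return info_gain(labels, split_mask) - info_gain(protected, split_mask)

# A split that perfectly separates the classes while revealing nothing
# about group membership gets the maximal score:
labels    = [1, 1, 0, 0]
protected = [1, 0, 1, 0]
mask      = [True, True, False, False]
print(fair_split_score(labels, protected, mask))  # 1.0
```

A split that instead separates the protected groups (mask `[True, False, True, False]` on the same data) scores -1.0, so an attribute-selection loop maximizing this score avoids it.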
Fairness-aware learning: From single models to sequential ensemble learning a... (Eirini Ntoutsi)
An overview of fairness-aware learning: from batch-learning with single models to batch-learning with sequential ensembles and to fairness-aware learning over non-stationary data streams
Invited talk on fairness in AI systems at the 2nd Workshop on Interactive Natural Language Technology for Explainable AI co-located with the International Conference on Natural Language Generation, 18/12/2020.
This tutorial extensively covers the definitions, nuances, challenges, and requirements for the design of interpretable and explainable machine learning models and systems in healthcare. We discuss many use cases in which interpretable machine learning models are needed in healthcare and how they should be deployed. Additionally, we explore the landscape of recent advances that address the challenges of model interpretability in healthcare, and describe how one would go about choosing the right interpretable machine learning algorithm for a given problem in healthcare.
Over the past five years, the ecosystem of AI-based applications has exploded. These applications already have a greater influence on our lives than most people are aware of. The new technologies bring both opportunities and risks. In contrast to the apocalyptic scenarios of an AI-based superintelligence, there are very real problems with these systems. This talk shows where these problems lie and why a political and public discourse about them is becoming ever more urgent.
Intelligent and smart systems define the cutting edge of information technology today. They are invisible yet ubiquitous. From identifying an individual student's lack of attention to suggesting remedial measures, from predicting financial failures to preventing future fraud, and from assisting noninvasive surgery to guiding missiles to moving targets, Artificial Intelligence-based applications are stepping into every domain.
Numerous concerns have emerged in parallel. Should they be permitted to run a completely human-less system? Can they be assigned all the cognitive, non-routine tasks that humans are good at? Are they effective communicators and consensus builders? What role should they play in decision making? How good are they at picking up data compared to human senses? These and many other questions have surfaced in many fora.
Data used in model building adds another dimension. How unbiased are the data sets used in training? Can a data set ever be unbiased? What are the consequences of data bias in models and algorithms?
This talk explores the issues of setting the boundary for the use of AI technology. Areas of concern are delineated, and principles of restraint are advocated. It aims to inspire researchers to keep the boundary in mind as they explore new frontiers in AI and to design stable boundary-line interfaces.
[Video recording available at https://www.youtube.com/playlist?list=PLewjn-vrZ7d3x0M4Uu_57oaJPRXkiS221]
Artificial Intelligence is increasingly playing an integral role in determining our day-to-day experiences. Moreover, with the proliferation of AI-based solutions in areas such as hiring, lending, criminal justice, healthcare, and education, the resulting personal and professional implications of AI are far-reaching. The dominant role played by AI models in these domains has led to growing concern about potential bias in these models, and a demand for model transparency and interpretability. In addition, model explainability is a prerequisite for building trust in and adoption of AI systems in high-stakes domains requiring reliability and safety, such as healthcare and automated transportation, and in critical industrial applications with significant economic implications, such as predictive maintenance, exploration of natural resources, and climate change modeling.
As a consequence, AI researchers and practitioners have focused their attention on explainable AI to help them better trust and understand models at scale. The challenges for the research community include (i) defining model explainability, (ii) formulating explainability tasks for understanding model behavior and developing solutions for these tasks, and finally (iii) designing measures for evaluating the performance of models in explainability tasks.
In this tutorial, we present an overview of model interpretability and explainability in AI, key regulations / laws, and techniques / tools for providing explainability as part of AI/ML systems. Then, we focus on the application of explainability techniques in industry, wherein we present practical challenges / guidelines for effectively using explainability techniques and lessons learned from deploying explainable models for several web-scale machine learning and data mining applications. We present case studies across different companies, spanning application domains such as search & recommendation systems, hiring, sales, and lending. Finally, based on our experiences in industry, we identify open problems and research directions for the data mining / machine learning community.
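One model-agnostic technique that tutorials like this typically cover is permutation importance: shuffle one feature column and measure how much the model's accuracy drops. Below is a minimal pure-Python sketch; the toy model and data are hypothetical, not from the tutorial:

```python
# Permutation importance, a model-agnostic explainability technique:
# shuffle one feature column and measure the resulting accuracy drop.
# The toy classifier and data are illustrative only.
import random

def accuracy(model, X, y):
    return sum(model(x) == t for x, t in zip(X, y)) / len(y)

def permutation_importance(model, X, y, feature_idx, n_repeats=20, seed=0):
    """Mean accuracy drop when column `feature_idx` is randomly shuffled."""
    rng = random.Random(seed)
    base = accuracy(model, X, y)
    drops = []
    for _ in range(n_repeats):
        col = [row[feature_idx] for row in X]
        rng.shuffle(col)
        X_perm = [list(row) for row in X]
        for row, v in zip(X_perm, col):
            row[feature_idx] = v
        drops.append(base - accuracy(model, X_perm, y))
    return sum(drops) / n_repeats

# A toy classifier that only looks at feature 0:
model = lambda x: int(x[0] > 0.5)
X = [[0.9, 0.1], [0.2, 0.8], [0.7, 0.3], [0.1, 0.9]]
y = [1, 0, 1, 0]
print(permutation_importance(model, X, y, 0))  # positive: feature 0 matters
print(permutation_importance(model, X, y, 1))  # 0.0: feature 1 is ignored
```

Because the model ignores feature 1, shuffling it never changes any prediction, so its importance is exactly zero; shuffling feature 0 degrades accuracy on most permutations.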
ML practitioners and advocates are increasingly finding themselves becoming gatekeepers of the modern world. The models you create have the power to get people arrested or vindicated, get loans approved or rejected, determine what interest rate should be charged for such loans, who appears in your long list of matches on Tinder, what news you read, and who gets called for a job phone screen or even a college admission... the list goes on. My goal in this talk is to summarize the kinds of disparate outcomes caused by cargo-cult machine learning, and recent academic efforts to address some of them.
Explainable AI with H2O Driverless AI's MLI module (Martin Dvorak)
My H2O.ai Prague meetup presentation on explainable AI, model debugging, and strategies to identify security vulnerabilities and undesired ML biases, plus an explanation of how the Driverless AI Machine Learning Interpretability module helps mitigate these problems.
Algorithmic Impact Assessment: Fairness, Robustness and Explainability in Aut... (Adriano Soares Koshiyama)
The workshop session focuses on the following topics:
Introduction to AI & Machine Learning (Algorithms)
Key Components of Algorithmic Impact Assessment
Algorithmic Explainability
Algorithmic Fairness
Algorithmic Robustness
Don't Handicap AI without Explicit Knowledge (Amit Sheth)
Keynote at IEEE Services 2021: Abstract: https://conferences.computer.org/services/2021/keynotes/sheth.html
Video: https://lnkd.in/d-r3YXaC
Video of the same keynote given at DEXA2021: https://www.youtube.com/watch?v=u-06kK9TysA
September 9, 2021 15:00 - 16:20 UTC
ABSTRACT
Knowledge representation as expert-system rules, or using frames and a variety of logics, played a key role in capturing explicit knowledge during the heyday of AI in the past century. Such knowledge, aligned with planning and reasoning, is part of what we refer to as Symbolic AI. The resurgent AI of this century, in the form of Statistical AI, has benefitted from massive data and computing. On some tasks, deep learning methods have even exceeded human performance levels. This gave the false sense that data alone is enough, and explicit knowledge is not needed. But as we start chasing machine intelligence that is comparable with human intelligence, there is an increasing realization that we cannot do without explicit knowledge. Neuroscience (the role of long-term memory, strong interactions between different specialized regions of the brain on tasks such as multimodal sensing), cognitive science (bottom brain versus top brain, perception versus cognition), brain-inspired computing, behavioral economics (System 1 versus System 2), and other disciplines point to the need for furthering AI into neuro-symbolic AI (i.e., a hybrid of Statistical AI and Symbolic AI, also referred to as the third wave of AI). As we make this progress, the role of explicit knowledge becomes more evident. I will specifically look at our endeavor to support human-like intelligence, our desire for AI systems to interact with humans naturally, and our need to explain the path and reasons for AI systems' workings. Nevertheless, the knowledge needed to support understanding and intelligence is varied and complex. Using the example of progressing from NLP to NLU, I will demonstrate the dimensions of explicit knowledge, which may include linguistic (language syntax), common sense, general (world model), specialized (e.g., geographic), and domain-specific (e.g., mental health) knowledge.
I will also argue that despite this complexity, such knowledge can be scalably created and maintained (even dynamically or continually). Finally, I will describe our work on knowledge-infused learning as an example strategy for fusing statistical and symbolic AI in a variety of ways.
NYAI #24: Developing Trust in Artificial Intelligence and Machine Learning fo... (Maryam Farooq)
NYAI #24: Developing Trust in Artificial Intelligence and Machine Learning for High-Stakes Applications with Dr. Kush Varshney (Principal Research Manager, IBM Research AI).
Check out the IBM AI Fairness 360 open source toolkit: https://www.ibm.com/blogs/research/2018/09/ai-fairness-360/
nyai.co
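Toolkits like AI Fairness 360 implement group-fairness metrics such as statistical parity difference. The following is an independent pure-Python sketch of that metric with hypothetical loan decisions, not the toolkit's own API:

```python
# Statistical parity difference, one of the group-fairness metrics
# implemented in toolkits like AI Fairness 360. This is an independent
# sketch, not the toolkit's API; the data is hypothetical.
def statistical_parity_difference(y_pred, group):
    """P(y_hat = 1 | unprivileged) - P(y_hat = 1 | privileged).
    `group` holds 1 for unprivileged members, 0 for privileged ones.
    A value of 0 means parity; negative values disfavor the
    unprivileged group."""
    rate = lambda g: (sum(y for y, p in zip(y_pred, group) if p == g)
                      / sum(1 for p in group if p == g))
    return rate(1) - rate(0)

# Toy loan decisions: the unprivileged group is approved 25% of the
# time, the privileged group 75% of the time.
y_pred = [1, 0, 0, 0, 1, 1, 1, 0]
group  = [1, 1, 1, 1, 0, 0, 0, 0]
print(statistical_parity_difference(y_pred, group))  # -0.5
```

The -0.5 gap is what a pre-, in-, or post-processing mitigation step would aim to shrink toward zero.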
This is the slideshow for a presentation I gave as part of my graduate coursework at the Institute for Innovation and Public Purpose at University College London (UCL IIPP). Drawing on the work of IIPP professors including Carlota Perez (techno-economic paradigms), Mariana Mazzucato ("The Entrepreneurial State"), and Tim O'Reilly, I evaluate the innovation trajectory of Deep Neural Networks as a method of machine learning. I trace the history of machine learning to the present day and conclude that while Deep Neural Networks have not yet reached technological maturity, they are already starting to encounter barriers to exponential growth and innovation. These slides were designed to be read independently from the spoken portion. If you found this useful or interesting, please message me on LinkedIn! - Justin Beirold
This presentation is part of a webinar conducted by CloudxLab. It was a free session on Machine Learning.
CloudxLab conducts such webinars frequently; to make sure you never miss a future webinar update, please see the 'Events' section at CloudxLab.com
Talk on Algorithmic Bias given at York University (Canada) on March 11, 2019. This is a shorter version of an interactive workshop presented at University of Minnesota, Duluth in Feb 2019.
A Theory of Knowledge Lecture given by Mark Steed, Director of JESS Dubai on Monday 4th March 2019
The lecture explains how AI works and then looks at some of the ethical implications
The Ethical Journey of Artificial Intelligence: Navigating Privacy, Bias, and... (VishnuPrasath86)
Artificial Intelligence (AI) emerges as both a beacon of innovation and a harbinger of ethical issues in the grand tapestry of technological advancement. As AI continues its steady march, it presents a set of complex ethical considerations that demand our attention. In this article, we embark on a thought-provoking journey, exploring three prominent ethical challenges associated with AI: privacy concerns, bias dilemmas, and the specter of job displacement. Ten major difficulties and factors that require our attention are as follows:
The Ethics of Artificial Intelligence in Digital Ecosystems (washikmaryam)
The ethics of AI go beyond just the technology itself. When we consider AI within the complex web of digital platforms and services (the digital ecosystem), new ethical concerns arise.
A big focus is on how AI decisions can be biased, reflecting the data the systems are trained on and potentially leading to discrimination. We also need to be mindful of privacy issues and of how AI might be used to manipulate users.
To ensure ethical AI in digital ecosystems, we need to consider these potential pitfalls during development and use frameworks to make responsible choices. This includes reflecting on the decision-making process and how AI can be used for good.
The Ethical Maze to Be Navigated: AI Bias in Data Science and Technology (sakshibhattco)
The rapid advancement of data science and artificial intelligence (AI) in recent years has revolutionized a wide range of sectors, including healthcare and finance.
Slides from International Journalism Festival 2023, AI and Disinformation panel. Here the video https://www.journalismfestival.com/programme/2023/ai-and-disinformation
Ethical Issues in Machine Learning Algorithms (Part 2) (Vladimir Kanchev)
The presentation deals with types of biases found in AI/ML systems: data bias, algorithmic bias, and lack of interpretability. Reasons for their appearance are given, along with major approaches for reducing them.
Artificial intelligence (AI) refers to a constellation of technologies, including machine learning, perception, reasoning, and natural language processing. While the field has been pursuing principles and applications for over 65 years, recent advances, uses, and attendant public excitement have returned it to the spotlight. The impact of early AI systems is already being felt, bringing with it challenges and opportunities, and laying the foundation on which future advances in AI will be integrated into social and economic domains. The potential wide-ranging impact makes it necessary to look carefully at the ways in which these technologies are being applied now, whom they're benefiting, and how they're structuring our social, economic, and interpersonal lives.
A brief introduction to data science, explaining the concepts, algorithms, machine learning, supervised and unsupervised learning, clustering, statistics, data preprocessing, real-world applications, etc.
It's part of a Data Science Corner campaign where I will be discussing the fundamentals of data science, AI/ML, statistics, etc.
Artificial Intelligence: Shaping the Future of Technology (cyberprosocial)
In the realm of technology, Artificial Intelligence (AI) stands as a beacon of innovation, promising transformative changes across various industries and facets of our lives. This rapidly evolving field is not just about machines mimicking human intelligence; it’s about revolutionizing the way we live, work, and interact with the world. In this article, we will delve into the intricacies of AI, exploring its applications, potential impact, and the ethical considerations that accompany this technological marvel.
Model Attribute Check Company Auto PropertyCeline George
In Odoo, the multi-company feature allows you to manage multiple companies within a single Odoo database instance. Each company can have its own configurations while still sharing common resources such as products, customers, and suppliers.
1. AI Colloquium by AAISI (online), 30.3.2021
Bias in AI systems: A multi-step approach
Eirini Ntoutsi
Free University Berlin
2. Outline
Introduction
Dealing with bias in data-driven AI systems
Understanding bias
Mitigating bias
Accounting for bias
Case: bias-mitigation with sequential ensemble learners (boosting)
Wrapping up
Eirini Ntoutsi Bias in AI systems: A multi-step approach
4. Questionable uses / failures
Google Flu Trends failure
Microsoft's bot Tay taken offline after racist tweets
IBM's Watson for Oncology cancelled
Facial recognition works better for white males
5. Why might AI projects fail?
Back to basics: How machines learn
Machine Learning gives computers the ability to learn without being explicitly programmed (Arthur Samuel, 1959)
We don't codify the solution; we don't even know it!
DATA & the learning algorithms are the keys
[Diagram: Data + Algorithms → Models]
6. Mind the (hidden) assumptions
Assumptions include: stationarity, independent & identically distributed (iid) data, balanced class representation, ...
In this talk, I will focus on the assumption/myth of algorithmic objectivity:
the common misconception that humans are subjective but data and algorithms are not, and that therefore they cannot discriminate.
7. Reality check: Can algorithms discriminate?
Bloomberg analysts compared Amazon same-day delivery areas with U.S.
Census Bureau data
They found that in 6 major same-day delivery cities, the service area
excludes predominantly black ZIP codes to varying degrees.
Shouldn’t this service be based on customer’s spend rather than race?
Amazon claimed that race was not used in their models.
Source: https://www.bloomberg.com/graphics/2016-amazon-same-day/
8. Reality check cont'd: Can algorithms discriminate?
There have already been plenty of cases of algorithmic discrimination
State-of-the-art vision systems (used, e.g., in autonomous driving) recognize white males better than black women (racial and gender bias)
Google's AdFisher tool for serving personalized ads was found to serve significantly fewer ads for high-paid jobs to women than to men (gender bias)
The COMPAS tool (US) for predicting a defendant's risk of committing another crime predicted higher risks of recidivism for black defendants (and lower for white defendants) than their actual risk (racial bias)
9. Don't blame (only) the AI
“Bias is as old as human civilization” and “it is human nature for members
of the dominant majority to be oblivious to the experiences of other
groups”
Human bias: a prejudice in favour of or against one thing, person, or group
compared with another usually in a way that’s considered to be unfair.
Bias triggers (protected attributes): ethnicity, race, age, gender, religion, sexual
orientation …
Algorithmic bias: the inclination or prejudice of a decision made by an AI
system which is for or against one person or group, especially in a way
considered to be unfair.
10. Not every bias is necessarily a bad bias 1/2
Inductive bias "refers to a set of (explicit or implicit) assumptions made by a learning algorithm in order to perform induction, that is, to generalize a finite set of observations (training data) into a general model of the domain. Without a bias of that kind, induction would not be possible, since the observations can normally be generalized in many ways." (Hüllermeier, Fober & Mernberger, 2013)
Bias-free learning is futile: a learner that makes no a priori assumptions regarding the identity of the target concept has no rational basis for classifying any unseen instances.
[Diagram: training data → model → future unknown instances]
11. Not every bias is necessarily a bad bias 2/2
Some biases are positive and helpful, e.g., making healthy eating choices
Some biases help us to become more efficient
E.g., start work early if you are a morning person
We refer here to bias that might cause discrimination and unfair actions against an individual or group on the basis of protected attributes like race or gender.
Race and gender are examples of bias triggers, but not the only ones
There are combined triggers as well, e.g., black females in IT jobs
Bias and discrimination depend on context
Women in tech
Males in ballet dancing
12. Outline
Introduction
Dealing with bias in data-driven AI systems
Understanding bias
Mitigating bias
Accounting for bias
Case: bias-mitigation with sequential ensemble learners (boosting)
Wrapping up
13. The fairness-aware machine learning domain
A young, fast evolving, multi-disciplinary research field
Bias/fairness/discrimination/… have been studied for long in philosophy, social
sciences, law, …
Existing approaches can be divided into three categories
Understanding bias
How bias is created in the society and enters our sociotechnical systems, is
manifested in the data used by AI algorithms, and can be formalized.
Mitigating bias
Approaches that tackle bias in different stages of AI-decision making.
Accounting for bias
Approaches that account for bias proactively or retroactively.
E. Ntoutsi, P. Fafalios, U. Gadiraju, V. Iosifidis, W. Nejdl, M.-E. Vidal, S. Ruggieri, F. Turini, S. Papadopoulos, E. Krasanakis, I. Kompatsiaris, K. Kinder-Kurlanda, C. Wagner, F. Karimi, M. Fernandez, H. Alani, B. Berendt, T. Kruegel, C. Heinze, K. Broelemann, G. Kasneci, T. Tiropanis, S. Staab, "Bias in data-driven artificial intelligence systems—An introductory survey", WIREs Data Mining and Knowledge Discovery, 2020.
14. Outline
Introduction
Dealing with bias in data-driven AI systems
Understanding bias
Mitigating bias
Accounting for bias
Case: bias-mitigation with sequential ensemble learners (boosting)
Wrapping up
15. Understanding bias: Sociotechnical causes of bias
AI systems rely on data generated by humans (UGC) or collected via systems created by humans.
As a result, human biases enter AI systems
E.g., bias in word embeddings (Bolukbasi et al., 2016)
These biases might be amplified by complex sociotechnical systems such as the Web, and new types of biases might be created
16. Understanding bias: How is bias manifested in data?
Protected attributes and proxies
E.g., neighborhoods in U.S. cities are highly correlated with race
Representativeness of data
E.g., underrepresentation of women and people of color in IT developer
communities and image datasets
E.g., overrepresentation of black people in drug-related arrests
Depends on data modalities
Image sources: https://incitrio.com/top-3-lessons-learned-from-the-top-12-marketing-campaigns-ever/
https://ellengau.medium.com/emily-in-paris-asian-women-i-know-arent-like-mindy-chen-6228e63da333
17. Typical (batch) fairness-aware learning setup
Input: D = training dataset drawn from a joint distribution P(F, S, y)
F: set of non-protected attributes
S: (typically binary, single) protected attribute
s (s̄): protected (non-protected) group
y: (typically binary) class attribute {+, −} (+ for accepted, − for rejected)
Goal of fairness-aware classification: learn a mapping f(F, S) → y that
achieves good predictive performance (we know how to measure this)
eliminates discrimination (according to some fairness measure)

       F1   F2   S   y
User1  f11  f12  s   +
User2  f21           −
User3  f31  f23  s   +
…      …    …    …   …
Usern  fn1           +
18. Measuring (un)fairness: some measures
Statistical parity: subjects in both protected and unprotected groups should have equal probability of being assigned to the positive class
P(ŷ = + | S = s) = P(ŷ = + | S = s̄)
Equal opportunity: there should be no difference in the model's prediction errors regarding the positive class
P(ŷ ≠ y | S = s, y = +) = P(ŷ ≠ y | S = s̄, y = +)
Disparate Mistreatment: there should be no difference in the model's prediction errors between protected and non-protected groups for both classes
δFNR = P(ŷ ≠ y | S = s, y = +) − P(ŷ ≠ y | S = s̄, y = +)
δFPR = P(ŷ ≠ y | S = s, y = −) − P(ŷ ≠ y | S = s̄, y = −)
Disparate Mistreatment = δFNR + δFPR

       F1   F2   S   y   ŷ
User1  f11  f12  s   +   −
User2  f21           −   +
User3  f31  f23  s   +   −
…      …    …    …   …   …
Usern  fn1           +   +
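The measures above can be sketched as a small helper (encoding assumptions of mine, not from the slides: labels y, ŷ ∈ {0, 1} with 1 = +, and s = 1 marking the protected group; I sum absolute deltas so the two error gaps cannot cancel, whereas the slide writes the plain sum):

```python
def rates(y_true, y_pred, s, group, label):
    """Error rate on the true-`label` instances of `group`
    (FNR if label == 1, FPR if label == 0)."""
    idx = [i for i in range(len(y_true)) if s[i] == group and y_true[i] == label]
    return sum(1 for i in idx if y_pred[i] != y_true[i]) / len(idx)

def statistical_parity_diff(y_pred, s):
    """P(ŷ=+ | non-protected) − P(ŷ=+ | protected)."""
    pos = lambda g: sum(1 for p, si in zip(y_pred, s) if si == g and p == 1) / s.count(g)
    return pos(0) - pos(1)

def disparate_mistreatment(y_true, y_pred, s):
    """|δFNR| + |δFPR| between protected (1) and non-protected (0) groups."""
    d_fnr = rates(y_true, y_pred, s, 1, 1) - rates(y_true, y_pred, s, 0, 1)
    d_fpr = rates(y_true, y_pred, s, 1, 0) - rates(y_true, y_pred, s, 0, 0)
    return abs(d_fnr) + abs(d_fpr)
```

Note that a model can satisfy statistical parity (equal positive rates) while still failing disparate mistreatment (unequal error rates), which is why both are reported.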
19. Outline
Introduction
Dealing with bias in data-driven AI systems
Understanding bias
Mitigating bias
Accounting for bias
Case: bias-mitigation with sequential ensemble learners (boosting)
Wrapping up
20. Mitigating bias
Goal: tackling bias in different stages of AI-decision making
[Diagram: Data → Algorithms → Models → Applications (hiring, banking, healthcare, education, autonomous driving, …), with pre-processing approaches acting on the data, in-processing approaches on the algorithms, and post-processing approaches on the models]
21. Mitigating bias: pre-processing approaches
Intuition: making the data more fair will result in a less unfair model
Idea: balance the protected and non-protected groups in the dataset
Design principle: minimal data interventions (to retain data utility for the
learning task)
Different techniques:
Instance class modification (massaging), (Kamiran & Calders, 2009),(Luong,
Ruggieri, & Turini, 2011)
Instance selection (sampling), (Kamiran & Calders, 2010) (Kamiran & Calders,
2012)
Instance weighting, (Calders, Kamiran, & Pechenizkiy, 2009)
Synthetic instance generation (Iosifidis & Ntoutsi, 2018)
…
22. Mitigating bias: pre-processing approaches: Massaging
Change the class label of carefully selected instances (Kamiran & Calders, 2009).
The selection is based on a ranker which ranks the individuals by their probability to
receive the favorable outcome.
The number of massaged instances depends on the fairness measure (group fairness)
Image credit Vasileios Iosifidis
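A minimal sketch of the massaging idea (illustrative, not the paper's exact procedure; I assume the ranker's scores are given, both groups are non-empty, and the goal is equal positive rates): promote the highest-ranked protected negatives and demote the lowest-ranked non-protected positives until the group positive rates match.

```python
def massage(scores, s, y):
    """scores: ranker's P(y=+|x); s: 1 = protected group; y: 0/1 labels.
    Returns a relabelled copy of y with (roughly) equal positive rates."""
    y = list(y)
    prot = [i for i in range(len(y)) if s[i] == 1]
    non = [i for i in range(len(y)) if s[i] == 0]
    while sum(y[i] for i in prot) / len(prot) < sum(y[i] for i in non) / len(non):
        # promote the protected negative the ranker scores highest
        up = max((i for i in prot if y[i] == 0), key=lambda i: scores[i], default=None)
        # demote the non-protected positive the ranker scores lowest
        down = min((i for i in non if y[i] == 1), key=lambda i: scores[i], default=None)
        if up is None or down is None:
            break
        y[up], y[down] = 1, 0
    return y
```

Flipping near the ranker's decision boundary is what keeps the data intervention minimal, in line with the design principle above.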
23. Mitigating bias
Goal: tackling bias in different stages of AI-decision making
[Pipeline diagram repeated: pre-, in-, and post-processing approaches along Data → Algorithms → Models → Applications]
24. Mitigating bias: in-processing approaches
Intuition: working directly with the algorithm allows for better control
Idea: explicitly incorporate the model’s discrimination behavior in the
objective function
Design principle: “balancing” predictive- and fairness-performance
Different techniques:
Regularization (Kamiran, Calders & Pechenizkiy, 2010),(Kamishima, Akaho,
Asoh & Sakuma, 2012), (Dwork, Hardt, Pitassi, Reingold & Zemel, 2012) (Zhang
& Ntoutsi, 2019)
Constraints (Zafar, Valera, Gomez-Rodriguez & Gummadi, 2017)
Training on latent target labels (Krasanakis, Xioufis, Papadopoulos &
Kompatsiaris, 2018)
In-training altering of data distribution (Iosifidis & Ntoutsi, 2019)
…
25. Mitigating bias: in-processing approaches: change the objective function 1/2
An example with decision tree (DT) classifiers
Traditional DTs use pureness of the data (in terms of class labels) to decide which attribute to use for splitting
Which attribute to choose for splitting? A1 or A2?
The goal is to select the attribute that is most useful for classifying examples:
it helps us be more certain about the class after the split
we would like the resulting partitioning to be as pure as possible
Pureness is evaluated via entropy; a partition is pure if all its instances belong to the same class.
But traditional measures (Information Gain, Gini Index, …) ignore fairness
26. Mitigating bias: in-processing approaches: change the objective function 2/2
We introduce the fairness gain of an attribute (FG)
Disc(D) corresponds to statistical parity (group fairness)
We introduce the joint criterion, fair information gain (FIG), which evaluates the suitability of a candidate splitting attribute A in terms of both predictive performance and fairness.
[Diagram: splitting dataset D into partitions D1 and D2]
W. Zhang, E. Ntoutsi, “An Adaptive Fairness-aware Decision Tree Classifier", IJCAI 2019.
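The fairness-gain idea can be sketched as follows (my formulation for illustration; the paper's exact FG and FIG definitions may differ): Disc(D) is the statistical parity difference of the labels, and FG rewards splits whose child partitions carry less (weighted) discrimination than the parent.

```python
def disc(s, y):
    """Statistical parity difference of labels y (0/1) between the
    non-protected (s=0) and protected (s=1) groups; 0 if a group is empty."""
    prot = [yi for si, yi in zip(s, y) if si == 1]
    non = [yi for si, yi in zip(s, y) if si == 0]
    if not prot or not non:
        return 0.0
    return sum(non) / len(non) - sum(prot) / len(prot)

def fairness_gain(s, y, split):
    """FG of a candidate split: reduction in absolute discrimination from
    the parent node to the weighted child partitions.
    `split[i]` assigns instance i to a child partition."""
    n = len(y)
    parent = abs(disc(s, y))
    children = 0.0
    for v in set(split):
        idx = [i for i in range(n) if split[i] == v]
        children += len(idx) / n * abs(disc([s[i] for i in idx],
                                            [y[i] for i in idx]))
    return parent - children
```

FG could then be combined with the usual information gain into a joint splitting criterion, which is the role FIG plays on this slide.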
27. Mitigating bias
Goal: tackling bias in different stages of AI-decision making
[Pipeline diagram repeated: pre-, in-, and post-processing approaches along Data → Algorithms → Models → Applications]
28. Mitigating bias: post-processing approaches
Intuition: start with predictive performance
Idea: first optimize the model for predictive performance and then tune
for fairness
Design principle: minimal interventions (to retain model predictive
performance)
Different techniques:
Correct the confidence scores (Pedreschi, Ruggieri, & Turini, 2009), (Calders &
Verwer, 2010)
Correct the class labels (Kamiran et al., 2010)
Change the decision boundary (Kamiran, Mansha, Karim, & Zhang, 2018), (Hardt,
Price, & Srebro, 2016)
Wrap a fair classifier on top of a black-box learner (Agarwal, Beygelzimer, Dudík,
Langford, & Wallach, 2018)
…
29. Mitigating bias: post-processing approaches: shift the decision boundary
An example of decision boundary shift
V. Iosifidis, H.T. Thi Ngoc, E. Ntoutsi, “Fairness-enhancing interventions in stream classification", DEXA 2019.
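A minimal sketch of the boundary-shift idea (my illustrative version, not the paper's exact procedure; assumptions: scores are calibrated probabilities, the protected group is the discriminated one, and statistical parity is the target): lower the protected group's decision threshold until the positive rates match.

```python
def shift_boundary(scores, s, step=0.01):
    """scores: model's P(+|x); s: 1 = protected group. Returns predictions
    after lowering the protected group's threshold until parity (or floor)."""
    thr = {0: 0.5, 1: 0.5}          # per-group decision thresholds

    def preds():
        return [1 if sc >= thr[si] else 0 for sc, si in zip(scores, s)]

    def rate(g):
        idx = [i for i in range(len(s)) if s[i] == g]
        return sum(preds()[i] for i in idx) / len(idx)

    while rate(1) < rate(0) and thr[1] > 0:
        thr[1] -= step              # admit more of the protected group
    return preds(), thr[1]
```

This is the post-processing design principle in action: the model itself is untouched; only its decision boundary is moved for the discriminated group.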
30. Outline
Introduction
Dealing with bias in data-driven AI systems
Understanding bias
Mitigating bias
Accounting for bias
Case: bias-mitigation with sequential ensemble learners (boosting)
Wrapping up
31. Accounting for bias
Algorithmic accountability refers to the assignment of responsibility for
how an algorithm is created and its impact on society (Kaplan et al, 2019).
Many facets of accountability for AI-driven algorithms and different
approaches
Proactive approaches:
bias-aware data collection, e.g., for Web data, crowd-sourcing
Bias-description and modeling, e.g., via ontologies
...
Retroactive approaches:
Explaining AI decisions in order to understand whether decisions are biased
What is an explanation? Explanations w.r.t. legal/ethical grounds?
Using explanations for fairness-aware corrections (inspired by Schramowski et al, 2020)
32. Outline
Introduction
Dealing with bias in data-driven AI systems
Understanding bias
Mitigating bias
Accounting for bias
Case: bias-mitigation with sequential ensemble learners (boosting)
Wrapping up
33. Fairness with sequential learners (boosting)
Sequential ensemble methods generate base learners in a sequence
The sequential generation of base learners promotes the dependence between
the base learners.
Each learner learns from the mistakes of the previous predictor
The weak learners are combined to build a strong learner
Popular examples: Adaptive Boosting (AdaBoost), Extreme Gradient Boosting
(XGBoost).
Our base model is AdaBoost (Freund and Schapire, 1995), a sequential ensemble
method that in each round, re-weights the training data to focus on misclassified
instances.
[Figure: Round 1: weak learner h1 → Round 2: weak learner h2 → Round 3: weak learner h3 → final strong learner H(·)]
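The AdaBoost re-weighting loop described above can be sketched from scratch with 1-D threshold stumps (an illustrative sketch, not the talk's implementation; labels are assumed to be in {−1, +1}):

```python
import math

def stump(x, w, y):
    """Best threshold/polarity decision stump on 1-D data under weights w."""
    best = None
    for t in sorted(set(x)):
        for pol in (1, -1):
            pred = [pol if xi >= t else -pol for xi in x]
            err = sum(wi for wi, pi, yi in zip(w, pred, y) if pi != yi)
            if best is None or err < best[0]:
                best = (err, t, pol)
    return best

def adaboost(x, y, rounds=5):
    """Returns a list of (alpha, threshold, polarity) weak learners."""
    n = len(x)
    w = [1 / n] * n                     # uniform initial instance weights
    ensemble = []
    for _ in range(rounds):
        err, t, pol = stump(x, w, y)
        err = max(err, 1e-10)
        if err >= 0.5:                  # weak learner no better than chance
            break
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, t, pol))
        # re-weight: misclassified instances gain weight for the next round
        pred = [pol if xi >= t else -pol for xi in x]
        w = [wi * math.exp(-alpha * yi * pi) for wi, yi, pi in zip(w, y, pred)]
        z = sum(w)
        w = [wi / z for wi in w]
    return ensemble

def predict(ensemble, xi):
    """Strong learner H(): sign of the alpha-weighted vote."""
    score = sum(a * (p if xi >= t else -p) for a, t, p in ensemble)
    return 1 if score >= 0 else -1
```

The instance-weight update in the loop is exactly the hook AdaFair later extends with a fairness-related cost.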
34. Intuition behind using boosting for fairness
It is easier to make “fairness-related interventions” in simpler models
rather than complex ones
We can use the whole sequence of learners for the interventions instead
of the current one
35. Limitations of related work
Existing works evaluate predictive performance in terms of the overall
classification error rate (ER), e.g., [Calders et al’09, Calmon et al’17, Fish et
al’16, Hardt et al’16, Krasanakis et al’18, Zafar et al’17]
In the case of class imbalance, ER is misleading
Most of the datasets, however, suffer from imbalance
Moreover, Dis.Mis. is "oblivious" to the class-imbalance problem
36. From Adaboost to AdaFair
We tailor AdaBoost to fairness
We introduce the notion of cumulative fairness that assesses the fairness of
the model up to the current boosting round (partial ensemble).
We directly incorporate fairness in the instance weighting process
(traditionally focusing on classification performance).
We optimize the number of weak learners in the final ensemble based on the balanced error rate, thus directly considering class imbalance in the best-model selection.
ER = 1 − (TP + TN) / (TP + FN + TN + FP)
V. Iosifidis, E. Ntoutsi, “AdaFair: Cumulative Fairness Adaptive Boosting", ACM CIKM 2019.
37. AdaFair: Cumulative boosting fairness
Let j ∈ 1…T be the current boosting round; T is user-defined
Consider the partial ensemble, i.e., the weighted combination of the weak learners up to the current round j
The cumulative fairness of the ensemble up to round j is defined based on the parity in the predictions of the partial ensemble between the protected and non-protected groups
"Forcing" the model to consider "historical" fairness over all previous rounds, instead of just focusing on the current round hj(), results in better classifier performance and model convergence.
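The cumulative-fairness idea can be sketched as follows (illustrative; the paper's exact definition may differ): at each round, evaluate group parity on the predictions of the partial ensemble rather than on the current weak learner alone.

```python
def cumulative_fairness(partial_scores, s):
    """partial_scores[j][i]: signed score of the partial ensemble H(1..j)
    on instance i; s: 1 = protected group. Returns the statistical parity
    difference after each boosting round."""
    out = []
    for scores in partial_scores:
        pred = [1 if sc >= 0 else 0 for sc in scores]
        pos = lambda g: sum(p for p, si in zip(pred, s) if si == g) / s.count(g)
        out.append(pos(0) - pos(1))    # non-protected minus protected
    return out
```

A per-round fairness weight can then be derived from this sequence, so that one unfair weak learner does not trigger an over-correction if the ensemble as a whole is already fair.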
38. AdaFair: fairness-aware weighting of instances
Vanilla AdaBoost already boosts misclassified instances for the next round
Our weighting explicitly targets fairness by extra boosting of discriminated groups for the next round
The data distribution at boosting round j+1 is updated accordingly
The fairness-related cost ui of instances xi ∈ D that belong to a discriminated group is defined accordingly
39. AdaFair: optimizing the number of weak learners
Typically, the number of boosting rounds / weak learners T is user-defined
We propose to select the optimal subsequence of learners 1…θ, θ ≤ T, that minimizes the balanced error rate (BER)
In particular, we consider both ER and BER in the objective function:
argmin_θ [ c · BERθ + (1 − c) · ERθ + Dis.Mis.θ ]
The result of this optimization is a final ensemble model with Eq.Odds fairness
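The θ selection can be sketched as a grid search over partial ensembles (the per-θ metric lists here are assumed inputs, e.g. computed on a validation set):

```python
def pick_theta(ber, er, dismis, c=0.5):
    """ber/er/dismis: lists indexed by theta-1 holding the partial ensemble's
    balanced error rate, error rate and Dis.Mis. Returns the number of weak
    learners theta minimizing c*BER + (1-c)*ER + Dis.Mis."""
    def obj(i):
        return c * ber[i] + (1 - c) * er[i] + dismis[i]
    return 1 + min(range(len(ber)), key=obj)
```

Including BER in the objective is what keeps the selected ensemble honest on the minority class, rather than trading it away for overall accuracy.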
40. Experimental evaluation
Datasets of varying imbalance
Baselines
AdaBoost [Sch99]: vanilla AdaBoost
SMOTEBoost [CLHB03]: AdaBoost with SMOTE for imbalanced data.
Krasanakis et al. [KXPK18]: Boosting method which minimizes Dis.Mis. by approximating
the underlying distribution of hidden correct labels.
Zafar et al.[ZVGRG17]: Training logistic regression model with convex-concave
constraints to minimize Dis.Mis.
AdaFair NoCumul: Variation of AdaFair that computes the fairness weights based on
individual weak learners.
41. Experiments: Predictive and fairness performance
Adult census income (ratio 1+:3-) Bank dataset (ratio 1+:8-)
Larger values are better; for Dis.Mis., lower values are better
Our method achieves high balanced accuracy and low discrimination (Dis.Mis.) while maintaining high
TPRs and TNRs for both groups.
The methods of Zafar et al. and Krasanakis et al. eliminate discrimination by rejecting more positive instances (lowering TPRs).
42. Cumulative vs non-cumulative fairness
Cumulative vs non-cumulative fairness impact on model performance
Cumulative notion of fairness performs better
The cumulative model (AdaFair) is more stable than its non-cumulative counterpart (whose standard deviation is higher)
43. Outline
Introduction
Dealing with bias in data-driven AI systems
Understanding bias
Mitigating bias
Accounting for bias
Case: bias-mitigation with sequential ensemble learners (boosting)
Wrapping up
44. Wrapping-up, ongoing work and future directions
In this talk I focused on the myth of algorithmic objectivity and the reality of algorithmic bias and discrimination, and on how algorithms can pick up biases existing in the input data and further reinforce them
A large body of research already exists, but it
focuses mainly on fully-supervised batch learning with single (and typically binary) protected attributes and binary classes
targets bias in some step of the analysis pipeline, although biases/errors might be propagated and even amplified (unified approaches are needed)
Moving from batch learning to online learning
Moving from isolated approaches (pre-, in- or post-) to combined approaches
V. Iosifidis, E. Ntoutsi, “FABBOO - Online Fairness-aware Learning under Class Imbalance", DS 2020.
T. Hu, V. Iosifidis, W. Liao, H. Zang, M. Yang, E. Ntoutsi, B. Rosenhahn, "FairNN - Conjoint Learning of Fair Representations for Fair Decisions", DS 2020.
45. Wrapping-up, ongoing work and future directions
Moving from single-protected-attribute fairness-aware learning to multi-fairness
Existing legal studies define multi-fairness as compound, intersectional and overlapping [Makkonen 2002].
Moving from fully-supervised learning to unsupervised and reinforcement learning
Moving from myopic solutions (maximize short-term/immediate performance) to non-myopic ones that consider long-term performance [Zhang et al, 2020]
Actionable approaches (counterfactual generation)
46. Thank you for your attention!
Questions?
https://nobias-project.eu/
@NoBIAS_ITN
https://lernmint.org/
Feel free to contact me:
• eirini.ntoutsi@fu-berlin.de
• @entoutsi
• https://www.mi.fu-berlin.de/en/inf/groups/ag-KIML/index.html
https://www.bias-project.org/