Detecting Algorithmic Bias. Keynote at the Dutch-Belgian Information Retrieval Workshop (DIR 2016), November 2016, Delft, Netherlands.
Based on the KDD 2016 tutorial on Algorithmic Bias with Sara Hajian and Francesco Bonchi.
2. Part I
Part I Algorithmic Discrimination Concepts
Part II Algorithmic Discrimination Discovery
3. Part I: algorithmic discrimination concepts
Motivation and examples of algorithmic bias
Sources of algorithmic bias
Legal definitions and principles of discrimination
Measures of discrimination
Discrimination and privacy
4. A dangerous reasoning
To discriminate is to treat someone differently
(Unfair) discrimination is based on group membership, not individual merit
People's decisions include objective and subjective elements
Hence, they can discriminate
Algorithmic inputs include only objective elements
Hence, they cannot discriminate?
5. Judiciary use of COMPAS scores
COMPAS (Correctional Offender Management Profiling for Alternative Sanctions):
137-question questionnaire and predictive model for "risk of recidivism"
Prediction accuracy of recidivism for blacks and whites is about 60%, but ...
● Blacks who did not reoffend were classified as high risk twice as often as whites who did not reoffend
● Whites who did reoffend were classified as low risk twice as often as blacks who did reoffend
Pro Publica, May 2016. https://www.propublica.org/article/how-we-analyzed-the-compas-recidivism-algorithm
6. On the web: race and gender stereotypes reinforced
● Results for "CEO" in Google Images: 11% female, while in the US 27% of CEOs are female
○ Also in Google Images, "doctors" are mostly male, "nurses" are mostly female
● Google search results for professional vs. unprofessional hairstyles for work
[Side-by-side image results: "Unprofessional hair for work" vs. "Professional hair for work"]
M. Kay, C. Matuszek, S. Munson (2015): Unequal Representation and Gender Stereotypes in Image Search Results for Occupations. CHI'15.
7. The dark side of Google Ads
AdFisher: tool to automate the creation of behavioral and demographic profiles.
Used to demonstrate that setting gender = female results in fewer ads for high-paying jobs.
A. Datta, M. C. Tschantz, and A. Datta (2015). Automated experiments on ad privacy settings.
Proceedings on Privacy Enhancing Technologies, 2015(1):92–112.
8. Self-perpetuating algorithmic biases
Ranking algorithm for finding candidates puts Maria at the bottom of the list
Hence, Maria gets fewer job offers
Hence, Maria gets less job experience
Hence, Maria becomes less employable
The same spiral happens in credit scoring, and in stop-and-frisk of minorities, which increases incarceration rates
9. To make things worse ...
Algorithms are "black boxes" protected by
Industrial secrecy
Legal protections
Intentional obfuscation
Discrimination becomes invisible
Mitigation becomes impossible
F. Pasquale (2015): The Black Box Society. Harvard University Press.
10. Weapons of Math Destruction (WMDs)
"[We] treat mathematical models as a neutral and
inevitable force, like the weather or the tides, we
abdicate our responsibility" - Cathy O'Neil
Three main ingredients of a "WMD":
● Opacity
● Scale
● Damage
C. O'Neil (2016): Weapons of Math Destruction. Crown / Random House.
11. Part I: algorithmic discrimination concepts
Motivation and examples of algorithmic bias
Sources of algorithmic bias
Legal definitions and principles of discrimination
Measures of discrimination
Discrimination and privacy
12. Two areas of concern: data and algorithms
Data inputs:
● Poorly selected (e.g., observe only car trips, not bicycle trips)
● Incomplete, incorrect, or outdated
● Selected with bias (e.g., smartphone users)
● Perpetuating and promoting historical biases (e.g., hiring people that "fit the culture")
Algorithmic processing:
● Poorly designed matching systems
● Personalization and recommendation services that narrow instead of expand user options
● Decision making systems that assume correlation implies causation
● Algorithms that do not compensate for datasets that disproportionately represent certain populations
● Output models that are hard to understand or explain hinder detection and mitigation of bias
Executive Office of the US President (May 2016): "Big Data: A Report on Algorithmic Systems, Opportunity, and Civil Rights"
13. Detecting algorithmic discrimination
Motivation and examples of algorithmic bias
Sources of algorithmic bias
Legal definitions and principles of discrimination
Measures of discrimination
Discrimination and privacy
14. Legal concepts
Anti-discrimination legislation typically seeks equal access to employment,
working conditions, education, social protection, goods, and services
Anti-discrimination legislation is very diverse and includes many legal concepts:
Genuine occupational requirement (e.g., a male actor to portray a male character)
Disparate impact and disparate treatment
Burden of proof and situation testing
Group under-representation principle
15. Discrimination: treatment vs impact
Modern legal frameworks offer various levels of protection against discrimination based on membership in a protected class: gender, age, ethnicity, nationality, disability, religious beliefs, and/or sexual orientation
Disparate treatment:
Treatment depends on class membership
Disparate impact:
Outcome depends on class membership
Even if (apparently?) people are treated the same way
16. Proving discrimination is hard
The burden of proof can be shared in discrimination cases:
the accuser must produce evidence of the consequences,
the defendant must produce evidence that the process was fair
(This is the criterion of the European Court of Justice)
Migration Policy Group and Swedish Centre for Equal Rights (2009): Proving discrimination cases - the role of situational testing.
17. Part I: algorithmic discrimination concepts
Motivation and examples of algorithmic bias
Sources of algorithmic bias
Legal definitions and principles of discrimination
Measures of discrimination
Discrimination and privacy
18. Principles for quantifying discrimination
Two basic frameworks for measuring discrimination:
Discrimination at the individual level: consistency or individual fairness
Discrimination at the group level: statistical parity
I. Žliobaitė (2015): A survey on measuring indirect discrimination in machine learning. arXiv pre-print.
19. Individual fairness is about consistency
Consistency score:
C = 1 - ∑i ∑yj∊knn(yi) |yi - yj|
where knn(yi) = the k nearest neighbors of yi
A consistent or individually fair algorithm is one in which similar people experience similar outcomes … but note that perhaps they are all treated equally badly
Richard S. Zemel, Yu Wu, Kevin Swersky, Toniann Pitassi, and Cynthia Dwork (2013): Learning Fair Representations. In Proc. of the 30th Int. Conf. on Machine Learning, 325–333.
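To make the formula concrete, here is a minimal Python sketch of the consistency score for binary outcomes; the normalization by n·k (keeping C in [0, 1]) and the use of scikit-learn's NearestNeighbors are choices made for this sketch, not details from the slide.

import numpy as np
from sklearn.neighbors import NearestNeighbors

def consistency_score(X, y, k=5):
    # C = 1 - (1 / (n*k)) * sum_i sum_{j in knn(i)} |y_i - y_j|,
    # for binary outcomes y in {0, 1}; normalization added so C is in [0, 1].
    n = len(y)
    # Ask for k+1 neighbors because each point is its own nearest neighbor.
    _, idx = NearestNeighbors(n_neighbors=k + 1).fit(X).kneighbors(X)
    neighbors = idx[:, 1:]  # drop the self-match in the first column
    total = np.abs(y[:, None] - y[neighbors]).sum()
    return 1.0 - total / (n * k)

# Toy check: outcomes that depend smoothly on the features are consistent.
rng = np.random.RandomState(0)
X = rng.rand(200, 3)
y = (X[:, 0] > 0.5).astype(int)
print(consistency_score(X, y))  # close to 1.0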
20. Group fairness is about statistical parity
Example:
"Protected group" ~ "people with disabilities"
"Benefit granted" ~ "getting a scholarship"
Intuitively, if a/n1, the fraction of people with disabilities who do not get a scholarship, is much larger than c/n2, the fraction of people without disabilities who do not get a scholarship, then people with disabilities could claim they are being discriminated against.
D. Pedreschi, S. Ruggieri, F. Turini: A Study of Top-K Measures for Discrimination Discovery. SAC 2012.
21. Simple discrimination measures
These measures compare the protected group against the unprotected group (p1 = a/n1 and p2 = c/n2 are the rates of denying the benefit in the protected and unprotected groups, respectively):
● Risk difference: RD = p1 - p2
● Risk ratio or relative risk: RR = p1 / p2
● Relative chance: RC = (1 - p1) / (1 - p2)
● Odds ratio: OR = RR / RC
D. Pedreschi, S. Ruggieri, F. Turini: A Study of Top-K Measures for Discrimination Discovery. SAC 2012.
22. Simple discrimination measures
These measures compare the protected group against the unprotected group:
● Risk difference: RD = p1 - p2 (mentioned in UK law)
● Risk ratio or relative risk: RR = p1 / p2 (mentioned by the EU Court of Justice)
● Relative chance: RC = (1 - p1) / (1 - p2)
● Odds ratio: OR = RR / RC
US courts focus on selection rates: (1 - p1) and (1 - p2)
There are many other measures of discrimination!
D. Pedreschi, S. Ruggieri, F. Turini: A Study of Top-K Measures for Discrimination Discovery. SAC 2012.
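As a worked illustration of the four measures (a sketch using the contingency-table notation of slide 20, where a of n1 protected people and c of n2 unprotected people are denied the benefit):

def discrimination_measures(a, n1, c, n2):
    # a/n1 and c/n2 are the denial rates in the protected and
    # unprotected groups, following slide 20.
    p1, p2 = a / n1, c / n2
    return {
        "RD": p1 - p2,                             # risk difference
        "RR": p1 / p2,                             # risk ratio / relative risk
        "RC": (1 - p1) / (1 - p2),                 # relative chance
        "OR": (p1 / p2) / ((1 - p1) / (1 - p2)),   # odds ratio = RR / RC
    }

# Toy example: 40 of 100 protected vs. 20 of 100 unprotected people denied.
print(discrimination_measures(a=40, n1=100, c=20, n2=100))
# {'RD': 0.2, 'RR': 2.0, 'RC': 0.75, 'OR': 2.666...}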
23. Part I: algorithmic discrimination concepts
Motivation and examples of algorithmic bias
Sources of algorithmic bias
Legal definitions and principles of discrimination
Measures of discrimination
Discrimination and privacy
24. A connection between privacy and discrimination
Finding out whether people having attribute X were discriminated against is like inferring attribute X from a database in which:
the attribute X was removed
a new attribute (the decision), which is based on X, was added
This is similar to trying to reconstruct a column from a privacy-scrubbed dataset
25. Part II
Part I Algorithmic Discrimination Concepts
Part II Algorithmic Discrimination Discovery
27. The discrimination discovery task at a glance
Given a large database of historical decision records,
find discriminatory situations and practices.
S. Ruggieri, D. Pedreschi and F. Turini (2010). DCUBE: Discrimination discovery in databases. In SIGMOD, pp. 1127-1130.
29. Discrimination discovery
Definition
Data mining approaches
Discriminatory rankings

Data mining approaches and the type of discrimination they address:
Classification rule mining: group discrimination
k-NN classification: individual discrimination
Bayesian networks: individual discrimination
Probabilistic causation: individual/group discrimination
Privacy attack strategies: group discrimination
Predictability approach: group discrimination
30. Discrimination discovery
Definition
Data mining approaches
Discriminatory rankings

Classification rule mining: group discrimination
k-NN classification: individual discrimination
Bayesian networks: individual discrimination
Probabilistic causation: individual/group discrimination
Privacy attack strategies: group discrimination
Predictability approach: group discrimination
See the KDD 2016 tutorial by S. Hajian, F. Bonchi, C. Castillo
31. Discrimination discovery
Definition
Data mining approaches
Discriminatory rankings
Classification rule mining
k-NN classification
D. Pedreschi, S. Ruggieri and F. Turini (2008). Discrimination-aware data mining. In KDD'08.
D. Pedreschi, S. Ruggieri, and F. Turini (2009). Measuring discrimination in socially-sensitive decision records. In SDM'09.
S. Ruggieri, D. Pedreschi, and F. Turini (2010). Data mining for discrimination discovery. In TKDD 4(2).
32. Defining potentially discriminated (PD) groups
A subset of attribute values is perceived as potentially discriminatory based on background knowledge. Potentially discriminated groups are people with those attribute values.
Examples:
Female gender
Ethnic minority (racism) or minority language
Specific age range (ageism)
Specific sexual orientation (homophobia)
33. Direct discrimination
Direct discrimination implies rules or procedures that impose ‘disproportionate burdens’ on minorities
A PD rule is any classification rule of the form:
A, B → C
where A is a PD group (B is called a "context")
Example:
gender="female", saving_status="no known savings" → credit=no
34. Indirect discrimination
Indirect discrimination implies rules or procedures that impose ‘disproportionate burdens’ on minorities, though not explicitly using discriminatory attributes
Potentially non-discriminatory (PND) rules may unveil discrimination, and are of the form:
D, B → C
where D is a PND group
Example:
neighborhood="10451", city="NYC" → credit=no
35. Indirect discrimination example
Suppose we know that with high confidence:
(a) neighborhood=10451, city=NYC → benefit=deny
But we also know that with high confidence:
(b) neighborhood=10451, city=NYC → race=black
Hence:
(c) race=black, neighborhood=10451, city=NYC → benefit=deny
Rule (b) is background knowledge that allows us to infer (c), which shows that
rule (a) is indirectly discriminating against blacks
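To see the inference at work, here is a small sketch that computes the confidences of rules (a), (b), and (c) on a toy pandas dataset; the counts are invented so that all three rules hold with high confidence.

import pandas as pd

def confidence(df, antecedent, consequent):
    # conf(A -> C) = support(A and C) / support(A)
    cover = df
    for col, val in antecedent.items():
        cover = cover[cover[col] == val]
    if len(cover) == 0:
        return float("nan")
    hits = cover
    for col, val in consequent.items():
        hits = hits[hits[col] == val]
    return len(hits) / len(cover)

df = pd.DataFrame({
    "neighborhood": ["10451"] * 80 + ["10301"] * 20,
    "city":         ["NYC"] * 100,
    "race":         ["black"] * 70 + ["white"] * 30,
    "benefit":      ["deny"] * 75 + ["grant"] * 25,
})

ctx = {"neighborhood": "10451", "city": "NYC"}
print(confidence(df, ctx, {"benefit": "deny"}))                      # rule (a): ~0.94
print(confidence(df, ctx, {"race": "black"}))                        # rule (b): 0.875
print(confidence(df, dict(ctx, race="black"), {"benefit": "deny"}))  # rule (c): 1.0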
36. Discrimination discovery
Definition
Data mining approaches
Discriminatory rankings
B. T. Luong, S. Ruggieri, and F. Turini (2011). k-NN as an implementation of situation testing for discrimination discovery and prevention. KDD'11
Classification rule mining
k-NN classification
37. Situation testing
● Legal approach for creating controlled experiments
● Matched pairs undergo the same situation, e.g., apply for a job
○ Same characteristics apart from the discrimination ground
38. k-NN as situation testing (algorithm)
For r ∈ P(R), look at its k closest neighbors
● ... in the protected set
○ define p1 = proportion with the same decision as r
● … in the unprotected set
○ define p2 = proportion with the same decision as r
● measure the degree of discrimination of the decision for r
○ define diff(r) = p1 - p2 (think of it as expressed in percentage points of difference)
Example: knnP(r,k) gives p1 = 0.75, knnU(r,k) gives p2 = 0.25, so diff(r) = p1 - p2 = 0.50
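A simplified sketch of the procedure (following the idea of Luong, Ruggieri & Turini rather than their exact implementation); it assumes r belongs to the protected group, decisions are 0/1, and the feature matrix X excludes the protected attribute itself.

import numpy as np
from sklearn.neighbors import NearestNeighbors

def situation_diff(X, protected, decision, r, k=4):
    # protected: boolean mask over records; decision: array of 0/1 outcomes.
    same = (decision == decision[r])  # records with the same decision as r

    def share_same_decision(mask):
        # Proportion of r's k nearest neighbors within `mask`
        # that received the same decision as r.
        _, idx = NearestNeighbors(n_neighbors=k).fit(X[mask]).kneighbors(X[r:r + 1])
        return same[np.flatnonzero(mask)[idx[0]]].mean()

    own_group = protected.copy()
    own_group[r] = False  # r should not count as its own neighbor
    p1 = share_same_decision(own_group)   # among protected neighbors
    p2 = share_same_decision(~protected)  # among unprotected neighbors
    return p1 - p2                        # diff(r), e.g. 0.75 - 0.25 = 0.50

# Decision rule of slide 39: if decision[r] is deny-benefit and
# situation_diff(...) >= t for some threshold t, flag possible
# discrimination around r.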
39. k-NN as situation testing (results)
● If decision=deny-benefit, and diff(r) ≥ t, then we found discrimination around r
41. How to generate synthetic discriminatory rankings?
How to generate a ranking that is "unfair" towards a protected group?
Let P be the protected group, N be the non-protected group
● Rank each group P, N, independently
● While both are non-empty
○ With probability f pick top from protected, (1-f) top from non-protected
● Fill in the bottom positions with the remaining group
Ke Yang, Julia Stoyanovich: Measuring Fairness in Ranked Outputs. FATML 2016.
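The generation procedure translates almost line for line into code; a sketch, assuming each group's list is already ranked by score:

import random

def synthetic_ranking(protected, non_protected, f, seed=None):
    # With probability f take the top of the protected list, otherwise the
    # top of the non-protected list; leftovers fill the bottom positions.
    rng = random.Random(seed)
    p, n = list(protected), list(non_protected)  # each assumed pre-ranked
    ranking = []
    while p and n:
        ranking.append(p.pop(0) if rng.random() < f else n.pop(0))
    return ranking + p + n  # fill in the bottom with the remaining group

# f = 0.0 pushes the protected group entirely to the bottom;
# f = 0.5 interleaves the two groups roughly evenly.
print(synthetic_ranking(["p1", "p2", "p3"], ["n1", "n2", "n3"], f=0.0))
# ['n1', 'n2', 'n3', 'p1', 'p2', 'p3']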
42. Synthetic ranking generation example
[Figure: example rankings generated with f = 0.0, f = 0.3, and f = 0.5]
Ke Yang, Julia Stoyanovich: Measuring Fairness in Ranked Outputs. FATML 2016.
43. Measuring discrimination using the NDCG framework
Use logarithmic discounts per position: discount(i) = 1 / log2(i)

i    discount(i)
5    0.43
10   0.30
20   0.23
30   0.20

Compute set-based differences at different positions:
fairness = ∑i=10,20,... discount(i) · diff(P, N, TopResult1...TopResulti)
diff(P, N, Results) evaluates to what extent P and N are represented in the results, e.g., proportions
See Yang and Stoyanovich 2016 for details
Ke Yang, Julia Stoyanovich: Measuring Fairness in Ranked Outputs. FATML 2016.
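A sketch of one instance of this family of measures, assuming (as in Yang & Stoyanovich's rND) that diff compares the protected group's share of each top-i prefix with its share in the whole ranking:

import math

def ranking_fairness(ranking, protected, cutoffs=(10, 20, 30)):
    # Unnormalized, discounted set-difference: 0 means the protected group
    # is proportionally represented at every cutoff; larger is less fair.
    overall = sum(x in protected for x in ranking) / len(ranking)
    total = 0.0
    for i in cutoffs:
        share = sum(x in protected for x in ranking[:i]) / i
        total += (1 / math.log2(i)) * abs(share - overall)  # discount(i) * diff
    return total

# Example: 30 items with all 15 protected items ranked at the bottom.
ranking = ["n%d" % j for j in range(15)] + ["p%d" % j for j in range(15)]
print(ranking_fairness(ranking, {"p%d" % j for j in range(15)}))  # about 0.21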
44. Fair rankings
Given a ranking policy and a fairness policy (possibly an affirmative action one), produce a ranking that:
● Ensures group fairness
○ E.g.: sum of discounted utility for each group, ...
● Minimizes individual unfairness
○ E.g.: average distance from the color-blind ranking, people ranked below lower-scoring people, …
Work in progress with Meike Zehlike, Sara Hajian, Francesco Bonchi
46. Concluding remarks
1. Discrimination legislation attempts to create a balance
2. Algorithms can discriminate
3. Methods to avoid discrimination are not "color blind"
4. Group fairness (statistical parity) vs. individual fairness
47. Additional resources
● Presentations/keynotes/book
○ Sara Hajian, Francesco Bonchi, and Carlos Castillo: Algorithmic Bias Tutorial at KDD 2016
○ Workshop on Fairness, Accountability, and Transparency on the Web at WWW 2017
○ Suresh Venkatasubramanian: Keynote at ICWSM 2016
○ Ricardo Baeza-Yates: Keynote at WebSci 2016
○ Toon Calders: Keynote at EGC 2016
○ Discrimination and Privacy in the Information Society by Custers et al. 2013
● Groups/workshops/communities
○ Fairness, Accountability, and Transparency in Machine Learning (FATML) workshop and resources
○ Data Transparency Lab - http://dtlconferences.org/