Talk on Algorithmic Bias given at York University (Canada) on March 11, 2019. This is a shorter version of an interactive workshop presented at University of Minnesota, Duluth in Feb 2019.
Algorithmic Bias: Challenges and Opportunities for AI in Healthcare – Gregory Nelson
Gregory S. Nelson, VP, Analytics and Strategy – Vidant Health | Adjunct Faculty Duke University
The promise of AI is quickly becoming a reality for a number of industries, including healthcare. For example, we have seen early successes in augmenting clinical intelligence for diagnostic imaging and in early detection of pneumonia and sepsis. But what happens when the algorithms are biased? In this presentation, we will outline a framework for AI governance and discuss ways in which we can address algorithmic bias in machine learning.
Objective 1: Illustrate the issues of bias in AI through examples specific to healthcare.
Objective 2: Summarize the growing body of work in the legal, regulatory, and ethical oversight of AI models and the implications for healthcare.
Objective 3: Outline steps that we can take to establish an AI governance strategy for our organizations.
How do we protect the privacy of users when building large-scale AI-based systems? How do we develop machine-learned models and systems that take fairness, accountability, and transparency into account? With the ongoing explosive growth of AI/ML models and systems, these are some of the ethical, legal, and technical challenges encountered by researchers and practitioners alike. In this talk, we will first motivate the need for adopting a "fairness and privacy by design" approach when developing AI/ML models and systems for different consumer and enterprise applications. We will then focus on the application of fairness-aware machine learning and privacy-preserving data mining techniques in practice, by presenting case studies spanning different LinkedIn applications (such as fairness-aware talent search ranking, privacy-preserving analytics, and LinkedIn Salary privacy & security design), and conclude with the key takeaways and open challenges.
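To make the privacy-preserving side of this concrete, here is a minimal sketch of the Laplace mechanism commonly used in differentially private analytics. This is an illustrative example only, not the implementation the talk describes; the function names and data are made up.

```python
import math
import random

def laplace_noise(scale: float) -> float:
    # Inverse-CDF sampling of a Laplace(0, scale) random variate.
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_count(records, epsilon: float = 1.0) -> float:
    """Differentially private count: the true count plus Laplace(1/epsilon) noise.
    Smaller epsilon means more noise and stronger privacy."""
    return len(records) + laplace_noise(1.0 / epsilon)

random.seed(0)  # fixed seed so the sketch is reproducible
noisy = private_count([1] * 100, epsilon=0.5)  # close to 100, off by the noise draw
```

The key design point is that the noise scale depends only on the query's sensitivity (a count changes by at most 1 per person) and the privacy budget epsilon, never on the data itself.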
The impact of AI on society keeps growing - and it is not all good. As data scientists, we have to put in real work to avoid ending up in ML hell.
This presentation was given at the Dutch Data Science Week.
Introduction to the ethics of machine learning – Daniel Wilson
A brief introduction to the domain that is variously described as the ethics of machine learning, data science ethics, AI ethics and the ethics of big data. (Delivered as a guest lecture for COMPSCI 361 at the University of Auckland on May 29, 2019)
This presentation looks at how AI works and how it is presently being used in education, and then outlines some concerns about how AI might be used in education in the future.
I argue that AI has a much greater part to play in Education – particularly in making education more widely available in the developing world and in reducing the cost of education.
The talk then moves on to discuss general ethical concerns about how AI is being used in society, looking at the issue of how we program autonomous vehicles as a case in point. I then outline five areas of concern about the use (and potential abuse) of AI in education arguing that we need to have a much more informed debate before things go too far. With this in mind, I close with some suggestions for courses and reading that might help colleagues to become better informed about the subject.
The field of Artificial Intelligence (AI) has progressed rapidly in the past few years. AI systems are having a growing impact on society, and concerns have been raised about whether AI systems can be trusted. A way to address these concerns is to apply ethically aligned design principles to the development of AI software. Yet these principles are still far from practical application. This talk provides state-of-the-art empirical insight into what researchers and professionals should do today when the client wants ethics added to their system.
Recent advances in machine learning have created powerful algorithms, pushing the boundaries of Artificial Intelligence (AI). As machine learning becomes increasingly prevalent, one of the biggest issues it needs to address is the bias that seeps into AI. This presentation focuses on bias in AI algorithms and provides a range of examples where AI is racist or sexist. We explore causes such as biased data, lack of attention to the inputs, and insufficient understanding of the algorithm. Finally, we propose steps that could help reduce such incidents of discrimination.
This collection of slides is meant as a starting point and tutorial for those who want to understand AI ethics, and in particular the challenges around bias and fairness. Furthermore, I have also included studies on how we as humans perceive AI's influence in our private as well as working lives.
Nick Schmidt of BLDS, LLC presented to the Maryland AI meetup, June 4, 2019 (https://www.meetup.com/Maryland-AI). Nick discusses ideas of fairness and how they apply to machine learning. He explores recent academic work on identifying and mitigating bias, and how his work in lending and employment can be applied to other industries. Nick explains how to measure whether an algorithm is fair and demonstrates the techniques that model builders can use to ameliorate bias when it is found.
Today, I will be presenting on the topic of
"Generative AI, responsible innovation, and the law."
Artificial Intelligence has been making rapid strides in recent years,
and its applications are becoming increasingly diverse.
Generative AI, in particular, has emerged as a promising area of innovation, with the potential to create highly realistic and compelling outputs.
Ethical Considerations in the Design of Artificial Intelligence – John C. Havens
A presentation for IEEE's Ethics Symposium, held in Vancouver in May 2016. Featuring presentations from John C. Havens, Mike Van der Loos, John P. Sullins, and Alan Mackworth.
Responsible AI in Industry: Practical Challenges and Lessons Learned – Krishnaram Kenthapadi
How do we develop machine learning models and systems taking fairness, accuracy, explainability, and transparency into account? How do we protect the privacy of users when building large-scale AI based systems? Model fairness and explainability and protection of user privacy are considered prerequisites for building trust and adoption of AI systems in high stakes domains such as hiring, lending, and healthcare. We will first motivate the need for adopting a “fairness, explainability, and privacy by design” approach when developing AI/ML models and systems for different consumer and enterprise applications from the societal, regulatory, customer, end-user, and model developer perspectives. We will then focus on the application of responsible AI techniques in practice through industry case studies. We will discuss the sociotechnical dimensions and practical challenges, and conclude with the key takeaways and open challenges.
Ethical Issues in Machine Learning Algorithms (Part 2) – Vladimir Kanchev
The presentation deals with the types of bias found in AI/ML systems - data bias, algorithmic bias, and lack of interpretability. Reasons for their appearance are given, along with major approaches for reducing them.
Invited talk on fairness in AI systems at the 2nd Workshop on Interactive Natural Language Technology for Explainable AI co-located with the International Conference on Natural Language Generation, 18/12/2020.
As artificial intelligence (AI) continues to advance and become more integrated into our daily lives, it has become increasingly important to consider the ethical implications of this technology. AI has the potential to transform many industries and improve our lives in numerous ways, but it also raises important ethical questions.
In this presentation, the ethical concerns surrounding AI are explored and discussed, with a focus on the need for ethical guidelines to be developed for AI development and use. We will examine issues such as privacy, bias, transparency, accountability, and the impact on jobs and society as a whole.
Through this exploration, we will consider the various perspectives on these issues and weigh the benefits and drawbacks of different ethical approaches to AI. We will also examine some of the current efforts being made to address these concerns, including the development of ethical frameworks and best practices.
The most important goal of this presentation is to disseminate a deeper understanding of the ethical considerations surrounding AI and the need for ethical guidelines to ensure that this technology is developed and used in a way that benefits all of us while respecting our values and principles.
Artificial Intelligence and mobile robotics are transforming businesses and the economy: this deck explores possible futures for companies and workers.
TEDx Manchester: AI & The Future of Work – Volker Hirsch
TEDx Manchester talk on artificial intelligence (AI) and how the ascent of AI and robotics impacts our future work environments.
The video of the talk is now also available here: https://youtu.be/dRw4d2Si8LA
Artificial Intelligence (AI) and Job Loss – Ikhlaq Sidhu
The arguments of job displacement, economic growth, and policy arguments related to artificial intelligence, data, algorithms, and automated technologies.
Fairness-aware Machine Learning: Practical Challenges and Lessons Learned (WS... – Krishnaram Kenthapadi
Researchers and practitioners from different disciplines have highlighted the ethical and legal challenges posed by the use of machine learned models and data-driven systems, and the potential for such systems to discriminate against certain population groups, due to biases in algorithmic decision-making systems. This tutorial presents an overview of algorithmic bias / discrimination issues observed over the last few years and the lessons learned, key regulations and laws, and evolution of techniques for achieving fairness in machine learning systems. We will motivate the need for adopting a "fairness by design" approach (as opposed to viewing algorithmic bias / fairness considerations as an afterthought), when developing machine learning based models and systems for different consumer and enterprise applications. Then, we will focus on the application of fairness-aware machine learning techniques in practice by presenting non-proprietary case studies from different technology companies. Finally, based on our experiences working on fairness in machine learning at companies such as Facebook, Google, LinkedIn, and Microsoft, we will present open problems and research directions for the data mining / machine learning community.
Please cite as:
Sarah Bird, Ben Hutchinson, Krishnaram Kenthapadi, Emre Kiciman, and Margaret Mitchell. Fairness-Aware Machine Learning: Practical Challenges and Lessons Learned. WSDM 2019.
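As a concrete instance of the "fairness by design" techniques this tutorial covers, here is a sketch of reweighing, a standard pre-processing method that weights each example so that group membership and outcome become statistically independent. The code and data are illustrative only, not taken from the tutorial.

```python
from collections import Counter

def reweighing_weights(groups, labels):
    """Kamiran-Calders style reweighing: weight each example by the
    expected / observed frequency of its (group, label) cell, so that
    group membership and outcome become statistically independent."""
    n = len(labels)
    g_count = Counter(groups)
    y_count = Counter(labels)
    gy_count = Counter(zip(groups, labels))
    return [
        (g_count[g] * y_count[y]) / (n * gy_count[(g, y)])
        for g, y in zip(groups, labels)
    ]

# made-up data: group "a" receives positive labels more often than group "b"
groups = ["a", "a", "a", "b", "b", "b"]
labels = [1, 1, 0, 1, 0, 0]
weights = reweighing_weights(groups, labels)  # [0.75, 0.75, 1.5, 1.5, 0.75, 0.75]
```

After reweighing, the weighted positive-label rate is the same (0.5) in both groups, which is exactly the "design-time" correction the tutorial's framing argues for, rather than patching bias after deployment.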
In our dynamic session at Diet Ernakulam, we explored the transformative possibilities of integrating Artificial Intelligence (AI) in educational settings. The talk aimed to empower primary school educators with insights and practical strategies to leverage AI for an enriched learning experience. This talk marks the beginning of an ongoing conversation. The journey of integrating AI in classrooms is an evolving one, and we look forward to continued collaboration, exploration, and innovation in the intersection of education and technology.
A brief introduction to Data Science, explaining its concepts: algorithms, machine learning, supervised and unsupervised learning, clustering, statistics, data preprocessing, real-world applications, etc.
It's part of a Data Science Corner Campaign where I will be discussing the fundamentals of Data Science, AI/ML, Statistics, etc.
Machine Learning: Addressing the Disillusionment to Bring Actual Business Ben... – Jon Mead
'Machine learning’ is one of those cringy phrases, almost (if not already) taboo in the world of high-tech SaaS. Applying true machine learning to an organization’s product(s), however, can have real benefit for the business, its clients, and the industry as a whole. From credit card fraud investigations to the way that a car is built, machine learning has permeated our everyday life without a common understanding of what it is and how to implement it.
AI+Labor Markets Presentation to CSM-16-may-2024 – Joaquim Jorge
Presentation Title: AI & Labor Markets
Presenter: Joaquim Jorge
Description:
Explore the transformative impact of Artificial Intelligence (AI) on labor markets in this comprehensive presentation by Joaquim Jorge. This insightful slideset delves into the opportunities and challenges that AI integration brings to various industries, highlighting key AI techniques and their real-world applications.
Bias in Hiring and Firing:
The presentation critically examines biases in AI systems used for hiring and firing decisions:
Hiring Bias: Instances where AI systems, like LinkedIn’s recommendation system and OpenAI's GPT, have shown biases in résumé ranking and job advertisements, including gender bias and cost-efficiency algorithms inadvertently favoring male candidates.
Firing Bias: AI's role in monitoring productivity and making termination decisions, with examples from Amazon’s “Time off Task” system and Uber’s driver performance metrics, highlighting unfair terminations affecting minority groups.
Mitigation Strategies:
Bias Audits: Regularly auditing AI systems to identify and mitigate biases.
Diverse Training Data: Ensuring training data are diverse and representative of all demographic groups.
Human Oversight: Implementing human oversight to review and validate AI decisions.
Explainable AI (XAI): Making AI decisions transparent and accountable to detect and correct biases.
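A bias audit like the one listed above can start from something as simple as a demographic parity check: compare positive-decision rates across groups. A minimal sketch with hypothetical decision data (the metric is standard; the data and names are invented for illustration):

```python
def demographic_parity_gap(decisions, groups):
    """Simple bias-audit metric: the difference between the highest and
    lowest positive-decision rate across groups (0.0 means parity)."""
    rates = {}
    for g in set(groups):
        members = [d for d, gg in zip(decisions, groups) if gg == g]
        rates[g] = sum(members) / len(members)
    return max(rates.values()) - min(rates.values())

# hypothetical screening decisions (1 = advance to interview) for two groups
decisions = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["m", "m", "m", "m", "f", "f", "f", "f"]
gap = demographic_parity_gap(decisions, groups)  # 0.75 - 0.25 = 0.5
```

Running such a check regularly, on every retrained model, is what turns "bias audits" from a one-off review into the ongoing monitoring the mitigation list calls for.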
Future of Labor Markets:
The presentation explores potential futures of labor markets with AI, presenting both utopian and dystopian scenarios:
Utopian Scenario: AI could lead to increased worker satisfaction by automating repetitive tasks, creating new career opportunities, and reducing physical labor demands, resulting in better work-life balance and economic opportunities.
Dystopian Scenario: AI could widen the economic divide, increase job precarity, and erode worker rights. Risks include increased surveillance, loss of autonomy, and the social and psychological impacts of job displacement.
Key Takeaways:
Understand the role and impact of different AI technologies in various sectors.
Recognize and address biases in AI systems, especially in hiring and firing decisions.
Explore potential futures of labor markets with AI integration.
Learn strategies for ensuring ethical and fair AI applications.
This presentation is essential for professionals, researchers, and policymakers interested in the intersection of AI and labor markets, providing a detailed analysis of current trends, challenges, and future possibilities.
Machine learning and AI are being adopted across many industries at a rapid pace. But bias in AI, lack of talent diversity in AI, and lack of access to knowledge pose major risks. In this presentation, I showcase some real-life examples of bias in AI. But if we take the right steps, we can build an inclusive AI. Building an inclusive AI is the right thing to do for society; it also makes for a great product and business.
Despite AI’s potential for beneficial use, it creates important risks for Australians. AI, big data, and AI-informed decision making can cause exclusion, discrimination, skill loss, and economic impact; and can affect privacy, security of critical infrastructure and social well-being. What types of technology raise particular human rights concerns? Which human rights are particularly implicated?
Melinda Thielbar, Data Science Practice Lead and Director of Data Science at Fidelity Investments
From corporations to governments to private individuals, most of the AI community has recognized the growing need to incorporate ethics into the development and maintenance of AI models. Much of the current discussion, though, is aimed at leaders and managers. This talk is directed at data scientists, data engineers, ML Ops specialists, and anyone else who is responsible for the hands-on, day-to-day work of building, productionizing, and maintaining AI models. We'll give a short overview of the business case for why technical AI expertise is critical to developing an AI ethics strategy. Then we'll discuss the technical problems that cause AI models to behave unethically, how to detect problems at all phases of model development, and the tools and techniques that are available to support technical teams in ethical AI development.
I developed this presentation to discuss a framework for automation and autonomic operations, in particular in the finance domain. It is a high-level introduction but includes guidance on how to best select AI and RPA projects with higher implementation success rates. If you are interested in a copy, don't be shy! Reach out!
THE SOCIAL IMPACTS OF AI AND HOW TO MITIGATE ITS HARMS – TekRevol LLC
In the wake of mass automation, UBIs might be the answer that low-income families and citizens are looking towards. As automation across industries increases, citizens' fear of its impact is severe. From privacy concerns through rogue AI and doomsday scenarios to the more realistic concerns of misused AI and job loss, pop-culture-led paranoia has shaken up the world. These concerns have to be dealt with, and tech companies and businesses need a robust moral framework under which decisions are made, to ensure any negative externalities of implementing AI are mitigated to the maximum degree. Artificial Intelligence is a great tool for optimizing businesses and making our world more efficient, but the moral imperative on all of us is to ensure that happens side by side with human sustainability, not at its expense.
This knolx is an introduction to machine learning, in which we see the basics of various algorithms. It isn't a complete intro to ML, but it can be a good starting point for anyone who wants to start in ML. At the end, we will walk through a demo analyzing the FIFA dataset, covering various data analysis techniques, and use an ML algorithm to find five players who are similar to each other.
In this presentation, you will discover how you can begin to leverage the power and potential of machine learning as a technology tool and as a framework for growth.
Unlocking the Potential of Artificial Intelligence: Machine Learning in Pract... – eswaralaldevadoss
Machine learning is a subset of artificial intelligence that involves training computers to learn from data and make predictions or decisions based on that data. It centers on building algorithms and models that learn patterns and relationships from data and use that knowledge to make predictions or take actions.
Here are some key concepts that can help beginners understand machine learning:
Data: Machine learning algorithms require data to learn from. This data can come from a variety of sources such as databases, spreadsheets, or sensors. The quality and quantity of data can greatly impact the accuracy and effectiveness of machine learning models.
Training: In machine learning, training involves feeding data into a model and adjusting its parameters until it can accurately predict outcomes. This process involves testing and tweaking the model to improve its accuracy.
Algorithms: There are many different algorithms used in machine learning, each with its own strengths and weaknesses. Common machine learning algorithms include decision trees, random forests, and neural networks.
Supervised vs. Unsupervised Learning: Supervised learning involves training a model on labeled data, where the desired outcome is already known. Unsupervised learning, on the other hand, involves training a model on unlabeled data and allowing it to identify patterns and relationships on its own.
Evaluation: After training a model, it's important to evaluate its accuracy and performance on new data. This involves testing the model on a separate set of data that it hasn't seen before.
Overfitting vs. Underfitting: Overfitting occurs when a model is too complex and fits the training data too closely, leading to poor performance on new data. Underfitting occurs when a model is too simple and fails to capture important patterns in the data.
Applications: Machine learning is used in a wide range of applications, from predicting stock prices to identifying fraudulent transactions. It's important to understand the specific needs and constraints of each application when building machine learning models.
Overall, machine learning is a powerful tool that can help businesses and organizations make more informed decisions based on data. By understanding the basic concepts and techniques of machine learning, beginners can begin to explore the potential applications and benefits of this exciting field.
Similar to Algorithmic Bias - What is it? Why should we care? What can we do about it? (20)
Slides for Muslims in ML workshop presentation at NeurlPS 2020 on December 8, 2020 - this is a shorter 25 minute version of the UMass Lowell talk of November 2020 (so the slides are a subset of that).
Hate speech is language intended to cause harm against a particular individual or group, often based on their racial, ethnic, religious, or gender identity. Hate speech is widespread on social media, and is increasingly common in mainstream political discourse. That said, there is no clear consensus as to what constitutes hate speech. In addition, human moderators come with their own biases, and automatic computer algorithms are often easy to fool. All of these factors complicate the efforts of social media platforms to filter or reduce such content. During this interactive workshop we will discuss examples from Twitter in the hopes of reaching some consensus as to what is and is not hate speech. We will also try to determine what kind of knowledge a human moderator or an automatic algorithm would need to have in order to make this determination. We will try to avoid particularly graphic examples of hate speech and focus on more subtle cases.
Poster presented at the Semeval 2015 workshop. Our system clustered words based on their contexts in order to identify their underlying meanings or senses.
These are the slides used in my tutorial at MICAI 2013 (presented November 25, 2013). The tutorial is on Measuring the Similarity and Relatedness of Concepts, and focuses on methods that rely on information from ontologies, possibly augmented with corpus statistics. This tutorial includes background on the measures, plus information on software that implements different measures, plus reference standard data sets that can be used to evaluate measures.
Some thoughts on what it's like to do a Master's thesis with me, including general ideas about research, my research interests, and a few suggestions as to what will lead to success
These are the slides for a talk given at the University of Alabama, Birmingham on April 19, 2013. The title of the talk is "Measuring Similarity and Relatedness in the Biomedical Domain : Methods and Applications"
Measuring Semantic Similarity and Relatedness in the Biomedical Domain : Methods and Applications - presented Feb 21, 2012 as a webinar to the Mayo Clinic BMI group.
How to Create Map Views in the Odoo 17 ERPCeline George
The map views are useful for providing a geographical representation of data. They allow users to visualize and analyze the data in a more intuitive manner.
Palestine last event orientationfvgnh .pptxRaedMohamed3
An EFL lesson about the current events in Palestine. It is intended to be for intermediate students who wish to increase their listening skills through a short lesson in power point.
How to Split Bills in the Odoo 17 POS ModuleCeline George
Bills have a main role in point of sale procedure. It will help to track sales, handling payments and giving receipts to customers. Bill splitting also has an important role in POS. For example, If some friends come together for dinner and if they want to divide the bill then it is possible by POS bill splitting. This slide will show how to split bills in odoo 17 POS.
How to Make a Field invisible in Odoo 17Celine George
It is possible to hide or invisible some fields in odoo. Commonly using “invisible” attribute in the field definition to invisible the fields. This slide will show how to make a field invisible in odoo 17.
Ethnobotany and Ethnopharmacology:
Ethnobotany in herbal drug evaluation,
Impact of Ethnobotany in traditional medicine,
New development in herbals,
Bio-prospecting tools for drug discovery,
Role of Ethnopharmacology in drug evaluation,
Reverse Pharmacology.
How libraries can support authors with open access requirements for UKRI fund...
Algorithmic Bias - What is it? Why should we care? What can we do about it?
1. Algorithmic Bias
What is it? Why should we care?
What can we do about it?
Ted Pedersen
Department of Computer Science / UMD
tpederse@d.umn.edu
@SeeTedTalk
http://umn.edu/home/tpederse
2. Me?
Computer Science Professor at UMD since 1999
Research in Natural Language Processing since even before then
How can we determine what a word means in a given context?
Automatically, with a computer
Have used Machine Learning and other Data Driven techniques for many years
In the last decade these techniques have entered the real world
Important to think about impacts and consequences of that
3. Our Plan
What are Algorithms? What is Bias? What is Algorithmic Bias?
What are some examples of Algorithmic Bias?
Why should we care?
What can we do about it?
4. What are Algorithms?
A series of steps that we follow to accomplish a task.
Computer programs are a specific way of describing an algorithm.
IF (MAJOR == ‘Computer Science’) AND (GPA > 3.00)
THEN PRINT job offer letter
ELSE DELETE application
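The rule above can also be written as a short, runnable function. This is a toy sketch (the field names and outcome strings are invented for illustration), but it makes the point that a computer program is simply a precise statement of a series of steps:

```python
def screen_application(major: str, gpa: float) -> str:
    """A toy screening algorithm: a fixed series of steps applied
    to every applicant. Field names and outcomes are invented."""
    if major == "Computer Science" and gpa > 3.00:
        return "print job offer letter"
    return "delete application"

print(screen_application("Computer Science", 3.4))  # print job offer letter
print(screen_application("History", 3.9))           # delete application
```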
5. What is Machine Learning / Artificial Intelligence?
Machine Learning and AI are often used synonymously. We can think of them as a special class of algorithms. These are often the source of algorithmic bias.
Machine Learning algorithms find patterns in data and use those to build classifiers that make decisions on our behalf.
These classifiers can be simple sets of rules (IF THEN ELSE) or they might be more complicated models where features are automatically assigned weights.
These algorithms are often very complex and very mathematical. It is not easy to understand what they are doing (even for experts).
6. What is Bias?
Whatever causes an unfair action or representation that often leads to harm.
Origins can be in prejudice, hate, or ignorance.
Real life is full of many examples.
But how does this relate to Algorithms?
Machine Learning is complex and mathematical, so isn’t it objective??
7. Machine Learning and Algorithmic Bias
IF (MAJOR == ‘Computer Science’) AND (GENDER == ‘Male’) AND (GPA > 3.00)
THEN PRINT job offer letter
ELSE DELETE application
Unreasonable? Unfair? Harmful? Biased? Yes. But a Machine Learning system could easily learn this rule from your hiring history if your company has only employed male programmers.
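To see how a learner could pick up the gender rule, here is a minimal sketch with invented hiring data. If every past hire is male, then gender "predicts" hiring perfectly, and a naive learner that scores candidate rules by accuracy will rank the biased rule as the best one:

```python
# Invented hiring history: only male programmers were ever hired.
past_applicants = [
    {"gender": "male",   "gpa": 3.6, "hired": True},
    {"gender": "male",   "gpa": 3.1, "hired": True},
    {"gender": "female", "gpa": 3.9, "hired": False},
    {"gender": "female", "gpa": 3.4, "hired": False},
]

def rule_accuracy(feature, value):
    """How often the rule 'feature == value' agrees with the hiring decision."""
    hits = sum((a[feature] == value) == a["hired"] for a in past_applicants)
    return hits / len(past_applicants)

# The biased rule scores perfectly on this history, so a learner
# optimizing accuracy will happily adopt it.
print(rule_accuracy("gender", "male"))  # 1.0
```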
8. What is Algorithmic Bias?
Whatever causes an algorithm to produce unfair actions or representations.
The data that Machine Learning / AI rely on is often created by humans, or by other algorithms!
There are many decisions along the way to developing a computer system where humans, and the data they create, enter the process.
Biases that exist in a workplace, community, or culture can (easily) enter into the process and be codified in programs and models.
Many examples …
9. Facial recognition systems that don’t “see” non-white faces
Joy Buolamwini / MIT
Twitter : @jovialjoy
How I'm Fighting Bias in Algorithms (TED talk) :
https://www.youtube.com/watch?v=UG_X_7g63rY
Gender Shades :
http://gendershades.org/
Nova :
https://www.pbs.org/wgbh/nova/article/ai-bias/
10. Risk assessment systems that overstate the odds of black men being a flight risk or re-offending
Pro Publica investigation (focused on Broward County, Florida):
https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
Wisconsin also has some history:
https://www.wisconsinwatch.org/2019/02/q-a-risk-assessments-explained/
11. Resume screening systems that filter out women
Amazon Scraps Secret AI Recruiting Tool - Reuters story (Oct 2018) :
https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G
Hiring Algorithms are not Neutral - Harvard Business Review (Nov 2016) :
https://hbr.org/2016/12/hiring-algorithms-are-not-neutral
12. Online advertising that systematically suggests that people with “black” names are more likely to have criminal records
Latanya Sweeney / Harvard
http://latanyasweeney.org
CACM paper (April 2013):
https://queue.acm.org/detail.cfm?id=2460278
MIT Technology Review (Feb 2013):
https://www.technologyreview.com/s/510646/racism-is-poisoning-online-ad-delivery-says-harvard-professor/
13. Search engines that rank hate speech, misinformation, and pornography highly in response to neutral queries
Safiya Umoja Noble / USC / Oxford U
Twitter : @safiyanoble
Algorithms of Oppression: How Search Engines Reinforce Racism :
https://www.youtube.com/watch?v=Q7yFysTBpAo
14. Where does Algorithmic Bias come from?
Machine Learning isn’t magic. There is a lot of human engineering that goes into these systems.
1) Create or collect training data
2) Decide what features in the data are relevant and important
3) Decide what you want to predict or classify and what you conclude from that
Bias can be introduced at any (or all) of these points
15. How does Bias affect Training Data?
Historical Bias - data captures bias and unfairness that has existed in society
Marginalized communities are over-policed, so there is more data about searches and arrests, which leads to predictions of more of the same
Women are not well represented in computing, so there is little data about hiring and success, which leads to predictions to keep doing more of the same
What if we add more training data??
Adding more training data just gives you more historical bias.
16. How does Bias affect Training Data?
Representational Bias - the sample in the training data is skewed or not representative of the entire possible population
A facial recognition system is trained on photographs of faces. 80% of the faces are white, and 75% of those are male.
A fake profile detector is trained on a name database made up of First Last names (John Smith, Mary Jones). Other names are more likely to be considered “fake”.
If we are careful and add more representative data, this might help.
Can have high overall accuracy while doing poorly on smaller classes.
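The last point on this slide is worth making concrete. With invented numbers, a classifier that does well on the majority group and badly on a minority group can still report impressive overall accuracy:

```python
# Invented numbers: a face dataset that is 90% majority-group faces.
# The classifier is right 95% of the time on the majority group but
# only 50% of the time (a coin flip) on the minority group.
n_majority, n_minority = 900, 100
correct = 0.95 * n_majority + 0.50 * n_minority
overall_accuracy = correct / (n_majority + n_minority)
print(overall_accuracy)  # 0.905 -- high overall, despite coin-flip
                         # performance on the smaller class
```

This is why per-group evaluation (as in the Gender Shades work cited earlier) matters more than a single headline accuracy number.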
17. Features
What features do we decide to include in our data?
What information do we collect in surveys, applications, arrest reports, etc?
What information do we give to our Machine Learning algorithms?
We don’t collect information about race or gender!
Does that mean our system is free from racism or sexism?
What features can indirectly signal race or gender?
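Here is a minimal sketch of an indirect signal, with invented data. Even after the gender column is deleted, another feature can reconstruct it almost perfectly:

```python
# Invented data: "gender" has been removed from the features, but
# "college" remains. Smith College is a women-only college, so that
# one feature value reveals the attribute we thought we had excluded.
applicants = [
    {"college": "Smith College", "actual_gender": "female"},
    {"college": "Smith College", "actual_gender": "female"},
    {"college": "State U",       "actual_gender": "male"},
    {"college": "State U",       "actual_gender": "female"},
]

def reveals(college, gender):
    """Fraction of applicants from `college` who have `gender`."""
    group = [a for a in applicants if a["college"] == college]
    return sum(a["actual_gender"] == gender for a in group) / len(group)

print(reveals("Smith College", "female"))  # 1.0 -- a perfect proxy
```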
18. Proxies as Conclusions
We often want to predict outcomes that we can’t specifically measure. Proxies are features that stand in for that outcome.
Will a student succeed in college?
Will a job candidate be a productive employee?
Does a search result satisfy a user query?
19. The Problem with Proxies
They often end up measuring something else, something that introduces bias.
Socio Economic Status
Race
Gender
Immigration Status
Religion
20. Why should we care?
Feedback loops
Algorithms are making decisions about us and for us, and those decisions become data for the next round of learning algorithms. Biased decisions today become the biased machine learning training data of tomorrow.
Machine Learning is great if you want the future to look like the past.
Two different kinds of harm (Kate Crawford & colleagues)
Resources are allocated based on algorithms
Representations are reinforced and amplified by algorithms.
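The feedback loop can be sketched in a few lines with invented numbers, using the over-policing example from earlier. Two neighborhoods have the same true crime rate, but one starts with more recorded arrests; if each year's patrols simply go where past records point, the initial skew compounds:

```python
# Invented numbers: neighborhoods A and B have identical true crime
# rates, but A starts with more *recorded* arrests. Each year the
# extra patrols go wherever past data points, and patrols generate
# new records -- so the initial skew snowballs.
recorded_arrests = {"A": 60, "B": 40}
for year in range(5):
    target = max(recorded_arrests, key=recorded_arrests.get)
    recorded_arrests[target] += 100  # more patrols -> more records there
print(recorded_arrests)  # {'A': 560, 'B': 40}
```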
21. What can we do about it? Say Something
Algorithmic Justice League - report bias
https://www.ajlunited.org/fight#report-bias
Share it, Tweet it
Screenshots and other documentation are very important
22. What can we do? Learn More
Kate Crawford / Microsoft Research, AI Now Institute
Twitter : @katecrawford
The Trouble with Bias :
https://www.youtube.com/watch?v=fMym_BKWQzk
There is a Blind Spot in AI Research :
https://www.nature.com/news/there-is-a-blind-spot-in-ai-research-1.20805
23. What can we do? Learn More
Virginia Eubanks / U at Albany
Twitter : @PopTechWorks
Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor :
https://www.youtube.com/watch?v=TmRV17kAumc
24. What can we do? Learn More
Cathy O'Neil
Twitter : @mathbabedotorg
Weapons of Math Destruction
https://www.youtube.com/watch?v=TQHs8SA1qpk
25. Conclusion
Algorithms are not objective
Can be used to codify and harden biases under the guise of technology
Machine Learning is great if you want the future to look like the past
We should expect transparency and accountability from Algorithms
Why did it make this decision?
What consequences exist when decisions are biased?