The document discusses bias in artificial intelligence. It notes that AI systems inherit human biases from the data used to train models. Word embeddings and machine translation tools often reflect common stereotypes, such as associating nurses with women and doctors with men. Bias can be introduced at every stage of AI development, from data collection and annotation to model training. Efforts are needed to raise awareness of bias, promote inclusion and diversity, and ensure explainability and accountability in AI.
Invited talk on fairness in AI systems at the 2nd Workshop on Interactive Natural Language Technology for Explainable AI co-located with the International Conference on Natural Language Generation, 18/12/2020.
The field of Artificial Intelligence (AI) has progressed rapidly in the past few years. AI systems are having a growing impact on society, and concerns have been raised about whether AI systems can be trusted. One way to address these concerns is to apply ethically aligned design principles to the development of AI software. Yet these principles are still far from practical application. This talk provides state-of-the-art empirical insight into what researchers and professionals should do today when a client wants ethics added to their system.
The Future of Humanity
Through our interaction with machines, we develop emotional, human expectations of them. Alexa, for example, comes alive when we speak with it. AI is and will be a representation of its cultural context, the values and ethics we apply to one another as humans.
This machinery is eerily familiar as it mirrors us, and eventually becomes even smarter than us mere mortals. We’re programming its advantages based on how we see ourselves and the world around us, and we’re doing this at an incredible pace. This shift is pervading culture from our perceptions of beauty and aesthetics to how we interact with one another – and our AI.
Infused with technology, we’re asking: what does it mean to be human?
Our report examines:
• The evolution of our empathy from humans to animals and robots
• How we treat AI in its infancy like we do a child, allowing it space to grow
• The spectrum of our emotional comfort in a world embracing AI
• The cultural contexts fueling AI biases, such as gender stereotypes, that drive the direction of AI
• How we place an innate trust in machines, more than we do one another
Methodology
For this report, sparks & honey conducted US-focused research on the future of AI. Together with Heartbeat AI Technologies, we examined the emotional sentiment (feelings and emotions) around artificial intelligence in a Heartbeat AI Pulse Survey of 150 people in the US. Tapping into our Influencer Advisory Board and proprietary cultural intelligence system, we combed through thousands of signals to build a vision of the future of AI. We also interviewed leading experts in the field of artificial intelligence.
This collection of slides is meant as a starting point and tutorial for those who want to understand AI ethics, in particular the challenges around bias and fairness. I have also included studies on how we as humans perceive AI's influence in our private and working lives.
Algorithmic Bias: Challenges and Opportunities for AI in Healthcare – Gregory Nelson
Gregory S. Nelson, VP, Analytics and Strategy – Vidant Health | Adjunct Faculty Duke University
The promise of AI is quickly becoming a reality for a number of industries, including healthcare. For example, we have seen early successes in augmenting clinical intelligence for diagnostic imaging and in early detection of pneumonia and sepsis. But what happens when the algorithms are biased? In this presentation, we will outline a framework for AI governance and discuss ways in which we can address algorithmic bias in machine learning.
Objective 1: Illustrate the issues of bias in AI through examples specific to healthcare.
Objective 2: Summarize the growing body of work in the legal, regulatory, and ethical oversight of AI models and the implications for healthcare.
Objective 3: Outline steps that we can take to establish an AI governance strategy for our organizations.
Talk on Algorithmic Bias given at York University (Canada) on March 11, 2019. This is a shorter version of an interactive workshop presented at University of Minnesota, Duluth in Feb 2019.
How do we protect privacy of users when building large-scale AI based systems? How do we develop machine learned models and systems taking fairness, accountability, and transparency into account? With the ongoing explosive growth of AI/ML models and systems, these are some of the ethical, legal, and technical challenges encountered by researchers and practitioners alike. In this talk, we will first motivate the need for adopting a "fairness and privacy by design" approach when developing AI/ML models and systems for different consumer and enterprise applications. We will then focus on the application of fairness-aware machine learning and privacy-preserving data mining techniques in practice, by presenting case studies spanning different LinkedIn applications (such as fairness-aware talent search ranking, privacy-preserving analytics, and LinkedIn Salary privacy & security design), and conclude with the key takeaways and open challenges.
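The privacy-preserving analytics mentioned above typically builds on mechanisms such as differential privacy. As an illustrative sketch (not the talk's actual implementation; the function name, `epsilon` value, and example numbers are assumptions for demonstration), the Laplace mechanism releases a count with calibrated noise:

```python
import numpy as np

def laplace_count(true_count, epsilon, rng=None):
    """Release a count with epsilon-differential privacy via the Laplace mechanism.
    A counting query has sensitivity 1, so the noise scale is 1/epsilon."""
    rng = rng or np.random.default_rng()
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

# Example: releasing how many members viewed a page, with privacy budget 0.5
noisy_views = laplace_count(1042, epsilon=0.5)
```

Smaller `epsilon` means stronger privacy and noisier answers; analytics systems then budget `epsilon` across the queries they answer.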
The impact of AI on society gets bigger and bigger - and it is not all good. We as Data Scientists have to really put in work to not end up in ML hell.
This presentation was given at the Dutch Data Science Week.
We now live in a world where we trust intelligent systems blindly, believing in their rationality and objectivity. However, in reality this is far from the truth.
In this talk given at the City.AI Singapore chapter, we explored the nature, implications and handling strategies for Model Bias in AI.
Ethical Considerations in the Design of Artificial Intelligence – John C. Havens
A presentation for IEEE's Ethics Symposium happening in Vancouver, May 2016. Featuring presentations from John C. Havens, Mike Van der Loos, John P. Sullins, and Alan Mackworth.
This presentation looks at how AI works and how it is currently being used in education, and then outlines some concerns about how AI might be used in education in the future.
I argue that AI has a much greater part to play in Education – particularly in making education more widely available in the developing world and in reducing the cost of education.
The talk then moves on to discuss general ethical concerns about how AI is being used in society, looking at the issue of how we program autonomous vehicles as a case in point. I then outline five areas of concern about the use (and potential abuse) of AI in education arguing that we need to have a much more informed debate before things go too far. With this in mind, I close with some suggestions for courses and reading that might help colleagues to become better informed about the subject.
Presentation - Racial and Gender Bias in AI by Gunay Kazimzade. Gunay Kazimzade works at the Weizenbaum Institute for the Networked Society and is a Ph.D. student in Computer Science at the Technical University of Berlin. After degrees in Applied Mathematics and Computer Science, she was involved in education and managed two social projects focused on computer science education for women and children, training over 3,000 women and children in Azerbaijan. She currently works with the research group "Criticality of Artificial Intelligence-based Systems". Her main research directions are gender and racial bias in AI, inclusiveness in AI, and AI-enhanced education. She is a TEDx speaker and presents at various conferences and summits across Europe.
Technology for everyone - AI ethics and Bias – Marion Mulder
Slides from my talk at #ToonTechTalks on 27 September 2018
We all see the great potential AI is bringing us. But is it really bringing it to everyone? How are we ensuring under-represented groups are included and vulnerable people are protected? What do we do when our technology is unintentionally biased and discriminates against certain groups? And what if the data and the AI are correct, but a side effect is that some groups are put at risk? These are all questions we need to think about when we are advancing technology for the benefit of humanity.
Sharing what I've learned from my work in diversity, digital and from following great minds in this field such as Joanna Bryson, Virginia Dignum, Rumman Chowdhury, Juriaan van Diggelen, Valerie Frissen, Catelijne Muller, and many more.
As artificial intelligence (AI) continues to advance and become more integrated into our daily lives, it has become increasingly important to consider the ethical implications of this technology. AI has the potential to transform many industries and improve our lives in numerous ways, but it also raises important ethical questions.
In this presentation, the ethical concerns surrounding AI are explored and discussed, with a focus on the need for ethical guidelines to be developed for AI development and use. We will examine issues such as privacy, bias, transparency, accountability, and the impact on jobs and society as a whole.
Through this exploration, we will consider the various perspectives on these issues and weigh the benefits and drawbacks of different ethical approaches to AI. We will also examine some of the current efforts being made to address these concerns, including the development of ethical frameworks and best practices.
The most important goal of this presentation is to disseminate a deeper understanding of the ethical considerations surrounding AI and the need for ethical guidelines to ensure that this technology is developed and used in a way that benefits all of us while respecting our values and principles.
A Theory of Knowledge Lecture given by Mark Steed, Director of JESS Dubai on Monday 4th March 2019
The lecture explains how AI works and then looks at some of the ethical implications
AI Governance and Ethics - Industry Standards – Ansgar Koene
Presentation on the potential for ethics-based industry standards to function as a vehicle for addressing socio-technical challenges from AI.
Presentation given at the 1st Austrian IFIP forum on "AI and future society".
Introduction to the ethics of machine learning – Daniel Wilson
A brief introduction to the domain that is variously described as the ethics of machine learning, data science ethics, AI ethics and the ethics of big data. (Delivered as a guest lecture for COMPSCI 361 at the University of Auckland on May 29, 2019)
Fairness-aware Machine Learning: Practical Challenges and Lessons Learned (WSDM 2019) – Krishnaram Kenthapadi
Researchers and practitioners from different disciplines have highlighted the ethical and legal challenges posed by the use of machine learned models and data-driven systems, and the potential for such systems to discriminate against certain population groups, due to biases in algorithmic decision-making systems. This tutorial presents an overview of algorithmic bias / discrimination issues observed over the last few years and the lessons learned, key regulations and laws, and evolution of techniques for achieving fairness in machine learning systems. We will motivate the need for adopting a "fairness by design" approach (as opposed to viewing algorithmic bias / fairness considerations as an afterthought), when developing machine learning based models and systems for different consumer and enterprise applications. Then, we will focus on the application of fairness-aware machine learning techniques in practice by presenting non-proprietary case studies from different technology companies. Finally, based on our experiences working on fairness in machine learning at companies such as Facebook, Google, LinkedIn, and Microsoft, we will present open problems and research directions for the data mining / machine learning community.
Please cite as:
Sarah Bird, Ben Hutchinson, Krishnaram Kenthapadi, Emre Kiciman, and Margaret Mitchell. Fairness-Aware Machine Learning: Practical Challenges and Lessons Learned. WSDM 2019.
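As a minimal illustration of the kind of fairness metric such tutorials cover, the demographic-parity gap compares positive-prediction rates across groups. This sketch is generic (the function name and toy data are hypothetical, not taken from the tutorial):

```python
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Difference between the highest and lowest positive-prediction rates
    across groups; 0.0 means the classifier satisfies demographic parity."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred)
    rates = [positives[g] / totals[g] for g in totals]
    return max(rates) - min(rates)

# Toy example: binary predictions for two groups
preds  = [1, 1, 0, 1, 0, 0, 0, 1]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
gap = demographic_parity_gap(preds, groups)  # 0.75 vs 0.25 -> gap of 0.5
```

A "fairness by design" workflow would compute a metric like this during evaluation, rather than as an afterthought.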
What is Artificial Intelligence | Artificial Intelligence Tutorial For Beginners – Edureka!
** Machine Learning Engineer Masters Program: https://www.edureka.co/masters-program/machine-learning-engineer-training **
This tutorial on Artificial Intelligence gives you a brief introduction to AI discussing how it can be a threat as well as useful. This tutorial covers the following topics:
1. AI as a threat
2. What is AI?
3. History of AI
4. Machine Learning & Deep Learning examples
5. Dependency on AI
6. Applications of AI
7. AI Course at Edureka - https://goo.gl/VWNeAu
For more information, please write back to us at sales@edureka.co
Call us at IN: 9606058406 / US: 18338555775
Facebook: https://www.facebook.com/edurekaIN/
Twitter: https://twitter.com/edurekain
LinkedIn: https://www.linkedin.com/company/edureka
GHC17 workshop material on consciously tackling unconscious bias. After introduction on what bias is and how it can affect you, we took Implicit Association Test (IAT) on gender-science at each table. We then shared our findings as a group and opened the floor for questions.
Tips and techniques for hyperparameter optimization – SigOpt
All machine learning and artificial intelligence pipelines - from reinforcement agents to deep neural nets - have tunable hyperparameters. Optimizing these hyperparameters can take a model from scrappy prototype to production-ready system. This presentation shows techniques for performing hyperparameter optimization from an engineer who builds advanced and widely used optimization tools.
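As a minimal sketch of what hyperparameter optimization involves, here is plain random search over a toy objective. This is a baseline technique, not SigOpt's optimizer; the parameter names and objective are made up for illustration:

```python
import random

def random_search(objective, space, n_trials=50, seed=0):
    """Randomly sample hyperparameter configurations and keep the best one.
    `space` maps each hyperparameter name to a (low, high) sampling range."""
    rng = random.Random(seed)
    best_params, best_score = None, float("inf")
    for _ in range(n_trials):
        params = {k: rng.uniform(lo, hi) for k, (lo, hi) in space.items()}
        score = objective(params)  # lower is better
        if score < best_score:
            best_params, best_score = params, score
    return best_params, best_score

# Toy objective with its minimum at lr=0.1, reg=1.0
obj = lambda p: (p["lr"] - 0.1) ** 2 + (p["reg"] - 1.0) ** 2
best, score = random_search(obj, {"lr": (0.0, 1.0), "reg": (0.0, 2.0)})
```

More advanced tools replace the uniform sampling with model-based (e.g. Bayesian) strategies that spend trials where improvement is likely.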
Lessons learned building a business to help mothers and parent organizations, from founder Niru Mallavarupu.
MobileArq is a fundraising and communication platform for parent organizations to raise money and manage all of their activities in one place. MobileArq provides an app for parents that gives them their school at their fingertips!
Presentation: Regression - Predictive analysis using R and Python, 8 December at GHCI16, Bangalore
http://ghcischedule.anitaborg.org/session/predictive-modeling-using-r-and-python/
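As a flavor of the Python side of such a session, a minimal ordinary-least-squares fit might look like this (the data points are made up for illustration, not from the session):

```python
import numpy as np

def ols_fit(x, y):
    """Fit y ~ a*x + b by ordinary least squares."""
    X = np.column_stack([x, np.ones_like(x)])  # add intercept column
    (a, b), *_ = np.linalg.lstsq(X, y, rcond=None)
    return a, b

x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.1, 3.9, 6.2, 7.8])  # roughly y = 2x
a, b = ols_fit(x, y)
```

The equivalent one-liner in R would be `lm(y ~ x)`.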
Hello Watch! Build your First Apple Watch App – Kristina Fox
Introduction tutorial to building an Apple Watch app focusing on both the watch interface and how to use various watchOS 3 features such as Watch Connectivity. Written in Swift 3. You'll need Xcode 8 to follow this tutorial.
Check out the final project code here: https://github.com/kristinathai/watchOS3Counter
As your organization builds a multi-tier architecture consisting of several applications and technologies, vulnerabilities and availability issues between tiers are bound to surface. A failure in a downstream system can start a domino effect that brings the entire application down, and unanticipated load can make recovery very challenging.
How do you ensure that failure at a tier remain isolated and doesn’t cascade?
What does it take to build a fault tolerant, self healing system that fails fast or degrades gracefully?
Basically, how will you make your system resilient, and when will you call it ‘done’?
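One common answer to these questions is the circuit-breaker pattern: fail fast while a downstream dependency is unhealthy, then probe it again after a cooldown. A minimal sketch (thresholds and names are illustrative, not from the talk):

```python
import time

class CircuitBreaker:
    """Fail fast after repeated downstream errors, then retry after a cooldown.
    States: closed (normal), open (rejecting calls), half-open (one probe call)."""

    def __init__(self, max_failures=3, reset_after=30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None  # half-open: allow one probe call through
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()  # trip the breaker
            raise
        self.failures = 0  # a success closes the circuit again
        return result
```

Wrapping each cross-tier call in a breaker keeps one sick dependency from cascading: callers get an immediate error (or a fallback) instead of piling up timed-out requests.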
Is it possible for a computer program to write its own programs? While this idea may seem far-fetched, it may actually be closer than we think. This presentation introduces "AI Programmer", a machine learning system which can automatically generate full software programs requiring only minimal human guidance. The system uses genetic algorithms coupled with a tightly constrained programming language. We’ll cover an overview of the system design and see examples of its software-generation capabilities. #GHC18
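The genetic-algorithm machinery described can be illustrated with a toy example that evolves random strings toward a target. This is a generic GA sketch (selection, crossover, mutation), not the actual AI Programmer system, and all parameter values are arbitrary choices:

```python
import random

def evolve(target, pop_size=200, mutation_rate=0.05, seed=42):
    """Tiny genetic algorithm: evolve random strings toward `target`
    via fitness-based selection, one-point crossover, and point mutation."""
    rng = random.Random(seed)
    alphabet = "abcdefghijklmnopqrstuvwxyz "
    fitness = lambda s: sum(a == b for a, b in zip(s, target))
    pop = ["".join(rng.choice(alphabet) for _ in target) for _ in range(pop_size)]
    for generation in range(1000):
        pop.sort(key=fitness, reverse=True)
        if pop[0] == target:
            return pop[0], generation
        parents = pop[: pop_size // 5]          # truncation selection
        children = []
        for _ in range(pop_size):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(len(target))    # one-point crossover
            child = list(a[:cut] + b[cut:])
            for i in range(len(child)):         # point mutation
                if rng.random() < mutation_rate:
                    child[i] = rng.choice(alphabet)
            children.append("".join(child))
        pop = children
    return pop[0], generation

best, generations = evolve("hello world")
```

AI Programmer applies the same loop, but its genomes are programs in a tightly constrained language and fitness is measured by running the candidate program.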
We explore the shortcomings of today's voice user interfaces (like Alexa and Google Home) to chart a course for where the future should take us. Presented on October 20, 2017 at Webdagene in Oslo, Norway.
Empowering the Data Analytics Ecosystem: A Laser Focus on Value
The data analytics ecosystem thrives when every component functions at its peak, unlocking the true potential of data. Here's a laser focus on key areas for an empowered ecosystem:
1. Democratize Access, Not Data:
Granular Access Controls: Provide users with self-service tools tailored to their specific needs, preventing data overload and misuse.
Data Catalogs: Implement robust data catalogs for easy discovery and understanding of available data sources.
2. Foster Collaboration with Clear Roles:
Data Mesh Architecture: Break down data silos by creating a distributed data ownership model with clear ownership and responsibilities.
Collaborative Workspaces: Utilize interactive platforms where data scientists, analysts, and domain experts can work seamlessly together.
3. Leverage Advanced Analytics Strategically:
AI-powered Automation: Automate repetitive tasks like data cleaning and feature engineering, freeing up data talent for higher-level analysis.
Right-Tool Selection: Strategically choose the most effective advanced analytics techniques (e.g., AI, ML) based on specific business problems.
4. Prioritize Data Quality with Automation:
Automated Data Validation: Implement automated data quality checks to identify and rectify errors at the source, minimizing downstream issues.
Data Lineage Tracking: Track the flow of data throughout the ecosystem, ensuring transparency and facilitating root cause analysis for errors.
5. Cultivate a Data-Driven Mindset:
Metrics-Driven Performance Management: Align KPIs and performance metrics with data-driven insights to ensure actionable decision making.
Data Storytelling Workshops: Equip stakeholders with the skills to translate complex data findings into compelling narratives that drive action.
Benefits of a Precise Ecosystem:
Sharpened Focus: Precise access and clear roles ensure everyone works with the most relevant data, maximizing efficiency.
Actionable Insights: Strategic analytics and automated quality checks lead to more reliable and actionable data insights.
Continuous Improvement: Data-driven performance management fosters a culture of learning and continuous improvement.
Sustainable Growth: Empowered by data, organizations can make informed decisions to drive sustainable growth and innovation.
By focusing on these precise actions, organizations can create an empowered data analytics ecosystem that delivers real value by driving data-driven decisions and maximizing the return on their data investment.
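The "Automated Data Validation" point above can be sketched as declarative checks run over incoming records; the field names and rules below are hypothetical examples:

```python
def validate(rows, checks):
    """Run named validation checks over a list of record dicts and return
    the failures, so bad rows are caught at the source rather than downstream."""
    failures = []
    for i, row in enumerate(rows):
        for name, check in checks.items():
            if not check(row):
                failures.append((i, name))
    return failures

# Hypothetical order records and two data-quality rules
rows = [
    {"order_id": 1, "amount": 25.0, "country": "US"},
    {"order_id": 2, "amount": -5.0, "country": "US"},  # negative amount
    {"order_id": 3, "amount": 12.5, "country": ""},    # missing country
]
checks = {
    "amount_positive": lambda r: r["amount"] > 0,
    "country_present": lambda r: bool(r["country"]),
}
bad = validate(rows, checks)  # [(1, 'amount_positive'), (2, 'country_present')]
```

Production systems layer the same idea into pipelines (reject, quarantine, or alert on failing rows) and pair it with lineage tracking for root-cause analysis.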
06-04-2024 - NYC Tech Week - Discussion on Vector Databases, Unstructured Data and AI
Round table discussion of vector databases, unstructured data, ai, big data, real-time, robots and Milvus.
A lively discussion with NJ Gen AI Meetup Lead, Prasad and Procure.FYI's Co-Found
Explore our comprehensive data analysis project presentation on predicting product ad campaign performance. Learn how data-driven insights can optimize your marketing strategies and enhance campaign effectiveness. Perfect for professionals and students looking to understand the power of data analysis in advertising. for more details visit: https://bostoninstituteofanalytics.org/data-science-and-artificial-intelligence/
1. PAGE 1 | GRACE HOPPER CELEBRATION FOR WOMEN IN COMPUTING 2017
PRESENTED BY THE ANITA BORG INSTITUTE AND THE ASSOCIATION FOR COMPUTING MACHINERY #GHC17
AI581: Presentations: AI for Social Good
Bias In Artificial Intelligence
Neelima Kumar | @Neelima_jadhav
2. HUMAN BIAS
Picture a Nurse
3. Is AI Biased?
4. Machine Learning: Learn from Data
5. AI impacts lives
Transportation
Speech to Voice
Banking
Recruitment
Advertising
Predictive Policing
Health and Medicine
6. Word Embeddings
"You shall know a word by the company it keeps" (Firth, J.R. 1957:11)
7. Associations Generated by Word2Vec
Man : Boy :: Woman : x (x = Girl)
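The analogy mechanism behind these associations can be sketched with toy hand-crafted vectors (illustrative only; real word2vec embeddings have a few hundred dimensions and are learned from large corpora):

```python
import math

# Toy 3-d embeddings: dims roughly encode (gender, age, royalty).
# Hand-crafted for illustration, not learned from any corpus.
vocab = {
    "man":   (-1.0, 1.0, 0.0),
    "woman": ( 1.0, 1.0, 0.0),
    "boy":   (-1.0, 0.0, 0.0),
    "girl":  ( 1.0, 0.0, 0.0),
    "king":  (-1.0, 1.0, 1.0),
    "queen": ( 1.0, 1.0, 1.0),
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

def analogy(a, b, c):
    """Solve a : b :: c : ? via vector arithmetic (b - a + c)."""
    target = tuple(vocab[b][i] - vocab[a][i] + vocab[c][i] for i in range(3))
    candidates = (w for w in vocab if w not in (a, b, c))
    return max(candidates, key=lambda w: cosine(vocab[w], target))

print(analogy("man", "boy", "woman"))   # → girl
print(analogy("man", "king", "woman"))  # → queen
```

The same arithmetic that recovers "girl" and "queen" here also recovers stereotyped pairings when the training text itself is skewed.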
8. Stereotypes in word embeddings
Father : Doctor :: Mother : Nurse
Man : Programmer :: Woman : Homemaker
He : Realist :: She : Feminist
She : Pregnancy :: He : Kidney Stone
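Associations like these are often quantified by projecting words onto a gender direction, in the spirit of Bolukbasi et al. (2016), "Man is to Computer Programmer as Woman is to Homemaker?". A minimal sketch with made-up vectors:

```python
import math

# Hand-crafted toy embeddings, illustrative only; real embeddings are
# hundreds of dimensions and learned from text.
E = {
    "he":     (-1.0, 0.0, 0.2),
    "she":    ( 1.0, 0.0, 0.2),
    "doctor": (-0.3, 0.9, 0.1),
    "nurse":  ( 0.4, 0.8, 0.1),
}

# Gender direction: the normalized difference of a gendered word pair.
g = tuple(a - b for a, b in zip(E["he"], E["she"]))
g_len = math.hypot(*g)
g = tuple(x / g_len for x in g)

def gender_score(word):
    """Projection onto the gender axis: positive leans 'he', negative 'she'."""
    v = E[word]
    v_len = math.hypot(*v)
    return sum((x / v_len) * y for x, y in zip(v, g))

print(round(gender_score("doctor"), 2))  # positive: leans toward "he"
print(round(gender_score("nurse"), 2))   # negative: leans toward "she"
```

With embeddings trained on real text, many occupation words show similar nonzero projections, which is exactly what the analogies above surface.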
9. Stereotypes in Google Translate
10. Cultural Bias
11. Racial Bias
12.
13. Class Discrimination (Who uses AI matters)
14. How is bias introduced in AI?
Training data is collected and annotated → Model is trained → Output
Margaret Mitchell, 2017
15. How is bias introduced in AI?
Training data is collected and annotated → Model is trained → Output
Bias enters at each of these stages, and biased data created by the process becomes new training data.
Margaret Mitchell, 2017
16. Hard things are hard
• Hard to get clean data
• Decisions not clearly understood
• Lack of diversity
• Impact on accuracy
17. Awareness and Inclusion
• Awareness of possible biases
• Design for inclusion and diversity
• Work with communities affected most
• More women and minority developers
18. Explainability and Accountability
• Explanation of individual decisions
• Characterize strengths & weaknesses
• Predict future behavior
• Transparency of Data used for training
• Record decisions so that they can be audited
• Validation and Testing
19.
20. Feedback? Rate and review the session on our mobile app. Download the GHC 17 app at http://bit.ly/ghc17app or search GHC 2017 in the app store.
Thank you
Editor's Notes
Good afternoon everybody,
The topic of my talk today is "Bias in AI".
Let's begin with a quick exercise: close your eyes and picture a nurse.
Did anyone picture someone like this? And how about this?
No one??
We may not even know why, but each one of us picked one image over the other.
We are all affected by our unconscious biases.
And while our prejudices vary, we are all just the same in having them.
What about AI?
Is artificial intelligence biased?
People tend to think of AI systems as mathematical models that are rational and immune to any biases.
The AI I am referring to here spans machine learning, natural language processing, neural networks, and beyond.
These techniques learn about the world from the huge amounts of data they are trained on.
The output of a system depends on its input, and if the input is biased, so will the output be. People tend to forget this, the thinking being that the vast amount of data could overwhelm any human biases; on the contrary, AI systems will reproduce all the skews and biases of their data intact.
This has the potential to cause harm to real people in the real world. I am not worried about the term itself, but about the fact that AI is used so much in our daily lives, affecting lives and livelihoods.
AI is transforming the transportation industry. It's in our homes.
Banking and financial institutions are using it to decide who gets credit and how large a loan is offered.
It's used by HR to decide whom to hire and fire.
It's used by advertising companies to decide what ads and recommendations to show you.
It's used by the justice department to determine who goes to jail and for how long.
It's used in the health industry to determine what medications you should take and when someone should be hospitalized.
AI is affecting our core…
To set the stage, let me walk you through a few examples.
Cultural bias is observed even in a simple Google search. I was completely shocked by the results of a Google search for an everyday query. A woman searching for professional hairstyles got the results on the left-hand side, but a search for unprofessional hairstyles for women came back with the results on the right-hand side.
Do you notice anything strange in this picture? This picture is output from the Google Photos app.
This is Joy, a student at MIT studying computer vision. The face recognition software she worked on could recognize her face better when she wore a white mask. She gave a great TED talk explaining how she is fighting algorithmic bias: who codes matters, how we code matters, and why we code matters.
Bias can be introduced based on how the data is collected and who uses the system
The city of Boston used AI technology to predict where potholes are more likely to occur.
They analyzed data collected from the Street Bump project, an app that allowed users to report potholes.
Surprisingly, the predictions showed significantly more potholes in upper-middle-income neighborhoods. Yet a closer look at the data revealed a different picture: the streets in those neighborhoods didn't really have more potholes; the residents just reported them more often, due to their more frequent use of smartphones.
When AI systems have only a portion of the information needed to make correct assumptions, bias is implicitly added to the results.
This is a simple picture of a machine learning pipeline. Data is collected, annotated, and fed as training data to the model. Once the model is trained, it can make predictions on any new input data it receives.
Bias can be introduced at every stage of this pipeline.
The data used for training can be explicitly biased, based on what it represents and whom it omits.
The process of collection and annotation can introduce sampling errors, reporting bias, selection bias, confirmation bias, and so on.
Implicit bias can be introduced because the model was not developed by a diverse community of developers, and it can propagate further into how the output is predicted and used.
Biased data created by this process can become new training data and further amplify its effects.
So you can see an AI system is capable of amplifying human biases.
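That amplification loop can be simulated in a few lines. This is a deliberately crude sketch: the "model" simply predicts the majority label of its training data, and its predictions are appended back as new training data:

```python
# Crude feedback-loop sketch: no real learning happens here; the point
# is how the loop alone amplifies an initial skew in the data.
def majority(labels):
    return max(set(labels), key=labels.count)

# Start from a mild real-world skew: 60% "female" nurses, 40% "male".
data = ["female"] * 60 + ["male"] * 40

for generation in range(3):
    prediction = majority(data)        # every new case gets the majority label
    data = data + [prediction] * 50    # predictions become new training data
    share = data.count("female") / len(data)
    print(f"generation {generation}: female share = {share:.2f}")
```

Each generation the skew grows (0.73, 0.80, 0.84, …): starting from a 60/40 split, the majority's share climbs toward 100% even though the underlying world never changed.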
What can we do to address this issue?
Fixing the biases in AI is a hard problem to solve because:
It's hard to get clean data that is free of any human bias. An AI system that learns that there are indeed more female nurses than male nurses will always predict a nurse to be female.
The decisions made by machine learning and deep learning systems are not clearly understood. The University of Washington developed a system to distinguish huskies from wolves. It reached about 90% accuracy, but on further analysis they found that the model had learned to distinguish the animals based on the snow surrounding the wolves, not their individual characteristics.
There is a lack of diversity in the AI community, causing biases to go undetected. There are very few women and people of color. If someone like Joy were working in the photo applications group, she would have identified and caught the biases much earlier.
Correcting for biases might introduce new biases and impact the accuracy of the system. A predictive policing application could use family history along with criminal background; if we adjusted the system for fairness by removing family history, it might hurt its prediction accuracy.
While it's hard to fix biases, we must take on these challenges.
1. First and foremost, we need to start creating awareness in the community: we as designers, developers, and users of AI systems should be aware of possible biases and their potential to harm individuals and society.
2. We need to design for inclusion and diversity by providing access to the resources necessary for AI development, such as datasets, computing resources, education, and training.
3. We need to work with representatives of the minority communities that could be affected most, so that they can participate in the design of such systems.
4. We need to include opportunities for women to participate in the development of AI.
Our models need to be explainable and accountable.
Models must be capable of explaining the rationale behind an individual decision, as in the husky-versus-wolf model, where snow, rather than the animals' individual features, was the basis for the distinction.
We must understand a model's strengths and weaknesses and be able to determine how it will behave in the future, so we can analyze who will be impacted most by any biases in the system.
We need to be transparent about how the training data was collected and annotated, to uncover any sampling errors or confirmation biases, just as the city of Boston observed. They solved their problem by putting sensors underneath garbage trucks to collect the data.
Models, algorithms, and decisions must be recorded so that they can be audited in case any unfairness is suspected.
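Such record-keeping can be as simple as wrapping the model so every decision is logged. A sketch, where `audit_log` and `loan_model` are hypothetical names invented for this example, not from any real library:

```python
import time

# In-memory audit trail (hypothetical; a real system would persist this).
audit_log = []

def audited(model_fn):
    """Wrap a model so every input/output pair is recorded for later review."""
    def wrapper(features):
        decision = model_fn(features)
        audit_log.append({
            "timestamp": time.time(),
            "input": features,
            "decision": decision,
        })
        return decision
    return wrapper

@audited
def loan_model(features):
    # Stand-in for a real model: approve if income exceeds a threshold.
    return "approve" if features["income"] > 50_000 else "deny"

loan_model({"income": 60_000, "zip": "02139"})
loan_model({"income": 30_000, "zip": "02121"})

# Later, an auditor can replay and inspect every decision:
for entry in audit_log:
    print(entry["input"], "->", entry["decision"])
```

A production system would write these records to durable, tamper-evident storage rather than an in-memory list.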
We should make available an API and any training data that allow third parties to query the algorithmic system and assess its response.
Validation and testing: we should use rigorous testing methods to validate our models and document the results, checking for the existence of any bias. This involves running the model with trial data that varies the input across many permutations and combinations and observing the output for biases.
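One concrete form of such a test is a counterfactual check: perturb a protected attribute in the input and assert the output does not change. A toy sketch, where `classify` is a stand-in for the system under test:

```python
# Counterfactual fairness check sketch. The swap table and the toy
# classifier are illustrative, not from any real testing framework.
SWAPS = {"he": "she", "she": "he", "his": "her", "her": "his"}

def swap_gender(text):
    """Flip gendered pronouns word by word."""
    return " ".join(SWAPS.get(w, w) for w in text.split())

def classify(text):
    # Toy model: label texts mentioning "programmer" as technical.
    return "technical" if "programmer" in text else "other"

for text in ["he is a programmer", "she works as a nurse"]:
    original = classify(text)
    flipped = classify(swap_gender(text))
    assert original == flipped, f"gender-sensitive output for: {text}"
print("counterfactual checks passed")
```

A real test suite would run many such perturbations (gender, race, names, dialect) and report every case where the output flips.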
Women in AI and AI4ALL are a few organizations trying to tackle these challenges.
DARPA and Optimizing Mind are trying to make AI more explainable.
As the AI ecosystem is still taking shape, it's an opportunity for all of us women to play an active role in the development and positive applications of AI, and to make the world a better place.