From recruitment to car driving, we all know examples of AI successes and failures. Let's focus on failure cases rooted in our own societal biases, and on how to tackle these new challenges.
1) Current AI systems lack transparency and explainability, which reduces people's trust in applications like autonomous vehicles, financial management tools, and medical diagnoses.
2) For AI to be trustworthy, its decisions must be explained, fair, and free of bias. However, machine learning models are based on data patterns rather than formal logic, making explanations challenging.
3) Developing explainable AI requires techniques for understanding how models work, removing unfair biases, improving robustness, and making decisions transparent and traceable.
Interpretability beyond feature attribution: quantitative testing with concept activation vectors (TCAV) — MLconf
TCAV is a method for interpreting machine learning models by quantitatively measuring the importance of user-chosen concepts for a model's predictions, even if those concepts were not part of the model's training data or input features. It does this by learning concept activation vectors (CAVs) that represent concepts and using the CAVs to calculate a model's sensitivity or importance to each concept via directional derivatives. TCAV was shown to validate ground truths from sanity check experiments, uncover geographical biases in widely used models, and match domain expert concepts for diabetic retinopathy versus those a model may use, helping ensure models' values and knowledge are properly aligned and reflected.
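The mechanics described above can be sketched in a few lines. This is a hedged toy illustration, not the TCAV authors' implementation: the mean-difference CAV (the method actually trains a linear classifier on layer activations), the finite-difference directional derivative, and the toy linear `model` are all simplifying assumptions made here for illustration.

```python
import random

def concept_activation_vector(concept_acts, random_acts):
    """Stand-in CAV: the normalized direction from the mean of random
    activations to the mean of concept activations."""
    dim = len(concept_acts[0])
    mean = lambda rows, j: sum(r[j] for r in rows) / len(rows)
    v = [mean(concept_acts, j) - mean(random_acts, j) for j in range(dim)]
    norm = sum(x * x for x in v) ** 0.5
    return [x / norm for x in v]

def tcav_score(model, inputs, cav, eps=1e-4):
    """Fraction of inputs whose prediction increases when activations
    are nudged along the CAV, i.e. the sign of the directional
    derivative, estimated here by finite differences."""
    positive = 0
    for x in inputs:
        base = model(x)
        nudged = model([xi + eps * ci for xi, ci in zip(x, cav)])
        if nudged - base > 0:
            positive += 1
    return positive / len(inputs)

# Toy "model": a linear scorer that mostly depends on the first axis.
model = lambda a: 2.0 * a[0] + 0.1 * a[1]

# Hypothetical activations for concept examples vs. random examples.
concept = [[1.0 + random.random(), random.random()] for _ in range(20)]
rand = [[-1.0 - random.random(), random.random()] for _ in range(20)]

cav = concept_activation_vector(concept, rand)
score = tcav_score(model, concept + rand, cav)
```

Because the toy model's output grows along the same direction that separates concept from random activations, the score comes out high, which is how TCAV reports that the concept matters to the prediction.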
Talk on Algorithmic Bias given at York University (Canada) on March 11, 2019. This is a shorter version of an interactive workshop presented at University of Minnesota, Duluth in Feb 2019.
The more potent AI becomes, the more important it becomes to get it right. Today's most pressing problem is bias in AI. Here you can find an in-depth analysis of the current status of bias mitigation algorithms and the exciting new finding that some bias cannot be mitigated (the impossibility theorem).
AI is already beginning to change the way every industry in the world works including scholarly publishing. What is AI, how will researcher behavior change because of it, and what opportunities and risks are just around the corner for publishers, societies, and institutions?
This document provides an overview of the CS760 Machine Learning course taught by David Page at the University of Wisconsin. The course will cover a broad survey of machine learning algorithms and applications over 30 class meetings. Topics will include both theoretical and practical aspects of supervised learning algorithms like naive Bayes, decision trees, neural networks, and support vector machines. Students will complete programming homework assignments applying various machine learning algorithms and a midterm exam. The primary goals of the course are to understand what learning systems should do and how existing systems work.
Measures and mismeasures of algorithmic fairness — Manojit Nandi
This document discusses various measures and challenges of achieving algorithmic fairness. It begins by defining algorithmic fairness and noting it is inherently a social concept. It then covers three main types of algorithmic biases: bias in allocation, representation, and weaponization. It outlines three families of fairness measures: anti-classification, classification parity, and calibration. It notes each approach has dangers and no single definition of fairness exists. The document concludes by discussing proposed standards for documenting datasets and models to improve algorithmic transparency and accountability.
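Two of the fairness families named above can be made concrete with a few lines of code. This is a minimal sketch on hypothetical toy data: the helper names, the group labels, and the 0.5 score threshold are assumptions made here, not part of the referenced talk.

```python
def demographic_parity_gap(decisions, groups):
    """Classification-parity check: absolute difference in
    positive-decision rates between two groups (0 = parity)."""
    def rate(g):
        sel = [d for d, grp in zip(decisions, groups) if grp == g]
        return sum(sel) / len(sel)
    return abs(rate("A") - rate("B"))

def calibration_gap(scores, outcomes, groups, threshold=0.5):
    """Calibration check: among people scored above `threshold`,
    compare actual positive-outcome rates across groups."""
    def rate(g):
        sel = [o for s, o, grp in zip(scores, outcomes, groups)
               if grp == g and s >= threshold]
        return sum(sel) / len(sel)
    return abs(rate("A") - rate("B"))

# Hypothetical toy data: decisions, risk scores, and true outcomes
# for eight people in two groups.
decisions = [1, 1, 0, 1, 0, 0, 1, 0]
scores    = [0.9, 0.8, 0.2, 0.7, 0.6, 0.3, 0.9, 0.1]
outcomes  = [1, 1, 0, 0, 1, 0, 1, 0]
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]

dp_gap  = demographic_parity_gap(decisions, groups)   # |0.75 - 0.25|
cal_gap = calibration_gap(scores, outcomes, groups)
```

Note that the two gaps disagree on this data, a small-scale echo of the point that the fairness families can conflict and no single definition suffices.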
Leading and leaning-in on AI in Recruitment
● What is AI and why does it matter?
● What value does AI add to the recruitment life cycle?
● What risks should you be aware of?
● Key questions to ask to evaluate and mitigate risks
● The FAIR™ Framework
● The Power of intelligent chat to Hire with Heart
AI in the Real World: Challenges and Risks, and how to handle them? — Srinath Perera
This document discusses challenges, risks, and how to handle them with AI in the real world. It covers:
- AI can perform tasks like driving a car faster and cheaper than humans, but can't fully explain how.
- Deploying and managing AI models at scale is complex, as is integrating models with user experiences. Bias and lack of transparency are also risks.
- When applying AI, such as in high-risk domains like medicine, it is important to audit models, gradually introduce them with trials, monitor outcomes, and find ways to identify and address errors or unfair impacts. With care and oversight, AI can be developed to help more people than it harms.
Responsible AI in Industry (Tutorials at AAAI 2021, FAccT 2021, and WWW 2021) — Krishnaram Kenthapadi
This document provides an overview of explainable AI techniques. It discusses how explainable AI aims to make AI models more transparent and understandable by providing explanations for their predictions. Various explanation methods are covered, including model-specific techniques like interpreting gradients in neural networks, as well as model-agnostic approaches like Shapley values from game theory. The document explains how explanations are important for building user trust in AI systems and can help with debugging, analyzing robustness, and extracting rules from complex models.
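The game-theoretic Shapley values mentioned above can be computed exactly for a handful of features by averaging each feature's marginal contribution over all orderings. This is a hedged sketch: the `value_fn` payoff below is a hypothetical stand-in for "model output given a coalition of known features", not any particular library's API.

```python
from itertools import permutations

def shapley_values(value_fn, features):
    """Exact Shapley values: average each feature's marginal
    contribution over all orderings (feasible for few features)."""
    phi = {f: 0.0 for f in features}
    perms = list(permutations(features))
    for order in perms:
        coalition = set()
        for f in order:
            before = value_fn(coalition)
            coalition = coalition | {f}
            phi[f] += value_fn(coalition) - before
    return {f: phi[f] / len(perms) for f in features}

# Hypothetical payoff of a feature coalition, with an interaction term.
def value_fn(coalition):
    score = 0.0
    if "income" in coalition:
        score += 3.0
    if "age" in coalition:
        score += 1.0
    if "income" in coalition and "age" in coalition:
        score += 2.0  # the two features reinforce each other
    return score

phi = shapley_values(value_fn, ["income", "age"])
```

The attributions split the interaction term evenly and sum to the full coalition's value, which is the efficiency property that makes Shapley values attractive for explanations.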
Improving How We Deliver Machine Learning Models (XCONF 2019) — David Tan
In this talk, we share some better ways of working that help with common challenges faced in an ML project.
Repos:
1. https://github.com/ThoughtWorksInc/ml-app-template
2. https://github.com/ThoughtWorksInc/ml-cd-starter-kit
Demo videos:
1. Dockerised setup https://www.youtube.com/watch?v=S6kWaXQ530k
2. Installing cross-cutting services (e.g. GoCD, MLFlow, EFK): https://www.youtube.com/watch?v=p8jKTlcpnks
3. Rolling back harmful models: https://www.youtube.com/watch?v=rNfrgaRTz7c
Machine learning involves programming computers to optimize performance using example data or past experience. It is used when human expertise does not exist, humans cannot explain their expertise, solutions change over time, or solutions need to be adapted to particular cases. Learning builds general models from data to approximate real-world examples. There are several types of machine learning including supervised learning (classification, regression), unsupervised learning (clustering), and reinforcement learning. Machine learning has applications in many domains including retail, finance, manufacturing, medicine, web mining, and more.
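The supervised-classification setting described above can be illustrated with the simplest possible learner. This is a toy sketch made for this summary, not a method from the referenced slides: a 1-nearest-neighbor classifier that generalizes from labeled examples to a new case.

```python
def nearest_neighbor_predict(train, label_of, x):
    """Supervised classification at its simplest: predict the label
    of the closest training example (1-nearest-neighbor)."""
    dist = lambda a, b: sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    best = min(train, key=lambda t: dist(t, x))
    return label_of[best]

# Toy labeled data: 2D points from two clusters.
train = [(0.0, 0.0), (0.1, 0.2), (5.0, 5.0), (5.2, 4.9)]
label_of = {train[0]: "low", train[1]: "low",
            train[2]: "high", train[3]: "high"}

pred = nearest_neighbor_predict(train, label_of, (4.8, 5.1))
```

No rule was ever written for the query point; the prediction comes entirely from the example data, which is the sense in which learning "builds general models from data".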
Slides to a talk from @Chris_Betz, (https://data42.de) on AI, artificial intelligence, machine learning. What's driving AI hype and what's behind it. Understand general concepts and dig deep into explainability, debugging, verification, testing of machine learning solutions.
This document provides an overview of machine learning concepts and example algorithms. It discusses how machine learning systems can learn from experience without explicit programming. It then covers classification and regression problems and provides examples of random forests and Gaussian processes algorithms. The document also discusses feature learning with examples of autoencoders and PCA. Finally, it discusses practical considerations for applying machine learning, including the importance of data quality, data pipelines, managing error risk, and institutionalizing machine learning applications.
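The PCA flavor of feature learning mentioned above fits in a short, self-contained sketch. This pure-Python power iteration is an illustration written for this summary (real code would use a linear algebra library), and the toy data is hypothetical.

```python
def pca_first_component(data, iters=200):
    """Leading principal component via power iteration on the
    covariance matrix of mean-centered data."""
    n, d = len(data), len(data[0])
    means = [sum(row[j] for row in data) / n for j in range(d)]
    centered = [[row[j] - means[j] for j in range(d)] for row in data]
    # Covariance matrix C = X^T X / n
    cov = [[sum(r[i] * r[j] for r in centered) / n for j in range(d)]
           for i in range(d)]
    v = [1.0] * d
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

# Toy data: points spread mostly along the y = x diagonal.
data = [(-2.0, -1.9), (-1.0, -1.1), (0.0, 0.1), (1.0, 0.9), (2.0, 2.1)]
pc1 = pca_first_component(data)
```

The recovered component points along the diagonal, i.e. PCA has "learned" the single feature that captures most of the variance in this 2D data.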
Scale your Testing and Quality with Automation Engineering and ML — Carlos Ki... — QA or the Highway
Many teams and organizations struggle to scale their quality and testing strategies once they reach tens of teams and hundreds of developers and services across their systems. Traditional strategies and techniques, like testing phases and code freezes, do not work at scale and quickly add friction, reduce productivity, and make testing and quality harder.
In this presentation, we will cover different ideas and strategies to make things like BDD and TDD easier to adopt at the beginning, how to include observability and operability in your definition of quality, and how leveraging ML/AI can augment your devs and testers and reduce risk while accelerating value.
By the end, you will have some "low quality" indicators that you can use to identify patterns and practices that won't scale well. You will have new insights and ideas for how you can set up your teams and strategies for success long term, and you will see tangible, practical examples you can take to your team and company to start this transformation now.
[Video available at https://sites.google.com/view/ResponsibleAITutorial]
Artificial Intelligence is increasingly being used in decisions and processes that are critical for individuals, businesses, and society, especially in areas such as hiring, lending, criminal justice, healthcare, and education. Recent ethical challenges and undesirable outcomes associated with AI systems have highlighted the need for regulations, best practices, and practical tools to help data scientists and ML developers build AI systems that are secure, privacy-preserving, transparent, explainable, fair, and accountable – to avoid unintended and potentially harmful consequences and compliance challenges.
In this tutorial, we will present an overview of responsible AI, highlighting model explainability, fairness, and privacy in AI, key regulations/laws, and techniques/tools for providing understanding around AI/ML systems. Then, we will focus on the application of explainability, fairness assessment/unfairness mitigation, and privacy techniques in industry, wherein we present practical challenges/guidelines for using such techniques effectively and lessons learned from deploying models for several web-scale machine learning and data mining applications. We will present case studies across different companies, spanning many industries and application domains. Finally, based on our experiences in industry, we will identify open problems and research directions for the AI community.
This document provides an introduction to machine learning. It discusses why machine learning is used, such as when human expertise does not exist or changes over time. It explains that machine learning builds general models from example data to approximate patterns in the data. Various applications of machine learning are presented, including retail, finance, manufacturing, and bioinformatics. Supervised learning techniques like classification and regression are covered. Unsupervised learning and reinforcement learning are also introduced. Finally, resources for datasets, journals, and conferences in machine learning are listed.
Human in the loop: Bayesian Rules Enabling Explainable AI — Pramit Choudhary
The document provides an overview of a presentation on enabling explainable artificial intelligence through Bayesian rule lists. Some key points:
- The presentation will cover challenges with model opacity, defining interpretability, and how Bayesian rule lists can be used to build naturally interpretable models through rule extraction.
- Bayesian rule lists work well for tabular datasets and generate human-understandable "if-then-else" rules. They aim to optimize over pre-mined frequent patterns to construct an ordered set of conditional statements.
- There is often a tension between model performance and interpretability. Bayesian rule lists can achieve accuracy comparable to more opaque models like random forests on benchmark datasets while maintaining interpretability.
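The "if-then-else" structure described above is what makes rule lists naturally interpretable, and prediction with one is almost trivial. This is a hedged sketch: the `rule_list_predict` helper and the credit-risk rules below are hypothetical examples written for this summary, not output of an actual Bayesian rule-list learner.

```python
def rule_list_predict(rules, default, x):
    """An ordered if-then-else rule list: the first matching rule
    wins; otherwise fall through to the default probability."""
    for condition, p_positive in rules:
        if condition(x):
            return p_positive
    return default

# Hypothetical rules a rule-list learner might extract for credit risk.
rules = [
    (lambda x: x["age"] < 25 and x["debt"] > 10_000, 0.8),  # high risk
    (lambda x: x["income"] > 80_000, 0.1),                  # low risk
]

p = rule_list_predict(rules, 0.5, {"age": 22, "debt": 15_000,
                                   "income": 90_000})
```

The entire "model" is readable at a glance, and every prediction can be traced to the single rule that fired, which is exactly the interpretability property the talk contrasts with opaque models like random forests.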
Intelligent and smart systems define the cutting edge of information technology today. They are invisible yet ubiquitous. From identifying an individual student's lack of attention and suggesting remedial measures, to predicting financial failures and preventing future fraud, to assisting noninvasive surgery and guiding missiles to moving targets, AI-based applications are stepping into every domain.
Numerous concerns have emerged in parallel. Should they be permitted to run a completely human-free system? Can they be assigned all the cognitive, non-routine tasks that humans are good at? Are they effective communicators and consensus builders? What role should they play in decision making? How good are they at picking up data compared to human senses? These and many other questions have surfaced in many fora.
Data used in model building adds another dimension. How unbiased are the data sets used in training? Can a data set ever be unbiased? What are the consequences of data bias in models and algorithms?
This talk explores the issues of setting the boundary for use of AI technology. Areas of concern are delineated, and principles of restraint advocated. It aims to inspire researchers to keep the boundary in mind as they explore new frontiers in AI and to design stable boundary line interfaces.
This document discusses tools and frameworks for developing responsible AI solutions. It begins by outlining some of the costs of AI incidents, such as harm to human life, loss of trust, and fines. It then discusses defining responsible AI principles like respecting human rights, enabling human oversight, and transparency. The document provides examples of bias that can occur in AI systems and tools to detect and mitigate bias. It discusses the importance of a human-centric design approach and case studies of bias in systems. Finally, it outlines best practices for developing responsible AI like integrating tools and certifications.
Lecture 1: Introduction to Machine Learning — UmmeSalmaM1
Machine Learning is a field of computer science which deals with the study of computer algorithms that improve automatically through experience. In this PPT we discuss the following concepts - Prerequisite, Definition, Introduction to Machine Learning (ML), Fields associated with ML, Need for ML, Difference between Artificial Intelligence, Machine Learning, Deep Learning, Types of learning in ML, Applications of ML, Limitations of Machine Learning.
Responsible AI in Industry: Practical Challenges and Lessons Learned — Krishnaram Kenthapadi
How do we develop machine learning models and systems taking fairness, accuracy, explainability, and transparency into account? How do we protect the privacy of users when building large-scale AI based systems? Model fairness and explainability and protection of user privacy are considered prerequisites for building trust and adoption of AI systems in high stakes domains such as hiring, lending, and healthcare. We will first motivate the need for adopting a “fairness, explainability, and privacy by design” approach when developing AI/ML models and systems for different consumer and enterprise applications from the societal, regulatory, customer, end-user, and model developer perspectives. We will then focus on the application of responsible AI techniques in practice through industry case studies. We will discuss the sociotechnical dimensions and practical challenges, and conclude with the key takeaways and open challenges.
This document summarizes a presentation on machine learning models, adversarial attacks, and defense strategies. It discusses adversarial attacks on machine learning systems, including GAN-based attacks. It then covers various defense strategies against adversarial attacks, such as filter-based adaptive defenses and outlier-based defenses. The presentation also addresses issues around bias in AI systems and the need for explainable and accountable AI.
Machine Learning: Opening the Pandora's Box — Dhiana Deva @ QCon São Paulo 2019
The document discusses introducing machine learning and the challenges that come with it. It likens introducing machine learning to opening Pandora's box, as it brings problems like constraints, assumptions, risks, and issues. It recommends starting with simple approaches, addressing these challenges through iteration, and aiming high with vision while avoiding algorithmic bias. The overall message is to have fun on the journey of machine learning and focus on creating customer value.
What is data science?
Data rules the world we live in, and in fact, has been dubbed the “oil” of the 21st century. In the past few years, the world has witnessed a steep and continuing upsurge in data. Thanks to the growth of social media, smartphones, and the Internet of Things, the amount of data at our disposal today is beyond imagination. As Alphabet’s Eric Schmidt claims, every 48 hours, we generate the amount of data humanity produced since the dawn of civilization until 15 years ago. So, how then, are we able to make sense of such massive amounts of data?
To put it in simple terms, Data Science is a combination of mathematics, programming, statistics, data analysis, and machine learning. By combining all these, Data Science uses advanced algorithms and scientific methods to extract information and insights from large datasets – both structured and unstructured. The advent of Big Data and Machine Learning has further fuelled the growth of Data Science. Today, Data Science is being used across all parallels of various industries, including business, healthcare, finance, and education.
While the IoT is already a reality that connects smart devices, in the future, we might be looking forward to being a part of an Intelligent Digital Mesh – a connected hub of apps, devices, and people working together in sync.
Product marketing and customer service will be revolutionized by advanced chatbots, Virtual Reality (VR), and Augmented Reality (AR). We might be looking forward to a time when personalized customer experience will include live simulations, interactive demos, visualization of proposed solutions.
Blockchain might just go mainstream – it will not only be limited to the finance sector, but blockchain will apply to healthcare, banking, insurance and other industries.
Automated ML systems and Augmented Analytics together will transform Predictive Analytics and take it to the next level. Predictive Analytics will further help change the face of healthcare.
The job title of a ‘Data Scientist’ will undergo a massive transformation to include an array of diverse roles. As technology, Data Science, and AI continue to advance, Data Scientists will have to evolve to keep pace with the dynamic learning curve of Data Science.
Similar to When the AIs failures send us back to our own societal biases
Machine Learning is a field of computer science which deals with the study of computer algorithms that improve automatically through experience. In this PPT we discuss the following concepts - Prerequisite, Definition, Introduction to Machine Learning (ML), Fields associated with ML, Need for ML, Difference between Artificial Intelligence, Machine Learning, Deep Learning, Types of learning in ML, Applications of ML, Limitations of Machine Learning.
Responsible AI in Industry: Practical Challenges and Lessons LearnedKrishnaram Kenthapadi
How do we develop machine learning models and systems taking fairness, accuracy, explainability, and transparency into account? How do we protect the privacy of users when building large-scale AI based systems? Model fairness and explainability and protection of user privacy are considered prerequisites for building trust and adoption of AI systems in high stakes domains such as hiring, lending, and healthcare. We will first motivate the need for adopting a “fairness, explainability, and privacy by design” approach when developing AI/ML models and systems for different consumer and enterprise applications from the societal, regulatory, customer, end-user, and model developer perspectives. We will then focus on the application of responsible AI techniques in practice through industry case studies. We will discuss the sociotechnical dimensions and practical challenges, and conclude with the key takeaways and open challenges.
This document summarizes a presentation on machine learning models, adversarial attacks, and defense strategies. It discusses adversarial attacks on machine learning systems, including GAN-based attacks. It then covers various defense strategies against adversarial attacks, such as filter-based adaptive defenses and outlier-based defenses. The presentation also addresses issues around bias in AI systems and the need for explainable and accountable AI.
Machine Learning: Opening the Pandora's Box - Dhiana Deva @ QCon São Paulo 2019Dhiana Deva
The document discusses introducing machine learning and the challenges that come with it. It likens introducing machine learning to opening Pandora's box, as it brings problems like constraints, assumptions, risks, and issues. It recommends starting with simple approaches, addressing these challenges through iteration, and aiming high with vision while avoiding algorithmic bias. The overall message is to have fun on the journey of machine learning and focus on creating customer value.
What is data science ?
Data rules the world we live in, and in fact, has been dubbed the “oil” of the 21st century. In the past few years, the world has witnessed a steep and continuing upsurge in data. Thanks to the growth of social media, smartphones, and the Internet of Things, the amount of data at our disposal today is beyond imagination. As Alphabet’s Eric Schmidt claims, every 48 hours, we generate the amount of data humanity produced since the dawn of civilization until 15 years ago. So, how then, are we able to make sense of such massive amounts of data?
To put in simple terms, Data Science is a combination of mathematics, programming, statistics, data analysis, and machine learning. By combining all these, Data Science uses advanced algorithms and scientific methods to extract information and insights from large datasets – both structured and unstructured. The advent of Big Data and Machine Learning has further fuelled the growth of Data Science. Today, Data Science is being used across all parallels of various industries, including business, healthcare, finance, and education.
While the IoT is already a reality that connects smart devices, in the future, we might be looking forward to being a part of an Intelligent Digital Mesh – a connected hub of apps, devices, and people working together in sync.
Product marketing and customer service will be revolutionized by advanced chatbots, Virtual Reality (VR), and Augmented Reality (AR). We might be looking forward to a time when personalized customer experience will include live simulations, interactive demos, visualization of proposed solutions.
Blockchain might just go mainstream – it will not only be limited to the finance sector, but blockchain will apply to healthcare, banking, insurance and other industries.
Automated ML systems and Augmented Analytics together will transform Predictive Analytics and take it to the next level. Predictive Analytics will further help change the face of healthcare.
The job title of a ‘Data Scientist’ will undergo a massive transformation to include an array of diverse roles. As technology, Data Science, and AI continue to advance, Data Scientists will have to evolve to keep pace with the dynamic learning curve of Data Science.
Job posts gathered into a single position are prone to many biases: data biases and model biases.
What do you think of combining architect + dev + testing + maintenance?
The nature of AI in the job market:
● Data Scientist: find, clean and organize data for companies
● Data Analyst: transform and manipulate large datasets to suit companies’ analysis needs
● Data Engineer: perform batch or real-time processing on gathered/stored data
● Machine Learning Scientist: research new data approaches and algorithms
● Machine Learning Engineer: create data funnels and deliver software solutions
● Statistician: interpret, analyze and report statistical information
● Bias (in statistics): “the difference between the expectation of the sample estimator and the true population value, which reduces the representativeness of the estimator by systematically distorting it”
● “The big takeaway is that we don’t know what we don’t know” (Alice Popejoy)
What do we mean by biases?
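The statistical definition above can be checked numerically: the naive variance estimator (dividing by n) is biased low, and Bessel's correction (dividing by n - 1) removes that bias. A minimal, self-contained sketch:

```python
import random

random.seed(0)

def naive_var(xs):
    """Biased estimator: divides by n."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def corrected_var(xs):
    """Unbiased estimator (Bessel's correction): divides by n - 1."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

# True variance of Uniform(0, 10) is 100/12 ~= 8.33
true_var = 100 / 12

# Average each estimator over many small samples of size 5
trials = 20000
naive_avg = corrected_avg = 0.0
for _ in range(trials):
    sample = [random.uniform(0, 10) for _ in range(5)]
    naive_avg += naive_var(sample)
    corrected_avg += corrected_var(sample)
naive_avg /= trials
corrected_avg /= trials

# naive_avg systematically underestimates true_var; corrected_avg does not
```

The gap between `naive_avg` and `true_var` is exactly the "systematic distortion" of the definition: it does not shrink by averaging more samples of the same size.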
● MIT study of the decision-making of a self-driving car in unavoidable-fatality scenarios
● Scenarios vary:
○ high/low level of education
○ young/old
○ male/female
○ pets
○ respect of traffic signals
http://moralmachine.mit.edu/
The Moral Machine
● Results “rank” you on:
○ number of deaths
○ respect of the law
○ gender, age, health, ...
● Ethical choices are needed: by the government? insurers? manufacturers? passengers?
The Moral Machine
● 3 different clusters
● Cultural differences:
○ USA/Europe kill the oldest vs Japan saves the oldest
○ Colombia saves highly educated people vs for Finland it doesn’t matter
○ South America/France save women
The Moral Machine experiment, Edmond Awad et al. (2018), Nature
The Moral Machine
Ethics in clinical investigations
● Technical committee to review scientific foundations and safety (in France, ANSM)
● Ethics committee for animal AND human studies (in France, CPP)
○ Composed of medical professionals and citizens
○ Reviews the application form on:
■ benefit/risk ratio
■ quality of information
■ resources to conduct the study
■ patient recruitment process
■ patient consent modalities
○ Follows the experiment as it runs
Applying an ethics committee to AI at Google
● Advanced Technology External Advisory Council (ATEAC)
○ Ensure the white-paper principles for AI at Google:
■ Be socially beneficial
■ Avoid creating or reinforcing unfair bias
■ Be built and tested for safety
■ Be accountable to people
■ Incorporate privacy design principles
■ Uphold high standards of scientific excellence
■ Be made available for uses that accord with these principles
● Dissolved one week after creation
○ The ethics of some members were themselves questioned …
○ Raising the question of whether every part of society needs to be represented
● In the 90s: “I was breaking all of [my classmates’] facial-recognition software because apparently all the pictures they were taking were of people with significantly less melanin than I have” (Charles Isbell)
● In 2015, the “gorilla mistake” in Google Photos
● 25 years apart, not the same learning model at all, but the same root cause
Face recognition from the 90s to nowadays
● In 2014, Amazon developed an AI to “find” key success factors and hire such people
● Trained on the people they hired over the previous 10 years
● 89% of the engineering workforce is male
● The result: a sexist recruitment AI!
● Data representative of the company, but not of society …
AI recruitment by Amazon
Biases in the Features
[Diagram: features (diploma, gender, hobbies, nationality, university, ...) feed a model whose outcome output is HIRED?]
● Lack of an appropriate set of features
● Lack of an appropriate dataset
● Imbalanced dataset or bias in the output
● Unawareness: remove sensitive features from your data
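The last bullet, "unawareness", is known to be insufficient on its own: a proxy feature can leak the sensitive attribute back in. A toy sketch on synthetic data (the gender/hobby pairing and the 90% correlation are made up for illustration):

```python
import random

random.seed(1)

# Synthetic records in which "hobby" is a strong proxy for "gender"
rows = []
for _ in range(10000):
    gender = random.choice(["F", "M"])
    if gender == "M":
        hobby = "rugby" if random.random() < 0.9 else "chess"
    else:
        hobby = "chess" if random.random() < 0.9 else "rugby"
    rows.append({"gender": gender, "hobby": hobby})

# "Unawareness": drop the sensitive column ...
blind_rows = [{k: v for k, v in r.items() if k != "gender"} for r in rows]

# ... yet gender is still recoverable from the proxy most of the time
recovered = sum(
    (b["hobby"] == "rugby") == (r["gender"] == "M")
    for b, r in zip(blind_rows, rows)
) / len(rows)
# recovered is close to 0.9: a model trained on the "blind" data can still
# effectively condition on gender through the hobby feature
```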
Tutorial: your first bias detector
Survival of passengers on the Titanic
● Decision tree
● Leaves = class labels
● Nodes = splits/conjunctions of features
● Important nodes = shallower nodes covering many observations
● Many frameworks coexist: FairML, “What-If”, IBM Bias Assessment
● No universal solution: combine them
To summarize: your chances of survival were good iff:
- you were a woman
- you were a young boy with few siblings
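In the spirit of this tutorial slide, a minimal "first bias detector" simply compares outcome rates across groups; a large gap flags the feature as a candidate driver of the model's decisions. The passenger records below are synthetic stand-ins, not the real Titanic data:

```python
def rate_by(rows, feature, outcome="survived"):
    """Outcome rate for each value of `feature`."""
    totals, hits = {}, {}
    for r in rows:
        v = r[feature]
        totals[v] = totals.get(v, 0) + 1
        hits[v] = hits.get(v, 0) + r[outcome]
    return {v: hits[v] / totals[v] for v in totals}

# Made-up records for illustration
passengers = [
    {"sex": "female", "age": 29, "survived": 1},
    {"sex": "female", "age": 40, "survived": 1},
    {"sex": "female", "age": 21, "survived": 0},
    {"sex": "male", "age": 8, "survived": 1},
    {"sex": "male", "age": 35, "survived": 0},
    {"sex": "male", "age": 54, "survived": 0},
    {"sex": "male", "age": 27, "survived": 0},
]

rates = rate_by(passengers, "sex")
# Here rates["female"] far exceeds rates["male"], mirroring the slide's
# "woman => good chances of survival" rule
```

Grouped rates like these are the raw material that FairML, What-If or IBM's bias tooling visualize; as the slide says, no single tool is universal, so combine them.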
● Bad advice pointed out internally by IBM in 2017
○ e.g., suggesting a cancer patient with severe bleeding be given a drug that could cause the bleeding to worsen
● Started off by using real patient data
● Then fed with hypothetical data
● “Synthetic cases allow you to treat and train Watson on a variety of patient variables and conditions that might not be present in random patient samples, but are important to treatment recommendations” (Edward Barbani)
● Highlighted the difficulty of collecting representative data in medicine
Unsafe medical recommendations by IBM Watson
Data Augmentation
● Historically for images: rotation, flipping, adding noise…
● Object detection models: performance loss on corrupted images
● CNNs generalize poorly to novel distortion types, despite being trained on a variety of other distortions
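The classic augmentations listed above are a few lines with NumPy (real pipelines would use a library such as torchvision or albumentations, but the operations are the same), sketched here on a dummy 4x4 grayscale "image":

```python
import numpy as np

rng = np.random.default_rng(0)
image = np.arange(16, dtype=float).reshape(4, 4)  # stand-in for a real image

flipped = np.fliplr(image)                        # horizontal flip
rotated = np.rot90(image)                         # 90-degree rotation
noisy = image + rng.normal(0, 0.1, image.shape)   # additive Gaussian noise

# Each variant keeps the label but changes the pixels, enlarging the training
# set. Note the slide's caveat: training on these distortions does not
# guarantee robustness to distortion types never seen during training.
```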
Generating new data with GANs
[Diagram: a Generator produces candidate samples that a Discriminator tries to tell apart from real data. Source: Medium, “Generative Adversarial with Maths” by Madhu Sanjeevi]
Understanding biases and generalization in deep generative models
● Can we learn the distribution of data with GANs?
● What are the inductive biases of deep generative models?
● Unbiased and consistent density estimation is impossible
● Inductive biases
● Similar cognitive biases as humans: numerosity
● Weber’s law: relative change (ratio)
[Figure: true data vs generated data]
Group Fairness
● Demographic parity
○ e.g., the decision is independent of gender
● Equalized odds
○ e.g., takes the true-outcome statistics into account on both sides (equal true positive AND false positive rates across groups)
● Equal opportunity
○ e.g., takes the true-outcome statistics into account on one side only (equal true positive rates across groups)
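The three criteria can be measured directly from predictions, using the standard definitions (demographic parity: equal positive rates; equal opportunity: equal true positive rates; equalized odds: equal true positive AND false positive rates). A sketch on made-up labels:

```python
# y: true label, yhat: model decision, g: group membership ("A"/"B")
def rates(y, yhat, g, group):
    """Positive rate, TPR and FPR restricted to one group."""
    idx = [i for i in range(len(y)) if g[i] == group]
    pos_rate = sum(yhat[i] for i in idx) / len(idx)        # P(yhat=1 | g)
    pos = [i for i in idx if y[i] == 1]
    neg = [i for i in idx if y[i] == 0]
    tpr = sum(yhat[i] for i in pos) / len(pos)             # P(yhat=1 | y=1, g)
    fpr = sum(yhat[i] for i in neg) / len(neg)             # P(yhat=1 | y=0, g)
    return pos_rate, tpr, fpr

# Synthetic example, invented for illustration
y    = [1, 1, 0, 0, 1, 1, 0, 0]
yhat = [1, 0, 0, 0, 1, 1, 1, 0]
g    = ["A", "A", "A", "A", "B", "B", "B", "B"]

pos_a, tpr_a, fpr_a = rates(y, yhat, g, "A")
pos_b, tpr_b, fpr_b = rates(y, yhat, g, "B")

# Demographic parity holds iff pos_a == pos_b
# Equal opportunity holds iff tpr_a == tpr_b
# Equalized odds holds iff tpr_a == tpr_b AND fpr_a == fpr_b
```

On this toy classifier all three criteria are violated, which is the usual situation: the impossibility results mentioned earlier say they generally cannot all be satisfied at once.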
Is it working?
Examples of explanations are usually cherry-picked!
Individual Fairness for Explainability
The fact that two resulting saliency maps are different is fundamentally due to the network itself being fragile to such perturbations.
● A mature, field-tested lane-recognition algorithm
● Nobody expected that 3 stickers on the road would break its robustness
● Hacking the AI with things that don’t matter to humans
Tesla autonomous driving hacked with stickers
Hacking tutorial: design your own adversarial example
[Diagram: CAT/DOG decision boundaries for a linear and a non-linear model. Starting from a point A classified as “cat”, perturb it into a point B; if B is still predicted as a cat, retry to cross the decision boundary with B!]
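Against a linear model the recipe above is one line of algebra: the gradient of the score with respect to the input is the weight vector itself, so stepping along its sign (FGSM-style) pushes the point straight across the boundary. A toy sketch with invented weights for a "cat vs dog" classifier:

```python
import numpy as np

# Hypothetical trained linear classifier
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def predict(x):
    return "dog" if w @ x + b > 0 else "cat"

x = np.array([-1.0, 0.5, 0.2])   # point A: classified as "cat"
assert predict(x) == "cat"

# The gradient of the score w.r.t. the input is just w;
# take a small step in the direction of its sign
eps = 0.9
x_adv = x + eps * np.sign(w)

# The "retry to cross the decision boundary" loop from the slide:
# if the perturbed point is still a cat, increase eps and try again
while predict(x_adv) == "cat":
    eps += 0.1
    x_adv = x + eps * np.sign(w)
```

For deep (non-linear) models the same idea uses one backpropagated gradient per step; the loop may then need several iterations, as the diagram suggests.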
Hacking tutorial: I don’t have access to the model
[Figure: two perturbed images, each rejected with “This is not a car!”]
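Without access to the model, the classic trick is transferability: query the black box for labels, train a substitute model on those queries, attack the substitute, and hope the example transfers. A toy sketch in which the hidden target is a made-up linear classifier:

```python
import numpy as np

rng = np.random.default_rng(0)

w_target = np.array([2.0, -1.0])   # hidden from the attacker

def query(x):
    """The only access we have: label queries against the black box."""
    return 1 if w_target @ x > 0 else 0

# 1) Train a substitute (a least-squares linear probe) on random queries
X = rng.normal(size=(500, 2))
y = np.array([query(x) for x in X])
w_sub, *_ = np.linalg.lstsq(X, 2 * y - 1, rcond=None)

# 2) Craft an adversarial example against the substitute
x = np.array([-0.5, 0.5])          # labeled 0 by the target
assert query(x) == 0
eps = 1.5
x_adv = x + eps * np.sign(w_sub)   # step along the substitute's gradient sign

# 3) Check whether the attack transfers to the black box
transferred = query(x_adv) == 1
```

Because the substitute approximates the target's decision boundary, the adversarial direction found against it usually works against the hidden model too; this is the basis of practical black-box attacks.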
Defending against adversarial attacks?!
Attacks (SUCCESS):
● 2013: discovery of adversarial examples
● 2015: Fast Gradient Sign
● 2015: DeepFool
● 2016: Carlini-Wagner
● 2019: Unforeseen Attack
Defenses (FAILURES):
● 2013: adversarial training (brute force: training with adversarial examples)
● 2015: defensive distillation (output probabilities rather than hard decisions)
● 2016: JPEG compression against adversarial examples
Why is it hard to defend against adversarial examples?
● Adaptive attackers: blocking one type of attack leaves a vulnerability open to an attacker who knows which defense is being used
● Hard to defend because the theory is hard to understand?
Adversarial Examples are not Bugs, they are Features
[Diagram: bird/dog/cat images are perturbed toward a target class to create a label-target adversarial dataset, in which each image keeps its original appearance but carries the adversarial target label (“Dog”, “Cat”, “Bird”).]
What about in the future?
● An art project exposed racial bias in the biggest image dataset, ImageNet, in Sept. 2019
● ImageNet will remove 600,000 images
● The number of publications per year with the name ImageNet in the abstract…