An Introduction to XAI! Towards Trusting Your ML Models! (Mansour Saffar)
Machine learning (ML) is currently disrupting almost every industry and is being used as the core component in many systems. The decisions made by these systems may have a great impact on society and specific individuals and thus the decision-making process has to be clear and explainable so humans can trust it. Explainable AI (XAI) is a rather new field in ML in which researchers try to develop models that are able to explain the decision-making process behind ML models. In this talk, we'll learn about the fundamentals of XAI and discuss why we need to start to integrate XAI with our ML models!
Presented at the Edmonton DataScience Meetup on October 2nd, 2019. Learn more: https://youtu.be/gEkPXOsDt_w
Explainable AI makes algorithms transparent: their decisions can be interpreted, visualized, and explained, and they can be integrated into fair, secure, and trustworthy AI applications.
An introductory presentation on Explainable AI, making the case for its main motivations and importance. We briefly describe the main techniques available as of March 2020 and share many references so the reader can continue their studies.
Unified Approach to Interpret Machine Learning Model: SHAP + LIME (Databricks)
For companies that solve real-world problems and generate revenue from data science products, being able to understand why a model makes a certain prediction can be as crucial as achieving high prediction accuracy. However, as data scientists pursue higher accuracy by implementing complex algorithms such as ensemble or deep learning models, the algorithm itself becomes a black box, creating a trade-off between the accuracy and the interpretability of a model's output.
To address this problem, a unified framework, SHAP (SHapley Additive exPlanations), was developed to help users interpret the predictions of complex models. In this session, we will talk about how to apply SHAP to various modeling approaches (GLM, XGBoost, CNN) to explain how each feature contributes to a particular prediction and to extract intuitive insights from it. This talk is intended to introduce the concept of a general-purpose model explainer, as well as to help practitioners understand SHAP and its applications.
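As a flavor of what such a session typically walks through, here is a minimal, hedged sketch of explaining an XGBoost model with the shap library; the dataset and model choices are illustrative assumptions, not taken from the talk itself.

```python
# Minimal sketch: explaining an XGBoost regressor with SHAP (illustrative data/model).
import shap
import xgboost
from sklearn.datasets import fetch_california_housing
from sklearn.model_selection import train_test_split

# Load a public regression dataset and train a gradient-boosted model.
X, y = fetch_california_housing(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = xgboost.XGBRegressor(n_estimators=200, max_depth=4).fit(X_train, y_train)

# TreeExplainer computes SHAP values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)

# Global view: which features drive predictions across the test set.
shap.summary_plot(shap_values, X_test)

# Local view: how each feature pushed one prediction away from the baseline value.
shap.force_plot(explainer.expected_value, shap_values[0, :], X_test.iloc[0, :], matplotlib=True)
```

The same explainer-then-plot pattern carries over to GLMs and CNNs by swapping TreeExplainer for a model-appropriate explainer.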
Slides for an Arithmer Seminar given by Dr. Daisuke Sato (Arithmer) at Arithmer Inc.
The topic is explainable AI.
The Arithmer Seminar is held weekly; professionals from within and outside our company give lectures on their respective areas of expertise.
These slides were prepared by the lecturer from outside our company and are shared here with their permission.
Arithmer Inc. is a mathematics company that began at the University of Tokyo Graduate School of Mathematical Sciences. We apply modern mathematics and advanced AI systems to provide solutions to tough, complex issues across many fields. At Arithmer we believe it is our job to think about how to use AI well to improve work efficiency and to produce results that are useful to people and society.
Explainable AI (XAI) is becoming a must-have non-functional requirement (NFR) for most AI-enabled product or solution deployments. Keen to hear viewpoints and explore collaboration opportunities.
[Video recording available at https://www.youtube.com/playlist?list=PLewjn-vrZ7d3x0M4Uu_57oaJPRXkiS221]
Artificial Intelligence is increasingly playing an integral role in determining our day-to-day experiences. Moreover, with proliferation of AI based solutions in areas such as hiring, lending, criminal justice, healthcare, and education, the resulting personal and professional implications of AI are far-reaching. The dominant role played by AI models in these domains has led to a growing concern regarding potential bias in these models, and a demand for model transparency and interpretability. In addition, model explainability is a prerequisite for building trust and adoption of AI systems in high stakes domains requiring reliability and safety such as healthcare and automated transportation, and critical industrial applications with significant economic implications such as predictive maintenance, exploration of natural resources, and climate change modeling.
As a consequence, AI researchers and practitioners have focused their attention on explainable AI to help them better trust and understand models at scale. The challenges for the research community include (i) defining model explainability, (ii) formulating explainability tasks for understanding model behavior and developing solutions for these tasks, and finally (iii) designing measures for evaluating the performance of models in explainability tasks.
In this tutorial, we present an overview of model interpretability and explainability in AI, key regulations / laws, and techniques / tools for providing explainability as part of AI/ML systems. Then, we focus on the application of explainability techniques in industry, wherein we present practical challenges / guidelines for effectively using explainability techniques and lessons learned from deploying explainable models for several web-scale machine learning and data mining applications. We present case studies across different companies, spanning application domains such as search & recommendation systems, hiring, sales, and lending. Finally, based on our experiences in industry, we identify open problems and research directions for the data mining / machine learning community.
Usage of AI and machine learning models is likely to become more commonplace as larger swaths of the economy embrace automation and data-driven decision-making. While these predictive systems can be quite accurate, they have in the past been treated as inscrutable black boxes that produce only numeric predictions with no accompanying explanations. Unfortunately, recent studies and recent events have drawn attention to mathematical and sociological flaws in prominent weak AI and ML systems, but practitioners usually don't have the right tools to pry open machine learning black boxes and debug them.
This presentation introduces several new approaches that increase transparency, accountability, and trustworthiness in machine learning models. If you are a data scientist or analyst and you want to explain a machine learning model to your customers or managers (or if you have concerns about documentation, validation, or regulatory requirements), then this presentation is for you!
Explainable AI - making ML and DL models more interpretable (Aditya Bhattacharya)
Abstract –
Although industries have started to adopt AI and Machine Learning in almost every sector to solve complex business problems, are these models always trustworthy? Machine Learning models are not oracles; they are scientific methods and mathematical models that best describe the data. But science is all about explaining complex natural phenomena in the simplest way possible! So, can we make ML and DL models more interpretable, so that any business user can understand these models and trust their results?
To find out the answer, please join me in this session, in which I will talk about the concepts of Explainable AI and discuss the necessity and principles that help us demystify black-box AI models. I will cover popular approaches like Feature Importance, Key Influencers, and Decomposition Trees used to make classical Machine Learning interpretable. We will discuss various techniques used for Deep Learning model interpretation, such as Saliency Maps, Grad-CAMs, and Visual Attention Maps, and finally go into more detail about frameworks like LIME, SHAP, ELI5, SKATER, and TCAV, which help us make Machine Learning and Deep Learning models more interpretable, trustworthy, and useful!
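For a concrete taste of the first of those classical approaches, the sketch below computes permutation feature importance with scikit-learn; the dataset and model are illustrative assumptions, not the speaker's own material.

```python
# Hedged sketch: permutation feature importance with scikit-learn (illustrative setup).
from sklearn.datasets import load_wine
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_wine(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Shuffle each feature on held-out data and measure how much the score drops;
# a large drop means the model relies heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
top = sorted(zip(X.columns, result.importances_mean), key=lambda t: t[1], reverse=True)[:5]
for name, mean_drop in top:
    print(f"{name}: {mean_drop:.4f}")
```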
Artificial Intelligence is increasingly playing an integral role in determining our day-to-day experiences. Moreover, with proliferation of AI based solutions in areas such as hiring, lending, criminal justice, healthcare, and education, the resulting personal and professional implications of AI are far-reaching. The dominant role played by AI models in these domains has led to a growing concern regarding potential bias in these models, and a demand for model transparency and interpretability. In addition, model explainability is a prerequisite for building trust and adoption of AI systems in high stakes domains requiring reliability and safety such as healthcare and automated transportation, and critical industrial applications with significant economic implications such as predictive maintenance, exploration of natural resources, and climate change modeling.
As a consequence, AI researchers and practitioners have focused their attention on explainable AI to help them better trust and understand models at scale. The challenges for the research community include (i) defining model explainability, (ii) formulating explainability tasks for understanding model behavior and developing solutions for these tasks, and finally (iii) designing measures for evaluating the performance of models in explainability tasks.
In this tutorial, we present an overview of model interpretability and explainability in AI, key regulations / laws, and techniques / tools for providing explainability as part of AI/ML systems. Then, we focus on the application of explainability techniques in industry, wherein we present practical challenges / guidelines for effectively using explainability techniques and lessons learned from deploying explainable models for several web-scale machine learning and data mining applications. We present case studies across different companies, spanning application domains such as search & recommendation systems, sales, lending, and fraud detection. Finally, based on our experiences in industry, we identify open problems and research directions for the data mining / machine learning community.
Understanding how high-powered ML models arrive at their predictions is an important aspect of Machine Learning, and SHAP is a powerful tool that enables practitioners to understand how different features combine to help a model arrive at a prediction.
This slide deck is from a presentation given at PyData Global on the theoretical foundations of SHAP as well as how to use its library. Link to the presentation: https://pydata.org/global2021/schedule/presentation/3/behind-the-black-box-how-to-understand-any-ml-model-using-shap/
This was presented at the London Artificial Intelligence & Deep Learning Meetup.
https://www.meetup.com/London-Artificial-Intelligence-Deep-Learning/events/245251725/
Enjoy the recording: https://youtu.be/CY3t11vuuOM.
- - -
Kasia discussed the complexities of interpreting black-box algorithms and how these may affect some industries. She presented the most popular methods of interpreting Machine Learning classifiers, for example feature importance, partial dependence plots, and Bayesian networks. Finally, she introduced the Local Interpretable Model-Agnostic Explanations (LIME) framework for explaining predictions of black-box learners, including text- and image-based models, using breast cancer data as a specific case scenario.
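A minimal sketch of the kind of LIME tabular explanation described above, using the scikit-learn breast cancer dataset as a stand-in; the exact data preparation and model from the talk are assumptions here.

```python
# Hedged sketch: a local LIME explanation for one prediction of a black-box classifier.
import lime.lime_tabular
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(data.data, data.target, random_state=0)
model = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_train, y_train)

# LIME fits a simple, local surrogate model around one prediction of the black box.
explainer = lime.lime_tabular.LimeTabularExplainer(
    X_train,
    feature_names=data.feature_names,
    class_names=data.target_names,
    mode="classification",
)
explanation = explainer.explain_instance(X_test[0], model.predict_proba, num_features=5)
print(explanation.as_list())  # top local feature contributions for this one case
```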
Kasia Kulma is a Data Scientist at Aviva with a soft spot for R. She obtained a PhD (Uppsala University, Sweden) in evolutionary biology in 2013 and has been working on all things data ever since. For example, she has built recommender systems, customer segmentations, predictive models and now she is leading an NLP project at the UK’s leading insurer. In spare time she tries to relax by hiking & camping, but if that doesn’t work ;) she co-organizes R-Ladies meetups and writes a data science blog R-tastic (https://kkulma.github.io/).
https://www.linkedin.com/in/kasia-kulma-phd-7695b923/
Achieving Algorithmic Transparency with Shapley Additive Explanations (H2O Lo... (Sri Ambati)
Abstract:
Explainability in the age of the EU GDPR is becoming an increasingly pertinent consideration for Machine Learning. At QuantumBlack, we address the traditional Accuracy vs. Interpretability trade-off by leveraging modern XAI techniques such as LIME and SHAP to enable individualised explanations without necessarily limiting the utility and performance of the otherwise 'black-box' models. The talk focuses on Shapley additive explanations (Lundberg et al. 2017), which integrate Shapley values from game theory for consistent and locally accurate explanations; it provides illustrative examples and touches upon the wider XAI theory.
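For reference, the Shapley value that SHAP builds on assigns feature $i$ the weighted average of its marginal contributions over all feature subsets (notation follows Lundberg et al. 2017: $F$ is the full feature set and $f_S$ is the model evaluated using only the features in subset $S$):

$$\phi_i \;=\; \sum_{S \subseteq F \setminus \{i\}} \frac{|S|!\,\bigl(|F| - |S| - 1\bigr)!}{|F|!}\;\Bigl[f_{S \cup \{i\}}\bigl(x_{S \cup \{i\}}\bigr) - f_S\bigl(x_S\bigr)\Bigr]$$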
Bio:
Dr Torgyn Shaikhina is a Data Scientist at QuantumBlack, STEM Ambassador, and the founder of the Next Generation Programmers outreach initiative. Her background is in decision support systems for Healthcare and Biomedical Engineering with a focus on Machine Learning with limited information.
Spark 2019: Equifax's SVP Data & Analytics, Peter Maynard, discusses the notion (and importance) of explainable AI in the financial services sector. He looks at the work Equifax have done to crack open the black box by creating patented AI technology that helps companies make smarter, explainable decisions using AI.
Feature Engineering in Machine Learning (Knoldus Inc.)
In this Knolx we are going to explore Data Preprocessing and Feature Engineering techniques. We will also cover what Feature Engineering is, its importance in Machine Learning, and how it can help get the best results from the algorithms.
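As an illustration of the kind of preprocessing pipeline such a session typically covers (the columns and data below are hypothetical, not from the Knolx itself):

```python
# Illustrative preprocessing/feature-engineering pipeline: scale numeric columns,
# one-hot encode categoricals, then fit a model on top of the engineered features.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

df = pd.DataFrame({
    "age": [25, 32, 47, 51],
    "income": [30000, 52000, 61000, 75000],
    "city": ["Delhi", "Pune", "Delhi", "Mumbai"],
    "churned": [0, 0, 1, 1],
})

preprocess = ColumnTransformer([
    ("num", StandardScaler(), ["age", "income"]),
    ("cat", OneHotEncoder(handle_unknown="ignore"), ["city"]),
])
model = Pipeline([("prep", preprocess), ("clf", LogisticRegression())])
model.fit(df[["age", "income", "city"]], df["churned"])
print(model.predict(df[["age", "income", "city"]]))
```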
Presented at #H2OWorld 2017 in Mountain View, CA.
Enjoy the video: https://youtu.be/TBJqgvXYhfo.
Learn more about H2O.ai: https://www.h2o.ai/.
Follow @h2oai: https://twitter.com/h2oai.
- - -
Abstract:
Machine learning is at the forefront of many recent advances in science and technology, enabled in part by the sophisticated models and algorithms that have been recently introduced. However, as a consequence of this complexity, machine learning models essentially act as black boxes as far as users are concerned, making it incredibly difficult to understand, predict, or "trust" their behavior. In this talk, I will describe our research on approaches that explain the predictions of ANY classifier in an interpretable and faithful manner.
Sameer's Bio:
Dr. Sameer Singh is an Assistant Professor of Computer Science at the University of California, Irvine. He is working on large-scale and interpretable machine learning applied to natural language processing. Sameer was a Postdoctoral Research Associate at the University of Washington and received his PhD from the University of Massachusetts, Amherst, during which he also worked at Microsoft Research, Google Research, and Yahoo! Labs on massive-scale machine learning. He was awarded the Adobe Research Data Science Faculty Award, was selected as a DARPA Riser, won the grand prize in the Yelp dataset challenge, and received the Yahoo! Key Scientific Challenges fellowship. Sameer has published extensively at top-tier machine learning and natural language processing conferences. (http://sameersingh.org)
As AI becomes more and more prevalent, the decisions it makes for us have a growing impact on our lives and those of others.
How can we help people trust the models we're building? The field of Explainable AI focuses on making any machine learning model interpretable by non-experts.
This material summarizes the Counterfactual Explanation session from the "설명가능한 인공지능 기획!" program, run during the 18th cohort of 풀잎스쿨.
It was compiled based on papers, YouTube videos, and the following resource:
https://christophm.github.io/interpretable-ml-book/
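A toy sketch of the counterfactual idea the session covers: search for the smallest change to an input that flips the model's decision. The brute-force grid below is purely illustrative; dedicated libraries such as DiCE or Alibi implement this search properly.

```python
# Toy counterfactual search on a hypothetical two-feature "loan" model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
y = (X[:, 0] - X[:, 1] > 0).astype(int)          # approved if "income" outweighs "debt"
model = LogisticRegression().fit(X, y)

x = np.array([-0.5, 0.8])                         # a rejected applicant
print("original prediction:", model.predict([x])[0])  # expected: 0 (rejected)

# Brute-force search over small perturbations for the closest one that flips the
# decision to "approved" -- that nearest flipping point is the counterfactual.
best, best_dist = None, np.inf
for dx0 in np.linspace(-2, 2, 81):
    for dx1 in np.linspace(-2, 2, 81):
        candidate = x + np.array([dx0, dx1])
        if model.predict([candidate])[0] == 1:
            dist = float(np.linalg.norm([dx0, dx1]))
            if dist < best_dist:
                best, best_dist = candidate, dist

print("counterfactual:", best, "distance:", round(best_dist, 3))
```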
Interpreting deep learning and machine learning models is not just another regulatory burden to be overcome. Scientists, physicians, researchers, and analysts who use these technologies for their important work have the right to trust and understand their models and the answers they generate. This talk is an overview of several techniques for interpreting deep learning and machine learning models and telling stories from their results.
Speaker: Patrick Hall is a Data Scientist and Product Engineer at H2O.ai. He's also an Adjunct Professor at George Washington University in the Department of Decision Sciences. Prior to joining H2O, Patrick spent many years as a Senior Data Scientist at SAS and has worked with many Fortune 500 companies on their data science and machine learning problems. https://www.linkedin.com/in/jpatrickhall
Despite widespread adoption, machine learning models remain mostly black boxes. Understanding why certain predictions are made is very important in assessing trust, especially if one plans to take action based on a prediction. Such understanding also provides insights into the model, which can be used to transform an untrustworthy model or prediction into a trustworthy one. If users do not trust a model, they will never use it.
It's a well-known fact that the best explanation of a simple model is the model itself. But we often use complex models, such as ensemble methods or deep networks, so we cannot use the original model as its own best explanation, because it is not easy to understand.
In the context of this topic, we will discuss how methods for interpreting model predictions work and try to understand the practical value of these methods.
Fairness-aware Machine Learning: Practical Challenges and Lessons Learned (KD... (Krishnaram Kenthapadi)
Researchers and practitioners from different disciplines have highlighted the ethical and legal challenges posed by the use of machine learned models and data-driven systems, and the potential for such systems to discriminate against certain population groups, due to biases in algorithmic decision-making systems. This tutorial presents an overview of algorithmic bias / discrimination issues observed over the last few years and the lessons learned, key regulations and laws, and evolution of techniques for achieving fairness in machine learning systems. We motivate the need for adopting a "fairness by design" approach (as opposed to viewing algorithmic bias / fairness considerations as an afterthought), when developing machine learning based models and systems for different consumer and enterprise applications. Then, we focus on the application of fairness-aware machine learning techniques in practice by presenting non-proprietary case studies from different technology companies. Finally, based on our experiences working on fairness in machine learning at companies such as Facebook, Google, LinkedIn, and Microsoft, we present open problems and research directions for the data mining / machine learning community.
The importance of model fairness and interpretability in AI systems (Francesca Lazzeri, PhD)
Machine learning model fairness and interpretability are critical for data scientists, researchers and developers to explain their models and understand the value and accuracy of their findings. Interpretability is also important to debug machine learning models and make informed decisions about how to improve them.
In this session, Francesca will go over a few methods and tools that enable you to "unpack" machine learning models, gain insights into how and why they produce specific results, assess your AI systems' fairness, and mitigate any observed fairness issues.
Using open-source fairness and interpretability packages, attendees will learn how to:
- Explain model prediction by generating feature importance values for the entire model and/or individual data points.
- Achieve model interpretability on real-world datasets at scale, during training and inference.
- Use an interactive visualization dashboard to discover patterns in data and explanations at training time.
- Leverage additional interactive visualizations to assess which groups of users might be negatively impacted by a model and compare multiple models in terms of their fairness and performance.
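As one concrete way to approach the fairness-assessment items above, here is a hedged sketch using Fairlearn's MetricFrame; Fairlearn is an assumed choice of open-source package, and the predictions, labels, and sensitive feature are hypothetical.

```python
# Hedged sketch: slicing a model's metrics by a sensitive attribute with Fairlearn.
import numpy as np
from fairlearn.metrics import MetricFrame, selection_rate
from sklearn.metrics import accuracy_score

# Hypothetical labels, predictions, and a sensitive feature (e.g. gender).
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0])
gender = np.array(["F", "F", "F", "M", "M", "M", "M", "F"])

mf = MetricFrame(
    metrics={"accuracy": accuracy_score, "selection_rate": selection_rate},
    y_true=y_true,
    y_pred=y_pred,
    sensitive_features=gender,
)
print(mf.overall)    # metrics over the whole dataset
print(mf.by_group)   # the same metrics per group, revealing any gaps to mitigate
```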
Spark + AI Summit - The Importance of Model Fairness and Interpretability in ... (Francesca Lazzeri, PhD)
Machine learning model fairness and interpretability are critical for data scientists, researchers and developers to explain their models and understand the value and accuracy of their findings. Interpretability is also important to debug machine learning models and make informed decisions about how to improve them. In this session, Francesca will go over a few methods and tools that enable you to "unpack" machine learning models, gain insights into how and why they produce specific results, assess your AI systems' fairness and mitigate any observed fairness issues.
Using open source fairness and interpretability packages, attendees will learn how to:
- Explain model prediction by generating feature importance values for the entire model and/or individual datapoints.
- Achieve model interpretability on real-world datasets at scale, during training and inference.
- Use an interactive visualization dashboard to discover patterns in data and explanations at training time.
- Leverage additional interactive visualizations to assess which groups of users might be negatively impacted by a model and compare multiple models in terms of their fairness and performance.
Steering Model Selection with Visual Diagnostics: Women in Analytics 2019 (Rebecca Bilbro)
Machine learning is ultimately a search for the best combination of features, algorithm, and hyperparameters that result in the best performing model. Oftentimes, this leads us to stay in our algorithmic comfort zones, or to resort to automated processes such as grid searches and random walks. Whether we stick to what we know or try many combinations, we are sometimes left wondering if we have actually succeeded.
By enhancing model selection with visual diagnostics, data scientists can inject human guidance to steer the search process. Visualizing feature transformations, algorithmic behavior, cross-validation methods, and model performance allows us a peek into the high-dimensional realm in which our models operate. As we continue to tune our models, trying to minimize both bias and variance, these glimpses allow us to be more strategic in our choices. The result is more effective modeling, speedier results, and greater understanding of underlying processes.
Visualization is an integral part of the data science workflow, but visual diagnostics are directly tied to machine learning transformers and models. The Yellowbrick library extends the scikit-learn API providing a Visualizer object, an estimator that learns from data and produces a visualization as a result. In this tutorial, we will explore feature visualizers, visualizers for classification, clustering, and regression, as well as model analysis visualizers. We'll work through several examples and show how visual diagnostics steer model selection, making machine learning more informed, and more effective.
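A minimal sketch of the Visualizer pattern described above, assuming an illustrative dataset and estimator rather than the tutorial's own examples:

```python
# Hedged sketch: a Yellowbrick Visualizer wrapping a scikit-learn estimator.
from sklearn.datasets import load_wine
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from yellowbrick.classifier import ClassificationReport

X, y = load_wine(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A Visualizer behaves like an estimator: it fits, scores, and draws a diagnostic plot.
viz = ClassificationReport(LogisticRegression(max_iter=5000), support=True)
viz.fit(X_train, y_train)   # fit the wrapped model
viz.score(X_test, y_test)   # compute per-class precision/recall/F1 on held-out data
viz.show()                  # render the heatmap used for visual model selection
```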
A Unified Approach to Interpreting Model Predictions (SHAP) (Rama Irsheidat)
A Unified Approach to Interpreting Model Predictions.
Scott M. Lundberg, Su-In Lee.
Understanding why a model makes a certain prediction can be as crucial as the prediction's accuracy in many applications. However, the highest accuracy for large modern datasets is often achieved by complex models that even experts struggle to interpret, such as ensemble or deep learning models, creating a tension between accuracy and interpretability. In response, various methods have recently been proposed to help users interpret the predictions of complex models, but it is often unclear how these methods are related and when one method is preferable over another. To address this problem, we present a unified framework for interpreting predictions, SHAP (SHapley Additive exPlanations). SHAP assigns each feature an importance value for a particular prediction. Its novel components include: (1) the identification of a new class of additive feature importance measures, and (2) theoretical results showing there is a unique solution in this class with a set of desirable properties. The new class unifies six existing methods, notable because several recent methods in the class lack the proposed desirable properties. Based on insights from this unification, we present new methods that show improved computational performance and/or better consistency with human intuition than previous approaches.
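In the paper's notation, the additive feature attribution methods unified by SHAP all share the same explanation model: a linear function of simplified binary inputs $z' \in \{0,1\}^M$ indicating which of the $M$ features are present,

$$g(z') \;=\; \phi_0 + \sum_{i=1}^{M} \phi_i\, z'_i,$$

where $\phi_i$ is the attribution assigned to feature $i$ and $\phi_0$ is the base value. The paper's central result is that the Shapley values are the unique attributions in this class satisfying local accuracy, missingness, and consistency.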
AI professionals use top machine learning algorithms to build models that analyze larger and more complex data than was possible with older machine learning algorithms.
Networking is overrated! You must invest in your Reputation!
Speaker: Hayim Makabee, CTO at Dooiu
In this talk Hayim will share useful guidelines about how to manage and develop your personal reputation.
Hayim will focus on providing practical advice about how to create opportunities by generating value to the people in your professional network.
In general the goal of networking is to create new opportunities. These may be business opportunities, partnership opportunities or job opportunities. But what really creates new opportunities is our reputation.
Having a good reputation means that:
People will remember you. They will remember you for many years after their last interaction with you.
People will recommend you. They will introduce you to their own contacts whenever they think you may contribute.
People will constantly offer you new opportunities. They will invite you when they have a job opening, or when they need a partner or an adviser.
About the speaker:
Hayim Makabee is the CTO of Dooiu, an innovative Social Fintech. Hayim has over 25 years of experience in the Israeli high-tech industry, having held leadership roles as a Software Architect and Machine Learning specialist. He is also a mentor at Gvahim, where he helps new Olim develop their professional careers in Israel. Hayim holds an M.Sc. in Computer Science from the Technion and is the author of a book and several scientific publications.
About Dooiu:
Dooiu is a platform on which two or more people can make calls and exchange knowledge for money. Dooiu is a solution for communicating with each other and making payments in a simple and clear way. It allows those who sell their time as teachers, consultants, or experts to increase their income and grow professionally. It also enables anyone who wants to consult on any topic to pay a fair price for the services received.
Applications of Machine Learning - INDT Webinar (Hayim Makabee)
INDT Webinar about Applications of Machine Learning.
In these slides Hayim Makabee presents several applications of Machine Learning and their impact on our lives, including Recommender Systems and Autonomous Vehicles, with several examples of recent innovations in the fields of Industry, Health and Agriculture.
In these slides Hayim Makabee presents several applications of Machine Learning and their impact on our lives, including Recommender Systems and Autonomous Vehicles.
In these slides Hayim Makabee explains how we applied the Blue Ocean Strategy to plan the main features of the KashKlik platform and its business model.
Managing your Reputation - Gvahim Webinar (Hayim Makabee)
Useful guidelines about how to manage and develop your personal reputation. Practical advice about how to create opportunities by generating value to the people in your professional network. Presented by Hayim Makabee as a Gvahim Webinar on June 2020.
Useful guidelines about how to manage and develop your personal reputation. Practical advice about how to create opportunities by generating value to the people in your professional network.
The Story of a Young Oleh (Immigrant in Israel) (Hayim Makabee)
The Story of a Young Oleh (Immigrant in Israel) by Hayim Makabee
Presentation prepared for the Taglit groups (August 2018)
Taglit-Birthright Israel, also known as Birthright Israel or simply Birthright, is a not-for-profit educational organization that sponsors free ten-day heritage trips to Israel for young adults of Jewish heritage, aged 18–32.
Software Architecture for Agile Development (Hayim Makabee)
Slides from a workshop given in Herzliya in June 2017, organized by ILTAM and IASA Israel. This workshop was dedicated to the topic of Software Architecture in the context of Agile Development. We answered the question: "How much Design Up Front should be done in an Agile project?" Hayim presented his approach of Adaptable Design Up Front (ADUF), describing its rationale, applications in practice, and comparison to other approaches such as Emergent Design. He explained why adaptability is essential for the development of complex software systems using Agile methods. The concepts were illustrated through practical software architecture approaches such as micro-services and examples of real software systems that were developed in the past. The workshop also included an exercise on the definition and evolution of the design of an interesting system.
Adaptable Designs for Agile Software Development (Hayim Makabee)
Abstract: This talk introduces the concept of Adaptable Software Design, and explains why adaptability is essential for the development of complex software systems using Agile methods. The concepts are illustrated through practical software architecture approaches such as micro-services.
The concept of Antifragility was introduced by Nassim Taleb to describe systems that benefit from impacts and volatility.
In this talk we will discuss how this concept may be applied in the field of Software Design with the goal of developing Change-Resilient Systems.
In particular we will address two patterns which frequently appear in Antifragile systems:
1) The Barbell Strategy and the importance of the separation between high-level abstract elements and concrete implementation details.
2) The Componentization Strategy and its applications in SOA, Microservices and Software Product Lines.
To document or not to document? An exploratory study on developers' motivatio... (Hayim Makabee)
Abstract: Technical debt represents the situation in a project where developers accept compromises in one dimension of a system in order to meet urgent demands in other dimensions. These compromises incur a “debt”, on which “interest” has to be paid to maintain the long-term health of the project. One of the elements of technical debt is documentation debt due to under-documentation of the evolving system. In this exploratory study, our goal is to examine the different aspects of developers' motivation to document code. Specifically, we aim to identify the motivating and hindering aspects of documentation as perceived by the developers. The motivating aspects of code documenting we find include improving code comprehensibility, order, and quality. The hindering aspects include developers’ perception of documenting as a tedious, difficult, and time consuming task that interrupts the coding process. These findings may serve as a basis for developing guidelines toward improving documentation practices and encouraging developers to document their code thus reducing documentation debt.
The SOLID Principles Illustrated by Design PatternsHayim Makabee
The goal of the SOLID design principles is to improve the Separation of Concerns, through weaker Coupling and stronger Cohesion. The main consequence should be software systems that are easier to maintain and to extend. However, the definition of the SOLID principles is quite abstract, and some developers find it difficult to apply them in practice. In my talk I will show how well-known Design Patterns illustrate the application of the SOLID principles, and also show examples of how to follow these principles to refactor and improve existing designs.
About the speaker:
Hayim Makabee was born in Rio de Janeiro. He immigrated to Israel in 1992 and completed his M.Sc. studies in Computer Science at the Technion. Since then he has worked for several hi-tech companies, including some start-ups. Currently he is a co-founder of the International Association of Software Architects (IASA) in Israel. Hayim is the author of a book about Object-Oriented Programming and has published papers in the fields of Software Engineering, Distributed Systems and Genetic Algorithms.
The Role of the Software Architect (short version)Hayim Makabee
Talk at the First Israeli Conference on Software Architecture
http://www.iltam.org/sw-arch2014/
Abstract:
In this talk Hayim will present the practical aspects of the role of the Software Architect, including the architect's contribution at the diverse stages of the software development life cycle, and the cooperation with the diverse stakeholders: Developers, Team Leaders, Project Managers, QA and Technical Writers.
Bio: Hayim Makabee was born in Rio de Janeiro. He immigrated to Israel in 1992 and completed his M.Sc. studies in Computer Science at the Technion. Since then he has worked for several hi-tech companies, including some start-ups. Currently he is a Research Engineer at Yahoo! Labs Haifa. He is also a co-founder of the International Association of Software Architects in Israel.
The quality of software systems may be expressed as a collection of Software Quality Attributes. When the system requirements are defined, it is essential also to define what is expected regarding these quality attributes, since these expectations will guide the planning of the system architecture and design.
Software quality attributes may be classified into two main categories: static and dynamic. Static quality attributes are the ones that reflect the system’s structure and organization. Examples of static attributes are coupling, cohesion, complexity, maintainability and extensibility. Dynamic attributes are the ones that reflect the behavior of the system during its execution. Examples of dynamic attributes are memory usage, latency, throughput, scalability, robustness and fault-tolerance.
Following the definitions of expectations regarding the quality attributes, it is essential to devise ways to measure them and verify that the implemented system satisfies the requirements. Some static attributes may be measured through static code analysis tools, while others require effective design and code reviews. The measuring and verification of dynamic attributes requires the usage of special non-functional testing tools such as profilers and simulators.
In this talk I will discuss the main Software Quality attributes, both static and dynamic, examples of requirements, and practical guidelines on how to measure and verify these attributes.
Title: The Role of the Software Architect
Speaker: Hayim Makabee, co-founder of the Israeli Chapter of the International Association of Software Architects (IASA)
Abstract:
In this talk Hayim will present the practical aspects of the role of the Software Architect, including:
- The four areas of expertise: Design, Domain, Technology and Methodology.
- The cooperation with stakeholders: Developers, Team Leaders, Project Managers, QA and Technical Writers.
Understanding the expected areas of expertise is essential for the architect to develop his/her professional skills.
Understanding how to cooperate with the diverse stakeholders is essential to improve the architect's impact and effectiveness.
Reducing Technical Debt: Using Persuasive Technology for Encouraging Software...Hayim Makabee
Technical debt is a metaphor for the gap between the current state of a software system and its hypothesized ‘ideal’ state. One of the significant and under-investigated elements of technical debt is documentation debt, which may occur when code is created without supporting internal documentation, such as code comments. Studies have shown that outdated or lacking documentation is a considerable contributor to increased costs of software systems maintenance. The importance of comments is often overlooked by software developers, resulting in a notably slower growth rate of comments compared to the growth rate of code in software projects. This research aims to explore and better understand developers’ reluctance to document code, and accordingly to propose efficient ways of using persuasive technology to encourage programmers to document their code. The results may assist software practitioners and project managers to control and reduce documentation debt.
Connector Corner: Automate dynamic content and events by pushing a buttonDianaGray10
Here is something new! In our next Connector Corner webinar, we will demonstrate how you can use a single workflow to:
Create a campaign using Mailchimp with merge tags/fields
Send an interactive Slack channel message (using buttons)
Have the message received by managers and peers along with a test email for review
But there’s more:
In a second workflow supporting the same use case, you’ll see:
Your campaign sent to target colleagues for approval
If the “Approve” button is clicked, a Jira/Zendesk ticket is created for the marketing design team
But if the “Reject” button is pushed, colleagues will be alerted via a Slack message
Join us to learn more about this new, human-in-the-loop capability, brought to you by Integration Service connectors.
And...
Speakers:
Akshay Agnihotri, Product Manager
Charlie Greenberg, Host
Epistemic Interaction - tuning interfaces to provide information for AI supportAlan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
State of ICS and IoT Cyber Threat Landscape Report 2024 previewPrayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio's cyber threat intelligence farming facilities, spread across over 85 cities around the world. In addition, Sectrio also runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors and newer malware, including new variants and latent threats that are at an early stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. A constant focus on speed to release software to market, combined with traditionally slow and manual security checks, has created gaps in continuous security, an important piece of the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface of their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with a PASSION for technology and for making things work, along with a knack for helping others understand how things work. He brings around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations on CI/CD and application security integrated into the software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Securing your Kubernetes cluster_ a step-by-step guide to success !KatiaHIMEUR1
Today, after several years of existence, with an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
The Art of the Pitch: WordPress Relationships and SalesLaura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if something changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips, and strategies for successful relationship building that leads to closing the deal.
JMeter webinar - integration with InfluxDB and GrafanaRTTS
Watch this recorded webinar about real-time monitoring of application performance. See how to integrate Apache JMeter, the open-source leader in performance testing, with InfluxDB, the open-source time-series database, and Grafana, the open-source analytics and visualization application.
In this webinar, we will review the benefits of leveraging InfluxDB and Grafana when executing load tests and demonstrate how these tools are used to visualize performance metrics.
Length: 30 minutes
Session Overview
-------------------------------------------
During this webinar, we will cover the following topics while demonstrating the integrations of JMeter, InfluxDB and Grafana:
- What out-of-the-box solutions are available for real-time monitoring JMeter tests?
- What are the benefits of integrating InfluxDB and Grafana into the load testing stack?
- Which features are provided by Grafana?
- Demonstration of InfluxDB and Grafana using a practice web application
To view the webinar recording, go to:
https://www.rttsweb.com/jmeter-integration-webinar
DevOps and Testing slides at DASA ConnectKari Kakkonen
Rik Marselis' and my slides from the DASA Connect conference on 30.5.2024. We discuss what testing is, what agile testing is, and finally what testing in DevOps is. We also held a lovely workshop with the participants, exploring different ways to think about quality and testing in the different parts of the DevOps infinity loop.
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova...Ramesh Iyer
In today's fast-changing business world, companies must adapt and embrace new ideas to keep up with the competition. However, fostering a culture of innovation takes a great deal of work: it takes vision, leadership, and a willingness to take risks in the right proportion. Sachin Dev Duggal, co-founder of Builder.ai, has perfected the art of this balance, creating a company culture where creativity and growth are nurtured at each stage.
UiPath Test Automation using UiPath Test Suite series, part 3DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
UI automation Introduction,
UI automation Sample
Desktop automation flow
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
6. Debugging: Avoiding Errors
• In 2015 Flickr released an automatic image tagging capability that mistakenly labeled a black man as an ape.
• Soon after, Google came up with a photo labeling tool similar to Flickr's, which made similar mistakes: black men were tagged as gorillas.
7. Debugging: Avoiding Biases
The COMPAS algorithm for recidivism prediction produces a much higher false positive rate for black people than for white people.
10. FAT/ML: Fairness, Accountability, and Transparency in Machine Learning
Machine Learning raises novel challenges for ensuring non-discrimination, due process, and understandability in decision-making.
Policymakers, regulators, and advocates have expressed fears about the potentially discriminatory impact of Machine Learning.
We need further technical research into the dangers of inadvertently encoding bias into automated decisions.
There is increasing alarm that the complexity of machine learning may reduce the justification for consequential decisions to “the algorithm made me do it.”
11. Examples: Applications of explainable ML
When interacting with algorithmic decisions, users will expect and demand the same level of expressiveness from AI.
A doctor diagnosing a patient may benefit from seeing cases that are very similar or very different.
An applicant whose loan was denied will want to understand the main reasons for the rejection and what she can do to reverse the decision.
A regulator will want to understand the behavior of the system as a whole to ensure that it complies with regulations.
A developer may want to understand where the model is more or less confident as a means of improving its performance.
14. Interpretable vs. Explainable ML
• Interpretability refers to the ability to observe cause-and-effect situations in a system and, essentially, to predict which changes will cause what type of shifts in the results (without necessarily understanding how it works).
• Explainability is basically the ability to understand and explain ‘in human terms’ what is happening with the model: how exactly it works under the hood.
15. Why do we need interpretability?
• Safety: system should provide sound decisions
• Curiosity: understand something unexpected
• Debugging: behaviour should be predictable
• Optimality: optimize for true objectives
16. When we may not need interpretability
• Low risk: no significant consequences
• Awareness: problem is well-studied
• Vulnerability: prevent people from gaming the system
17. Alternative 1: Interpretable Models
Use models that are intrinsically interpretable and known to be easy for humans to understand.
Examples: decision trees, decision rules and linear regression.
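As an illustration of this first alternative (a minimal sketch on assumed data, not taken from the original slides), a shallow decision tree can be fitted with scikit-learn and its learned rules printed as plain text:

# Minimal sketch (assumed example): an intrinsically interpretable model
# whose if-then rules can be read directly.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
tree = DecisionTreeClassifier(max_depth=3, random_state=0)  # shallow = readable
tree.fit(data.data, data.target)

# Print the learned decision rules in plain text
print(export_text(tree, feature_names=data.feature_names))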
18. Alternative 2: Interpreting Black Box Models
Train a black box model and apply post-hoc interpretability techniques to provide explanations.
Focus: model-agnostic methods
Examples: feature importance, accumulated local effects, and explaining individual predictions with Shapley values and LIME.
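As a concrete example of a model-agnostic method (a hedged sketch, not from the original slides), permutation feature importance can be computed with scikit-learn for any fitted black-box model:

# Hedged sketch: permutation feature importance, a model-agnostic post-hoc method.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_tr, X_te, y_tr, y_te = train_test_split(data.data, data.target, random_state=0)
black_box = GradientBoostingClassifier().fit(X_tr, y_tr)

# Shuffle each feature on the test set and measure how much the score drops
result = permutation_importance(black_box, X_te, y_te, n_repeats=10, random_state=0)
for i in result.importances_mean.argsort()[::-1][:5]:
    print(data.feature_names[i], round(result.importances_mean[i], 4))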
23. LIME = Local Interpretable Model-agnostic Explanations
The LIME approach provides an explanation for:
• an instance prediction of a model = the target
• in terms of input features = the drivers
• using importance scores = the explanation family
• computed through local perturbations of the model input = the estimator
25. LIME
• The black-box model’s complex decision function f (unknown to LIME) is represented by the blue/pink background, which cannot be approximated well by a linear model.
• The bold red cross is the instance being explained. LIME samples instances, gets predictions using f, and weights them by their proximity to the instance being explained (represented here by size).
• The dashed line is the learned explanation that is locally (but not globally) faithful.
26. LIME Algorithm
1. Random forest predictions given features x1 and x2. Predicted classes: 1 (dark) or 0 (light).
2. Instance of interest (big dot) and data sampled from a normal distribution (small dots).
3. Assign higher weight to points near the instance of interest.
4. Signs of the grid show the classifications of the locally learned model from the weighted samples. The white line marks the decision boundary (P(class=1) = 0.5).
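This is the procedure that the lime Python package automates for tabular data. A rough, hedged sketch (the dataset, model and parameter values are assumptions, not from the original slides):

# Hedged sketch of LIME on tabular data, assuming the lime package is installed.
import lime.lime_tabular
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

data = load_iris()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

explainer = lime.lime_tabular.LimeTabularExplainer(
    training_data=data.data,
    feature_names=data.feature_names,
    class_names=data.target_names,
    mode="classification",
)

# Explain one prediction: perturb the instance, weight samples by proximity,
# and fit a local linear model whose coefficients form the explanation.
exp = explainer.explain_instance(data.data[0], model.predict_proba, num_features=4)
print(exp.as_list())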
27. Producing an Explanation - LIME Model for Images
Start with a normal image and use the black-box model to produce a probability distribution over the classes.
Then perturb the input in some way. For images, this could be hiding pixels by coloring them grey. Now run these perturbed images through the black-box model to see how the probabilities for the class it originally predicted changed.
Use an interpretable model (usually linear, or e.g. a decision tree) on this dataset of perturbations and probabilities to extract the key features which explain the changes. The model is locally weighted, meaning that we care more about the perturbations that are most similar to the original image we were using.
Output the features (in our case, pixels) with the greatest weights as our explanation.
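The lime package implements this workflow for images as well. A minimal, hedged sketch with a stand-in classifier (the dummy model, the random placeholder image and all parameter values are assumptions for illustration only):

# Hedged sketch of LIME for images; assumes the lime and scikit-image packages.
import numpy as np
from lime import lime_image
from skimage.segmentation import mark_boundaries

# Stand-in for a real black-box image classifier: it must accept a batch of
# images and return class probabilities of shape (n_images, n_classes).
def predict_fn(images):
    rng = np.random.default_rng(0)
    probs = rng.random((len(images), 3))
    return probs / probs.sum(axis=1, keepdims=True)

image = np.random.default_rng(0).random((64, 64, 3))  # placeholder image

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    image, predict_fn,
    top_labels=3,      # explain the three most likely classes
    hide_color=0,      # "hide" superpixels by setting them to a constant value
    num_samples=200,   # number of perturbed images to score
)

# Keep only the superpixels that most support the top predicted class
temp, mask = explanation.get_image_and_mask(
    explanation.top_labels[0], positive_only=True, num_features=5, hide_rest=False
)
overlay = mark_boundaries(temp, mask)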
30. The top three predicted classes are "tree frog," "pool table," and "balloon"
33. Bad Explanation
• A husky (on the left) is confused with a wolf, because the pixels (on the right) characterizing wolves are those of the snowy background.
• This artifact is due to a training set that was insufficiently representative.
34. Scoped Rules (Anchors)
Anchors explains individual predictions of any black-box classification model by finding a decision rule that “anchors” the prediction sufficiently.
A rule anchors a prediction if changes in other feature values do not affect the prediction.
Anchors utilizes reinforcement learning techniques in combination with a graph search algorithm to reduce the number of model calls (and hence the required runtime) to a minimum while still being able to recover from local optima.
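As a rough illustration (a hedged sketch using the alibi library's AnchorTabular implementation; the dataset, model and threshold are assumptions, not from the original slides):

# Hedged sketch of Anchors on tabular data, assuming the alibi package.
from alibi.explainers import AnchorTabular
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

data = load_iris()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

explainer = AnchorTabular(model.predict, feature_names=data.feature_names)
explainer.fit(data.data)  # discretizes numerical features into bins

# Find an if-then rule that "anchors" the prediction for one instance:
# while the rule holds, changes to other features rarely flip the prediction.
explanation = explainer.explain(data.data[0], threshold=0.95)
print("Anchor:", " AND ".join(explanation.anchor))
print("Precision:", explanation.precision, "Coverage:", explanation.coverage)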
35. ALIME = Anchor LIME
Using Anchor LIME, researchers are trying to create local explanations where they can tie if-then rules for more precise explanations.
In the example below, you see many features being taken into account with various weights in the LIME process.
But in the ALIME process, these are distilled down into exemplar rules, or anchors, which are used to distinguish what it is that’s actually driving the explanation for the given model.
36. ALIME
• By learning the line (or slope), LIME explains the prediction result. Different from LIME, Anchors uses a “local region” to learn how to explain the model. The “local region” refers to a better construction of the generated data set for the explanation.
41. Shapley Values
A prediction can be explained by assuming that each feature value of the instance is a “player” in a game where the prediction is the payout.
Shapley values, a method from coalitional game theory, tell us how to fairly distribute the “payout” among the features.
42. Shapley Values
The predicted price for a 50 m2, 2nd-floor apartment with a nearby park and cat ban is €300,000.
Our goal is to explain how each of these feature values contributed to the prediction.
43. Definition: Shapley Value
The Shapley value is the average marginal contribution of a feature value across all possible coalitions.
Both the magnitude and the sign of the contributions are important:
• If a feature has a larger contribution than another, it has a larger influence on the model’s prediction for the observation of interest.
• The sign of the contribution indicates whether the feature contributes towards increasing (if positive) or decreasing (if negative) the model’s output.
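To make the definition concrete, here is a small self-contained sketch (an illustrative toy, not from the original slides) that computes exact Shapley values by enumerating all coalitions; the hypothetical price model, instance and baseline values are assumptions:

# Toy exact Shapley computation by enumerating all coalitions.
# The price model, instance and baseline below are illustrative assumptions.
from itertools import combinations
from math import factorial

def model(x):
    # Hypothetical price model over: area (m2), floor, park-nearby, cat-banned
    return 3000 * x[0] + 10000 * x[1] + 30000 * x[2] - 20000 * x[3]

instance = [50, 2, 1, 1]   # the apartment whose prediction we want to explain
baseline = [40, 0, 0, 0]   # "average" apartment used for features absent from a coalition

def value(coalition):
    # Features in the coalition keep their real value; the rest use the baseline.
    x = [instance[i] if i in coalition else baseline[i] for i in range(len(instance))]
    return model(x)

n = len(instance)
for i in range(n):
    others = [j for j in range(n) if j != i]
    phi = 0.0
    for size in range(n):
        for subset in combinations(others, size):
            weight = factorial(size) * factorial(n - size - 1) / factorial(n)
            phi += weight * (value(set(subset) | {i}) - value(set(subset)))
    print(f"Shapley value of feature {i}: {phi:,.0f}")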
44. Contribution
One sample repetition to estimate the contribution of “cat-banned” to the prediction when it is added to the coalition of “park-nearby” and “area-50”.
46. SHAP = SHapley Additive exPlanations
The goal of SHAP is to explain the prediction of an instance x by computing the contribution of each feature to the prediction.
The SHAP explanation method computes Shapley values from coalitional game theory.
The feature values of a data instance act as players in a coalition. Shapley values tell us how to fairly distribute the “payout” (= the prediction) among the features.
A player can be an individual feature value, e.g. for tabular data. A player can also be a group of feature values.
47. SHAP Code Example
import shap
from sklearn.ensemble import RandomForestRegressor

# X_train, Y_train: training features and targets, assumed to be defined earlier
model = RandomForestRegressor(max_depth=6, random_state=0, n_estimators=10)
model.fit(X_train, Y_train)

# TreeExplainer computes Shapley values efficiently for tree ensembles
shap_values = shap.TreeExplainer(model).shap_values(X_train)

# Bar chart of the mean |SHAP value| per feature = global feature importance
shap.summary_plot(shap_values, X_train, plot_type="bar")
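The same explainer can also explain a single prediction rather than the whole training set. A hedged follow-up sketch building on the code above (it assumes X_train is a pandas DataFrame; the force plot is a standard shap function, but this exact usage is not from the original slides):

# Hedged sketch: local explanation of one prediction with a SHAP force plot.
explainer = shap.TreeExplainer(model)
shap.initjs()  # enables the interactive plot in notebooks
shap.force_plot(explainer.expected_value, explainer.shap_values(X_train)[0, :], X_train.iloc[0, :])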
50. ELI5
ELI5 is a Python library which allows you to visualize and debug various Machine Learning models using a unified API. It has built-in support for several ML frameworks and provides a way to explain black-box models.
51. Example: Who survived on the Titanic?
Features:
• Age: Age
• Cabin: Cabin
• Embarked: Port of Embarkation (C = Cherbourg; Q = Queenstown; S = Southampton)
• Fare: Passenger Fare
• Name: Name
• Parch: Number of Parents/Children Aboard
• Pclass: Passenger Class (1 = 1st; 2 = 2nd; 3 = 3rd)
• Sex: Sex
• SibSp: Number of Siblings/Spouses Aboard
• Survived: Survival (0 = No; 1 = Yes)
• Ticket: Ticket Number
52. ELI5 Code Example
from eli5 import show_weights
from xgboost import XGBClassifier

# train_xs, train_ys: the encoded Titanic training data; vec: the vectorizer used
# to encode categorical features (all assumed to be defined earlier)
clf = XGBClassifier()
clf.fit(train_xs, train_ys)

# Global feature weights, displayed with the original feature names via vec
show_weights(clf, vec=vec)
53. Show_weights() Output Example
Weight Feature
0.4278 Sex=female
0.1949 Pclass=3
0.0665 Embarked=S
0.0510 Pclass=2
0.0420 SibSp
0.0417 Cabin=
0.0385 Embarked=C
0.0358 Ticket=1601
0.0331 Age
0.0323 Fare
0.0220 Pclass=1
0.0143 Parch
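ELI5 can also explain an individual prediction rather than the global weights. A hedged sketch building on the code above (show_prediction is an eli5 function; the specific passenger and variable names are assumptions):

# Hedged sketch: explain why the model predicts survival for one passenger.
from eli5 import show_prediction

# valid_xs is assumed to hold held-out passengers, encoded the same way as train_xs
show_prediction(clf, valid_xs[0], vec=vec, show_feature_values=True)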