Slides for an Arithmer Seminar talk given by Dr. Daisuke Sato (Arithmer) at Arithmer Inc.
The topic is explainable AI.
The Arithmer Seminar is held weekly; professionals from inside and outside our company give lectures on their respective areas of expertise.
These slides were made by a lecturer from outside our company and are shared here with the lecturer's permission.
Arithmer Inc. is a mathematics company that grew out of the Graduate School of Mathematical Sciences at the University of Tokyo. We apply modern mathematics to bring advanced AI systems to solutions in a wide range of fields. Our job is to think about how to use AI well, to make work more efficient, and to produce results that are useful to people and society.
An Introduction to XAI! Towards Trusting Your ML Models! by Mansour Saffar
Machine learning (ML) is currently disrupting almost every industry and is used as the core component of many systems. The decisions these systems make can have a great impact on society and on specific individuals, so the decision-making process has to be clear and explainable for humans to trust it. Explainable AI (XAI) is a rather new field of ML in which researchers develop methods that can explain the decision-making process behind ML models. In this talk, we'll learn the fundamentals of XAI and discuss why we need to start integrating XAI with our ML models!
Presented at the Edmonton Data Science Meetup on October 2nd, 2019. Learn more: https://youtu.be/gEkPXOsDt_w
An introductory presentation on Explainable AI that lays out its main motivations and importance. We briefly describe the main techniques available as of March 2020 and share many references so the reader can continue studying.
[Video recording available at https://www.youtube.com/playlist?list=PLewjn-vrZ7d3x0M4Uu_57oaJPRXkiS221]
Artificial Intelligence is increasingly playing an integral role in determining our day-to-day experiences. Moreover, with the proliferation of AI-based solutions in areas such as hiring, lending, criminal justice, healthcare, and education, the resulting personal and professional implications of AI are far-reaching. The dominant role played by AI models in these domains has led to growing concern about potential bias in these models, and a demand for model transparency and interpretability. In addition, model explainability is a prerequisite for building trust in and adoption of AI systems in high-stakes domains requiring reliability and safety, such as healthcare and automated transportation, and in critical industrial applications with significant economic implications, such as predictive maintenance, exploration of natural resources, and climate change modeling.
As a consequence, AI researchers and practitioners have focused their attention on explainable AI to help them better trust and understand models at scale. The challenges for the research community include (i) defining model explainability, (ii) formulating explainability tasks for understanding model behavior and developing solutions for these tasks, and finally (iii) designing measures for evaluating the performance of models in explainability tasks.
In this tutorial, we present an overview of model interpretability and explainability in AI, key regulations and laws, and techniques and tools for providing explainability as part of AI/ML systems. Then, we focus on the application of explainability techniques in industry, presenting practical challenges and guidelines for using explainability techniques effectively, along with lessons learned from deploying explainable models for several web-scale machine learning and data mining applications. We present case studies across different companies, spanning application domains such as search and recommendation systems, hiring, sales, and lending. Finally, based on our experiences in industry, we identify open problems and research directions for the data mining and machine learning community.
Explainable AI (XAI) is becoming a must-have non-functional requirement (NFR) for most AI-enabled product and solution deployments. Keen to hear viewpoints and explore collaboration opportunities.
Spark 2019: Equifax's SVP Data & Analytics, Peter Maynard, discusses the notion (and importance) of explainable AI in the financial services sector. He looks at the work Equifax has done to crack open the black box by creating patented AI technology that helps companies make smarter, explainable decisions using AI.
Artificial Intelligence is increasingly playing an integral role in determining our day-to-day experiences. Moreover, with the proliferation of AI-based solutions in areas such as hiring, lending, criminal justice, healthcare, and education, the resulting personal and professional implications of AI are far-reaching. The dominant role played by AI models in these domains has led to growing concern about potential bias in these models, and a demand for model transparency and interpretability. In addition, model explainability is a prerequisite for building trust in and adoption of AI systems in high-stakes domains requiring reliability and safety, such as healthcare and automated transportation, as well as critical industrial applications with significant economic implications, such as predictive maintenance, exploration of natural resources, and climate change modeling.
As a consequence, AI researchers and practitioners have focused their attention on explainable AI to help them better trust and understand models at scale. The challenges for the research community include (i) defining model explainability, (ii) formulating explainability tasks for understanding model behavior and developing solutions for these tasks, and finally (iii) designing measures for evaluating the performance of models in explainability tasks.
In this tutorial, we will first motivate the need for model interpretability and explainability in AI from societal, legal, customer/end-user, and model developer perspectives. [Note: Due to time constraints, we will not focus on techniques/tools for providing explainability as part of AI/ML systems.] Then, we will focus on the real-world application of explainability techniques in industry, presenting practical challenges and implications of using explainability techniques effectively, along with lessons learned from deploying explainable models for several web-scale machine learning and data mining applications. We will present case studies across different companies, spanning application domains such as search and recommendation systems, sales, lending, and fraud detection. Finally, based on our experiences in industry, we will identify open problems and research directions for the research community.
Explainable AI makes algorithms transparent: their behavior can be interpreted, visualized, and explained, and they can be integrated into fair, secure, and trustworthy AI applications.
Explainable Artificial Intelligence (XAI)
Presented at the Lightning Talk session at ICACCI'18 on 20th September 2018.
An Explainable AI (XAI), or Transparent AI, is an artificial intelligence whose actions can be easily understood by humans. It contrasts with the "black box" concept in machine learning and concerns the interpretability of the workings of complex algorithms, where even their designers cannot explain why the AI arrived at a specific decision.
https://en.wikipedia.org/wiki/Explainable_Artificial_Intelligence
These slides were presented at a meetup in Kansas City by Bahador Khaleghi of H2O.ai.
More details can be viewed here: https://www.meetup.com/Kansas-City-Artificial-Intelligence-Deep-Learning/events/265662978/
https://github.com/telecombcn-dl/lectures-all/
These slides review techniques for interpreting the behavior of deep neural networks. The talk covers basic techniques, such as displaying filters and tensors, as well as more advanced ones that try to identify which part of the input data is responsible for a prediction, or that generate inputs maximizing the activation of certain neurons.
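As a rough illustration of the input-attribution idea mentioned above (my own sketch, not code from the slides), a vanilla gradient saliency map can be computed by back-propagating the top class score to the input pixels. The model choice and the random input below are placeholders:

```python
import torch
import torchvision.models as models

# Placeholder pretrained classifier; the slides do not prescribe a model.
# (Requires torchvision >= 0.13 for the weights enum.)
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

# Dummy tensor standing in for a preprocessed image batch (1, 3, 224, 224).
x = torch.randn(1, 3, 224, 224, requires_grad=True)

# Forward pass, then back-propagate the winning class score to the input.
scores = model(x)
scores[0, scores.argmax()].backward()

# Saliency: per-pixel maximum absolute gradient across the color channels.
saliency = x.grad.abs().max(dim=1).values
print(saliency.shape)  # torch.Size([1, 224, 224])
```

Bright regions of the resulting map mark pixels whose small changes would most affect the class score, which is the simplest form of the attribution methods the talk surveys.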
It's a well-known fact that the best explanation of a simple model is the model itself. But we often use complex models, such as ensemble methods or deep networks, which cannot serve as their own explanations because they are not easy to understand.
In this talk, we discuss how methods for interpreting model predictions work and try to understand the practical value of these methods.
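To make the point above concrete, here is a minimal sketch (my own illustration, not the talk's code) of one such method, a global surrogate: train an interpretable tree to mimic a black-box model and measure how faithfully it agrees with it. The dataset and model choices are assumptions:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)

# The "complex" model we want to explain.
black_box = GradientBoostingClassifier(random_state=0).fit(X, y)

# Global surrogate: a shallow tree trained on the black box's own outputs.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how often the surrogate agrees with the black box.
fidelity = accuracy_score(black_box.predict(X), surrogate.predict(X))
print(f"surrogate fidelity: {fidelity:.2f}")
print(export_text(surrogate))  # the surrogate itself is human-readable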
Explainable AI: making ML and DL models more interpretable, by Aditya Bhattacharya
Abstract:
Industries have started to adopt AI and machine learning in almost every sector to solve complex business problems, but are these models always trustworthy? Machine learning models are not oracles; they are scientific methods and mathematical models that best describe the data. And science is all about explaining complex natural phenomena in the simplest way possible! So, can we make ML and DL models interpretable enough that any business user can understand them and trust their results?
To find out, please join me in this session, in which I will talk about the concepts of Explainable AI and discuss the necessity and principles that help us demystify black-box AI models. I will cover popular approaches used to make classical machine learning interpretable, such as feature importance, key influencers, and decomposition trees. We will discuss techniques used for deep learning model interpretation, such as saliency maps, Grad-CAMs, and visual attention maps, and finally go into more detail on frameworks like LIME, SHAP, ELI5, SKATER, and TCAV, which help us make machine learning and deep learning models more interpretable, trustworthy, and useful!
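For a flavor of the SHAP framework named above, a minimal sketch might look like the following; the synthetic data and the tree-ensemble model are illustrative choices of mine, not material from the talk:

```python
import shap
import xgboost
from sklearn.datasets import make_regression

# Synthetic data and an illustrative tree-ensemble model.
X, y = make_regression(n_samples=500, n_features=8, random_state=0)
model = xgboost.XGBRegressor(n_estimators=100).fit(X, y)

# TreeExplainer computes SHAP values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:100])

# One row per sample, one signed contribution per feature; the
# contributions plus the expected value sum (approximately) to the
# model's prediction for each row.
print(shap_values.shape)  # (100, 8)
```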
Interpretable machine learning describes the process of revealing the causes of predictions and explaining a derived decision in a way that is understandable to humans. The ability to understand the causes that lead to a certain prediction enables data scientists to ensure that a model is consistent with the domain knowledge of an expert. Furthermore, interpretability is critical for obtaining trust in a model and for tackling problems like unfair bias or discrimination against particular subgroups. This talk gives an introduction to the concept of interpretability and an overview of popular interpretability techniques.
Speaker: Marcel Spitzer, inovex
Event: Kaggle Munich Meetup, 20.11.2018
More tech talks: www.inovex.de/vortraege
More tech articles: www.inovex.de/blog
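One of the popular techniques a survey like the inovex talk typically includes is permutation importance. As a rough sketch under assumed data and model choices (not the speaker's code):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

# Shuffle one feature at a time on held-out data; the resulting score
# drop measures how much the model relies on that feature.
result = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
top = result.importances_mean.argsort()[::-1][:5]
print(top, result.importances_mean[top])
```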
As AI becomes more and more prevalent in our lives, the decisions it makes for us have a growing impact on us and on those around us.
How can we help people trust the models we're building? The field of Explainable AI focuses on making any machine learning model interpretable by non-experts.
And then there were ... Large Language Models, by Leon Dohmen
It is not often, even in the ICT world, that one witnesses a revolution. The rise of the personal computer, the rise of mobile telephony and, of course, the rise of the Internet are some of those revolutions. So what is ChatGPT really? Is ChatGPT also such a revolution? And, like any revolution, does ChatGPT have its winners and losers? Who are they? How do we ensure that ChatGPT contributes a positive impulse to "Smart Humanity"?
During keynotes on April 3 and 13, 2023, Piek Vossen explained the impact of Large Language Models like ChatGPT.
Prof. Piek Th.J.M. Vossen is Full Professor of Computational Lexicology at the Faculty of Humanities, Department of Language, Literature and Communication (LCC), VU Amsterdam.
What is ChatGPT? What technology and thought processes underlie it? What are its consequences? What choices are being made? In the presentation, Piek elaborates on the basic principles behind Large Language Models and how they are used as a basis for deep learning, in which they are fine-tuned for specific tasks. He also discusses GPT, the specific variant that underlies ChatGPT, covering what ChatGPT can and cannot do, what it is good for, and what the risks are.
Interpreting deep learning and machine learning models is not just another regulatory burden to be overcome. The scientists, physicians, researchers, and analysts who use these technologies for their important work have the right to trust and understand their models and the answers they generate. This talk is an overview of several techniques for interpreting deep learning and machine learning models and telling stories from their results.
Speaker: Patrick Hall is a Data Scientist and Product Engineer at H2O.ai. He's also an Adjunct Professor in the Department of Decision Sciences at George Washington University. Prior to joining H2O, Patrick spent many years as a Senior Data Scientist at SAS and has worked with many Fortune 500 companies on their data science and machine learning problems. https://www.linkedin.com/in/jpatrickhall
Presented at #H2OWorld 2017 in Mountain View, CA.
Enjoy the video: https://youtu.be/TBJqgvXYhfo.
Learn more about H2O.ai: https://www.h2o.ai/.
Follow @h2oai: https://twitter.com/h2oai.
- - -
Abstract:
Machine learning is at the forefront of many recent advances in science and technology, enabled in part by the sophisticated models and algorithms that have recently been introduced. However, as a consequence of this complexity, machine learning essentially acts as a black box as far as users are concerned, making it incredibly difficult to understand, predict, or "trust" model behavior. In this talk, I will describe our research on approaches that explain the predictions of ANY classifier in an interpretable and faithful manner.
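The abstract above describes model-agnostic local explanation. A minimal sketch in that spirit, using the open-source lime package with an assumed dataset and classifier (not code from the talk):

```python
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

# Any classifier works: the explainer only needs a predict_proba function.
data = load_iris()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

explainer = LimeTabularExplainer(
    data.data,
    feature_names=data.feature_names,
    class_names=list(data.target_names),
)

# Explain a single prediction by fitting a sparse linear model locally.
exp = explainer.explain_instance(data.data[0], model.predict_proba, num_features=2)
print(exp.as_list())  # per-feature weights for this one prediction
```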
Sameer's Bio:
Dr. Sameer Singh is an Assistant Professor of Computer Science at the University of California, Irvine. He is working on large-scale and interpretable machine learning applied to natural language processing. Sameer was a Postdoctoral Research Associate at the University of Washington and received his PhD from the University of Massachusetts, Amherst, during which he also worked at Microsoft Research, Google Research, and Yahoo! Labs on massive-scale machine learning. He was awarded the Adobe Research Data Science Faculty Award, was selected as a DARPA Riser, won the grand prize in the Yelp dataset challenge, and received the Yahoo! Key Scientific Challenges fellowship. Sameer has published extensively at top-tier machine learning and natural language processing conferences. (http://sameersingh.org)
Presenting the landscape of AI/ML in 2023: a quick summary of the last 10 years of progress, the current situation, and a look at what is happening behind the scenes.
This material summarizes the Counterfactual Explanation session from the "Explainable AI" program (설명가능한 인공지능 기획!) held during the 18th cohort of 풀잎스쿨.
It was compiled from papers, YouTube videos, and the resource below:
https://christophm.github.io/interpretable-ml-book/
By popular demand, here is a case study of my first Kaggle competition from about a year ago. Hope you find it useful. Thank you again to my fantastic team.
Sentiment analysis for NLP with the Naive Bayes rule, random forests, support vector machines, and much more.
Thanks for your time. If you enjoyed this short slide deck, there are tons of topics in advanced analytics, data science, and machine learning available in my Medium repo: https://medium.com/@bobrupakroy
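As a sketch of the kind of pipeline described above (a toy corpus of my own; the original works with real data):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy corpus; a real sentiment task would use a labeled dataset.
texts = ["great product, loved it", "terrible, waste of money",
         "absolutely fantastic", "worst purchase ever"]
labels = [1, 0, 1, 0]  # 1 = positive, 0 = negative

# TF-IDF features feeding a Naive Bayes classifier.
clf = make_pipeline(TfidfVectorizer(), MultinomialNB())
clf.fit(texts, labels)
print(clf.predict(["what a fantastic deal", "money wasted"]))
```

Swapping MultinomialNB for a random forest or a linear SVM reuses the same pipeline, which is why these methods are often compared side by side.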
ChatGPT
Data analysis is the process of inspecting, cleaning, transforming, and modeling data to discover useful information, draw conclusions, and support decision-making. It involves applying various techniques and methods to extract insights from data sets, often with the goal of uncovering patterns, trends, relationships, or making predictions.
Here's an overview of the key steps and techniques involved in data analysis:
Data Collection: The first step in data analysis is gathering relevant data from various sources. This can include structured data from databases, spreadsheets, or surveys, as well as unstructured data such as text documents, social media posts, or sensor readings.
Data Cleaning and Preprocessing: Once the data is collected, it often needs to be cleaned and preprocessed to ensure its quality and suitability for analysis. This involves handling missing values, removing duplicates, addressing inconsistencies, and transforming data into a suitable format for analysis.
Exploratory Data Analysis (EDA): EDA involves examining and understanding the data through summary statistics, visualizations, and statistical techniques. It helps identify patterns, distributions, outliers, and potential relationships between variables. EDA also helps in formulating hypotheses and guiding further analysis.
Data Modeling and Statistical Analysis: In this step, various statistical techniques and models are applied to the data to gain deeper insights. This can include descriptive statistics, inferential statistics, hypothesis testing, regression analysis, time series analysis, clustering, classification, and more. The choice of techniques depends on the nature of the data and the research questions being addressed.
Data Visualization: Data visualization plays a crucial role in data analysis. It involves creating meaningful and visually appealing representations of data through charts, graphs, plots, and interactive dashboards. Visualizations help in communicating insights effectively and spotting trends or patterns that may be difficult to identify in raw data.
Interpretation and Conclusion: Once the analysis is performed, the findings need to be interpreted in the context of the problem or research objectives. Conclusions are drawn based on the results, and recommendations or insights are provided to stakeholders or decision-makers.
Reporting and Communication: The final step is to present the results and findings of the data analysis in a clear and concise manner. This can be in the form of reports, presentations, or interactive visualizations. Effective communication of the analysis results is crucial for stakeholders to understand and make informed decisions based on the insights gained.
Data analysis is widely used in various fields, including business, finance, marketing, healthcare, social sciences, and more. It plays a crucial role in extracting value from data, supporting evidence-based decision-making, and driving actionable insights.
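A compact sketch of the cleaning and exploratory steps described above; the file and column names are hypothetical, nothing here comes from the original text:

```python
import pandas as pd

# Hypothetical input file; substitute your own dataset.
df = pd.read_csv("sales.csv")

# Cleaning: drop exact duplicates, fill missing numeric values with medians.
df = df.drop_duplicates()
num_cols = df.select_dtypes("number").columns
df[num_cols] = df[num_cols].fillna(df[num_cols].median())

# Exploratory analysis: summary statistics and a simple grouped aggregate
# ("region" and "revenue" are assumed column names).
print(df.describe())
print(df.groupby("region")["revenue"].mean())
```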
Using R in Kaggle Competitions.
Kaggle has been the most popular data science platform, linking close to half a million data scientists worldwide. Learn how to get yourself a decent ranking in Kaggle competitions with R programming, eXtreme Gradient Boosting (XGBoost), and a laptop. Great machine learning tools for all levels to get started and learn. Find out how to perform feature engineering, tune XGB models, set up sizable cross-validation, and build model ensembles.
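The talk itself uses R; as an illustration of the same workflow (an XGBoost model judged by sizable cross-validation), a Python equivalent might look like the following. All hyperparameters are illustrative, not the speaker's:

```python
import xgboost as xgb
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=5000, n_features=20, random_state=42)

# Illustrative, untuned hyperparameters.
model = xgb.XGBClassifier(
    n_estimators=300, max_depth=6, learning_rate=0.05, subsample=0.8
)

# 5-fold cross-validation gives a trustworthy score estimate before
# committing to a submission; ensembling averages several such models.
scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
print(f"CV accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```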
These slides were prepared to explain the fundamental technologies behind our ArithmerDX solution.
These slides were used at an Arithmer Seminar given by Dr. Masaaki Uesaka at Arithmer Inc.
They summarize recent methods for quality assurance of machine learning models.
These slides were used at an Arithmer Seminar by our ArithmerDX team.
They were prepared to explain the fundamental technologies behind our ArithmerDX solution.
These slides were used at an Arithmer Seminar by our ArithmerR3 team.
They were prepared to explain the fundamental technologies behind our ArithmerR3 solution.
Weakly supervised semantic segmentation of 3D point clouds, by Arithmer Inc.
Slides for a study session given by Dr. Daisuke Sato at Arithmer Inc.
They summarize methods for semantic segmentation of 3D point clouds using 2D weakly supervised learning.
These slides were used at an Arithmer Seminar by our NLP team.
They were prepared to explain the fundamental technologies behind our Arithmer NLP solution.
These slides were used at an Arithmer Seminar by our Robo team.
They were prepared to explain the fundamental technologies behind our Arithmer Robo solution.
These slides are from a web seminar on our NLP solution and AI chatbot, held on July 20, 2021.
"Arithmer Seminar" is weekly held, where professionals from within and outside our company give lectures on their respective expertise.
Arithmer株式会社は東京大学大学院数理科学研究科発の数学の会社です。私達は現代数学を応用して、様々な分野のソリューションに、新しい高度AIシステムを導入しています。AIをいかに上手に使って仕事を効率化するか、そして人々の役に立つ結果を生み出すのか、それを考えるのが私たちの仕事です。
Arithmer began at the University of Tokyo Graduate School of Mathematical Sciences. Today, our research of modern mathematics and AI systems has the capability of providing solutions when dealing with tough complex issues. At Arithmer we believe it is our job to realize the functions of AI through improving work efficiency and producing more useful results for society.
These slides were used at an Arithmer Seminar by our R3 team (3D solution development team).
They were prepared to explain the fundamental technologies behind our ArithmerR3 solution.
VIBE: Video Inference for Human Body Pose and Shape Estimation, by Arithmer Inc.
These slides were prepared for a study session given by Christian Saravia at Arithmer Inc.
They summarize recent methods for human pose and shape estimation from video.
These slides were used at an Arithmer Seminar by our Inspection team (image recognition development team).
They were prepared to explain the fundamental technologies behind our Arithmer Inspection solution.
These slides were used by our colleague Umemoto at an internal technical study session.
They explain the Transformer, an architecture that has attracted much attention in recent years.
"Arithmer Seminar" is weekly held, where professionals from within and outside our company give lectures on their respective expertise.
The slides are made by the lecturer from outside our company, and shared here with his/her permission.
Arithmer株式会社は東京大学大学院数理科学研究科発の数学の会社です。私達は現代数学を応用して、様々な分野のソリューションに、新しい高度AIシステムを導入しています。AIをいかに上手に使って仕事を効率化するか、そして人々の役に立つ結果を生み出すのか、それを考えるのが私たちの仕事です。
Arithmer began at the University of Tokyo Graduate School of Mathematical Sciences. Today, our research of modern mathematics and AI systems has the capability of providing solutions when dealing with tough complex issues. At Arithmer we believe it is our job to realize the functions of AI through improving work efficiency and producing more useful results for society.
These slides were used by our colleague Suzuki at the Arithmer Seminar on March 25, 2021.
They were made to help sales members and non-NLP engineers understand the basics of our NLP solution.
"Arithmer Seminar" is weekly held, where professionals from within and outside our company give lectures on their respective expertise.
Arithmer株式会社は東京大学大学院数理科学研究科発の数学の会社です。私達は現代数学を応用して、様々な分野のソリューションに、新しい高度AIシステムを導入しています。AIをいかに上手に使って仕事を効率化するか、そして人々の役に立つ結果を生み出すのか、それを考えるのが私たちの仕事です。
Arithmer began at the University of Tokyo Graduate School of Mathematical Sciences. Today, our research of modern mathematics and AI systems has the capability of providing solutions when dealing with tough complex issues. At Arithmer we believe it is our job to realize the functions of AI through improving work efficiency and producing more useful results for society.
Introduction to Quantum Annealing and D-Wave Machines, by Arithmer Inc.
These slides were used for an Arithmer Seminar in April 2021 by Dr. Yuki Bando.
They introduce quantum computers, the D-Wave series, and their application to optimization problems in industry.
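As a taste of the optimization problems these machines target, a tiny QUBO can be written and solved exactly with the open-source dimod package. This is an illustrative aside of mine, not material from the slides:

```python
import dimod

# Tiny QUBO: minimize -x0 - x1 + 2*x0*x1 over binary variables.
bqm = dimod.BinaryQuadraticModel({0: -1.0, 1: -1.0}, {(0, 1): 2.0}, 0.0, dimod.BINARY)

# Brute-force reference solver, practical only for a handful of variables;
# a D-Wave machine samples the same kind of model by quantum annealing.
best = dimod.ExactSolver().sample(bqm).first
print(best.sample, best.energy)  # one of {0: 1, 1: 0} or {0: 0, 1: 1}, energy -1.0
```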
"Arithmer Seminar" is weekly held, where professionals from within and outside our company give lectures on their respective expertise.
The slides are made by the lecturer from outside our company, and shared here with his/her permission.
Arithmer株式会社は東京大学大学院数理科学研究科発の数学の会社です。私達は現代数学を応用して、様々な分野のソリューションに、新しい高度AIシステムを導入しています。AIをいかに上手に使って仕事を効率化するか、そして人々の役に立つ結果を生み出すのか、それを考えるのが私たちの仕事です。
Arithmer began at the University of Tokyo Graduate School of Mathematical Sciences. Today, our research of modern mathematics and AI systems has the capability of providing solutions when dealing with tough complex issues. At Arithmer we believe it is our job to realize the functions of AI through improving work efficiency and producing more useful results for society.
These slides were used by our colleague Ishibashi at the Arithmer Seminar on January 14, 2021.
They were made to help sales members and non-OCR engineers understand the basics of our OCR solution.
"Arithmer Seminar" is weekly held, where professionals from within and outside our company give lectures on their respective expertise.
Arithmer株式会社は東京大学大学院数理科学研究科発の数学の会社です。私達は現代数学を応用して、様々な分野のソリューションに、新しい高度AIシステムを導入しています。AIをいかに上手に使って仕事を効率化するか、そして人々の役に立つ結果を生み出すのか、それを考えるのが私たちの仕事です。
Arithmer began at the University of Tokyo Graduate School of Mathematical Sciences. Today, our research of modern mathematics and AI systems has the capability of providing solutions when dealing with tough complex issues. At Arithmer we believe it is our job to realize the functions of AI through improving work efficiency and producing more useful results for society.
These slides were used by our colleague Uesaka at the Arithmer Seminar on January 21, 2021.
They were made to help sales members and non-Dynamics engineers understand the basics of our Dynamics (video analysis) solution.
"Arithmer Seminar" is weekly held, where professionals from within and outside our company give lectures on their respective expertise.
Arithmer株式会社は東京大学大学院数理科学研究科発の数学の会社です。私達は現代数学を応用して、様々な分野のソリューションに、新しい高度AIシステムを導入しています。AIをいかに上手に使って仕事を効率化するか、そして人々の役に立つ結果を生み出すのか、それを考えるのが私たちの仕事です。
Arithmer began at the University of Tokyo Graduate School of Mathematical Sciences. Today, our research of modern mathematics and AI systems has the capability of providing solutions when dealing with tough complex issues. At Arithmer we believe it is our job to realize the functions of AI through improving work efficiency and producing more useful results for society.
These slides were used by our colleague Sone at the Arithmer Seminar on January 28, 2021.
They were made to help sales members and non-DB engineers understand the basics of our DB solution.
"Arithmer Seminar" is weekly held, where professionals from within and outside our company give lectures on their respective expertise.
Arithmer株式会社は東京大学大学院数理科学研究科発の数学の会社です。私達は現代数学を応用して、様々な分野のソリューションに、新しい高度AIシステムを導入しています。AIをいかに上手に使って仕事を効率化するか、そして人々の役に立つ結果を生み出すのか、それを考えるのが私たちの仕事です。
Arithmer began at the University of Tokyo Graduate School of Mathematical Sciences. Today, our research of modern mathematics and AI systems has the capability of providing solutions when dealing with tough complex issues. At Arithmer we believe it is our job to realize the functions of AI through improving work efficiency and producing more useful results for society.
Slides for a study session given by Christian Saravia at Arithmer Inc.
They summarize methods for video summarization using attention mechanisms.
Slides for a study session given by Dr. Enrico Rinaldi at Arithmer Inc.
They summarize SMPL, an established method for parametric modeling of the 3D human body with many possible applications in the apparel and healthcare industries.
Slides for a study session given by Dr. Enrico Rinaldi at Arithmer Inc.
They summarize YOLACT, a recent method for real-time instance segmentation that is especially useful in robotics.
Explainable AI
1. 2020/07/11
Explainable AI (XAI)
Daisuke Sato
Reference (Survey on XAI)
• A. Adadi, M. Berrada, https://ieeexplore.ieee.org/document/8466590/
• Finale Doshi-Velez, Been Kim, arXiv:1702.08608
Codes
Notebooks in Kaggle by Daisuke Sato
• “Random forest/XGBoost, and LIME/SHAP with Titanic”
• “Predict Housing price”
2. Self-Introduction
• Daisuke Sato, Ph.D.
• Graduate school
• Kyoto University
• Research topic: Theoretical Physics (Quark-Gluon Plasma)
• Previous jobs
• Postdoc (US, Italy, Germany)
• Current position
• CTO
3. Contents
• Concept/Motivation
• Recent trends on XAI
• Method 1: LIME/SHAP
• Example: Classification
• Example: Regression
• Example: Image classification
• Method 2: ABN for image classification
4. Concept/Motivation
We want AI to be explainable because…
Generally speaking, AI is a black box.
1. Users should trust AI in order to actually use it (the prediction itself, or the model)
Ex: diagnosis/medical checks, credit screening
People want to know why they were rejected by the AI screening,
and what they should do in order to pass it.
G. Tolomei et al., arXiv:1706.06691
5. 2. It helps to choose a model from several candidates
Classifier of text into “Christianity” or “Atheism”
Green: Christianity
Magenta: Atheism
Both models give the correct classification,
but it is apparent that model 1 is better than model 2.
Concept/Motivation
6. Cf: Famous example of “husky or wolf”
3. It is useful for finding overfitting, when the training data differ from the test data
The training dataset contains pictures of wolves with snowy backgrounds.
As a result, the classifier trained on that dataset outputs “wolf” whenever the input image contains snow.
Concept/Motivation
7. Contents
• Concept/Motivation
• Recent trends on XAI
• Method 1: LIME/SHAP
• Example: Classification
• Example: Regression
• Example: Image classification
• Method 2: ABN for image classification
8. Recent trends
# of papers which include
one of the explanation-related words (“intelligible”, “interpretable”, …)
AND
one of the AI-related words (“machine learning”, “deep learning”, …)
FROM
7 repositories (arXiv, Google Scholar, …)
Researchers have been studying XAI more and more in recent years.
9. Contents
• Concept/Motivation
• Recent trends on XAI
• Method 1: LIME/SHAP
• Example: Classification
• Example: Regression
• Example: Image classification
• Method 2: ABN for image classification
10. Method 1: Local Interpretable Model-agnostic Explanations (LIME)
Targets: classifiers and regressors
M. Ribeiro, S. Singh, C. Guestrin, arXiv:1602.04938
Basic idea: approximate the original ML model with an interpretable model
(a linear model or a DT) in the vicinity of a specific input 𝑥.
11. Method 1: Local Interpretable Model-agnostic Explanations (LIME)
Original model f: features x ∈ ℝ^d ↦ prediction f(x) ∈ ℝ
Interpretable model g: absence/presence of features z′ ∈ {0,1}^{d′} ↦ prediction g(z′) ∈ ℝ
Sampling weights: π_x(z) = exp(−D(x, z)² / σ²)
LIME minimizes a loss with two terms:
• the distance between the predictions of the original model (f) and its approximation (g), weighted by π_x
• the complexity of the interpretable model (depth of trees for a DT / # of non-zero weights for a linear model)
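As a concrete illustration of this objective, here is a minimal sketch of the LIME procedure for tabular data in Python (my own toy version, not the lime library; the zero "background" value, the Euclidean distance D, the kernel width sigma, and the Ridge surrogate are all illustrative assumptions):

import numpy as np
from sklearn.linear_model import Ridge

def lime_explain(f, x, num_samples=5000, sigma=1.0, seed=0):
    """Fit an interpretable surrogate g approximating model f around input x."""
    rng = np.random.default_rng(seed)
    d = x.shape[0]
    # z': binary vectors marking absence/presence of each feature
    z_prime = rng.integers(0, 2, size=(num_samples, d))
    # Map z' back to input space: present features keep x's value,
    # absent features are replaced by 0 (a stand-in "background" value)
    z = z_prime * x
    # Sampling weights pi_x(z) = exp(-D(x, z)^2 / sigma^2)
    distances = np.linalg.norm(z - x, axis=1)
    weights = np.exp(-(distances ** 2) / sigma ** 2)
    # Fit the linear surrogate g on (z', f(z)), weighted by pi_x
    g = Ridge(alpha=1.0)
    g.fit(z_prime, f(z), sample_weight=weights)
    return g.coef_  # one weight per feature = the local explanation

The Ridge penalty plays the role of the complexity term here; the real lime package instead limits the number of non-zero weights.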
12. Explaining an individual prediction
1. The original model predicts from features (sneeze, headache, …) whether the patient has flu or not.
2. LIME approximates the model with a linear model in the vicinity of the specific patient.
3. The weight of the linear model for each feature gives the “explanation” of the prediction.
13. Desirable properties of LIME
• Interpretable
• Local fidelity
• Model-agnostic
(the original model is not affected by LIME at all)
• Global perspective
(samples different inputs and their predictions)
14. SHapley Additive exPlanations (SHAP)
S. Lundberg, S.-I. Lee, arXiv:1705.07874
A generalization of existing XAI methods:
• LIME
• DeepLIFT (A. Shrikumar et al., arXiv:1605.01713)
• Layer-Wise Relevance Propagation (S. Bach et al., PLoS One 10.7 (2015), e0130140)
It builds on concepts from cooperative game theory:
• Shapley regression values
• Shapley sampling values
• Quantitative Input Influence
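For reference, the Shapley regression values that SHAP builds on assign to feature i (in LaTeX notation, following the Lundberg & Lee paper above):

\phi_i = \sum_{S \subseteq F \setminus \{i\}} \frac{|S|! \, (|F| - |S| - 1)!}{|F|!} \left[ f_{S \cup \{i\}}(x_{S \cup \{i\}}) - f_S(x_S) \right]

where F is the set of all features and f_S is the model trained on the feature subset S; i.e., each feature receives its marginal contribution averaged over all possible subsets.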
15. Example: Classification (Titanic dataset)
Label:
• survival: Survival (0 = No, 1 = Yes)
Features:
• pclass: Ticket class (1 = 1st, 2 = 2nd, 3 = 3rd)
• sex: Sex
• age: Age in years
• sibsp: # of siblings/spouses aboard the Titanic
• parch: # of parents/children aboard the Titanic
• ticket: Ticket number (all values are unique)
• fare: Passenger fare
• cabin: Cabin number (77% are NaN)
• embarked: Port of Embarkation (C = Cherbourg, Q = Queenstown, S = Southampton)
# of training samples = 891
# of test samples = 418
→ Train : Validation = 8 : 2
16. GBDT
Decision Tree (DT)
Features and their thresholds are selected so that the mutual information is maximized.
Entropy at a node is maximal when the samples in that node are uniformly distributed over all the classes,
so the child nodes tend to become purely distributed.
Mutual information: I_H(D_parent) − I_H(D_left) − I_H(D_right)
(entropy in the parent node minus entropy in the child nodes; see the sketch below)
[Figure: example of a split on a “boyness” feature]
• Simple to understand and interpret
• Can represent interactions among features, unlike a linear model (at least up to the tree depth)
• No need for NaN preprocessing/standardization
• Tends to overfit (if the trees are deep)
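As a toy illustration of this split criterion (my own example, not from the slides; note that the child-entropy terms are weighted by their sample fractions, which the formula above leaves implicit):

import numpy as np

def entropy(labels):
    # I_H(D): maximal when the classes are uniformly distributed
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def information_gain(parent, left, right):
    # I_H(D_parent) minus the size-weighted entropy of the child nodes
    n = len(parent)
    return (entropy(parent)
            - len(left) / n * entropy(left)
            - len(right) / n * entropy(right))

# A perfect split earns the full parent entropy of 1 bit:
parent = np.array([0, 0, 0, 1, 1, 1])
print(information_gain(parent, parent[:3], parent[3:]))  # 1.0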
17. GBDT
Gradient Boosting Decision Tree (GBDT): T. Chen, C. Guestrin, arXiv:1603.02754
In ensemble methods, the final value is calculated as the average over many trees.
How to decide the structure of the tree ensemble (forest):
1. Start with one tree
2. Add another tree so that the loss function is minimized (gradient boosting)
Cf: In a random forest, all the trees are generated in parallel, at once.
Compared with that, GBDT can learn from the mispredictions of the previous trees, so the result is expected to be better.
A very popular method in Kaggle competitions.
• The overfitting problem is reduced
• Less interpretability
• Many hyperparameters to be tuned
18. GBDT
I used one implementation of GBDT (XGBoost) as the ML model, under the following conditions:
• No standardization of numerical features (DTs do not need it)
• No preprocessing of NaN (GBDT treats NaN as it is)
• No feature engineering
• Hyperparameter tuning for n_estimators and max_depth (optuna is excellent); see the sketch below
Results:
Validation score: 0.8659217877094972
Test score: 0.77033
Best parameters: {'n_estimators': 20, 'max_depth': 12}
Not satisfying…
Cf: Baseline (all women survive, all men die): 0.76555
Cf: Reported best score with an ensemble method: 0.84210
For a review of feature-engineering know-how by experts, see the Kaggle notebook “How am I doing with my score?”
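A rough sketch of that tuning setup (the variable names X_train, y_train, X_valid, y_valid, the search ranges, and the trial count are my assumptions):

import optuna
from xgboost import XGBClassifier
from sklearn.metrics import accuracy_score

def objective(trial):
    model = XGBClassifier(
        n_estimators=trial.suggest_int("n_estimators", 10, 200),
        max_depth=trial.suggest_int("max_depth", 2, 15),
    )
    model.fit(X_train, y_train)  # NaNs left as-is: XGBoost handles them natively
    return accuracy_score(y_valid, model.predict(X_valid))

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=100)
print(study.best_params)  # e.g. {'n_estimators': 20, 'max_depth': 12}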
19. GBDT
Cf: Test scores in the Kaggle competition
Cheaters… (memorizing all the names of the survivors? Repeated trials with a GA?)
20. GBDT
Linear correlation between features and label
(explanation of the data itself, not of the model)
Correlation between Survived and the features:
Sex_female 0.543351
Pclass_3 0.322308
Pclass_1 0.285904
Fare 0.257307
Embarked_C 0.168240
Embarked_S 0.155660
Fare has a large correlation with Pclass (~0.4-0.6), as you would expect,
and also with Embarked_C (~0.27), for a reason I do not know.
This is the method to try first for selecting important features;
it works when the number of features is small.
It is likely that a model trained on this data puts large importance on the features with large correlations.
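A ranking like this is typically produced with a few lines of pandas (a sketch; df is assumed to be the one-hot-encoded training DataFrame including the Survived column):

import pandas as pd

corr = df.corr(numeric_only=True)["Survived"].drop("Survived")
print(corr.abs().sort_values(ascending=False).head(6))
# Sex_female    0.543351
# Pclass_3      0.322308
# ...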
21. GBDT
Importance of features in GBDT
(explanation of the whole model, not of a specific sample)
The importance of a feature is the average gain (in mutual information, or impurity) of the splits which use that feature.
(Only the magnitude of a feature's contribution can be read, not its sign.)
The top 3 agrees with that of the linear correlation.
Women had high priority for getting on the boats.
Passengers in 3rd class were not likely to survive.
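Gain-based importances like these can be read off the fitted model directly (a sketch; model is the XGBClassifier trained earlier):

import xgboost as xgb
import matplotlib.pyplot as plt

xgb.plot_importance(model, importance_type="gain", max_num_features=10)
plt.show()
# or as raw numbers:
print(model.get_booster().get_score(importance_type="gain"))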
22. GBDT
However, this method has a problem of “inconsistency”: when a model is changed such that a feature has a higher impact on the output, the importance of that feature can decrease.
This problem is overcome by LIME/SHAP.
23. LIME
(explanation of the model's prediction for a specific sample)
Explanation by LIME
https://qiita.com/fufufukakaka/items/d0081cd38251d22ffebf
My code did not work on the Kaggle kernel because of a bug in the LIME package,
so here I quote results from another person.
As LIME approximates the model with a linear function locally,
the weights of the features differ from sample to sample.
For this sample, the top 3 features are Sex, Age, and Embarked.
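For reference, typical lime usage for a tabular model looks like this (a sketch; X_train, X_valid, and model are assumed from the earlier setup):

from lime.lime_tabular import LimeTabularExplainer

explainer = LimeTabularExplainer(
    X_train.values,
    feature_names=list(X_train.columns),
    class_names=["died", "survived"],
    mode="classification",
)
exp = explainer.explain_instance(
    X_valid.values[0],       # one specific passenger
    model.predict_proba,     # the original model, queried as a black box
    num_features=5,
)
print(exp.as_list())  # [(feature condition, local weight), ...]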
24. SHAP
Results of SHAP
(features sorted by the mean of |SHAP value|)
Red points (Sex_female = 1: female) have positive SHAP values, while the blue points (Sex_female = 0: male) have negative ones.
→ Women are likely to survive
→ Passengers in 3rd class are likely to be killed
The top 3 does not agree with that of the linear correlation or the XGBoost score (Age enters).
Since SHAP is consistent (unlike the feature importance of XGBoost) and has local fidelity (unlike the linear correlation), I would trust the SHAP result more than the other two.
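A plot like the one above is produced with a few lines of shap (a sketch; model and X_valid are assumed from the earlier setup):

import shap

explainer = shap.TreeExplainer(model)          # fast, exact for tree ensembles
shap_values = explainer.shap_values(X_valid)   # one value per sample and feature
shap.summary_plot(shap_values, X_valid)        # features sorted by mean |SHAP value|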
25. Example: Regression (Ames Housing dataset)
Dataset describing the sale of individual residential properties in Ames, Iowa, from 2006 to 2010.
Label:
• SalePrice: the property's sale price in dollars
Features (80 in total), e.g.:
• OverallQual: rates the overall material and finish of the house (10: Very Excellent, 9: Excellent, …, 1: Very Poor)
• GarageCars: size of garage in car capacity
• KitchenAbvGr: kitchens above grade (Ex: Excellent, Gd: Good, TA: Typical/Average, Fa: Fair, Po: Poor)
# of training samples = 1460
# of test samples = 1459
→ Train : Validation = 75 : 25
26. Example: Regression (Ames Housing dataset)
Model: XGBoost
Results (metric: Root Mean Squared Error of the Log, RMSLE):
Validation: 0.13830
Test: 0.16332
Cf: Baseline (linear regression in terms of OverallQual): 1.1608
This time, the result is much better than the simple baseline.
Cf: Score with the same method (XGBoost) but with feature engineering: 0.11420
(Kaggle notebook “Stacked Regressions : Top 4% on LeaderBoard”)
[Figure: predicted vs. actual values for the validation data; the baseline prediction is discretized because OverallQual takes values 1, …, 10.]
27. Example: Regression (Ames Housing dataset)
Linear correlation between features and label
(explanation of the data itself, not of the model)
Correlation between SalePrice and the features:
OverallQual 0.790982 (rates the overall material and finish of the house)
GrLivArea 0.708624 (above grade (ground) living area in square feet)
GarageCars 0.640409 (size of garage in car capacity)
GarageArea 0.623431
TotalBsmtSF 0.613581
1stFlrSF 0.605852
FullBath 0.560664
TotRmsAbvGrd 0.533723
YearBuilt 0.522897
YearRemodAdd 0.50710
…
28. Example: Regression (Ames Housing dataset)
Importance of features in GBDT
(explanation of the whole model, not of a specific sample)
The top 3 differs from that of the linear correlation (KitchenAbvGr enters).
29. Example: Regression (Ames Housing dataset)
(explanation of the model's prediction for a specific sample)
Explanation by SHAP
GrLivArea: above grade (ground) living area in square feet
TotalBsmtSF: total square feet of basement area
The top 3 differs from that of the linear correlation and LIME.
30. Example: Image Classification
Results of LIME
The model seems to focus on the right places.
However, there are models which cannot be approximated well by LIME.
Ex: a classifier that judges whether an image is “retro” by considering the values of all the pixels (sepia tone?)
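For images, lime perturbs superpixels instead of individual features; typical usage looks like this (a sketch; image and predict_fn, a function returning class probabilities for a batch of images, are assumptions):

from lime import lime_image
from skimage.segmentation import mark_boundaries

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    image, predict_fn, top_labels=5, hide_color=0, num_samples=1000
)
img, mask = explanation.get_image_and_mask(
    explanation.top_labels[0], positive_only=True, num_features=5
)
overlay = mark_boundaries(img / 255.0, mask)  # highlights the regions LIME used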
32. Example: Image Classification
Other approaches:
• Anchor
Gives the range of feature values within which the prediction does not change
M. Ribeiro et al., Thirty-Second AAAI Conference on Artificial Intelligence, 2018
• Influence
Identifies the training data on which the prediction is based
P. Koh, P. Liang, arXiv:1703.04730
[Figure: a test image and the training images which affect its prediction]
33. Contents
• Concept/Motivation
• Recent trends on XAI
• Method 1: LIME/SHAP
• Example: Classification
• Example: Regression
• Example: Image classification
• Method 2: ABN for image classification
34. Method 2: ABN for image classification
Class Activation Mapping (CAM): B. Zhou et al., arXiv:1512.04150
• Decreases classification accuracy, because the fully-connected (Fc) layer is replaced with GAP.
Notation:
f_k(x, y): feature map of channel k at spatial coordinates (x, y)
F_k = Σ_{x,y} f_k(x, y): global average pooling (GAP) for channel k
S_c = Σ_k w_k^c F_k = Σ_{x,y} M_c(x, y): score for class c
M_c(x, y) = Σ_k w_k^c f_k(x, y): the class activation map for class c
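The equations above are easy to state in NumPy form (a sketch; the array shapes are my assumptions):

import numpy as np

def class_activation_map(f, w, c):
    # f: feature maps, shape (K, H, W); w: Fc weights, shape (C, K)
    # M_c(x, y) = sum_k w_k^c * f_k(x, y)
    return np.tensordot(w[c], f, axes=1)  # shape (H, W)

def class_score(f, w, c):
    # F_k = sum over (x, y) of f_k(x, y)  (GAP up to a 1/(H*W) factor)
    F = f.sum(axis=(1, 2))
    # S_c = sum_k w_k^c * F_k = sum over (x, y) of M_c(x, y)
    return float(w[c] @ F)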
35. Method 2: ABN for image classification
Attention Branch Network (ABN): H. Fukui et al., arXiv:1812.10025
Basic idea: split an attention branch off from the classification branch, so that Fc layers can be used in the latter.
(CAM has no Fc layer; ABN can add Fc layers!)
36. Method 2: ABN for image classification
• Improved classification accuracy, because it can use Fc layers.
• Moreover, using the attention map in the input of the perception branch and in the loss function improves accuracy.
The perception branch takes as input not only the features from the feature extractor but also the attention map.
The loss function has attention and perception terms; both consist of the cross entropy for the classification task.
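In code, the two pieces look roughly like this (a PyTorch sketch; the residual form (1 + M) * g is one of the attention mechanisms discussed in the ABN paper, and the unweighted sum of the two cross-entropy terms is an assumption of this sketch):

import torch.nn.functional as F

def perception_input(features, attention_map):
    # Perception-branch input: feature maps modulated by the attention map
    return features * (1.0 + attention_map)   # residual attention mechanism

def abn_loss(att_logits, per_logits, labels):
    # L = L_att + L_per, both cross entropies for the classification task
    return (F.cross_entropy(att_logits, labels)
            + F.cross_entropy(per_logits, labels))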
37. Method 2: ABN for image classification
Results of ABN
As I do not have domain knowledge, I cannot judge whether the model is focusing on the correct features…
Are the highlighted parts characteristic of each maker/model?
It is interesting that the attention is paid to different parts depending on the task.
The attention map improves accuracy!
38. Method 2: ABN for image classification
The model seems to be focusing on correct features.
39. Method 2: ABN for image classification
ABN and human-in-the-loop: M. Mitsuhara et al., arXiv:1905.03540
Basic idea: by modifying the attention map in ABN using human knowledge, try to improve the accuracy of the image classifier.
40. Method 2: ABN for image classification
Modifying the attention map manually helps classification.
The perception branch takes as input not only the features from the feature extractor but also the attention map.
41. Method 2: ABN for image classification
Results
Modifying the attention map improved the classification accuracy
42. Method 2: ABN for image classification
ABN with HITL
The feature extractor is not updated (fine-tuning).
43. Method 2: ABN for image classification
Results
HITL improves the attention map (though not very visibly), and also the classification accuracy.
44. Method 2: ABN for image classification
HITL improves the attention map (focusing on the relevant parts rather than the whole body), and also the classification accuracy.
45. Method 2: ABN for image classification
Unlike LIME, ABN is not model-agnostic.
HITL is possible precisely because the explanation (the attention map) is produced as part of the model in ABN and is reused as input to the perception branch.
So perhaps being model-agnostic is not always desirable: sometimes it is better to be able to touch the model.
46. Summary
I introduced a few recent methods for XAI:
1. LIME, for
• classification of structured data
• regression of structured data
• classification of images
2. ABN, for
• classification of images
I also introduced an application of human-in-the-loop learning to ABN.