This document discusses machine learning interpretability and explainability. It begins by introducing the problem of making black-box machine learning models more interpretable and by defining key concepts. Next, it reviews popular interpretability methods such as LIME, LRP, DeepLIFT and SHAP. It then describes the authors' proposed model, CAMEL, which uses clustering to learn local interpretable models without sampling. The document concludes by discussing the evaluation of interpretability models and important considerations such as the trade-off between performance and interpretability.
Interpretable Machine Learning describes the process of revealing the causes of predictions and explaining a derived decision in a way that is understandable to humans. The ability to understand the causes that lead to a certain prediction enables data scientists to ensure that a model is consistent with the domain knowledge of an expert. Furthermore, interpretability is critical for building trust in a model and for tackling problems like unfair biases or discrimination against particular subgroups. This talk covers an introduction to the concept of interpretability and an overview of popular interpretability techniques.
Speaker: Marcel Spitzer, inovex
Event: Kaggle Munich Meetup, 20.11.2018
More tech talks: www.inovex.de/vortraege
More tech articles: www.inovex.de/blog
Explainable AI makes algorithms transparent: it interprets, visualizes and explains model behaviour so that AI applications can be fair, secure and trustworthy.
https://github.com/telecombcn-dl/lectures-all/
These slides review techniques for interpreting the behavior of deep neural networks. The talk reviews basic techniques such as the display of filters and tensors, as well as more advanced ones that try to interpret which part of the input data is responsible for the predictions, or generate data that maximizes the activation of certain neurons.
This was presented at the London Artificial Intelligence & Deep Learning Meetup.
https://www.meetup.com/London-Artificial-Intelligence-Deep-Learning/events/245251725/
Enjoy the recording: https://youtu.be/CY3t11vuuOM.
- - -
Kasia discussed the complexities of interpreting black-box algorithms and how these may affect some industries. She presented the most popular methods of interpreting machine learning classifiers, such as feature importance, partial dependence plots and Bayesian networks. Finally, she introduced the Local Interpretable Model-Agnostic Explanations (LIME) framework for explaining the predictions of black-box learners, including text- and image-based models, using breast cancer data as a specific case study.
Kasia Kulma is a Data Scientist at Aviva with a soft spot for R. She obtained a PhD (Uppsala University, Sweden) in evolutionary biology in 2013 and has been working on all things data ever since. For example, she has built recommender systems, customer segmentations, predictive models and now she is leading an NLP project at the UK’s leading insurer. In spare time she tries to relax by hiking & camping, but if that doesn’t work ;) she co-organizes R-Ladies meetups and writes a data science blog R-tastic (https://kkulma.github.io/).
https://www.linkedin.com/in/kasia-kulma-phd-7695b923/
Unified Approach to Interpret Machine Learning Models: SHAP + LIME (Databricks)
For companies that solve real-world problems and generate revenue from data science products, being able to understand why a model makes a certain prediction can be as crucial as achieving high prediction accuracy. However, as data scientists pursue higher accuracy by implementing complex algorithms such as ensemble or deep learning models, the algorithm itself becomes a black box, creating a trade-off between the accuracy and the interpretability of a model's output.
To address this problem, a unified framework, SHAP (SHapley Additive exPlanations), was developed to help users interpret the predictions of complex models. In this session, we will talk about how to apply SHAP to various modeling approaches (GLM, XGBoost, CNN) to explain how each feature contributes to a particular prediction and to extract intuitive insights from it. This talk introduces the concept of a general-purpose model explainer and helps practitioners understand SHAP and its applications.
It is well known that the best explanation of a simple model is the model itself. But we often use complex models, such as ensemble methods or deep networks, whose behaviour is not easy to understand, so we cannot use the original model as its own explanation.
In this context, we will discuss how methods for interpreting model predictions work and examine the practical value of these methods.
[Video recording available at https://www.youtube.com/playlist?list=PLewjn-vrZ7d3x0M4Uu_57oaJPRXkiS221]
Artificial intelligence is increasingly playing an integral role in determining our day-to-day experiences. Moreover, with the proliferation of AI-based solutions in areas such as hiring, lending, criminal justice, healthcare and education, the resulting personal and professional implications of AI are far-reaching. The dominant role played by AI models in these domains has led to growing concern about potential bias in these models and a demand for model transparency and interpretability. In addition, model explainability is a prerequisite for building trust in and adoption of AI systems in high-stakes domains requiring reliability and safety, such as healthcare and automated transportation, and in critical industrial applications with significant economic implications, such as predictive maintenance, exploration of natural resources and climate change modeling.
As a consequence, AI researchers and practitioners have focused their attention on explainable AI to help them better trust and understand models at scale. The challenges for the research community include (i) defining model explainability, (ii) formulating explainability tasks for understanding model behavior and developing solutions for these tasks, and finally (iii) designing measures for evaluating the performance of models in explainability tasks.
In this tutorial, we present an overview of model interpretability and explainability in AI, key regulations / laws, and techniques / tools for providing explainability as part of AI/ML systems. Then, we focus on the application of explainability techniques in industry, wherein we present practical challenges / guidelines for effectively using explainability techniques and lessons learned from deploying explainable models for several web-scale machine learning and data mining applications. We present case studies across different companies, spanning application domains such as search & recommendation systems, hiring, sales, and lending. Finally, based on our experiences in industry, we identify open problems and research directions for the data mining / machine learning community.
An Introduction to XAI! Towards Trusting Your ML Models! (Mansour Saffar)
Machine learning (ML) is currently disrupting almost every industry and is being used as the core component in many systems. The decisions made by these systems may have a great impact on society and specific individuals and thus the decision-making process has to be clear and explainable so humans can trust it. Explainable AI (XAI) is a rather new field in ML in which researchers try to develop models that are able to explain the decision-making process behind ML models. In this talk, we'll learn about the fundamentals of XAI and discuss why we need to start to integrate XAI with our ML models!
Presented in Edmonton DataScience Meetup on October 2nd, 2019. Learn more: https://youtu.be/gEkPXOsDt_w
Slides for an Arithmer Seminar given by Dr. Daisuke Sato (Arithmer) at Arithmer Inc.
The topic is "explainable AI".
The "Arithmer Seminar" is held weekly; professionals from within and outside our company give lectures on their respective areas of expertise.
The slides are made by the lecturer from outside our company, and shared here with his/her permission.
Arithmer Inc. is a mathematics company that originated in the Graduate School of Mathematical Sciences at the University of Tokyo. We apply modern mathematics to bring advanced AI systems to solutions in a wide range of fields. Our job is to think about how to use AI well to make work more efficient and to produce results that are useful to people.
Arithmer began at the University of Tokyo Graduate School of Mathematical Sciences. Today, our research of modern mathematics and AI systems has the capability of providing solutions when dealing with tough complex issues. At Arithmer we believe it is our job to realize the functions of AI through improving work efficiency and producing more useful results for society.
Provides a brief overview of what machine learning is, how it works (theory), how to prepare data for a machine learning problem, an example case study, and additional resources.
An introductory course on building ML applications with primary focus on supervised learning. Covers the typical ML application cycle - Problem formulation, data definitions, offline modeling, platform design. Also, includes key tenets for building applications.
Note: This is an old slide deck. The content on building internal ML platforms is a bit outdated and slides on the model choices do not include deep learning models.
Delta Analytics is a 501(c)3 non-profit in the Bay Area. We believe that data is powerful, and that anybody should be able to harness it for change. Our teaching fellows partner with schools and organizations worldwide to work with students excited about the power of data to do good.
Welcome to the course! These modules will teach you the fundamental building blocks and the theory necessary to be a responsible machine learning practitioner in your own community. Each module focuses on accessible examples designed to teach you about good practices and the powerful (yet surprisingly simple) algorithms we use to model data.
To learn more about our mission or provide feedback, take a look at www.deltanalytics.org.
Presented at #H2OWorld 2017 in Mountain View, CA.
Enjoy the video: https://youtu.be/TBJqgvXYhfo.
Learn more about H2O.ai: https://www.h2o.ai/.
Follow @h2oai: https://twitter.com/h2oai.
- - -
Abstract:
Machine learning is at the forefront of many recent advances in science and technology, enabled in part by the sophisticated models and algorithms that have been recently introduced. However, as a consequence of this complexity, machine learning essentially acts as a black-box as far as users are concerned, making it incredibly difficult to understand, predict, or "trust" their behavior. In this talk, I will describe our research on approaches that explain the predictions of ANY classifier in an interpretable and faithful manner.
Sameer's Bio:
Dr. Sameer Singh is an Assistant Professor of Computer Science at the University of California, Irvine. He is working on large-scale and interpretable machine learning applied to natural language processing. Sameer was a Postdoctoral Research Associate at the University of Washington and received his PhD from the University of Massachusetts, Amherst, during which he also worked at Microsoft Research, Google Research, and Yahoo! Labs on massive-scale machine learning. He was awarded the Adobe Research Data Science Faculty Award, was selected as a DARPA Riser, won the grand prize in the Yelp dataset challenge, and received the Yahoo! Key Scientific Challenges fellowship. Sameer has published extensively at top-tier machine learning and natural language processing conferences. (http://sameersingh.org)
Interpreting deep learning and machine learning models is not just another regulatory burden to be overcome. Scientists, physicians, researchers and analysts that use these technologies for their important work have the right to trust and understand their models and the answers they generate. This talk is an overview of several techniques for interpreting deep learning and machine learning models and telling stories from their results.
Speaker: Patrick Hall is a Data Scientist and Product Engineer at H2O.ai. He is also an Adjunct Professor in the Department of Decision Sciences at George Washington University. Prior to joining H2O, Patrick spent many years as a Senior Data Scientist at SAS and has worked with many Fortune 500 companies on their data science and machine learning problems. https://www.linkedin.com/in/jpatrickhall
An overview of deep learning with neural networks: use cases of deep learning and its development, and a basic introduction to the layers of neural networks.
Explainable AI (XAI) is becoming a must-have non-functional requirement for most AI-enabled product or solution deployments. Keen to hear viewpoints and explore collaboration opportunities.
Supervised, Unsupervised and Reinforcement Learning (Aakash Chotrani)
This presentation describes the various categories of machine learning techniques. It starts with the importance of machine learning and the difference between ML and traditional AI, followed by examples and in-depth explanations of the different learning techniques in ML.
An investigation into the building blocks for Neural Networks and modern day machine learning. This investigation touches on the evolution of the most basic of neural networks to more modern day concepts, particularly in methodologies that allow better training of these networks to produce more accurate real-life models.
https://telecombcn-dl.github.io/2018-dlai/
Deep learning technologies are at the core of the current revolution in artificial intelligence for multimedia data analysis. The convergence of large-scale annotated datasets and affordable GPU hardware has allowed the training of neural networks for data analysis tasks which were previously addressed with hand-crafted features. Architectures such as convolutional neural networks, recurrent neural networks or Q-nets for reinforcement learning have shaped a brand new scenario in signal processing. This course covers the basic principles of deep learning from both algorithmic and computational perspectives.
Machine Learning Interpretability

4. Introduction / Fundamental definitions
ML explanation definition
• It must be understandable to humans, whatever their background.
• It should give an insightful justification for a given prediction.
• It is often instance/example based.
• The terms "interpretability" and "explainability" are used interchangeably in machine learning.
Objectives
- Build low-risk applications (in sensitive domains such as health).
- Enable advanced debugging, and therefore performance improvements, for ML researchers and engineers.
- Build robust and reliable models.
- Support AI fairness.
5. Introduction / Fundamental definitions
General notions
• Some AI systems do not require interpretability at all.
• We work with two models: the ML model and the explainability (EX) model.
• The features are not necessarily the same for the ML model and the EX model; for instance, in NLP the ML model may use word embeddings while the EX model uses one-hot encodings.
ML models are split into two categories:
– White box: models that are interpretable by design, but less expressive and less performant.
– Black box: models that are more performant, more complex and more expressive, but whose decisions cannot be understood directly; hence the need for explainability.
7. Introduction / Fundamental definitions
Explainability models are split into two categories:
• Model-agnostic: a method that works for any ML model, treating it as a black box.
• Model-specific: a method that, by design, explains a particular family of models, for example neural networks or tree-based models.
The interpretability scope: global vs. local
• Local: explain a particular prediction for a specific instance.
• Global: explain the overall behaviour of the ML model.
11. State of the art / Interpretability models / LIME
LIME (Local Interpretable Model-agnostic Explanations)
Idea: for a specific instance, approximate the ML model locally with an interpretable one.
Framework
• an instance $x \in \mathbb{R}^d$ and its EX representation $x' \in \{0,1\}^{d'}$
• the ML model $f : \mathbb{R}^d \to \mathbb{R}$ and the EX model $g : \{0,1\}^{d'} \to \mathbb{R}$
• sampled instances $z_i \in \mathbb{R}^d$, their representations $z'_i$, and a proximity kernel $\pi_x(z)$
• optimization problem: $\xi(x) = \arg\min_{g \in G} \left[ L(f, g, \pi_x) + \Omega(g) \right]$
• kernel-weighted squared loss: $L(f, g, \pi_x) = \sum_i \pi_x(z_i)\,\big(f(z_i) - g(z'_i)\big)^2$
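To make this concrete, here is a minimal usage sketch with the `lime` Python package; the dataset, classifier and parameter values are illustrative assumptions rather than anything prescribed by the slides.

```python
# Hedged sketch: explaining one prediction of a black-box classifier with LIME.
# The dataset, model and parameter choices below are illustrative assumptions.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from lime.lime_tabular import LimeTabularExplainer

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, test_size=0.2, random_state=0)

# Black-box ML model f
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# EX model g: LIME fits a sparse linear surrogate around one instance x,
# weighting sampled neighbours z_i with the proximity kernel pi_x(z_i)
explainer = LimeTabularExplainer(
    X_train,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    mode="classification")

x = X_test[0]
exp = explainer.explain_instance(x, clf.predict_proba, num_features=5)
print(exp.as_list())   # top local feature contributions for this prediction
```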
13. State of the art / Interpretability models / LRP
LRP (Layer-wise Relevance Propagation)
Idea: back-propagate the output signal through the neural network to the input layer.
Framework
• $a_j$ is the output of neuron $j$, defined through the non-linearity $g$: $a_j = g\big(\sum_i w_{ij} a_i + b\big)$
• given an instance $x$, an ML model $f$, a layer $l$ and a dimension $p$, the relevance scores $R_p^{(l)}$ satisfy $f(x) \approx \sum_p R_p^{(1)}$ (the feature contributions sum to the prediction)
• features $p$ with $R_p^{(1)} < 0$ contribute negatively to activating the output neuron, and conversely for $R_p^{(1)} > 0$
Source: the paper on LRP with Local Renormalization Layers (Binder et al., 2016).
14. State of the art / Interpretability models / LRP (details)
Details
• The relevance scores are defined as follows:
– output neurons: $R^{(L)} = f(x)$
– intermediate neurons (back-propagation): $R_i^{(l)} = \sum_{j \in (l+1)} R_{i \leftarrow j}^{(l,\,l+1)}$
– input neurons ($l = 1$): treated as the last intermediate layer, giving the feature relevances $R_i^{(1)}$
• All variants of this EX model differ only in the formula chosen for $R_{i \leftarrow j}^{(l,\,l+1)}$.
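As an illustration of the propagation step, below is a small NumPy sketch of one common choice for $R_{i \leftarrow j}^{(l,l+1)}$, the LRP-$\varepsilon$ rule, on a tiny dense ReLU network; the architecture, the random weights and the choice of rule are assumptions made for the example.

```python
# Hedged sketch: Layer-wise Relevance Propagation with the epsilon rule on a tiny
# dense ReLU network. The architecture, random weights and epsilon value are
# illustrative assumptions; deeper networks apply the same rule layer by layer.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 6)), np.zeros(6)   # layer 1: 4 inputs -> 6 hidden units
W2, b2 = rng.normal(size=(6, 1)), np.zeros(1)   # layer 2: 6 hidden units -> 1 output

def forward(x):
    a0 = x
    a1 = np.maximum(a0 @ W1 + b1, 0.0)          # ReLU activations
    f = (a1 @ W2 + b2)[0]                       # scalar model output f(x)
    return a0, a1, f

def lrp_epsilon(x, eps=1e-6):
    a0, a1, f = forward(x)
    R = np.array([f])                           # output relevance: R^(L) = f(x)
    # propagate relevance from the output back through layer 2
    z = a1 @ W2 + b2
    s = R / (z + eps * np.where(z >= 0, 1.0, -1.0))
    R = a1 * (s @ W2.T)
    # propagate relevance through layer 1 down to the input features
    z = a0 @ W1 + b1
    s = R / (z + eps * np.where(z >= 0, 1.0, -1.0))
    R = a0 * (s @ W1.T)
    return R                                    # per-feature relevances R_p^(1)

x = rng.normal(size=4)
_, _, f = forward(x)
print(f, lrp_epsilon(x).sum())   # conservation: relevances approximately sum to f(x)
```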
15. State of the art / Interpretability models / DeepLIFT
DeepLIFT (Deep Learning Important FeaTures)
Idea: replace gradients by differences from a reference during back-propagation.
Details
• Gradients raise two major problems: saturation, and the ReLU (negative inputs are zeroed out, and the derivative is discontinuous).
• We are not interested in the gradient (how $y$ changes when $x$ changes infinitesimally).
• We are interested in the slope (how $y$ changes when $x$ moves away from its reference $x^{\mathrm{ref}}$).
• gradient $\frac{\partial y}{\partial x}$ $\Rightarrow$ slope $\frac{y - y^{\mathrm{ref}}}{x - x^{\mathrm{ref}}} = \frac{\Delta y}{\Delta x}$
• The chain rule is preserved: $\frac{\partial y}{\partial x} = \frac{\partial y}{\partial z} \cdot \frac{\partial z}{\partial x}$ $\Rightarrow$ $\frac{\Delta y}{\Delta x} = \frac{\Delta y}{\Delta z} \cdot \frac{\Delta z}{\Delta x}$
• Importance of feature $i$: $x_i \times \frac{\partial y}{\partial x_i}$ $\Rightarrow$ $(x_i - x_i^{\mathrm{ref}}) \times \frac{\Delta y}{\Delta x_i}$
• Problem: how do we get the reference?
– input neurons: hand-crafted by domain experts (e.g. for MNIST, images initialized to 0)
– intermediate and output neurons: simply forward the reference inputs through the network
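Below is a small NumPy sketch of the difference-from-reference idea (the rescale rule) on a two-layer ReLU network; the network, weights and the zero reference are illustrative assumptions, and production implementations (e.g. the original DeepLIFT code or Captum) handle many more layer types.

```python
# Hedged sketch: DeepLIFT-style attribution (rescale rule) on a tiny ReLU network.
# Weights, reference choice and network size are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(4, 6)), rng.normal(size=6)   # 4 inputs -> 6 hidden units
W2, b2 = rng.normal(size=(6, 1)), rng.normal(size=1)   # 6 hidden units -> 1 output

def forward(x):
    z1 = x @ W1 + b1
    a1 = np.maximum(z1, 0.0)
    y = a1 @ W2 + b2
    return z1, a1, y

def deeplift_rescale(x, x_ref):
    z1, a1, y = forward(x)
    z1_ref, a1_ref, y_ref = forward(x_ref)
    dz1, da1, dy = z1 - z1_ref, a1 - a1_ref, y - y_ref
    # multiplier through the ReLU: the slope da/dz instead of the gradient
    # (fall back to 0 when dz is essentially zero)
    m_relu = np.where(np.abs(dz1) > 1e-9,
                      da1 / np.where(dz1 == 0, 1.0, dz1), 0.0)
    # multiplier from hidden pre-activations to the output: m_relu * W2
    m_z1_y = m_relu * W2[:, 0]
    # multiplier from the inputs to the output through the linear layer W1
    m_x_y = W1 @ m_z1_y
    contributions = (x - x_ref) * m_x_y      # feature i gets (x_i - x_i_ref) * slope
    return contributions, dy

x = rng.normal(size=4)
x_ref = np.zeros(4)                          # reference chosen by the practitioner
contrib, dy = deeplift_rescale(x, x_ref)
print(contrib.sum(), dy.item())              # summation-to-delta: both values should match
```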
17. State of the art / Interpretability models / SHAP
SHAP (SHapley Additive exPlanations)
Idea: unify all the previous methods, and many more, under a game-theoretic paradigm.
Framework
• The Shapley value is a method for attributing individual rewards to players based on their contribution to the total reward.
• Players are feature values; the total reward for an instance $z$ is $f(z) - E[f(z)]$.
• The Shapley value of a feature value is its average marginal contribution over all possible coalitions.
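For reference, the classical Shapley value of feature $i$, as used in the SHAP paper (it is not written out on the slide), is

$$\phi_i = \sum_{S \subseteq F \setminus \{i\}} \frac{|S|!\,(|F| - |S| - 1)!}{|F|!} \,\big[ f_{S \cup \{i\}}(x_{S \cup \{i\}}) - f_S(x_S) \big],$$

where $F$ is the set of all features and $f_S$ denotes the model evaluated using only the features in the coalition $S$, the remaining features being marginalized out.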
18. State of the art / Interpretability models / SHAP (explanation models)
SHAP EX models
• KernelSHAP (LIME + Shapley values): LIME uses heuristics to choose a kernel; SHAP shows that the kernel whose solution recovers the Shapley values is unique, namely $\pi_x(z') = \frac{M - 1}{\binom{M}{|z'|}\, |z'|\, (M - |z'|)}$
• DeepSHAP (DeepLIFT + Shapley values): adapted to neural networks
• TreeSHAP (decision trees + Shapley values): adapted to tree-based models
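A minimal usage sketch with the `shap` Python package is shown below; the model, dataset and explainer settings are illustrative assumptions.

```python
# Hedged sketch: explaining model predictions with the shap package.
# The model, dataset and explainer settings below are illustrative assumptions.
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

data = load_breast_cancer()
X, y = data.data, data.target

# Black-box ML model f
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# TreeSHAP: fast, exact Shapley values for tree-based models
tree_explainer = shap.TreeExplainer(model)
tree_shap_values = tree_explainer.shap_values(X[:50])

# KernelSHAP: model-agnostic, only needs a prediction function (slower)
background = shap.sample(X, 100)   # background data used to estimate E[f(z)]
kernel_explainer = shap.KernelExplainer(model.predict_proba, background)
kernel_shap_values = kernel_explainer.shap_values(X[:1], nsamples=200)

print(tree_shap_values[0])         # per-feature contributions (first class / first instance)
print(kernel_shap_values[0])
```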
22. Our model / Description
Chosen architecture
Idea and pseudo-algorithm
• No sampling: we use only our own data points, since the model learns its boundary from the existing points.
• We apply a clustering algorithm suited to the dataset distribution, such as k-means or DBSCAN (the clusters shown in yellow on the slide).
• We learn an interpretable model locally in each cluster with Kernel SHAP (the green approximators on the slide).
• We could extend this to a global explanation.
A minimal sketch of this clustering-plus-local-surrogates idea follows below.
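The sketch below illustrates the idea under stated assumptions: k-means for the clustering step and a ridge-regression surrogate per cluster standing in for the Kernel SHAP explainer named in the slides; all names and parameters are illustrative.

```python
# Hedged sketch of the proposed approach: cluster the existing data points, then fit
# an interpretable local surrogate in each cluster. The slides use Kernel SHAP as the
# local explainer; a ridge-regression surrogate is substituted here for brevity, and
# the dataset, model and parameter choices are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

data = load_breast_cancer()
X = data.data

# Black-box ML model to be explained (trained here only for the example)
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, data.target)
f_x = black_box.predict_proba(X)[:, 1]           # black-box scores to approximate

# Step 1: cluster the existing data points (no synthetic sampling around instances)
kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(X)

# Step 2: fit one interpretable surrogate per cluster on the black-box outputs
local_models = {
    c: Ridge(alpha=1.0).fit(X[kmeans.labels_ == c], f_x[kmeans.labels_ == c])
    for c in range(kmeans.n_clusters)
}

def explain(x, top_k=5):
    """Local explanation of instance x: top coefficients of its cluster's surrogate."""
    c = kmeans.predict(x.reshape(1, -1))[0]
    coefs = local_models[c].coef_
    order = np.argsort(np.abs(coefs))[::-1][:top_k]
    return [(data.feature_names[i], float(coefs[i])) for i in order]

print(explain(X[0]))
```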
23. Our model / Evaluation protocol
Evaluation
Protocol
• Dataset: Cervical Cancer (Risk Factors).
• As with any of the other EX models cited previously, we will need human volunteers to evaluate the quality of our explanations.
• Each instance should be reviewed by several different people.
• We can then build comparative tables and plots against the other models.
Hyperparameters to explore
• the clustering algorithm itself
• the number of clusters $C$
• the tree depth for tree-based surrogates and $K$ (as in K-Lasso) for linear surrogates
• others: for instance, whether or not data points are weighted, etc.
References

Bach, S., Binder, A., Montavon, G., Klauschen, F., Müller, K.-R., and Samek, W. On Pixel-Wise Explanations for Non-Linear Classifier Decisions by Layer-Wise Relevance Propagation. PLOS ONE 10, 7 (2015), 1–46.

Binder, A., Montavon, G., Lapuschkin, S., Müller, K.-R., and Samek, W. Layer-Wise Relevance Propagation for Neural Networks with Local Renormalization Layers. In Artificial Neural Networks and Machine Learning – ICANN 2016 (Cham, 2016), A. E. Villa, P. Masulli, and A. J. Pons Rivero, Eds., Springer International Publishing, pp. 63–71.

Fernandes, K., Cardoso, J. S., and Fernandes, J. Transfer learning with partial observability applied to cervical cancer screening. In Iberian Conference on Pattern Recognition and Image Analysis (2017), Springer, pp. 243–250.

Goodfellow, I. J., Shlens, J., and Szegedy, C. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572 (2014).

Lundberg, S. M., and Lee, S.-I. A unified approach to interpreting model predictions. In Advances in Neural Information Processing Systems (2017), pp. 4765–4774.

Molnar, C. Interpretable Machine Learning. 2019. https://christophm.github.io/interpretable-ml-book/.

Montavon, G., Samek, W., and Müller, K.-R. Methods for interpreting and understanding deep neural networks. Digital Signal Processing 73 (2018), 1–15.

Shrikumar, A., Greenside, P., and Kundaje, A. Learning important features through propagating activation differences. In Proceedings of the 34th International Conference on Machine Learning, Volume 70 (2017), JMLR.org, pp. 3145–3153.

Shrikumar, A., Greenside, P., Shcherbina, A., and Kundaje, A. Not just a black box: Learning important features through propagating activation differences. arXiv preprint arXiv:1605.01713 (2016).

Simonyan, K., Vedaldi, A., and Zisserman, A. Deep inside convolutional networks: Visualising image classification models and saliency maps. arXiv preprint arXiv:1312.6034 (2013).

Tulio Ribeiro, M., Singh, S., and Guestrin, C. "Why Should I Trust You?": Explaining the Predictions of Any Classifier. arXiv preprint arXiv:1602.04938 (2016).