Companies that understand how to apply AI will scale and win their respective markets over the next decade. That said, delivering on this promise and managing machine learning projects is much harder than most people anticipate. Many organizations hire teams of PhDs and data scientists, then fail to ship products that move business metrics. The root cause is often a lack of product strategy for AI, or a failure to adapt product development processes to the needs of machine learning systems. This talk will cover some of the common ways machine learning fails in practice, the tactical responsibilities of AI product managers, and how to approach product strategy for AI.
Peter Skomoroch, former Head of Data Products at Workday and LinkedIn, will describe how you can navigate these challenges to ship metric-moving AI products that matter to your business.
Peter will provide practical advice on:
* The role of an AI Product Manager
* How to evaluate and prioritize your AI projects
* The ways AI product management differs from traditional product management
* Bridging the worlds of design and machine learning
* Making trade-offs between data quality and other business metrics
Explore how different industries are embracing the utility of AI to create and deliver new value for their customers and organisations:
* Discuss the state of maturity of AI across industries
* Get an appreciation of how businesses are positioned toward AI projects
We also review the utility of AI across several industries including:
* Healthcare
* Newsroom & Journalism
* Travel
* Finance
* Supply Chain / eCommerce / Retail
* Streaming & Gaming
* Transportation
* Logistics
* Manufacturing
* Agriculture
* Defense & Cybersecurity
Part of the What Matters in AI series as published on www.andremuscat.com
Responsible AI in Industry (Tutorials at AAAI 2021, FAccT 2021, and WWW 2021) – Krishnaram Kenthapadi
[Video available at https://sites.google.com/view/ResponsibleAITutorial]
Artificial Intelligence is increasingly being used in decisions and processes that are critical for individuals, businesses, and society, especially in areas such as hiring, lending, criminal justice, healthcare, and education. Recent ethical challenges and undesirable outcomes associated with AI systems have highlighted the need for regulations, best practices, and practical tools to help data scientists and ML developers build AI systems that are secure, privacy-preserving, transparent, explainable, fair, and accountable – to avoid unintended and potentially harmful consequences and compliance challenges.
In this tutorial, we will present an overview of responsible AI, highlighting model explainability, fairness, and privacy in AI, key regulations/laws, and techniques/tools for providing understanding around AI/ML systems. Then, we will focus on the application of explainability, fairness assessment/unfairness mitigation, and privacy techniques in industry, wherein we present practical challenges/guidelines for using such techniques effectively and lessons learned from deploying models for several web-scale machine learning and data mining applications. We will present case studies across different companies, spanning many industries and application domains. Finally, based on our experiences in industry, we will identify open problems and research directions for the AI community.
Artificial Intelligence is increasingly playing an integral role in determining our day-to-day experiences. Moreover, with proliferation of AI based solutions in areas such as hiring, lending, criminal justice, healthcare, and education, the resulting personal and professional implications of AI are far-reaching. The dominant role played by AI models in these domains has led to a growing concern regarding potential bias in these models, and a demand for model transparency and interpretability. In addition, model explainability is a prerequisite for building trust and adoption of AI systems in high stakes domains requiring reliability and safety such as healthcare and automated transportation, as well as critical industrial applications with significant economic implications such as predictive maintenance, exploration of natural resources, and climate change modeling.
As a consequence, AI researchers and practitioners have focused their attention on explainable AI to help them better trust and understand models at scale. The challenges for the research community include (i) defining model explainability, (ii) formulating explainability tasks for understanding model behavior and developing solutions for these tasks, and finally (iii) designing measures for evaluating the performance of models in explainability tasks.
In this tutorial, we will first motivate the need for model interpretability and explainability in AI from societal, legal, customer/end-user, and model developer perspectives. [Note: Due to time constraints, we will not focus on techniques/tools for providing explainability as part of AI/ML systems.] Then, we will focus on the real-world application of explainability techniques in industry, wherein we present practical challenges / implications for using explainability techniques effectively and lessons learned from deploying explainable models for several web-scale machine learning and data mining applications. We will present case studies across different companies, spanning application domains such as search and recommendation systems, sales, lending, and fraud detection. Finally, based on our experiences in industry, we will identify open problems and research directions for the research community.
Generative AI models, such as ChatGPT and Stable Diffusion, can create new and original content like text, images, video, audio, or other data from simple prompts, as well as handle complex dialogs and reason about problems with or without images. These models are disrupting traditional technologies, from search and content creation to automation and problem solving, and are fundamentally shaping the future user interface to computing devices. Generative AI can apply broadly across industries, providing significant enhancements for utility, productivity, and entertainment. As generative AI adoption grows at record-setting speeds and computing demands increase, on-device and hybrid processing are more important than ever. Just like traditional computing evolved from mainframes to today’s mix of cloud and edge devices, AI processing will be distributed between them for AI to scale and reach its full potential.
In this presentation you’ll learn about:
- Why on-device AI is key
- Full-stack AI optimizations to make on-device AI possible and efficient
- Advanced techniques like quantization, distillation, and speculative decoding
- How generative AI models can be run on device and examples of some running now
- Qualcomm Technologies’ role in scaling on-device generative AI
Supercharge Your Project Management Skills with CHATGPT practical - UK.pdf – PMIUKChapter
Ready to revolutionize your project management skills? Join us as we dive into the world of ChatGPT, an incredibly powerful language model that's about to become your new best friend in managing projects! We'll start with a warm introduction to ChatGPT, giving you the fundamentals of what it is and how it works. Then, we'll take you on a journey through the incredible ways it can help with project management in seconds, given the correct prompts.
We'll show you how ChatGPT can make drafting the foundations of project documentation, like a business case or project charter, a breeze for you to customize like a pro, saving you endless hours. We'll also demonstrate how it can help you create summaries, decision-making analyses, critical paths, cause-and-effect diagrams, earned value management, power/interest grids, agile user stories and more in no time.
But wait, there's more! We'll spill the beans on clever tips and tricks to make the most of ChatGPT in your daily project management tasks as well as using ChatGPT to help pass the PMP exam.
To wrap it all up, we'll host a fun and interactive learning session for you to review everything we've covered and get some hands-on experience with ChatGPT. So come along and discover how you can level up your project management skills with this game-changing AI tool!
Valencian Summer School 2015
Day 1
Lecture 1
State of the Art in Machine Learning
Poul Petersen (BigML)
https://bigml.com/events/valencian-summer-school-in-machine-learning-2015
AIOps is becoming imperative to the management of today’s complex IT systems and their ability to support changing business conditions. This slide explains the role that AIOps can and will play in the enterprise of the future, how the scope of AIOps platforms will expand, and what new functionality may be deployed.
Watch the webinar here. https://www.moogsoft.com/resources/aiops/webinar/aiops-the-next-five-years
Data Con LA 2020
Description
More and more organizations are embracing AI technology by infusing it into their products and services to differentiate themselves from their competitors. AI is being utilized in some sensitive areas of human life. In this session, let's look at some of the principles governing the adoption of AI in a responsible manner. Why are companies accelerating adoption of AI?
Increasingly, organizations are accelerating adoption of AI to differentiate their products and services in the market. We have seen the outcomes of this digital transformation in the areas of optimizing operations, engaging customers, empowering employees and transforming products and services.
*List some of the sensitive use cases where AI is being applied
*Why governing AI is important and what are those principles?
*How Microsoft is approaching it?
Speaker
Suresh Paulraj, Microsoft, Principal Cloud Solution Architect Data & AI
Leveraging Generative AI & Best practices – DianaGray10
In this event we will cover:
- What Generative AI is and how it is being used for the future of work.
- Best practices for developing and deploying generative AI-based models in production.
- Future of Generative AI, how generative AI is expected to evolve in the coming years.
Revolutionizing your Business with AI (AUC VLabs).pdf – Omar Maher
"Revolutionizing your Business with AI" is a comprehensive yet digestible overview of Artificial Intelligence and Machine Learning. This presentation elucidates their fundamental concepts, showcases real-world applications, and equips attendees with strategic tools like the AI Idea Canvas and Evaluation Template. Whether you're a business leader or an intrigued learner, this presentation simplifies AI, aiding you in confidently navigating its transformative landscape.
In the US, people are already using conversational AI, ChatGPT, in everyday mundane tasks. Implementation is not limited to that. Various industries are also using this revolutionary technology to maintain a superior customer experience. People are also criticizing ChatGPT for creating employment threats and for being unethical in its answers. The technology is being widely applauded, but everything has certain pain points associated with it.
As generative AI adoption grows at record-setting speeds and computing demands increase, hybrid processing is more important than ever. But just like traditional computing evolved from mainframes and thin clients to today’s mix of cloud and edge devices, AI processing must be distributed between the cloud and devices for AI to scale and reach its full potential. In this talk you’ll learn:
• Why on-device AI is key
• Which generative AI models can run on device
• Why the future of AI is hybrid
• Qualcomm Technologies’ role in making hybrid AI a reality
AI and ML Series - Introduction to Generative AI and LLMs - Session 1 – DianaGray10
Session 1
👉This first session will cover an introduction to Generative AI & harnessing the power of large language models. The following topics will be discussed:
Introduction to Generative AI & harnessing the power of large language models.
What’s generative AI & what’s LLM.
How are we using it in our document understanding & communication mining models?
How to develop a trustworthy and unbiased AI model using LLM & GenAI.
Personal Intelligent Assistant
Speakers:
📌George Roth - AI Evangelist at UiPath
📌Sharon Palawandram - Senior Machine Learning Consultant @ Ashling Partners & UiPath MVP
📌Russel Alfeche - Technology Leader RPA @qBotica & UiPath MVP
Unlocking the Power of Generative AI An Executive's Guide.pdf – PremNaraindas1
Generative AI is here, and it can revolutionize your business. With its powerful capabilities, this technology can help companies create more efficient processes, unlock new insights from data, and drive innovation. But how do you make the most of these opportunities?
This guide will provide you with the information and resources needed to understand the ins and outs of Generative AI, so you can make informed decisions and capitalize on the potential. It covers important topics such as strategies for leveraging large language models, optimizing MLOps processes, and best practices for building with Generative AI.
[Video recording available at https://www.youtube.com/playlist?list=PLewjn-vrZ7d3x0M4Uu_57oaJPRXkiS221]
Artificial Intelligence is increasingly playing an integral role in determining our day-to-day experiences. Moreover, with proliferation of AI based solutions in areas such as hiring, lending, criminal justice, healthcare, and education, the resulting personal and professional implications of AI are far-reaching. The dominant role played by AI models in these domains has led to a growing concern regarding potential bias in these models, and a demand for model transparency and interpretability. In addition, model explainability is a prerequisite for building trust and adoption of AI systems in high stakes domains requiring reliability and safety such as healthcare and automated transportation, and critical industrial applications with significant economic implications such as predictive maintenance, exploration of natural resources, and climate change modeling.
As a consequence, AI researchers and practitioners have focused their attention on explainable AI to help them better trust and understand models at scale. The challenges for the research community include (i) defining model explainability, (ii) formulating explainability tasks for understanding model behavior and developing solutions for these tasks, and finally (iii) designing measures for evaluating the performance of models in explainability tasks.
In this tutorial, we present an overview of model interpretability and explainability in AI, key regulations / laws, and techniques / tools for providing explainability as part of AI/ML systems. Then, we focus on the application of explainability techniques in industry, wherein we present practical challenges / guidelines for effectively using explainability techniques and lessons learned from deploying explainable models for several web-scale machine learning and data mining applications. We present case studies across different companies, spanning application domains such as search & recommendation systems, hiring, sales, and lending. Finally, based on our experiences in industry, we identify open problems and research directions for the data mining / machine learning community.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2023/09/responsible-ai-tools-and-frameworks-for-developing-ai-solutions-a-presentation-from-intel/
Mrinal Karvir, Senior Cloud Software Engineering Manager at Intel, presents the “Responsible AI: Tools and Frameworks for Developing AI Solutions” tutorial at the May 2023 Embedded Vision Summit.
Over 90% of businesses using AI say trustworthy and explainable AI is critical to business, according to Morning Consult’s IBM Global AI Adoption Index 2021. If not designed with responsible considerations of fairness, transparency, preserving privacy, safety and security, AI systems can cause significant harm to people and society and result in financial and reputational damage for companies.
How can we take a human-centric approach to design AI solutions? How can we identify different types of bias and what tools can we use to mitigate those? What are model cards, and how can we use them to improve transparency? What tools can we use to preserve privacy and improve security? In this talk, Karvir discusses practical approaches to adoption of responsible AI principles. She highlights relevant tools and frameworks and explores industry case studies. She also discusses building a well-defined response plan to help address an AI incident efficiently.
Our report will provide a look into the technology landscape of the future, including:
- Importance of AI in enabling innovation
- Catalysts of future innovations
- Top technology trends in 2023-2024
- Main benefits of AI adoption
- Steps to prepare for future disruptions.
Download your free copy now and implement the key findings to improve your business.
Part of the ongoing effort with Skater for enabling better Model Interpretation for Deep Neural Network models presented at the AI Conference.
https://conferences.oreilly.com/artificial-intelligence/ai-ny/public/schedule/detail/65118
Innovations in technology have revolutionized financial services to the extent that large financial institutions like Goldman Sachs are claiming to be technology companies! It is no secret that technological innovations like data science and AI are fundamentally changing how financial products are created, tested and delivered. While it is exciting to learn about the technologies themselves, there is very little guidance available on how companies and financial professionals should retool and gear themselves for the upcoming revolution.
In this master class, we will discuss key innovations in Data Science and AI and connect applications of these novel fields in forecasting and optimization. Through case studies and examples, we will demonstrate why now is the time you should invest to learn about the topics that will reshape the financial services industry of the future!
AI in Finance
GDG Cloud Southlake #17: Meg Dickey-Kurdziolek: Explainable AI is for Everyone – James Anderson
If Artificial Intelligence (AI) is a black box, how can a human comprehend and trust the results of Machine Learning (ML) algorithms? Explainable AI (XAI) tries to shed light into that AI black box so humans can trust what is going on. Our speaker Meg Dickey-Kurdziolek is currently a UX Researcher for Google Cloud AI and Industry Solutions, where she focuses her research on Explainable AI and Model Understanding. Recording of the presentation: https://youtu.be/6N2DNN_HDWU
Explainable AI makes algorithms transparent: it interprets, visualizes, and explains them so they can be integrated into fair, secure and trustworthy AI applications.
Walk through of azure machine learning studio new featuresLuca Zavarella
The session is mostly a demo one that will guide you into the new Azure Machine Learning Service world, focusing on the new features like the Designer (no code ML), Automated ML and ML Interpretability.
You can find the webinar in Italian language here: https://bit.ly/2w0EsNK
Directive Explanations for Monitoring the Risk of Diabetes Onset - ACM IUI 2023 – Aditya Bhattacharya
This slide presents a short summary of my talk at ACM IUI 2023. You can download the full paper from this link - https://arxiv.org/abs/2302.10671.
Paper Title: Directive Explanations for Monitoring the Risk of Diabetes Onset: Introducing Directive Data-Centric Explanations and Combinations to Support What-If Explorations
Abstract: Explainable artificial intelligence is increasingly used in machine learning (ML) based decision-making systems in healthcare. However, little research has compared the utility of different explanation methods in guiding healthcare experts for patient care. Moreover, it is unclear how useful, understandable, actionable and trustworthy these methods are for healthcare experts, as they often require technical ML knowledge. This paper presents an explanation dashboard that predicts the risk of diabetes onset and explains those predictions with data-centric, feature-importance, and example-based explanations. We designed an interactive dashboard to assist healthcare experts, such as nurses and physicians, in monitoring the risk of diabetes onset and recommending measures to minimize risk. We conducted a qualitative study with 11 healthcare experts and a mixed-methods study with 45 healthcare experts and 51 diabetic patients to compare the different explanation methods in our dashboard in terms of understandability, usefulness, actionability, and trust. Results indicate that our participants preferred our representation of data-centric explanations that provide local explanations with a global overview over other methods. Therefore, this paper highlights the importance of visually directive data-centric explanation method for assisting healthcare experts to gain actionable insights from patient health records. Furthermore, we share our design implications for tailoring the visual representation of different explanation methods for healthcare experts.
Explainable AI - making ML and DL models more interpretable – Aditya Bhattacharya
Abstract –
Industries have started to adopt AI and Machine Learning in almost every sector to solve complex business problems, but are these models always trustworthy? Machine Learning models are not oracles; rather, they are scientific methods and mathematical models that best describe the data. But science is all about explaining complex natural phenomena in the simplest way possible! So, can we make ML and DL models more interpretable, so that any business user can understand these models and trust their results?
To find out the answer, please join me in this session, in which I will talk about the concepts of Explainable AI and discuss its necessity and the principles that help us demystify black-box AI models. I will discuss popular approaches like Feature Importance, Key Influencers, and Decomposition Trees used to make classical Machine Learning interpretable. We will discuss various techniques used for Deep Learning model interpretation, like Saliency Maps, Grad-CAMs, and Visual Attention Maps, and finally go through more details about frameworks like LIME, SHAP, ELI5, SKATER, and TCAV, which help us make Machine Learning and Deep Learning models more interpretable, trustworthy and useful!
Accelerating Data Science and Machine Learning Workflow with Azure Machine Learning – Aditya Bhattacharya
Accelerating Data Science and Machine Learning Workflow with Microsoft Azure Machine Learning
Microsoft User Group Hyderabad AIML Day 2020
https://aditya-bhattacharya.net/
https://www.eventbrite.com/e/microsoft-user-group-hyderabad-aiml-day-2020-tickets-123940376001
MUGH
For more details please follow:
https://medium.com/datadriveninvestor/a-powerful-tool-for-demand-planning-segmentation-a66bfa729360
https://towardsdatascience.com/effective-approaches-for-time-series-anomaly-detection-9485b40077f1
https://aditya-bhattacharya.net/2020/07/20/sales-and-demand-forecast-analysis/
By Aditya Bhattacharya
For this talk, I will discuss various approaches to accelerating deep learning solutions from the notebook or research environment to production, and how these solutions can be transformed into an enterprise-level, end-to-end deep learning solution that can be consumed as a service by any software application, with a practical use-case example.
NIT Silchar ML Hackathon 2019 Session on Computer Vision with Deep Learning.
Targeted Audience: Pre-requisite: Basic knowledge on Machine Learning and Deep Learning
Data Centers - Striving Within A Narrow Range - Research Report - MCG - May 2... – pchutichetpong
M Capital Group (“MCG”) expects to see growing demand and an evolving supply landscape, facilitated through institutional investment rotating out of offices and into work from home (“WFH”), alongside the ever-expanding need for data storage as global internet usage grows, with experts predicting 5.3 billion users by 2023. These market factors will be underpinned by technological changes, such as progressing cloud services and edge sites, allowing the industry to see strong expected annual growth of 13% over the next 4 years.
Whilst competitive headwinds remain, represented through the recent second bankruptcy filing of Sungard, which blames “COVID-19 and other macroeconomic trends including delayed customer spending decisions, insourcing and reductions in IT spending, energy inflation and reduction in demand for certain services”, the industry has seen key adjustments, where MCG believes that engineering cost management and technological innovation will be paramount to success.
MCG reports that the more favorable market conditions expected over the next few years, helped by the winding down of pandemic restrictions and a hybrid working environment will be driving market momentum forward. The continuous injection of capital by alternative investment firms, as well as the growing infrastructural investment from cloud service providers and social media companies, whose revenues are expected to grow over 3.6x larger by value in 2026, will likely help propel center provision and innovation. These factors paint a promising picture for the industry players that offset rising input costs and adapt to new technologies.
According to M Capital Group: “Specifically, the long-term cost-saving opportunities available from the rise of remote managing will likely aid value growth for the industry. Through margin optimization and further availability of capital for reinvestment, strong players will maintain their competitive foothold, while weaker players exit the market to balance supply and demand.”
06-04-2024 - NYC Tech Week - Discussion on Vector Databases, Unstructured Data and AI
Round table discussion of vector databases, unstructured data, AI, big data, real-time, robots and Milvus.
A lively discussion with NJ Gen AI Meetup Lead, Prasad, and Procure.FYI's Co-Founder.
Opendatabay - Open Data Marketplace.pptx – Opendatabay
Opendatabay.com unlocks the power of data for everyone. Open Data Marketplace fosters a collaborative hub for data enthusiasts to explore, share, and contribute to a vast collection of datasets.
First ever open hub for data enthusiasts to collaborate and innovate. A platform to explore, share, and contribute to a vast collection of datasets. Through robust quality control and innovative technologies like blockchain verification, opendatabay ensures the authenticity and reliability of datasets, empowering users to make data-driven decisions with confidence. Leverage cutting-edge AI technologies to enhance the data exploration, analysis, and discovery experience.
From intelligent search and recommendations to automated data productisation and quotation, Opendatabay's AI-driven features streamline the data workflow. Finding the data you need shouldn't be complex. Opendatabay simplifies the data acquisition process with an intuitive interface and robust search tools. Effortlessly explore, discover, and access the data you need, allowing you to focus on extracting valuable insights. Opendatabay breaks new ground with dedicated, AI-generated synthetic datasets.
Leverage these privacy-preserving datasets for training and testing AI models without compromising sensitive information. Opendatabay prioritizes transparency by providing detailed metadata, provenance information, and usage guidelines for each dataset, ensuring users have a comprehensive understanding of the data they're working with. By leveraging a powerful combination of distributed ledger technology and rigorous third-party audits Opendatabay ensures the authenticity and reliability of every dataset. Security is at the core of Opendatabay. Marketplace implements stringent security measures, including encryption, access controls, and regular vulnerability assessments, to safeguard your data and protect your privacy.
Levelwise PageRank with Loop-Based Dead End Handling Strategy : SHORT REPORT ... – Subhajit Sahu
Abstract — Levelwise PageRank is an alternative method of PageRank computation which decomposes the input graph into a directed acyclic block-graph of strongly connected components, and processes them in topological order, one level at a time. This enables calculation for ranks in a distributed fashion without per-iteration communication, unlike the standard method where all vertices are processed in each iteration. It however comes with a precondition of the absence of dead ends in the input graph. Here, the native non-distributed performance of Levelwise PageRank was compared against Monolithic PageRank on a CPU as well as a GPU. To ensure a fair comparison, Monolithic PageRank was also performed on a graph where vertices were split by components. Results indicate that Levelwise PageRank is about as fast as Monolithic PageRank on the CPU, but quite a bit slower on the GPU. Slowdown on the GPU is likely caused by a large submission of small workloads, and expected to be non-issue when the computation is performed on massive graphs.
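To illustrate the levelwise scheme described in the abstract, here is a minimal conceptual sketch in Python/NetworkX (my own illustration, not the report's implementation). It assumes a directed graph with no dead ends, per the stated precondition, and uses a fixed iteration count per component instead of a convergence check.

```python
# Conceptual sketch of Levelwise PageRank (illustrative; not the report's code).
import networkx as nx

def levelwise_pagerank(G, damping=0.85, iters=50):
    n = G.number_of_nodes()
    rank = {v: 1.0 / n for v in G}
    # Decompose into strongly connected components and process the resulting
    # block-graph (condensation) in topological order, one component at a time.
    C = nx.condensation(G)                      # DAG of SCCs; each node stores 'members'
    for scc in nx.topological_sort(C):
        members = C.nodes[scc]["members"]
        # Only vertices in this component are iterated; ranks of upstream
        # components are already final, so no per-iteration global communication.
        for _ in range(iters):
            updated = {}
            for v in members:
                incoming = sum(rank[u] / G.out_degree(u) for u in G.predecessors(v))
                updated[v] = (1.0 - damping) / n + damping * incoming
            rank.update(updated)
    return rank
```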
Empowering the Data Analytics Ecosystem: A Laser Focus on Value
The data analytics ecosystem thrives when every component functions at its peak, unlocking the true potential of data. Here's a laser focus on key areas for an empowered ecosystem:
1. Democratize Access, Not Data:
Granular Access Controls: Provide users with self-service tools tailored to their specific needs, preventing data overload and misuse.
Data Catalogs: Implement robust data catalogs for easy discovery and understanding of available data sources.
2. Foster Collaboration with Clear Roles:
Data Mesh Architecture: Break down data silos by creating a distributed data ownership model with clear ownership and responsibilities.
Collaborative Workspaces: Utilize interactive platforms where data scientists, analysts, and domain experts can work seamlessly together.
3. Leverage Advanced Analytics Strategically:
AI-powered Automation: Automate repetitive tasks like data cleaning and feature engineering, freeing up data talent for higher-level analysis.
Right-Tool Selection: Strategically choose the most effective advanced analytics techniques (e.g., AI, ML) based on specific business problems.
4. Prioritize Data Quality with Automation:
Automated Data Validation: Implement automated data quality checks to identify and rectify errors at the source, minimizing downstream issues.
Data Lineage Tracking: Track the flow of data throughout the ecosystem, ensuring transparency and facilitating root cause analysis for errors.
5. Cultivate a Data-Driven Mindset:
Metrics-Driven Performance Management: Align KPIs and performance metrics with data-driven insights to ensure actionable decision making.
Data Storytelling Workshops: Equip stakeholders with the skills to translate complex data findings into compelling narratives that drive action.
Benefits of a Precise Ecosystem:
Sharpened Focus: Precise access and clear roles ensure everyone works with the most relevant data, maximizing efficiency.
Actionable Insights: Strategic analytics and automated quality checks lead to more reliable and actionable data insights.
Continuous Improvement: Data-driven performance management fosters a culture of learning and continuous improvement.
Sustainable Growth: Empowered by data, organizations can make informed decisions to drive sustainable growth and innovation.
By focusing on these precise actions, organizations can create an empowered data analytics ecosystem that delivers real value by driving data-driven decisions and maximizing the return on their data investment.
Explore our comprehensive data analysis project presentation on predicting product ad campaign performance. Learn how data-driven insights can optimize your marketing strategies and enhance campaign effectiveness. Perfect for professionals and students looking to understand the power of data analysis in advertising. for more details visit: https://bostoninstituteofanalytics.org/data-science-and-artificial-intelligence/
Adjusting primitives for graph : SHORT REPORT / NOTES – Subhajit Sahu
Graph algorithms, like PageRank, operate on graph representations; Compressed Sparse Row (CSR) is an adjacency-list based graph representation.
Multiply with different modes (map)
1. Performance of sequential execution based vs OpenMP based vector multiply.
2. Comparing various launch configs for CUDA based vector multiply.
Sum with different storage types (reduce)
1. Performance of vector element sum using float vs bfloat16 as the storage type.
Sum with different modes (reduce)
1. Performance of sequential execution based vs OpenMP based vector element sum.
2. Performance of memcpy vs in-place based CUDA based vector element sum.
3. Comparing various launch configs for CUDA based vector element sum (memcpy).
4. Comparing various launch configs for CUDA based vector element sum (in-place).
Sum with in-place strategies of CUDA mode (reduce)
1. Comparing various launch configs for CUDA based vector element sum (in-place).
2. About Me
Explainable AI: Making ML and DL models more interpretable
Aditya Bhattacharya
I am currently working as an Explainable AI Researcher at KU Leuven, Belgium, with overall experience of 7 years in Data Science, Machine Learning, IoT & Software Engineering. Prior to my current role, I worked in various roles at organizations like West Pharma, Microsoft & Intel to democratize AI adoption for industrial solutions. As the AI Lead at West Pharma, I contributed to forming the AI Centre of Excellence, managing and leading a global team of 10+ members focused on building AI products.
Apart from my day job, I am an AI Researcher, Executive Member, and Faculty at an NGO called MUST Research (https://must.co.in/). I am also a content creator and adjunct faculty at UpGrad (https://www.upgrad.com/).
Website: https://aditya-bhattacharya.net/
LinkedIn: https://www.linkedin.com/in/aditya-bhattacharya-b59155b6/
3. Key Topics
1. Conceptual understanding of XAI methods
• Understanding the need for XAI
• Dimensions of explainability
• Approaches for explainability
• Different types of explainability methods
2. Discussions on existing Python frameworks for explainability
3. ENDURANCE - End User Centric Artificial Intelligence
• Understanding open challenges of XAI
• Industry best practices
• Bridging the XAI-end user gaps
8. Dimensions of explainability
• Data: Data-centric explanation methods revolve around the underlying data that is being modeled: understanding the data, identifying its limitations, and using conventional data analytics to generate actionable insights.
• Model: Model-based interpretability techniques often help us to understand how the input data is mapped to the output predictions using certain approximation methods.
• Outcomes: The outcome dimension of explainability is about understanding why and how a certain prediction or decision is made by an ML model, and how changes in the input change the output of the model.
• End users: The final dimension of explainability is all about creating the right level of abstraction and including the right amount of detail for the final consumers of the ML models, so that the outcomes are reliable and trustworthy for any non-technical end user.
Applied Machine Learning Explainability Techniques, A. Bhattacharya
16. Popular frameworks for XAI
• LIME: Local Interpretable Model-agnostic Explanations is an interpretability framework that works on structured data, text, and image classifiers.
• SHAP: SHapley Additive exPlanations is a game-theoretic approach to explain the output of any machine learning model.
• DALEX: moDel Agnostic Language for Exploration (DALEX) x-rays any model and helps to explore and explain its behaviour.
• Explainer dashboards: Explainerdashboard makes it convenient to quickly deploy a dashboard web app that explains an ML model.
• TCAV: Testing with Concept Activation Vectors (TCAV) is a new interpretability method to understand what signals your neural network models use for prediction.
17.
• Behind the workings of LIME lies the assumption that every complex model is linear on a local scale. LIME tries to fit a simple model around a single observation that will mimic how the global model behaves at that locality.
• Create the perturbed data and predict the output on the perturbed data.
• Create discretized features and find the Euclidean distance of the perturbed data to the original observation.
• Convert distance to a similarity score and select the top n features for the model.
• Create a linear model and explain the prediction.
18.
The lime package is on PyPI: `pip install lime`
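To make the steps above concrete, here is a minimal usage sketch of the lime package on tabular data (my own illustration, not taken from the deck); the dataset, model, and parameter choices are arbitrary and assume scikit-learn and lime are installed.

```python
# Illustrative LIME example (assumed setup, not from the original slides).
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

data = load_iris()
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(data.data, data.target)

explainer = LimeTabularExplainer(
    data.data,
    feature_names=data.feature_names,
    class_names=list(data.target_names),
    discretize_continuous=True,   # discretized features, as in the steps above
)

# Explain one observation: LIME perturbs it, weights the perturbed samples by
# proximity, and fits a local linear surrogate model around that point.
exp = explainer.explain_instance(data.data[0], model.predict_proba, num_features=4)
print(exp.as_list())              # (feature, weight) pairs for the local model
```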
20.
There is a high-speed exact algorithm for tree ensemble methods (Tree SHAP arXiv paper). Fast C++ implementations are supported for XGBoost, LightGBM, CatBoost, and scikit-learn tree models!
• SHAP assigns each feature an importance value for a particular prediction.
• Its novel components include the identification of a new class of additive feature importance measures, and theoretical results showing there is a unique solution in this class with a set of desirable properties.
• Typically, SHAP values try to explain the output of a model (function) as a sum of the effects of each feature being introduced into a conditional expectation. Importantly, for non-linear functions the order in which features are introduced matters.
SHAP can be installed from PyPI.
22.
The following figure from the KDD '18 paper, Consistent Individualized Feature Attribution for Tree Ensembles, summarizes this in a nice way!
[Figures: SHAP Summary Plot, SHAP Dependence Plots, SHAP Gradient Explainer for Images]
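As a minimal sketch of how Tree SHAP is typically used (an illustration with assumed packages, shap and xgboost, rather than code from the deck), the summary and dependence plots mentioned above can be produced like this:

```python
# Illustrative Tree SHAP example (assumed setup, not from the original slides).
import shap
import xgboost
from sklearn.datasets import load_breast_cancer

data = load_breast_cancer()
model = xgboost.XGBClassifier(n_estimators=100).fit(data.data, data.target)

explainer = shap.TreeExplainer(model)            # fast, exact SHAP values for tree ensembles
shap_values = explainer.shap_values(data.data)   # per-feature attributions for each prediction

# Global view: per-feature importance and effect direction across the dataset.
shap.summary_plot(shap_values, data.data, feature_names=data.feature_names)
# Effect of a single feature (and its interactions) across the dataset.
shap.dependence_plot(0, shap_values, data.data, feature_names=data.feature_names)
```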
25.
Testing with Concept Activation Vectors (TCAV) is a new interpretability method to understand what signals your neural network models use for prediction.
What's special about TCAV compared to other methods? TCAV shows the importance of high-level concepts (e.g., color, gender, race) for a prediction class - this is how humans communicate!
TCAV gives an explanation that is generally true for a class of interest, beyond one image (global explanation). For example, for a given class, we can show how much race or gender was important for classifications in InceptionV3, even though neither race nor gender labels were part of the training input!
`pip install tcav` - https://github.com/tensorflow/tcav
26.
Concept Activation Vectors (CAVs) provide an interpretation of a neural net's internal state in terms of human-friendly concepts. TCAV uses directional derivatives to quantify the degree to which a user-defined concept is vital to a classification result - for example, how sensitive a prediction of “zebra” is to the presence of stripes.
TCAV essentially learns 'concepts' from examples. For instance, TCAV needs a couple of examples of 'female', and something 'not female', to learn a “gender” concept. The goal of TCAV is to determine how much a concept (e.g., gender, race) was necessary for a prediction in a trained model, even if the concept was not part of the training.
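A conceptual sketch of the CAV idea described above (my own simplification using scikit-learn, not the tcav package's actual API): train a linear classifier that separates layer activations of concept examples from random examples, take the normal to its decision boundary as the CAV, and score the concept by the sign of directional derivatives along it. The activation and gradient arrays are assumed to be precomputed from your own network.

```python
# Conceptual CAV / TCAV sketch (illustrative only; the real tcav package works differently).
import numpy as np
from sklearn.linear_model import LogisticRegression

def compute_cav(concept_acts, random_acts):
    """Both inputs: (n_examples, n_units) activations at one layer of a trained network."""
    X = np.vstack([concept_acts, random_acts])
    y = np.array([1] * len(concept_acts) + [0] * len(random_acts))
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    cav = clf.coef_[0]                      # normal to the concept / non-concept boundary
    return cav / np.linalg.norm(cav)

def tcav_score(class_logit_grads, cav):
    """class_logit_grads: (n_examples, n_units) gradients of the target class logit
    w.r.t. the same layer's activations. The score is the fraction of examples whose
    directional derivative along the CAV is positive (the concept pushes the prediction up)."""
    directional_derivs = class_logit_grads @ cav
    return float(np.mean(directional_derivs > 0))
```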
28. All these frameworks are great and can bring explainability to a great extent, but can non-expert consumers of AI models interpret these explanation methods?
30. Open challenges of XAI
• Shifting focus between the model developer and the end user
• Lack of stakeholder participation
• Application-specific challenges
• Lack of quantitative evaluation metrics
• Lack of actionable explanations
• Lack of contextual explanations
32.
• Identify the target audience of XAI and their usability context
• Shortlisting the XAI techniques based on the user's needs
• Human-centered XAI: an iterative process of translating and evaluating XAI in specific domains involving the end user
• The importance of the feedback loop in XAI
• The importance of scalability in the design process
• Toggling between the data, the interface, and actionable insights
33. Adopting a data-first approach for explainability
34. Emphasizing prescriptive insights for explainability
35. Emphasizing interactive machine learning for explainability
36. Summary
1. Conceptual understanding of XAI methods
2. Discussions on existing Python frameworks for explainability
3. ENDURANCE - End User Centric Artificial Intelligence
https://github.com/PacktPublishing/Applied-Machine-Learning-Explainability-Techniques
Make ML models explainable and trustworthy for practical applications using LIME, SHAP and more.
Amazon link - https://amzn.to/3OAZZPf