The growing adoption of generative AI (Gen AI), especially large language models (LLMs), has reignited the discussion around AI regulation: ensuring that AI/ML systems are responsibly trained and deployed. Unfortunately, this effort is complicated by multiple governmental organizations and regulatory bodies releasing their own guidelines and policies, with little to no agreement on the definition of terms.
In this talk, we provide an overview of the key Responsible AI aspects: explainability, bias, and accountability. We then outline common Gen AI usage patterns and show how the three aspects can be integrated at different stages of the LLMOps (MLOps for LLMs) pipeline. We summarize these learnings as Gen AI design patterns that can be readily applied to enterprise use-cases.
Regulating Generative AI - LLMOps Pipelines with Transparency (Debmalya Biswas)
The growing adoption of generative AI (Gen AI), especially large language models (LLMs), has reignited the discussion around AI regulation: ensuring that AI/ML systems are responsibly trained and deployed. Unfortunately, this effort is complicated by multiple governmental organizations and regulatory bodies releasing their own guidelines and policies, with little to no agreement on the definition of terms.
Rather than trying to understand and regulate all types of AI, this talk recommends a different (and practical) approach based on AI transparency: transparently outlining the capabilities of the AI system based on its training methodology, and setting realistic expectations with respect to what it can (and cannot) do.
We outline LLMOps architecture patterns and show how the proposed approach can be integrated at different stages of the LLMOps pipeline, capturing the model's capabilities. In addition, the AI system provider specifies scenarios where (they believe) the system can make mistakes, and recommends a 'safe' approach with guardrails for those scenarios.
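The capability declaration plus guardrail routing described above could be sketched as follows. This is a minimal illustration, not the talk's implementation; the `ModelCard` class, the task names, and the escalate/refuse responses are all invented for the example:

```python
from dataclasses import dataclass

@dataclass
class ModelCard:
    """Transparency record published alongside a deployed LLM (illustrative)."""
    name: str
    capabilities: list        # tasks the provider claims the model handles well
    known_failure_modes: list # scenarios the provider flags as unreliable

def guarded_invoke(card: ModelCard, task: str, prompt: str) -> str:
    # Scenarios the provider has flagged as error-prone bypass the model entirely.
    if task in card.known_failure_modes:
        return "ESCALATE: task flagged as unreliable; human review required"
    # Tasks outside the declared capabilities are refused, setting expectations.
    if task not in card.capabilities:
        return "REFUSE: task outside declared capabilities"
    return f"LLM_RESPONSE({prompt})"  # placeholder for the real model call

card = ModelCard("support-bot",
                 capabilities=["faq", "summarization"],
                 known_failure_modes=["legal_advice"])
print(guarded_invoke(card, "faq", "How do I reset my password?"))
print(guarded_invoke(card, "legal_advice", "Can I break my lease?"))
```

The design choice here is that the guardrail decision is driven entirely by the provider's published declarations, which is what makes the behaviour auditable by a regulator.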
Ethical AI: Establish an AI/ML Governance framework addressing Reproducibility, Explainability, Bias & Accountability for Enterprise AI use-cases.
Presentation on “Open Source Enterprise AI/ML Governance” at Linux Foundation’s Open Compliance Summit, Dec 2020 (https://events.linuxfoundation.org/open-compliance-summit/)
Full article: https://towardsdatascience.com/ethical-ai-its-implications-for-enterprise-ai-use-cases-and-governance-81602078f5db
This article introduces Generative AI, the techniques used to build it, and its potential uses. Two key points: Generative AI is still in its early stages but has already shown promising results, and it can be used to create synthetic data that is indistinguishable from real data.
https://www.ltimindtree.com/wp-content/uploads/2023/01/DeepPoV-Generative-AI.pdf
AI has long remained an exciting area for scientists and a fuzzy one for everyone else. We sometimes talk about Artificial General Intelligence and Artificial Narrow Intelligence (ANI) in the same vein. This is an attempt to explain the technology behind ANI in simple, layman's terms, with a focus on its business applications and what to use when.
Artificial intelligence (AI) is a multidisciplinary field of science and engineering whose goal is to create intelligent machines.
We believe that AI will be a force multiplier on technological progress in our increasingly digital, data-driven world. This is because everything around us today, ranging from culture to consumer products, is a product of intelligence.
The State of AI Report is now in its sixth year. Consider this report a compilation of the most interesting things we’ve seen, with the goal of triggering an informed conversation about the state of AI and its implications for the future.
We consider the following key dimensions in our report:
Research: Technology breakthroughs and their capabilities.
Industry: Areas of commercial application for AI and its business impact.
Politics: Regulation of AI, its economic implications and the evolving geopolitics of AI.
Safety: Identifying and mitigating catastrophic risks that highly-capable future AI systems could pose to us.
Predictions: What we believe will happen in the next 12 months and a 2022 performance review to keep us honest.
Produced by Nathan Benaich and the Air Street Capital team
How QA Ensures that Enterprise AI Initiatives Succeed (Cognizant)
The euphoria around artificial intelligence (AI) focuses primarily on what it can do, leaving the hard work for expert teams to sort through. A curated quality assurance (QA) strategy, focused on parameters such as data, algorithms, biases, and digital ethics, can ensure that AI initiatives deliver.
Machine learning and artificial intelligence are two of the most rapidly growing and transformative technologies of our time. These technologies are revolutionizing the way businesses operate, improving healthcare outcomes, and transforming the way we live our daily lives. Learn more about it in the PPT below!
In today's tech-driven world, the integration of artificial intelligence (AI) into applications has become increasingly prevalent. From personalized recommendations to intelligent chatbots, AI enhances user experiences and optimizes processes. However, building an AI app can seem daunting to those unfamiliar with the process. Fear not! This guide aims to demystify the journey, offering step-by-step insights into how to build an AI app from scratch.
Did you know that a recent study by McKinsey & Company highlighted that 84% of organizations are concerned about bias in their AI algorithms? However, there's a solution to this problem. Upholding best practices can significantly mitigate biases in AI for enterprises, particularly given the challenges posed by compliance and the rapid dissemination of information through digital media.
In this E42 Blog post, we delve into an array of best practices to mitigate bias and hallucinations in AI models. A few of these best practices include:
Model optimization: This practice focuses on enhancing model performance and reducing bias through various optimization techniques
Understanding model architecture: This involves a deep dive into the structure of AI models to identify and rectify biases
Human interactions: This emphasizes the critical role of human feedback in the training loop in ensuring unbiased AI outcomes
On-premises large language models: This practice involves utilizing on-premises LLMs to maintain control over data and model training
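As a toy illustration of the kind of check such practices rely on, a demographic-parity gap can be computed between groups before a model's outputs are accepted. The metric below is a standard fairness measure; the data and the idea of gating on it are our own example, not taken from the blog post:

```python
def demographic_parity_gap(predictions, groups):
    """Absolute gap in positive-prediction rate between the best- and
    worst-treated groups (0.0 means perfectly equal rates)."""
    rates = {}
    for g in set(groups):
        idx = [i for i, grp in enumerate(groups) if grp == g]
        rates[g] = sum(predictions[i] for i in idx) / len(idx)
    vals = sorted(rates.values())
    return vals[-1] - vals[0]

# Hypothetical binary predictions for two groups of applicants.
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
gap = demographic_parity_gap(preds, groups)
print(f"parity gap = {gap:.2f}")  # group a rate 0.75, group b rate 0.25
```

A pipeline applying the practices above might block deployment whenever this gap exceeds an agreed threshold, forcing a round of the human review the post recommends.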
Future of Machine Learning: Ways ML and AI Will Drive Innovation & Change (Pixel Crayons)
Did you know? By 2022, the global ML market is expected to be worth $8.81 billion.
It is true that machine learning and AI will drive innovation in various industries in the years to come.
Want to know how, or what the future of machine learning and AI will be? Here are some points on what’s in store for machine learning as it continues its growth trajectory.
It is a good idea to hire AI developers to develop innovative solutions with machine learning.
Hiring a top-notch machine learning development company in India can help corporations streamline their operations and stay competitive in the marketplace.
https://bit.ly/3zl85FF
User Experience of AI - How to Marry the Two for Ultimate Success? (Koru UX Design)
Want UX and AI to mean more than just buzz words? Download our whitepaper on how to combine the two to create scalable enterprise products at a fast pace. Learn from real-life examples on how smart adoption solved crucial business challenges across various industries. Download now using this link, https://www.koruux.com/uxfreebies/ux-of-ai/
As more and more companies in a range of industries adopt machine learning and more advanced AI algorithms, the ability to provide understandable explanations for different stakeholders becomes critical. If people don’t know why an AI system made a decision, they may not trust the outcome.
Data scientists have a duty to ensure they analyze data and train machine learning models responsibly; respecting individual privacy, mitigating bias, and ensuring transparency. This module explores some considerations and techniques for applying responsible machine learning principles.
State of AI Report 2023 - ONLINE (ssuser2750ef)
When conducting a PEST analysis for the Syrian conflict, it's important to consider the political, economic, socio-cultural, and technological factors that have influenced and continue to impact the situation in Syria. Here's a high-level overview of a PEST analysis for the Syrian conflict:
1. Political Factors:
- Government Instability: Ongoing civil war and conflict have led to political instability and a complex power struggle between various factions and international players.
- Foreign Intervention: Involvement of external powers and regional actors has exacerbated the conflict and added geopolitical complexities to the situation.
- International Relations: Relations with global powers like the United States, Russia, and regional players like Iran and Turkey significantly impact the conflict dynamics.
2. Economic Factors:
- Humanitarian Crisis: The conflict has resulted in a severe humanitarian crisis, causing widespread displacement, destruction of infrastructure, and economic decline.
- Sanctions and Trade Barriers: International sanctions and disrupted trade have further worsened the economic situation in Syria, affecting the livelihoods of the population.
- Resource Depletion: Conflict-driven resource depletion, including loss of agricultural lands and disruption of industries, has weakened the economy.
3. Socio-cultural Factors:
- Civilian Suffering: The conflict has led to a significant loss of life, displacement of populations, and severe trauma among civilians, impacting social cohesion and community structures.
- Ethnic and Religious Divisions: Deep-seated ethnic and religious divisions have fueled the conflict, leading to sectarian tensions and societal fragmentation.
- Refugee Crisis: The conflict has triggered a massive refugee crisis, with millions of Syrians seeking asylum in neighboring countries and beyond, straining regional stability.
4. Technological Factors:
- Communication and Propaganda: Technology, including social media, has been used for communication, mobilization, and spreading propaganda by various actors in the conflict.
- Warfare Technology: Advancements in warfare technology and the use of drones, cyber warfare, and other advanced weaponry have transformed the nature of conflict in Syria.
- Cybersecurity Concerns: The conflict has also raised concerns about cybersecurity threats, misinformation campaigns, and digital vulnerabilities in the region.
This analysis provides a broad understanding of the multifaceted nature of the Syrian conflict, highlighting the diverse factors at play and the complex challenges facing Syria and the international community.
Understanding the New World of Cognitive Computing (DATAVERSITY)
Cognitive Computing is a rapidly developing technology that has reached practical application and implementation. So what is it? Do you need it? How can it benefit your business?
In this webinar, a panel of experts in Cognitive Computing will discuss the technology, its current practical applications, and where it is going. The discussion will start with a review of a recent survey produced by DATAVERSITY on how Cognitive Computing is currently understood by your peers. The panel will also review many components of the technology, including:
Cognitive Analytics
Machine Learning
Deep Learning
Reasoning
And next generation artificial intelligence (AI)
Get involved in the discussion by presenting your own questions to the panel.
In the Dark? Understanding Big Data & AI: Talent Acquisition Strategies for 2018 (Yoh Staffing Solutions)
Big Data and AI have changed the way companies acquire talent. Is your organization one of them? Shed some light on this innovation with these valuable tips, and gain a better understanding of the implications Big Data and AI can have on your talent acquisition strategy.
Constraints Enabled Autonomous Agent Marketplace: Discovery and Matchmaking (Debmalya Biswas)
The recent advances in Generative AI have renewed the discussion around Auto-GPT, a form of autonomous agent that can execute complex tasks, e.g., make a sale or plan a trip. We focus on the discovery aspect of agents, i.e., identifying the agent(s) capable of executing a given task. This implies the existence of a marketplace with a registry of agents, with a well-defined description of each agent's capabilities and constraints.
In this paper, we outline a constraints-based model to specify agent services. We show how the constraints of a composite agent can be derived and described in a manner consistent with the constraints of its component agents. Finally, we discuss approximate matchmaking, and show how the notion of bounded inconsistency can be exploited to discover agents more efficiently.
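The discovery step might be sketched roughly as follows: an agent matches a task if at most a bounded number of the task's constraints are inconsistent with the agent's own. The registry contents, constraint keys, and violation-counting rule are our own simplification for illustration, not the paper's formal model:

```python
def matches(agent_constraints: dict, task_constraints: dict, bound: int = 0) -> bool:
    """Approximate matchmaking: accept an agent if at most `bound` of the
    task's constraints conflict with the agent's declared constraints."""
    violations = sum(
        1 for key, required in task_constraints.items()
        if key in agent_constraints and agent_constraints[key] != required
    )
    return violations <= bound

# Hypothetical agent registry with declared capabilities/constraints.
registry = {
    "travel-agent": {"domain": "travel", "payment": "card", "region": "EU"},
    "sales-agent":  {"domain": "sales",  "payment": "invoice"},
}

task = {"domain": "travel", "payment": "cash"}

# Exact matchmaking (bound=0) finds nothing; bounded inconsistency (bound=1)
# still surfaces the travel agent as an approximate candidate.
exact  = [name for name, c in registry.items() if matches(c, task)]
approx = [name for name, c in registry.items() if matches(c, task, bound=1)]
print(exact, approx)
```

The bound is what makes discovery more efficient in practice: rather than failing on a single mismatched constraint, the marketplace can return near-matches and let negotiation resolve the remaining inconsistency.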
Enterprise adoption of AI/ML services has significantly accelerated in recent years. However, the majority of ML models are still developed with the goal of solving a single task, e.g., prediction, classification. In this talk, we emphasize the compositionality aspect that enables seamless composition/orchestration of existing data and models to address complex multi-domain use-cases. This enables reuse, agility, and efficiency in model development and maintenance efforts. We then extend this concept to the Generative AI world, discussing the different LLMOps architectural patterns enabling composition of Large Language Models (LLMs) and AI Agents.
More Related Content
Similar to Responsible Generative AI Design Patterns
Reinforcement Learning (RL) refers to a branch of Artificial Intelligence (AI) that is able to achieve complex goals by maximizing a reward function in real-time. Given that RL-based approaches can be applied to essentially any optimization problem, their enterprise adoption is picking up fast. In this talk, we will focus on Industrial Control Systems, and show why RL is a 'best fit' for many control optimization problems, from controlling combustion engines, to robotic arms cutting metal, to air conditioning systems in buildings.
Enterprise adoption of AI/ML services has significantly accelerated in the last few years. However, the majority of ML models are still developed with the goal of solving a single task, e.g., prediction, classification. In this context, Compositional AI envisions seamless composition of existing AI/ML services to provide a new (composite) AI/ML service, capable of addressing complex multi-domain use-cases. In this work, we consider two MLOps aspects that need to be enabled to realize Compositional AI scenarios: (i) integration of DataOps and MLOps, and (ii) extension of the integrated DataOps-MLOps pipeline such that inferences made by a deployed ML model can be provided as a training dataset for a new model. In an enterprise AI/ML environment, this enables reuse, agility, and efficiency in development and maintenance efforts.
A Privacy Framework for Hierarchical Federated Learning (Debmalya Biswas)
Federated Learning (FL) enables heterogeneous entities to collaboratively develop an optimized (global) model by sharing data and models in a privacy-preserving fashion. We consider a Hierarchical Federated Learning (HFL) environment with data ownership split among the entities representing the edge nodes. Each node can train models on the data it owns, as well as request access to data and model(s) owned by its descendant nodes, to optimize its models, perform transfer learning on new data, and develop an ensemble model. Unfortunately, a practical realization of HFL is challenging today due to issues with data/model lineage tracking and providing subsequent privacy guarantees. In this paper, we propose a conceptual framework for HFL by capturing the data/model attributes at each node, including their privacy exposure. The framework enables scenarios where a node output may expose certain attributes of its underlying data, as well as identifying models in the hierarchy that need to be updated once a user whose data was used in their training has opted out. By designing the computations appropriately and limiting the exposure by the nodes, we show that different levels of privacy can be guaranteed.
Edge AI Framework for Healthcare Applications (Debmalya Biswas)
Edge AI enables intelligent solutions to be deployed on edge devices, reducing latency, allowing offline execution, and providing strong privacy guarantees. Unfortunately, achieving efficient and accurate execution of AI algorithms on edge devices, with limited power and computational resources, raises several deployment challenges. Existing solutions are very specific to a hardware platform/vendor. In this work, we present the MATE framework that provides tools to (1) foster model-to-platform adaptations, (2) enable validation of the deployed models proving their alignment with the originals, and (3) empower engineers and architects to do it efficiently using repeated, but rapid development cycles. We finally show the practical utility of the proposal by applying it on a real-life healthcare body-pose estimation app.
Abstract. Enterprise adoption of AI/ML services has significantly accelerated in the last few years. However, the majority of ML models are still developed with the goal of solving a single task, e.g., prediction, classification. In this talk, Debmalya Biswas will present the emerging paradigm of Compositional AI, also known as Compositional Learning. Compositional AI envisions seamless composition of existing AI/ML services to provide a new (composite) AI/ML service, capable of addressing complex multi-domain use-cases. In an enterprise context, this enables reuse, agility, and efficiency in development and maintenance efforts.
Abstract. With chatbots gaining traction and their adoption growing in different verticals, e.g. Health, Banking, Dating; and users sharing more and more private information with chatbots — studies have started to highlight the privacy risks of chatbots. In this paper, we propose two privacy-preserving approaches for chatbot conversations. The first approach applies ‘entity’ based privacy filtering and transformation, and can be applied directly on the app (client) side. It however requires knowledge of the chatbot design to be enabled. We present a second scheme based on Searchable Encryption that is able to preserve user chat privacy, without requiring any knowledge of the chatbot design. Finally, we present some experimental results based on a real-life employee Help Desk chatbot that validates both the need and feasibility of the proposed approaches.
Reinforcement Learning based HVAC Optimization in Factories (Debmalya Biswas)
Heating, Ventilation and Air Conditioning (HVAC) units are responsible for maintaining the temperature and humidity settings in a building. Studies have shown that HVAC accounts for almost 50% energy consumption in a building and 10% of global electricity usage. HVAC optimization thus has the potential to contribute significantly towards our sustainability goals, reducing energy consumption and CO2 emissions. In this work, we explore ways to optimize the HVAC controls in factories. Unfortunately, this is a complex problem as it requires computing an optimal state considering multiple variable factors, e.g. the occupancy, manufacturing schedule, temperature requirements of operating machines, air flow dynamics within the building, external weather conditions, energy savings, etc. We present a Reinforcement Learning (RL) based energy optimization model that has been applied in our factories. We show that RL is a good fit as it is able to learn and adapt to multi-parameterized system dynamics in real-time. It provides around 25% energy savings on top of the previously used Proportional–Integral–Derivative (PID) controllers.
Delayed Rewards in the context of Reinforcement Learning based Recommender ...Debmalya Biswas
We present a Reinforcement Learning (RL) based approach to implement Recommender systems. The results are based on a real-life Wellness app that is able to provide personalized health / activity related content to users in an interactive fashion. Unfortunately, current recommender systems are unable to adapt to continuously evolving features, e.g. user sentiment, and scenarios where the RL reward needs to computed based on multiple and unreliable feedback channels (e.g., sensors, wearables). To overcome this, we propose three constructs: (i) weighted feedback channels, (ii) delayed rewards, and (iii) rewards boosting, which we believe are essential for RL to be used in Recommender Systems.
Building an enterprise Natural Language Search Engine with ElasticSearch and ...Debmalya Biswas
Presented at Berlin Buzzwords 2019
https://berlinbuzzwords.de/19/session/building-enterprise-natural-language-search-engine-elasticsearch-and-facebooks-drqa
Personalized services attract high-value customers. Knowing the preferences and habits of an individual customer, it is possible to offer to that customer well customized and adapted services, matching his needs and desires. This is advantageous for the entity offering the service (e.g., a retailer) as well, as it helps in creating additional sales or improve customer retention. The main unsolved problem today is that the profile of each individual customer would be necessary in order to create such services, posing severe risks regarding privacy and data protection. This paper proposes efficient encryption schemes that allow profiling to be outsourced while preserving privacy. The schemes ensure that the customer is always in control of his profile data, at the same time making shopping data across multiple retailers available to third party service providers to be able to provide targeted services.
Privacy Policies Change Management for SmartphonesDebmalya Biswas
The ever increasing popularity of apps stems from their ability to provide highly customized services for the user.
The flip side is that to provide such customized services, apps need access to very sensitive personal user information. This has led to a lot of rogue apps that e.g. pass personal information to 3rd party Ad servers in the background. Studies have shown that current app vetting processes which are mainly restricted to install time verification mechanisms are incapable of detecting and preventing such attacks. We argue that the missing fundamental aspect here is the inability to capture and control runtime characteristics of apps, e.g. we need to know not only the list of sensors that need to be accessed by an app but also their frequency of access. This leads to the need for an expressive policy language that in addition to the list of sensors, also allows specifying when, where and how frequently can they be accessed.
An expressive policy language has the disadvantage of making the task of an average user more difficult in setting and analyzing the consequences of his privacy settings. Further, privacy polices evolve over time. Over time, users are likely to change their privacy settings, as a response to a recently discovered vulnerability, or to be able to install that “much desired” app, etc. Such a policy change affects both already installed (may no longer be compliant) and previously rejected apps (may be compliant now).
In this paper, we propose an integrated privacy add-on that (i) compares the apps profiles vs. user’s privacy settings, outlining the points of conflict as well as the different ways in which they can be resolved. And (ii) provides efficient change management with respect to any changes in user privacy settings.
UiPath Test Automation using UiPath Test Suite series, part 3DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
UI automation Introduction,
UI automation Sample
Desktop automation flow
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
DevOps and Testing slides at DASA ConnectKari Kakkonen
My and Rik Marselis slides at 30.5.2024 DASA Connect conference. We discuss about what is testing, then what is agile testing and finally what is Testing in DevOps. Finally we had lovely workshop with the participants trying to find out different ways to think about quality and testing in different parts of the DevOps infinity loop.
Generative AI Deep Dive: Advancing from Proof of Concept to ProductionAggregage
Join Maher Hanafi, VP of Engineering at Betterworks, in this new session where he'll share a practical framework to transform Gen AI prototypes into impactful products! He'll delve into the complexities of data collection and management, model selection and optimization, and ensuring security, scalability, and responsible use.
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024Albert Hoitingh
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview. Including the concepts of Customer Key and Double Key Encryption.
Epistemic Interaction - tuning interfaces to provide information for AI supportAlan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
A tale of scale & speed: How the US Navy is enabling software delivery from l...sonjaschweigert1
Rapid and secure feature delivery is a goal across every application team and every branch of the DoD. The Navy’s DevSecOps platform, Party Barge, has achieved:
- Reduction in onboarding time from 5 weeks to 1 day
- Improved developer experience and productivity through actionable findings and reduction of false positives
- Maintenance of superior security standards and inherent policy enforcement with Authorization to Operate (ATO)
Development teams can ship efficiently and ensure applications are cyber ready for Navy Authorizing Officials (AOs). In this webinar, Sigma Defense and Anchore will give attendees a look behind the scenes and demo secure pipeline automation and security artifacts that speed up application ATO and time to production.
We will cover:
- How to remove silos in DevSecOps
- How to build efficient development pipeline roles and component templates
- How to deliver security artifacts that matter for ATO’s (SBOMs, vulnerability reports, and policy evidence)
- How to streamline operations with automated policy checks on container images
SAP Sapphire 2024 - ASUG301 building better apps with SAP Fiori.pdfPeter Spielvogel
Building better applications for business users with SAP Fiori.
• What is SAP Fiori and why it matters to you
• How a better user experience drives measurable business benefits
• How to get started with SAP Fiori today
• How SAP Fiori elements accelerates application development
• How SAP Build Code includes SAP Fiori tools and other generative artificial intelligence capabilities
• How SAP Fiori paves the way for using AI in SAP apps
Observability Concepts EVERY Developer Should Know -- DeveloperWeek Europe.pdfPaige Cruz
Monitoring and observability aren’t traditionally found in software curriculums and many of us cobble this knowledge together from whatever vendor or ecosystem we were first introduced to and whatever is a part of your current company’s observability stack.
While the dev and ops silo continues to crumble….many organizations still relegate monitoring & observability as the purview of ops, infra and SRE teams. This is a mistake - achieving a highly observable system requires collaboration up and down the stack.
I, a former op, would like to extend an invitation to all application developers to join the observability party will share these foundational concepts to build on:
Securing your Kubernetes cluster_ a step-by-step guide to success !KatiaHIMEUR1
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
2. AGENDA
• Enterprise AI
• Ethical / Responsible AI
• Explainability
• Fairness & Bias
• Accountability
• Generative AI Usage Patterns
• Generative AI - Responsible Design Patterns
3. ENTERPRISE AI
• Enterprise AI use-cases are pervasive, spanning:
• Natural Language Processing (NLP): Text Classification, Chatbots (Dialog Systems), Summarization, Natural Language Search
• Computer Vision / Image Processing: Object Detection, Image Classification, Optical Character Recognition (OCR)
• Predictive Analytics: Demand Forecasting (Churn Prediction), Recommendations, Predictive Maintenance of Machines
4. RESPONSIBLE AI
“Ethical AI, also known as responsible AI, is the practice of using AI with good intention to empower employees and businesses, and fairly impact customers and society. Ethical AI enables companies to engender trust and scale AI with confidence.” [1]
Failing to operationalize Ethical AI can not only expose enterprises to reputational, regulatory, and legal risks; but also lead to wasted resources, inefficiencies in product development, and even an inability to use data to train AI models. [2]
[1] R. Porter. Beyond the Promise: Implementing Ethical AI, 2020 (link)
[2] R. Blackman. A Practical Guide to Building Ethical AI, 2020 (link)
5. REGULATIONS
• Good news: there has been a recent trend towards ensuring that AI applications are responsibly trained and deployed, in line with enterprise strategy and policies.
• Bad news: efforts have been complicated by different governmental organizations and regulatory bodies releasing their own guidelines and policies, with little to no standardization on the definition of terms.
• For example, the EU AI Act mandates a different set of dos & don’ts depending on the ‘risk level’ of an AI application. However, quantifying the risk level of an AI application is easier said than done, as it basically requires classifying how the capabilities of a non-deterministic system will impact users and systems that might interact with it in the future.
6. KEY RESPONSIBLE AI ASPECTS
• Explainability
• Bias & Fairness
• Accountability
• Reproducibility
• Data Privacy
*D. Biswas. Ethical AI: its Implications for Enterprise AI Use-cases and Governance. Towards Data Science (link)
*D. Biswas. Privacy Preserving Chatbot Conversations. 3rd IEEE AIKE 2020: 179-182
7. EXPLAINABLE AI
• Explainable AI is an umbrella term for a range of tools, algorithms and methods, which accompany AI model predictions with explanations.
• Explainability of AI models ranks high among the list of ‘non-functional’ AI features to be considered by enterprises.
• For example, this implies having to explain why an ML model profiled a user to be in a specific segment, which led to him/her receiving an advertisement.
Pipeline: (Labeled) Data → Train ML Model → Predictions → Explanation Model → Explainable Predictions
8. EXPLAINABLE AI FRAMEWORKS - LIME
• Local Interpretable Model-Agnostic Explanations (LIME*) provides easy-to-understand explanations of a prediction by training an explainability model based on samples around a prediction.
• The approximate nature of the explainability model might limit its usage for compliance needs.
*M.T. Ribeiro, S. Singh, C. Guestrin. “Why Should I Trust You?” Explaining the Predictions of Any Classifier, 2016 (link)
LIME output showing the important features, positively and negatively impacting the model’s prediction.
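The local-surrogate idea behind LIME can be sketched without the library itself: perturb the input, query the black-box model, weight the perturbed samples by their proximity to the original point, and fit a simple linear model whose slope serves as the explanation. The `black_box` function, kernel width, and sample count below are illustrative assumptions for a one-feature case, not LIME's actual API:

```python
import math
import random

def black_box(x):
    # Stand-in for an opaque model: we only see predictions, not internals.
    return x ** 2

def lime_style_explain(f, x0, n_samples=500, width=0.5, kernel=1.0):
    """Fit a proximity-weighted linear surrogate around x0 (LIME-style sketch)."""
    random.seed(0)  # reproducible perturbations
    xs = [x0 + random.gauss(0, width) for _ in range(n_samples)]
    ys = [f(x) for x in xs]
    # Proximity kernel: perturbations closer to x0 get higher weight.
    ws = [math.exp(-((x - x0) ** 2) / kernel ** 2) for x in xs]
    wsum = sum(ws)
    xm = sum(w * x for w, x in zip(ws, xs)) / wsum
    ym = sum(w * y for w, y in zip(ws, ys)) / wsum
    # Weighted least-squares slope of the local linear surrogate.
    slope = (sum(w * (x - xm) * (y - ym) for w, x, y in zip(ws, xs, ys))
             / sum(w * (x - xm) ** 2 for w, x in zip(ws, xs)))
    return slope  # local feature effect, analogous to a LIME feature weight

slope = lime_style_explain(black_box, x0=2.0)
# slope should be close to the true local gradient 2 * x0 = 4
```

The slide's caveat shows up directly here: the surrogate's slope is only a local approximation of the model, which is why its fidelity may be insufficient for compliance purposes.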
9. EXPLAINABLE AI - FEASIBILITY
• Machine (Deep) Learning algorithms vary in the level of accuracy and explainability that they can provide; the two are often inversely proportional.
• Logistic Regression and Decision Trees are highly explainable. Explainability starts becoming more difficult as we move to Random Forests, which are basically an ensemble of Decision Trees. At the other end of the spectrum are Neural Networks (Deep Learning), which have shown human-level accuracy.
Explainability vs. Accuracy spectrum: Logistic Regression → Decision Trees → Random Forest (Ensemble of Decision Trees) → Deep Learning (Neural Networks)
10. EXPLAINABLE AI - ABSTRACTION
“important thing is to explain the right thing to the right person in the right way at the right time”*
Singapore AI Governance framework:“technical explainability may not always be enlightening, esp. to the
man in the street… providing an individual with counterfactuals (such as “you would have been approved if
your average debt was 15% lower” or “these are users with similar profiles to yours that received a
different decision”) can be a powerful type of explanation”
*N. Xie, et. al. Explainable Deep Learning:A Field
Guide for the Uninitiated, 2020 (link)
AI Developer
Goal: ensure/improve
performance
Regulatory Bodies
Goal: Ensure compliance with legislation,
protect interests of constituents
End Users
Goal: Understanding of
decision, trust model output
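The counterfactual style of explanation quoted above can be made concrete with a toy model. The `approve` rule and its debt-to-income threshold are hypothetical stand-ins for a trained model; a minimal sketch, not a production technique:

```python
def approve(income, debt):
    # Hypothetical credit rule standing in for an opaque trained model.
    return debt / income <= 0.35

def debt_counterfactual(income, debt, step=0.01):
    """Smallest relative debt reduction that flips a rejection to approval."""
    if approve(income, debt):
        return 0.0  # already approved, no change needed
    cut = 0.0
    while cut < 1.0:
        cut += step
        if approve(income, debt * (1 - cut)):
            return round(cut, 2)
    return None  # no debt reduction alone flips the decision

cut = debt_counterfactual(income=50_000, debt=20_000)
# i.e. "you would have been approved if your debt was ~13% lower"
```

This is exactly the kind of statement the Singapore framework recommends for the "man in the street": an actionable condition, not a feature-weight table.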
11. FAIRNESS & BIAS
• Bias is a phenomenon that occurs when an algorithm produces results that are systemically prejudiced due to erroneous assumptions in the machine learning process*.
• AI models should behave in all fairness towards everyone, without any bias. However, defining ‘fairness’ is easier said than done.
• Does fairness mean, e.g., that the same proportion of male and female applicants get high risk assessment scores?
• Or that the same level of risk results in the same score regardless of gender?
• (It is impossible to fulfill both.)
*SearchEnterprise AI. Machine Learning Bias (AI Bias) (link)
Real-world examples: Google Photos labeling pictures of a black Haitian-American programmer as “gorilla”; “White Barack Obama” images (link); a computer program used for bail and sentencing decisions was labeled biased against blacks (link).
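The tension between the two fairness definitions on this slide can be shown with toy numbers (the cohort sizes and base rates below are invented purely for illustration):

```python
# Two cohorts with different base rates of truly high-risk applicants.
cohorts = {"A": {"n": 100, "high_risk": 40},
           "B": {"n": 100, "high_risk": 20}}

# Policy 1 (equal risk -> equal score): flag exactly the truly high-risk.
# Selection rates then differ across groups (0.40 vs 0.20),
# violating demographic parity.
calibrated_rates = {g: c["high_risk"] / c["n"] for g, c in cohorts.items()}

# Policy 2 (demographic parity): flag the same 30% of each group.
# Group B must then flag 10 low-risk people, so precision diverges.
FLAGGED = 30
precision = {"A": min(cohorts["A"]["high_risk"], FLAGGED) / FLAGGED,  # 1.0
             "B": min(cohorts["B"]["high_risk"], FLAGGED) / FLAGGED}  # ~0.67
```

Whenever base rates differ across groups, one of the two criteria must give: equalizing scores per risk level breaks parity, and equalizing selection rates breaks equal treatment per risk level.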
12. TYPES OF BIAS
• Bias creeps into AI models primarily due to the inherent bias already present in the training data. So the ‘data’ part of AI model development is key to addressing bias.
• Historical Bias: arises due to historical inequality of human decisions captured in the training data.
• Representation Bias: arises due to training data that is not representative of the actual population.
• Ensure that training data is representative and uniformly distributed over the target population, with respect to the selected features.
Source: H. Suresh, J.V. Guttag. A Framework for Understanding Unintended Consequences of Machine Learning, 2020 (link)
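A basic representation-bias check, in the spirit of the last bullet, compares group shares in the training sample against the target population. The population shares, sample, and tolerance below are made-up values for illustration:

```python
from collections import Counter

population = {"male": 0.49, "female": 0.51}   # assumed target-population shares
sample = ["male"] * 800 + ["female"] * 200    # toy training-set group labels

counts = Counter(sample)
shares = {g: counts[g] / len(sample) for g in population}

# Flag any group whose sample share deviates from the population share
# by more than a tolerance.
TOL = 0.10
flags = {g: abs(shares[g] - population[g]) > TOL for g in population}
# the 80/20 sample badly misstates a ~50/50 population, so both groups flag
```

In practice this check would run per selected feature in the data-processing stage, before any model training starts.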
13. ACCOUNTABILITY
• Similar to the debate on self-driving cars with respect to “who is responsible” if an accident happens?
• The same debate applies in the case of AI models as well: who is accountable if something goes wrong?
“If you are challenged on copyright grounds, we will assume responsibility for the potential legal risks involved,” the company said.
Source: https://www.theguardian.com/technology/2023/nov/06/openai-chatgpt-customers-copyright-lawsuits
The move to protect customers from intellectual property lawsuits comes after IBM Corp., Microsoft Corp., and Adobe Inc. announced similar legal protections for users of their AI products.
Source: https://www.theverge.com/2023/10/12/23914998/google-copyright-indemnification-generative-ai
14. ACCOUNTABILITY CHECKLIST
• Data ownership: Data is critical to AI systems, as such negotiation of
ownership issues around not only training data, but input data, output
data, and other generated data is critical. For example, knowledge of the
prompts (user queries) and chatbot responses are very important to
improve the bot performance over time.
• Liability: Given that we are engaging with a 3rd party, to what extent are
they liable? This is tricky to negotiate and depends on the extent to which
the AI system can operate independently. For example, in the case of a
Chatbot, if the bot is allowed to provide only a limited output (e.g. respond
to the user with only limited number of pre-approved responses), then the
risk is likely to be a lot lower as compared to an open-ended bot like
ChatGPT that can generate new responses.
• Confidentiality clauses: In addition to (training) data confidentiality, do we
want to prevent the vendor from providing competitors with access to the
trained / fine-tuned model, or at least any improvements to it —
particularly if it is giving a competitive advantage?
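The limited-output chatbot mentioned under Liability can be sketched as a response whitelist: the bot may only emit vetted text, and anything it cannot map to an approved response escalates to a human. The intents and response strings below are hypothetical:

```python
# Hypothetical table of pre-approved, legally vetted responses.
APPROVED_RESPONSES = {
    "password_reset": "Please use the self-service portal to reset your password.",
    "vpn_issue": "Please restart the VPN client and try connecting again.",
}
FALLBACK = "Let me connect you to a human agent."

def guarded_reply(intent: str) -> str:
    # The bot never generates free text; unknown intents escalate to a human.
    return APPROVED_RESPONSES.get(intent, FALLBACK)
```

Constraining the output space this way is what makes the liability exposure of such a bot far smaller than that of an open-ended generative assistant.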
15. GEN AI USAGE PATTERNS
*D. Biswas. MLOps for Compositional AI. NeurIPS Workshop on Challenges in Deploying and Monitoring Machine Learning Systems (DMML), 2022.
*D. Biswas. Generative AI – LLMOps Architecture Patterns. Data Driven Investor, 2023 (link)
• Black-box LLM APIs: This is the classic ChatGPT example, where we have black-box access to a LLM API/UI. Prompts are the primary interaction mechanism for such scenarios.
• While Enterprise LLM Apps have the potential to be a multi-billion dollar marketplace and accelerate LLM adoption by providing an enterprise-ready solution, the same caution needs to be exercised as you would before using a 3rd party ML model: validate LLM/training data ownership, IP, liability clauses.
16. GEN AI USAGE PATTERNS – LLMOPS (MLOPS FOR LLMS)
• LLMs are generic in nature. To realize the full potential of LLMs for enterprises, they need to be contextualized with enterprise knowledge captured in terms of documents, wikis, business processes, etc.
• This is achieved by fine-tuning a LLM with enterprise knowledge / embeddings to develop a context-specific LLM.
LLMOps pipeline (diagram): an open-source pre-trained LLM (trained on public data) is combined with enterprise data via data processing pipelines and knowledge graphs / embeddings (vector stores); supervised fine-tuning / few-shot learning then yields a context-specific LLM / Small Language Model (SLM). Users submit tasks / queries as prompts through mobile / web UIs and end-user apps calling the SLM API; Reinforcement Learning from Human Feedback (RLHF), model monitoring, model versioning, model caching, and Ethical AI safeguards surround the deployment.
*D. Biswas. Contextualizing Large Language Models (LLMs) with Enterprise Data. Data Driven Investor, 2023 (link)
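The embeddings / vector-store stage of the pipeline above boils down to nearest-neighbour retrieval over document vectors, whose results are then injected into the prompt of the context-specific LLM/SLM. The document names and 3-dimensional vectors below are toy assumptions; a real pipeline would use an embedding model and a vector store:

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Hypothetical embeddings for enterprise documents.
doc_store = {
    "travel-policy.md":  [0.9, 0.1, 0.0],
    "expense-howto.md":  [0.7, 0.3, 0.1],
    "oncall-runbook.md": [0.0, 0.1, 0.9],
}

def retrieve(query_vec, k=1):
    """Return the k documents whose embeddings are closest to the query."""
    ranked = sorted(doc_store,
                    key=lambda d: cosine(query_vec, doc_store[d]),
                    reverse=True)
    return ranked[:k]

# The retrieved documents' text would be prepended to the user's prompt
# before it reaches the context-specific LLM / SLM.
top = retrieve([0.85, 0.15, 0.05])
```

This retrieval step is also a natural place to hook in the Ethical AI safeguards from the diagram, e.g. filtering which documents a given user is allowed to retrieve.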
17. GENERATIVE AI - RESPONSIBLE DESIGN PATTERNS
We take inspiration from the “enterprise friendly” Microsoft, “developer friendly” Google and “user friendly” Apple, to enable this ‘transparent’ approach to Gen AI system design:
• Guidelines for Human-AI Interaction by Microsoft
• People + AI Guidebook by Google
• Machine Learning: Human Interface Guidelines by Apple