The growing adoption of generative AI, especially LLMs, has reignited the discussion around AI regulation, to ensure that AI/ML systems are responsibly trained and deployed. Unfortunately, this effort is complicated by multiple governmental organizations and regulatory bodies releasing their own guidelines and policies, with little to no agreement on the definition of terms.
Rather than trying to understand and regulate all types of AI, this talk recommends a different (and practical) approach based on AI Transparency: transparently outlining the capabilities of the AI system based on its training methodology, and setting realistic expectations for what it can (and cannot) do.
We outline LLMOps architecture patterns and show how the proposed approach can be integrated at different stages of the LLMOps pipeline to capture the model's capabilities. In addition, the AI system provider specifies scenarios where (they believe) the system can make mistakes, and recommends a 'safe' approach with guardrails for those scenarios.
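The idea of declaring capabilities and routing known failure scenarios to guardrails can be sketched in a few lines. This is a minimal illustration only; the names (`CapabilityCard`, `guard`) and the routing decisions are assumptions of ours, not part of the talk:

```python
# Illustrative sketch: a transparency "capability card" consulted before serving.
# CapabilityCard and guard are hypothetical names, not from the talk.
from dataclasses import dataclass, field

@dataclass
class CapabilityCard:
    """Declares what the model was trained for and where it is known to fail."""
    trained_for: set = field(default_factory=set)
    known_failure_modes: set = field(default_factory=set)

def guard(card: CapabilityCard, task: str) -> str:
    # Route known failure scenarios to a safe fallback, refuse tasks
    # outside the declared capabilities, and serve everything else.
    if task in card.known_failure_modes:
        return "fallback: route to human review"
    if task not in card.trained_for:
        return "refuse: task outside declared capabilities"
    return "serve: task within declared capabilities"

card = CapabilityCard(trained_for={"summarization", "qa"},
                      known_failure_modes={"legal-advice"})
print(guard(card, "summarization"))  # serve: task within declared capabilities
print(guard(card, "legal-advice"))   # fallback: route to human review
```

In a real LLMOps pipeline such a check would sit in the serving layer, in front of the model, so that the provider's declared limitations are enforced rather than merely documented.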
Generative AI Use-cases for Enterprise - First Session by Gene Leybzon
In this presentation, we will delve into the exciting applications of Generative AI across various business domains. Leveraging the capabilities of artificial intelligence and machine learning, Generative AI allows for dynamic, context-aware user interfaces that adapt in real-time to provide personalized user experiences. We will explore how this transformative technology can streamline design processes, facilitate user engagement, and open the doors to new forms of interactivity.
🔹How will AI-based content-generating tools change your mission and products?
🔹This complimentary on-demand webinar explores multiple use cases driving adoption among early-adopter customers, providing product leaders with insights into the future of generative-AI-powered businesses and the potential generative AI holds for driving innovation and improving business processes.
GENERATIVE AI, THE FUTURE OF PRODUCTIVITY by Andre Muscat
Discuss the impact and opportunity of using Generative AI to support your development and creative teams
* Explore business challenges in content creation
* Cost-per-unit of different types of content
* Use AI to reduce cost-per-unit
* New partnerships being formed that will have a material impact on the way we search and engage with content
Part 4 of a nine-part research series, "What matters in AI", published on www.andremuscat.com
Unlocking the Power of Generative AI: An Executive's Guide by PremNaraindas1
Generative AI is here, and it can revolutionize your business. With its powerful capabilities, this technology can help companies create more efficient processes, unlock new insights from data, and drive innovation. But how do you make the most of these opportunities?
This guide will provide you with the information and resources needed to understand the ins and outs of Generative AI, so you can make informed decisions and capitalize on the potential. It covers important topics such as strategies for leveraging large language models, optimizing MLOps processes, and best practices for building with Generative AI.
GPT-4 can pass the American bar exam, but before you go expecting to see robot lawyers taking over the courtroom, hold your horses, cowboys: we're not quite there yet. That being said, AI is becoming increasingly human-like, and as VCs we need to start thinking about how this new wave of technology is going to affect the way we build and run businesses. What do we need to do differently? How can we make sure that our investment strategies reflect these changes? It's a brave new world out there, and we've got to keep the big picture in mind!
Sharing here with you what we at Cavalry Ventures found out during our Generative AI deep dive.
For this plenary talk at the Charlotte AI Institute for Smarter Learning, Dr. Cori Faklaris introduces her fellow college educators to the exciting world of generative AI tools. She gives a high-level overview of the generative AI landscape and how these tools use machine learning algorithms to generate creative content such as music, art, and text. She then shares some examples of generative AI tools and demonstrates how she has used some of these tools to enhance teaching and learning in the classroom and to boost her productivity in other areas of academic life.
Leveraging Generative AI & Best Practices by DianaGray10
In this event we will cover:
- What Generative AI is and how it is being used for the future of work.
- Best practices for developing and deploying generative-AI-based models in production.
- The future of Generative AI: how it is expected to evolve in the coming years.
Generative AI: Past, Present, and Future – A Practitioner's Perspective by Huahai Yang
As the academic realm grapples with the profound implications of generative AI and related applications like ChatGPT, I will present a grounded view from my experience as a practitioner. Starting with the origins of neural networks in the fields of logic, psychology, and computer science, I trace their history and situate them within the wider context of the pursuit of artificial intelligence. This perspective will also draw parallels with historical developments in psychology. Against this backdrop, I chart a proposed trajectory for the future. Finally, I provide actionable insights for both academics and enterprising individuals in the field.
How can we use generative AI in learning products? A rapid introduction to generative AI. Presented at ED Games Expo 2023 at the U.S. Department of Education, September 22, 2023.
The Future of AI is Generative, not Discriminative (5/26/2021) by Steve Omohundro
The deep learning AI revolution has been sweeping the world for a decade now. Deep neural nets are routinely used for tasks like translation, fraud detection, and image classification. PwC estimates that they will create $15.7 trillion/year of value by 2030. But most current networks are "discriminative" in that they directly map inputs to predictions. This type of model requires lots of training examples, doesn't generalize well outside of its training set, creates inscrutable representations, is subject to adversarial examples, and makes knowledge transfer difficult. People, in contrast, can learn from just a few examples, generalize far beyond their experience, and can easily transfer and reuse knowledge. In recent years, new kinds of "generative" AI models have begun to exhibit these desirable human characteristics. They represent the causal generative processes by which the data is created and can be compositional, compact, and directly interpretable. Generative AI systems that assist people can model their needs and desires and interact with empathy. Their adaptability to changing circumstances will likely be required by rapidly changing AI-driven business and social systems. Generative AI will be the engine of future AI innovation.
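The generative/discriminative distinction described above can be made concrete in a few lines: a discriminative model maps inputs directly to predictions, while a generative model represents how each class produces its data, so it can both classify (via Bayes' rule) and sample new data. The toy 1-D Gaussian setup below is purely our illustration, not from the talk:

```python
# Toy contrast: a generative classifier on 1-D data.
# It models p(x | class) as a Gaussian per class, classifies via
# Bayes' rule, AND can sample new points -- something a purely
# discriminative input-to-label mapping cannot do.
import math, random

class_a = [1.0, 1.2, 0.8, 1.1]   # toy observations for class "a"
class_b = [3.0, 2.9, 3.2, 3.1]   # toy observations for class "b"

def fit_gaussian(xs):
    mu = sum(xs) / len(xs)
    var = sum((x - mu) ** 2 for x in xs) / len(xs)
    return mu, max(var, 1e-6)

def loglik(x, mu, var):
    return -0.5 * math.log(2 * math.pi * var) - (x - mu) ** 2 / (2 * var)

ga, gb = fit_gaussian(class_a), fit_gaussian(class_b)

def classify(x):  # Bayes' rule with equal priors
    return "a" if loglik(x, *ga) > loglik(x, *gb) else "b"

def sample(label):  # the generative direction: produce new data
    mu, var = ga if label == "a" else gb
    return random.gauss(mu, math.sqrt(var))

print(classify(1.05))  # a
print(classify(3.05))  # b
print(sample("a"))     # a freshly generated point near 1.0
```

The same asymmetry is what the talk points at: because the generative model captures the data-creating process, it supports sampling, compositionality, and inspection, not just prediction.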
Exploring Opportunities in the Generative AI Value Chain by Dung Hoang
The article "Exploring Opportunities in the Generative AI Value Chain" by McKinsey & Company's QuantumBlack provides insights into the value created by generative artificial intelligence (AI) and its potential applications.
Presented at All Things Open RTP Meetup
Presented by Karthik Uppuluri, Fidelity
Title: Generative AI
Abstract: In this session, let us embark on a journey into the fascinating world of generative artificial intelligence. As an emergent and captivating branch of machine learning, generative AI has become instrumental in a myriad of sectors, ranging from the visual arts to software development. This session requires no prior expertise in machine learning or AI. It aims to inculcate a robust understanding of the fundamental concepts and principles of generative AI and its diverse applications. Join us as we delve into the mechanics of this transformative technology and unpack its potential.
This session was presented at the AWS Community Day in Munich (September 2023). It's for builders who have heard the buzz about Generative AI but can't quite grok it yet. Useful if you are eager to connect the dots on the Generative AI terminology and get a fast start to explore further and navigate the space. This session is largely product-agnostic and meant to give you the fundamentals to get started.
This talk overviews my background as a female data scientist, introduces many types of generative AI, discusses potential use cases, highlights the need for representation in generative AI, and showcases a few tools that currently exist.
In this session, you'll get all the answers about how ChatGPT and other GPT-X models can be applied to your current or future project. First, we'll put in order all the terms – OpenAI, GPT-3, ChatGPT, Codex, DALL-E, etc. – and explain why Microsoft and Azure are often mentioned in this context. Then, we'll go through the main capabilities of Azure OpenAI and the respective use cases that might inspire you to either optimize your product or build a completely new one.
This presentation presents an overview of the challenges and opportunities of generative artificial intelligence in Web3. It includes a brief research history of generative AI as well as some of its immediate applications in Web3.
A non-technical overview of Large Language Models, exploring their potential, limitations, and customization for specific challenges. While this deck was tailored with a financial-industry audience in mind, its content remains broadly applicable.
(This updated version builds on our previous deck: slideshare.net/LoicMerckel/intro-to-llms.)
Understanding Generative AI Models: A Comprehensive Overview by StephenAmell4
Generative AI refers to a branch of artificial intelligence that focuses on enabling machines to generate new and original content. Unlike traditional AI systems that follow predefined rules and patterns, generative AI leverages advanced algorithms and neural networks to autonomously produce outputs that mimic human creativity and decision-making.
The growing adoption of generative AI, especially LLMs, has reignited the discussion around AI regulation, to ensure that AI/ML systems are responsibly trained and deployed. Unfortunately, this effort is complicated by multiple governmental organizations and regulatory bodies releasing their own guidelines and policies, with little to no agreement on the definition of terms.
In this talk, we will provide an overview of the key Responsible AI aspects: Explainability, Bias, and Accountability. We will then outline the Gen AI usage patterns and show how the three aspects can be integrated at different stages of the LLMOps (MLOps for LLMs) pipeline. We summarize the learnings in the form of Gen AI design patterns that can be readily applied to enterprise use cases.
Ethical AI: Establish an AI/ML Governance framework addressing Reproducibility, Explainability, Bias & Accountability for Enterprise AI use-cases.
Presentation on “Open Source Enterprise AI/ML Governance” at Linux Foundation’s Open Compliance Summit, Dec 2020 (https://events.linuxfoundation.org/open-compliance-summit/)
Full article: https://towardsdatascience.com/ethical-ai-its-implications-for-enterprise-ai-use-cases-and-governance-81602078f5db
Did you know that a recent study by McKinsey & Company highlighted that 84% of organizations are concerned about bias in their AI algorithms? However, there's a solution to this problem. Upholding best practices can significantly mitigate biases in AI for enterprises, particularly given the challenges posed by compliance and the rapid dissemination of information through digital media.
In this E42 Blog post, we delve into an array of best practices to mitigate bias and hallucinations in AI models. A few of these best practices include:
Model optimization: This practice focuses on enhancing model performance and reducing bias through various optimization techniques
Understanding model architecture: This involves a deep dive into the structure of AI models to identify and rectify biases
Human interactions: This emphasizes the critical role of human feedback in the training loop for ensuring unbiased AI outcomes
On-premises large language models: This practice involves utilizing on-premises LLMs to maintain control over data and model training
Artificial intelligence (AI) is a multidisciplinary field of science and engineering whose goal is to create intelligent machines.
We believe that AI will be a force multiplier on technological progress in our increasingly digital, data-driven world. This is because everything around us today, ranging from culture to consumer products, is a product of intelligence.
The State of AI Report is now in its sixth year. Consider this report as a compilation of the most interesting things we’ve seen with a goal of triggering an informed conversation about the state of AI and its implication for the future.
We consider the following key dimensions in our report:
Research: Technology breakthroughs and their capabilities.
Industry: Areas of commercial application for AI and its business impact.
Politics: Regulation of AI, its economic implications and the evolving geopolitics of AI.
Safety: Identifying and mitigating catastrophic risks that highly-capable future AI systems could pose to us.
Predictions: What we believe will happen in the next 12 months and a 2022 performance review to keep us honest.
Produced by Nathan Benaich and Air Street Capital team
Data scientists have a duty to ensure they analyze data and train machine learning models responsibly; respecting individual privacy, mitigating bias, and ensuring transparency. This module explores some considerations and techniques for applying responsible machine learning principles.
GDG Cloud Southlake #28: Brad Taylor and Shawn Augenstein: Old Problems in the New Frontiers of AI by James Anderson
• Brad discusses how decades-old laws and expanding regulation have new implications in the ML and Large Model age, and will touch on:
• Legal and Regulatory: Data usage rights, the cautionary tale of stability.ai and Getty Images, and the EU's planned expansion of GDPR regarding models
• How Neural Networks, zero and one-shot learning, and LLMs have increased the need for better data governance, lineage management
• Shawn speaks on the coming "Data Renaissance"
• The New IP: Prompts and Internatl Interaction Data
• Where GenAI can be used right now and where it maybe shouldn't be used yet
• The Power of the Diversity of Insight
• What is making the future look bright!
Brad has been an intrapreneur and entrepreneur in data, AI, and IoT and has led teams in the creation of NLP, data products, and predictive analytics for retention, churn, driver safety, traffic, CX, and fleet risk. He has built solutions on the global hyperscalers GCP, AWS, Azure, and IBM. Brad is a former founding partner at Tech Wildcatters, and worked with dozens of mobile, SaaS, and AI start-ups, many of which became both job creators and profitable exits for TW investors. He is currently a Senior Manager in PepsiCo's global Strategy and Transformation group, where he focuses on delivering AI/ML-driven solutions.
Shawn Augenstein is a dynamic and highly experienced professional, who is driven by educating, providing equal access to technology and equitable access to information. Currently, Shawn serves as Principal Data & AI Consultant at CDW, where he develops the curriculum and architectures for understanding and furthering the use of AI, as well as developing solutions for both partners and clients. In his spare time, he enjoys exploring new frontiers of Diffusers, capturing moments through photography, and listening to music as a passionate melophile.
AI Readiness: Five Areas Business Must Prepare for Success in Artificial Inte... by Kaleido Insights
This research report from technology research firm, Kaleido Insights introduces a framework for organizational preparedness—not only of data and infrastructure, but of people, ethical, strategic and practical considerations needed to deploy effective and sustainable machine and deep learning programs. This research is the first to market to articulate the need for readiness beyond data and data science talent. Based on extensive research and interviews of more than 25 businesses involved in AI deployments, the report identifies and examines five fundamental areas businesses must prepare for sustainable AI. Download the full report: https://www.kaleidoinsights.com/order-reports/artificial-intelligence-ai-readiness/
State of AI Report 2023 - ONLINE presentation by ssuser2750ef
When conducting a PEST analysis for the Syrian conflict, it's important to consider the political, economic, socio-cultural, and technological factors that have influenced and continue to impact the situation in Syria. Here's a high-level overview of a PEST analysis for the Syrian conflict:
1. Political Factors:
- Government Instability: Ongoing civil war and conflict have led to political instability and a complex power struggle between various factions and international players.
- Foreign Intervention: Involvement of external powers and regional actors has exacerbated the conflict and added geopolitical complexities to the situation.
- International Relations: Relations with global powers like the United States, Russia, and regional players like Iran and Turkey significantly impact the conflict dynamics.
2. Economic Factors:
- Humanitarian Crisis: The conflict has resulted in a severe humanitarian crisis, causing widespread displacement, destruction of infrastructure, and economic decline.
- Sanctions and Trade Barriers: International sanctions and disrupted trade have further worsened the economic situation in Syria, affecting the livelihoods of the population.
- Resource Depletion: Conflict-driven resource depletion, including loss of agricultural lands and disruption of industries, has weakened the economy.
3. Socio-cultural Factors:
- Civilian Suffering: The conflict has led to a significant loss of life, displacement of populations, and severe trauma among civilians, impacting social cohesion and community structures.
- Ethnic and Religious Divisions: Deep-seated ethnic and religious divisions have fueled the conflict, leading to sectarian tensions and societal fragmentation.
- Refugee Crisis: The conflict has triggered a massive refugee crisis, with millions of Syrians seeking asylum in neighboring countries and beyond, straining regional stability.
4. Technological Factors:
- Communication and Propaganda: Technology, including social media, has been used for communication, mobilization, and spreading propaganda by various actors in the conflict.
- Warfare Technology: Advancements in warfare technology and the use of drones, cyber warfare, and other advanced weaponry have transformed the nature of conflict in Syria.
- Cybersecurity Concerns: The conflict has also raised concerns about cybersecurity threats, misinformation campaigns, and digital vulnerabilities in the region.
This analysis provides a broad understanding of the multifaceted nature of the Syrian conflict, highlighting the diverse factors at play and the complex challenges facing Syria and the international community.
Machine learning and artificial intelligence are two of the most rapidly growing and transformative technologies of our time. These technologies are revolutionizing the way businesses operate, improving healthcare outcomes, and transforming the way we live our daily lives. Learn more about it in the PPT below!
This is an article about Generative AI. It discusses what Generative AI is, the different techniques used to create it, and its potential uses. Key takeaways: Generative AI is still in its early stages but has already shown promising results, and it can be used to create fake data that is indistinguishable from real data.
https://www.ltimindtree.com/wp-content/uploads/2023/01/DeepPoV-Generative-AI.pdf
UNCOVERING FAKE NEWS BY MEANS OF SOCIAL NETWORK ANALYSIS by pijans
The ease of access to information on social media networks, together with its exponential rise, has made it difficult to distinguish between fake news and genuine information. Rapid dissemination through sharing has amplified its falsification exponentially. Preventing the spread of fake information is also essential to the credibility of social media networks. It is therefore an emerging research task to automatically check statements for misinformation via their source, content, or author, and to stop unauthenticated sources from spreading rumours. This paper demonstrates an artificial-intelligence-based approach for identifying fake statements made by social network entities. Variants of deep neural networks are applied to evaluate datasets and test for the presence of fake news. The implementation achieved up to 99% classification accuracy when the dataset was tested for binary (real or fake) labelling over multiple epochs.
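The paper's core task, binary real/fake labelling of text, can be sketched at its simplest with a bag-of-words linear classifier. This is a deliberately tiny stand-in for the deep neural networks the paper evaluates; the vocabulary, samples, and labels below are toy assumptions of ours:

```python
# Sketch: bag-of-words perceptron for binary real/fake labelling.
# A toy stand-in for the paper's deep neural networks.

def featurize(text, vocab):
    """Count occurrences of each vocabulary word in the text."""
    words = text.lower().split()
    return [words.count(w) for w in vocab]

def train_perceptron(samples, labels, vocab, epochs=10):
    """Classic perceptron updates over several epochs."""
    w, b = [0.0] * len(vocab), 0.0
    for _ in range(epochs):
        for text, y in zip(samples, labels):
            x = featurize(text, vocab)
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred
            w = [wi + err * xi for wi, xi in zip(w, x)]
            b += err
    return w, b

def predict(text, vocab, w, b):
    x = featurize(text, vocab)
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

vocab   = ["shocking", "miracle", "report", "official"]
samples = ["shocking miracle cure", "official report released",
           "miracle shocking claim", "report from official source"]
labels  = [1, 0, 1, 0]   # 1 = fake, 0 = real (toy labels)
w, b = train_perceptron(samples, labels, vocab)
print(predict("shocking miracle", vocab, w, b))  # 1 (fake)
print(predict("official report", vocab, w, b))   # 0 (real)
```

A real system would replace the word counts with learned embeddings and the perceptron with a deep network, but the train/predict loop over labelled real/fake examples is the same shape.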
In the Dark? Understanding Big Data & AI: Talent Acquisition Strategies for 2018Yoh Staffing Solutions
Big Data and AI have changed the way companies acquire people. Is your organization one of them? Shed some light on this innovation with these valuable tips and gain a better understanding of the implications Big Data and AI can have on your talent acquisition strategy.
4. RESPONSIBLE AI
“Ethical AI, also known as responsible AI, is the practice of using AI with good intention to empower employees and businesses, and fairly impact customers and society. Ethical AI enables companies to engender trust and scale AI with confidence.” [1]
Failing to operationalize Ethical AI can not only expose enterprises to reputational, regulatory, and legal risks, but also lead to wasted resources, inefficiencies in product development, and even an inability to use data to train AI models. [2]
[1] R. Porter. Beyond the Promise: Implementing Ethical AI, 2020 (link)
[2] R. Blackman. A Practical Guide to Building Ethical AI, 2020 (link)
5. REGULATIONS
Good news: there has been a recent trend towards ensuring that AI applications are responsibly trained and deployed, in line with the enterprise strategy and policies.
Bad news: efforts have been complicated by different governmental organizations and regulatory bodies releasing their own guidelines and policies, with little to no standardization on the definition of terms.
For example, the EU AI Act mandates a different set of dos & don’ts depending on the ‘risk level’ of an AI application. However, quantifying the risk level of an AI application is easier said than done, as it basically requires you to classify how the capabilities of a non-deterministic system will impact users and systems who might interact with it in the future.
6. ETHICAL AI PRINCIPLES
Explainability
Bias & Fairness
Accountability
Reproducibility
Data Privacy
*D. Biswas. Ethical AI: its implications for Enterprise AI Use-cases and Governance. Towards Data Science (link)
*D. Biswas. Privacy Preserving Chatbot Conversations. 3rd IEEE AIKE 2020: 179-182
7. EXPLAINABLE AI
Explainable AI is an umbrella term for a range of tools, algorithms and methods, which accompany AI model predictions with explanations. Explainability of AI models ranks high among the list of ‘non-functional’ AI features to be considered by enterprises.
For example, this implies having to explain why an ML model profiled a user to be in a specific segment — which led him/her to receive an advertisement.
[Diagram: (Labeled) Data → Train ML Model → Predictions → Explanation Model → Explainable Predictions]
8. EXPLAINABLE AI FRAMEWORKS - LIME
Local Interpretable Model-Agnostic Explanations (LIME*) provides easy to understand explanations of a prediction, by training an explainability model based on samples around a prediction. The approximate nature of the explainability model might limit its usage for compliance needs.
*M. T. Ribeiro, S. Singh, C. Guestrin. “Why Should I Trust You?” Explaining the Predictions of Any Classifier, 2016 (link)
[Figure: LIME output showing the important features, positively and negatively impacting the model’s prediction.]
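LIME's core idea can be sketched with a toy local surrogate model (the black-box scorer and all parameter values below are illustrative assumptions, not from the slides): perturb the instance, query the black-box model, and fit a proximity-weighted linear model whose coefficients act as the explanation.

```python
import numpy as np

def black_box(X):
    # Hypothetical opaque model: a nonlinear scorer we want to explain locally.
    return 1.0 / (1.0 + np.exp(-(2.0 * X[:, 0] - 3.0 * X[:, 1] + 0.5 * X[:, 0] * X[:, 1])))

def lime_sketch(instance, n_samples=5000, width=0.5, seed=0):
    """Fit a proximity-weighted linear surrogate around `instance` (LIME-style)."""
    rng = np.random.default_rng(seed)
    # 1. Perturb the instance with Gaussian noise.
    X = instance + rng.normal(scale=width, size=(n_samples, instance.size))
    y = black_box(X)
    # 2. Weight samples by closeness to the instance (RBF kernel).
    w = np.exp(-np.sum((X - instance) ** 2, axis=1) / (2 * width ** 2))
    # 3. Weighted least squares: solve for the local linear coefficients.
    Xb = np.hstack([X, np.ones((n_samples, 1))])  # add intercept column
    sw = np.sqrt(w)[:, None]
    coef, *_ = np.linalg.lstsq(Xb * sw, y * sw.ravel(), rcond=None)
    return coef[:-1]  # per-feature local importance (intercept dropped)

weights = lime_sketch(np.array([1.0, 1.0]))
print(weights)  # sign shows positive vs. negative local impact per feature
```

The sign of each coefficient corresponds to LIME's "positively/negatively impacting" feature bars; the production library additionally handles categorical features and feature selection.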
9. EXPLAINABLE AI - FEASIBILITY
Machine (Deep) Learning algorithms vary in the level of accuracy and explainability that they can provide: the two are often inversely proportional.
Explainability starts becoming more difficult as we move to Random Forests, which are basically an ensemble of Decision Trees. At the end of the spectrum are Neural Networks (Deep Learning), which have shown human-level accuracy.
[Chart: explainability vs. accuracy for Logistic Regression, Decision Trees, Random Forest (ensemble of Decision Trees), and Deep Learning (Neural Networks)]
10. EXPLAINABLE AI - ABSTRACTION
“important thing is to explain the right thing to the right person in the right way at the right time”*
Singapore AI Governance framework: “technical explainability may not always be enlightening, esp. to the man in the street… providing an individual with counterfactuals (such as “you would have been approved if your average debt was 15% lower” or “these are users with similar profiles to yours that received a different decision”) can be a powerful type of explanation”
*N. Xie, et. al. Explainable Deep Learning: A Field Guide for the Uninitiated, 2020 (link)
AI Developer. Goal: ensure/improve performance.
Regulatory Bodies. Goal: ensure compliance with legislation, protect interests of constituents.
End Users. Goal: understanding of decision, trust model output.
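A counterfactual of the “approved if your debt was X% lower” kind can be computed directly for a simple scorer. This is a minimal sketch; the scorer, its weights, and the threshold are made-up assumptions for illustration.

```python
# Counterfactual sketch for a toy linear credit scorer (all names and
# weights below are illustrative assumptions, not from the slides).

def approve(income, debt, threshold=0.0):
    # Toy scorer: approve when weighted income outweighs weighted debt.
    score = 0.3 * income - 0.5 * debt
    return score >= threshold

def debt_counterfactual(income, debt, step=0.01):
    """Smallest fractional debt reduction that flips a rejection to approval."""
    if approve(income, debt):
        return 0.0  # already approved, no change needed
    reduction = 0.0
    while reduction < 1.0:
        reduction += step
        if approve(income, debt * (1.0 - reduction)):
            return round(reduction, 2)
    return None  # no debt reduction alone suffices

# “You would have been approved if your debt was X% lower.”
r = debt_counterfactual(income=50.0, debt=32.0)
print(f"approved if debt were {r:.0%} lower")
```

For real models the search is over multiple features with plausibility constraints, but the end-user-facing output has exactly this shape.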
11. FAIRNESS & BIAS
Bias is a phenomenon that occurs when an algorithm produces results that are systemically prejudiced due to erroneous assumptions in the machine learning process*.
AI models should behave fairly towards everyone, without any bias. However, defining ‘fairness’ is easier said than done.
Does fairness mean, e.g., that the same proportion of male and female applicants get high risk assessment scores? Or that the same level of risk results in the same score regardless of gender? (Impossible to fulfill both.)
*SearchEnterprise AI. Machine Learning bias (AI bias) (link)
Examples: Google Photos labeling pictures of a black Haitian-American programmer as “gorilla”; “White Barack Obama” images (link); a computer program used for bail and sentencing decisions was labeled biased against blacks. (link)
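The tension between the two fairness notions above can be made concrete on toy data (the records below are invented for illustration): when the groups have different base rates of true risk, a scorer that treats equal risk equally necessarily violates demographic parity.

```python
# Toy illustration (made-up data) of the two fairness notions in the slide:
# demographic parity vs. equal treatment at equal risk.

records = [
    # (group, true_risk) — group "A" has a higher share of truly high-risk cases.
    ("A", "high"), ("A", "high"), ("A", "high"), ("A", "low"),
    ("B", "high"), ("B", "low"), ("B", "low"), ("B", "low"),
]

def score_by_risk(risk):
    # A scorer that gives the same score to the same risk level, regardless of group.
    return 1 if risk == "high" else 0

def high_score_rate(group):
    rows = [risk for g, risk in records if g == group]
    return sum(score_by_risk(r) for r in rows) / len(rows)

rate_a, rate_b = high_score_rate("A"), high_score_rate("B")
print(rate_a, rate_b)  # 0.75 vs. 0.25: equal treatment at equal risk breaks demographic parity
```

Forcing the two rates to be equal instead would require scoring identical risk levels differently across groups, which is exactly the other notion; hence the slide's "impossible to fulfill both".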
12. TYPES OF BIAS
Bias creeps into AI models primarily due to the inherent bias already present in the training data. So the ‘data’ part of AI model development is key to addressing bias.
Historical Bias: arises due to historical inequality of human decisions captured in the training data.
Representation Bias: arises due to training data that is not representative of the actual population.
Ensure that training data is representative and uniformly distributed over the target population, with respect to the selected features.
Source: H. Suresh, J. V. Guttag. A Framework for Understanding Unintended Consequences of Machine Learning
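The representativeness check above can be operationalized as a simple comparison of group shares in the training data against the target population. A minimal sketch, with an illustrative tolerance threshold:

```python
# Minimal representation-bias check (tolerance value is an assumption):
# compare each group's share in the training data against its share in
# the target population, and flag underrepresented groups.

def representation_gaps(train_counts, population_shares):
    """Per-group gap between training-data share and population share."""
    total = sum(train_counts.values())
    return {g: train_counts.get(g, 0) / total - share
            for g, share in population_shares.items()}

def flag_underrepresented(gaps, tolerance=0.05):
    return [g for g, gap in gaps.items() if gap < -tolerance]

# Example: group "B" is 40% of the population but only 20% of the training data.
gaps = representation_gaps({"A": 80, "B": 20}, {"A": 0.6, "B": 0.4})
print(flag_underrepresented(gaps))  # ['B']
```

Such a check fits naturally into the 'data' stage of the model development pipeline, before any training run.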
13. LLMOPS: MLOPS FOR LLMS
*D. Biswas. MLOps for Compositional AI. NeurIPS Workshop on Challenges in Deploying and Monitoring Machine Learning Systems (DMML), 2022.
*D. Biswas. Generative AI – LLMOps Architecture Patterns. Data Driven Investor, 2023 (link)
Black-box LLM APIs: This is the classic ChatGPT example, where we have black-box access to an LLM API/UI. Prompts are the primary interaction mechanism for such scenarios.
While Enterprise LLM Apps have the potential to be a multi-billion dollar marketplace and accelerate LLM adoption by providing an enterprise-ready solution, the same caution needs to be exercised as you would before using a 3rd party ML model — validate LLM/training data ownership, IP, and liability clauses.
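In the black-box API pattern, the transparency approach has to live in the prompt layer. A sketch of what that could look like: every call is wrapped with a capability statement and a refusal instruction for out-of-scope scenarios. The capability text, the `send` stub, and the refusal marker are all illustrative assumptions, not a vendor API.

```python
# Prompt-side transparency sketch for a black-box LLM API (all strings
# and the send() stub below are hypothetical).

CAPABILITIES = "You answer questions about the company's HR policies."
OUT_OF_SCOPE = "legal advice, medical advice"

def build_prompt(user_query):
    """Prepend a transparent capability preamble to the user's query."""
    return (
        f"System: {CAPABILITIES}\n"
        f"If the request involves {OUT_OF_SCOPE}, reply exactly: OUT_OF_SCOPE.\n"
        f"User: {user_query}"
    )

def guarded_answer(user_query, send):
    """`send` is the black-box LLM call; intercept the refusal marker."""
    reply = send(build_prompt(user_query))
    if reply.strip() == "OUT_OF_SCOPE":
        return "This assistant cannot help with that; please contact a specialist."
    return reply

# Stubbed LLM for demonstration (a real deployment would call the vendor API).
fake_llm = lambda prompt: "OUT_OF_SCOPE" if "lawsuit" in prompt else "See the leave policy."
print(guarded_answer("How do I file a lawsuit?", fake_llm))
print(guarded_answer("How many vacation days do I get?", fake_llm))
```

This is one way to realize the talk's "specify scenarios where the system can make mistakes, and recommend a 'safe' approach with guardrails" at the prompt level, without any access to model internals.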
14. LLMOPS: MLOPS FOR LLMS (2)
*D. Biswas. Contextualizing Large Language Models (LLMs) with Enterprise Data. Data Driven Investor, 2023 (link)
LLMs are generic in nature. To realize the full potential of LLMs for enterprises, they need to be contextualized with enterprise knowledge captured in terms of documents, wikis, business processes, etc. This is achieved by fine-tuning an LLM with enterprise knowledge / embeddings to develop a context-specific LLM.
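The embeddings route to contextualization can be sketched as a retrieval step: enterprise snippets are stored as vectors, and the snippet closest to the query embedding is supplied to the LLM as context. In practice the vectors come from an embedding model; the document names and toy vectors below are made up.

```python
import math

# Embedding-based contextualization sketch: retrieve the most relevant
# enterprise snippet for a query by cosine similarity (toy vectors).

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

docs = {
    "leave-policy.md":  [0.9, 0.1, 0.0],
    "expense-rules.md": [0.1, 0.8, 0.2],
    "it-handbook.md":   [0.0, 0.2, 0.9],
}

def retrieve(query_vec, k=1):
    """Top-k documents whose embeddings are closest to the query embedding."""
    ranked = sorted(docs, key=lambda d: cosine(query_vec, docs[d]), reverse=True)
    return ranked[:k]

# Query embedding close to the 'expenses' direction.
print(retrieve([0.2, 0.9, 0.1]))  # ['expense-rules.md']
```

The retrieved snippet is then injected into the prompt, giving a context-specific answer without retraining the base LLM; fine-tuning is the complementary approach when the knowledge must live in the weights themselves.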
15. GENERATIVE AI - RESPONSIBLE DESIGN PRINCIPLES
We take inspiration from the “enterprise friendly” Microsoft, “developer friendly” Google and “user friendly” Apple — to enable this ‘transparent’ approach to Gen AI system design.
• Guidelines for Human-AI Interaction by Microsoft
• People + AI Guidebook by Google
• Machine Learning: Human Interface Guidelines by Apple