Abstract. Enterprise adoption of AI/ML services has significantly accelerated in the last few years. However, the majority of ML models are still developed with the goal of solving a single task, e.g., prediction, classification. In this talk, Debmalya Biswas will present the emerging paradigm of Compositional AI, also known as Compositional Learning. Compositional AI envisions seamless composition of existing AI/ML services to provide a new (composite) AI/ML service, capable of addressing complex multi-domain use cases. In an enterprise context, this enables reuse, agility, and efficiency in development and maintenance efforts.
The Future of AI is Generative not Discriminative (5/26/2021), by Steve Omohundro
The deep learning AI revolution has been sweeping the world for a decade now. Deep neural nets are routinely used for tasks like translation, fraud detection, and image classification. PwC estimates that they will create $15.7 trillion/year of value by 2030. But most current networks are "discriminative" in that they directly map inputs to predictions. This type of model requires lots of training examples, doesn't generalize well outside of its training set, creates inscrutable representations, is subject to adversarial examples, and makes knowledge transfer difficult. People, in contrast, can learn from just a few examples, generalize far beyond their experience, and can easily transfer and reuse knowledge. In recent years, new kinds of "generative" AI models have begun to exhibit these desirable human characteristics. They represent the causal generative processes by which the data is created and can be compositional, compact, and directly interpretable. Generative AI systems that assist people can model their needs and desires and interact with empathy. Their adaptability to changing circumstances will likely be required by rapidly changing AI-driven business and social systems. Generative AI will be the engine of future AI innovation.
Energy Data Analytics | Energy Efficiency | India, by Umesh Bhutoria
This white paper / forward-looking note focuses on the role that energy data analytics can play in driving energy efficiency practices and investments, especially in the Indian context. It is based on extensive desk research, supported by an online survey of industry personnel and other experts.
In this session, you'll get answers about how ChatGPT and other GPT-X models can be applied to your current or future project. First, we'll put all the terms in order – OpenAI, GPT-3, ChatGPT, Codex, DALL-E, etc. – and explain why Microsoft and Azure are often mentioned in this context. Then, we'll go through the main capabilities of Azure OpenAI and the respective use cases that might inspire you to either optimize your product or build a completely new one.
MLOps: Bridging the Gap between Data Scientists and Ops, by Knoldus Inc.
Through this session, we're going to introduce the MLOps lifecycle and discuss the hidden loopholes that can affect an ML project. Then we are going to discuss the ML model lifecycle and the problems that arise during training. Finally, we're going to introduce the MLflow Tracking module in order to track experiments.
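Experiment tracking of the kind MLflow Tracking provides revolves around runs that record parameters and metrics. Below is a dependency-free sketch of that shape; the `Tracker` class is purely illustrative, not MLflow's actual API (MLflow exposes `mlflow.start_run`, `mlflow.log_param`, and `mlflow.log_metric` with a similar flow).

```python
from contextlib import contextmanager

class Tracker:
    """Toy stand-in for an experiment-tracking backend (MLflow-like shape)."""
    def __init__(self):
        self.runs = []

    @contextmanager
    def start_run(self, name):
        # Each run records the hyperparameters and metrics of one training attempt.
        run = {"name": name, "params": {}, "metrics": {}}
        self.runs.append(run)
        yield run

tracker = Tracker()
with tracker.start_run("baseline") as run:
    run["params"]["learning_rate"] = 0.01   # analogous to log_param
    run["metrics"]["accuracy"] = 0.87       # analogous to log_metric

print(len(tracker.runs), tracker.runs[0]["metrics"]["accuracy"])
```

Comparing runs then reduces to iterating over `tracker.runs`, which is essentially what the MLflow UI does over its run store.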
A Framework for Navigating Generative Artificial Intelligence for Enterprise, by RocketSource
Generative AI has dominated the headlines recently, which has caused many enterprises to put a full stop to implementing this technology until they can understand what’s behind the glitz and glamour. What if we shifted the conversation? What if the focus became a fresh, incremental approach to embracing the opportunities with generative artificial intelligence to keep organizations moving upward on the S Curve of Growth?
Brands stay relevant and solve complex problems by testing the barometer for one thing — will a new strategy, tool, or piece of technology improve humanity?
Human connections are more vital than using shiny new tools or technology. As your teams work to steer clear of the temptation to do what everyone else is doing in uniform, this post will highlight how to stand out, compete, and do so with less risk in today’s world of generative AI overload.
🔹How will AI-based content-generating tools change your mission and products?
🔹This complimentary webinar [ON-DEMAND] explores multiple use cases that drive adoption among early-adopter customers, providing product leaders with insights into the future of generative AI-powered businesses and the potential generative AI holds for driving innovation and improving business processes.
Artificial Intelligence,
History of Artificial Intelligence,
Artificial Intelligence Use Cases,
Artificial Intelligence Applications,
Ways of Achieving AI,
Machine Learning,
Deep Learning,
Supervised and Unsupervised Learning,
Classification Vs Prediction,
TensorFlow,
TensorFlow Graphs,
History of TensorFlow,
Companies using TensorFlow,
Using Deep Q Networks to Learn Video Game Strategies,
TensorFlow Use Cases,
AI & Deep Learning with TensorFlow,
How TensorFlow is used today
For more updates on Big Data, Cloud Computing, Data Analytics, Artificial Intelligence, IoT subscribe to http://www.mybigdataanalytics.in
This talk overviews my background as a female data scientist, introduces many types of generative AI, discusses potential use cases, highlights the need for representation in generative AI, and showcases a few tools that currently exist.
Generative AI: Past, Present, and Future – A Practitioner's Perspective, by Huahai Yang
As the academic realm grapples with the profound implications of generative AI and related applications like ChatGPT, I will present a grounded view from my experience as a practitioner. Starting with the origins of neural networks in the fields of logic, psychology, and computer science, I trace their history and place it within the wider context of the pursuit of artificial intelligence. This perspective will also draw parallels with historical developments in psychology. Against this backdrop, I chart a proposed trajectory for the future. Finally, I provide actionable insights for both academics and enterprising individuals in the field.
PyCaret is an open-source, low-code machine learning library in Python that allows you to go from preparing your data to deploying your model within minutes, in your choice of environment. This talk is a practical demo of using PyCaret in your existing workflows to supercharge your data science team's productivity.
Information Retrieval 13: Alternative Set Theoretic Models, by Vaibhav Khanna
Alternative Set Theoretic Models
Fuzzy Set Model: a set-theoretic model of document retrieval based on fuzzy set theory.
Extended Boolean Model: a set-theoretic model of document retrieval based on an extension of the classic Boolean model. The idea is to interpret partial matches as Euclidean distances represented in a vector space of index terms.
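In the common p = 2 (Euclidean) variant of the Extended Boolean model, a document with term weights x and y for a two-term query is scored by its distance from the worst point (0, 0) for an OR query, and by its closeness to the ideal point (1, 1) for an AND query. A minimal sketch:

```python
from math import sqrt

def sim_or(x, y):
    # OR query: reward distance from the origin (0, 0); either term helps.
    return sqrt((x**2 + y**2) / 2)

def sim_and(x, y):
    # AND query: penalize distance from the ideal point (1, 1); both terms needed.
    return 1 - sqrt(((1 - x)**2 + (1 - y)**2) / 2)

# A document matching only the first term partially satisfies OR,
# but satisfies AND much less.
print(round(sim_or(1.0, 0.0), 3), round(sim_and(1.0, 0.0), 3))
```

Unlike the classic Boolean model, this gives a graded ranking: a document matching one of two OR terms scores about 0.707 rather than a binary 1 or 0.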
The need for intelligent, personalized experiences powered by AI is ever-growing. Our devices are producing more and more data that could help improve our AI experiences. How do we learn and efficiently process all this data from edge devices while maintaining privacy? On-device learning rather than cloud training can address these challenges. In this presentation, we’ll discuss:
- Why on-device learning is crucial for providing intelligent, personalized experiences without sacrificing privacy
- Our latest research in on-device learning, including few-shot learning, continuous learning, and federated learning
- How we are solving system and feasibility challenges to move from research to commercialization
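Of the techniques listed above, federated learning is often built around federated averaging (FedAvg), where a server combines client model updates weighted by each client's local data size, so raw data never leaves the device. A minimal pure-Python sketch (the weights and sizes are toy values):

```python
def fed_avg(client_weights, client_sizes):
    """Federated averaging: combine per-client model weights,
    weighting each client by its number of local training samples."""
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(n_params)
    ]

# Two clients with a 2-parameter model; the first client has 3x the data,
# so its weights dominate the global average.
global_w = fed_avg([[1.0, 0.0], [5.0, 4.0]], [3, 1])
print(global_w)  # [2.0, 1.0]
```

Only the weight vectors (not the underlying samples) are communicated, which is what gives the approach its privacy appeal.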
As generative AI adoption grows at record-setting speeds and computing demands increase, hybrid processing is more important than ever. But just like traditional computing evolved from mainframes and thin clients to today’s mix of cloud and edge devices, AI processing must be distributed between the cloud and devices for AI to scale and reach its full potential. In this talk you’ll learn:
• Why on-device AI is key
• Which generative AI models can run on device
• Why the future of AI is hybrid
• Qualcomm Technologies’ role in making hybrid AI a reality
Applying AI to software engineering problems: Do not forget the human! (University of Córdoba)
The application of artificial intelligence (AI) to software engineering (SE) problem-solving has been around since the 1980s, when expert systems were first used. However, it is during the last 10 years that the use of these techniques has peaked, first based on search and optimisation algorithms such as metaheuristics, and later on machine learning algorithms. The aim is to help the software engineer automate and optimise tasks of the software development process, and to use valuable information hidden in multiple data sources, such as software repositories, to execute insightful actions that improve the performance of the overall process. Today, the use of AI is trendy and often overused, as it can generate artificial results when it does not consider the subjective nature of the software development process, which requires the experience and know-how of the engineer. In this invited talk, we will discuss different proposals to incorporate the human into the decision-making process in the application of AI for SE (AI4SE), from interactive algorithms to the generation of interpretable models or explanations.
Enterprise adoption of AI/ML services has significantly accelerated in recent years. However, the majority of ML models are still developed with the goal of solving a single task, e.g., prediction, classification. In this talk, we emphasize the compositionality aspect that enables seamless composition / orchestration of existing data and models to address complex multi-domain use cases. This enables reuse, agility, and efficiency in model development and maintenance efforts. We then extend this concept to the Generative AI world, discussing the different LLMOps architectural patterns that enable composition of Large Language Models (LLMs) and AI Agents.
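The composition / orchestration of existing models described above can be sketched as a pipeline in which each service's output feeds the next service's input. The service names and logic below are purely illustrative stand-ins for deployed model endpoints:

```python
def extract_entities(text):
    # Stand-in for a deployed entity-extraction (NER) service.
    return [w for w in text.split() if w.istitle()]

def classify_topic(entities):
    # Stand-in for a second, downstream classifier service.
    return "travel" if "Paris" in entities else "other"

def compose(*services):
    """Chain services so each one's output is the next one's input,
    yielding a new composite service."""
    def composite(x):
        for service in services:
            x = service(x)
        return x
    return composite

# The composite service reuses both components without retraining either.
travel_detector = compose(extract_entities, classify_topic)
print(travel_detector("Flights to Paris are cheap in winter"))
```

The point of the sketch is that `travel_detector` is itself a service with the same call shape as its components, so it can in turn be composed further.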
A Privacy Framework for Hierarchical Federated Learning, by Debmalya Biswas
Federated Learning (FL) enables heterogeneous entities to collaboratively develop an optimized (global) model by sharing data and models in a privacy-preserving fashion. We consider a Hierarchical Federated Learning (HFL) environment with data ownership split among the entities representing the edge nodes. Each node can train models on the data it owns, as well as request access to data and model(s) owned by its descendant nodes, to optimize its models, perform transfer learning on new data, and develop an ensemble model. Unfortunately, a practical realization of HFL is challenging today due to issues with data/model lineage tracking and providing subsequent privacy guarantees. In this paper, we propose a conceptual framework for HFL by capturing the data/model attributes at each node, including their privacy exposure. The framework enables scenarios where a node's output may expose certain attributes of its underlying data, as well as identifying models in the hierarchy that need to be updated once a user whose data was used in their training has opted out. By designing the computations appropriately and limiting the exposure by the nodes, we show that different levels of privacy can be guaranteed.
Enterprise adoption of AI/ML services has significantly accelerated in the last few years. However, the majority of ML models are still developed with the goal of solving a single task, e.g., prediction, classification. In this context, Compositional AI envisions seamless composition of existing AI/ML services, to provide a new (composite) AI/ML service, capable of addressing complex multi-domain use cases. In this work, we consider two MLOps aspects that need to be enabled to realize Composable AI scenarios: (i) integration of DataOps and MLOps, and (ii) extension of the integrated DataOps-MLOps pipeline such that inferences made by a deployed ML model can be provided as a training dataset for a new model. In an enterprise AI/ML environment, this enables reuse, agility, and efficiency in development and maintenance efforts.
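The second aspect above, feeding a deployed model's inferences back into the pipeline as training data for a new model, can be sketched as follows. The models and data here are toy stand-ins chosen only to make the data flow concrete:

```python
def deployed_model(text):
    # Existing deployed service: labels an incoming raw record.
    return "spam" if "offer" in text else "ham"

raw_stream = ["limited offer now", "meeting at 10", "offer expires"]

# DataOps step: the deployed model's inferences become a labelled
# training dataset for the next model.
training_set = [(x, deployed_model(x)) for x in raw_stream]

# MLOps step: a (trivial) new model trained from those weak labels,
# here just a bag of words seen in "spam" examples.
spam_words = {w for x, y in training_set if y == "spam" for w in x.split()}

def new_model(text):
    return "spam" if set(text.split()) & spam_words else "ham"

print(new_model("special offer today"))
```

In a real pipeline the labelled inferences would land in a feature store or dataset registry, with the lineage from `deployed_model` to `new_model` tracked explicitly.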
Three Dimensional Database: Artificial Intelligence to eCommerce Web service ..., by CSCJournals
The main objective of this paper is to apply artificial intelligence techniques to web service agents and increase the efficiency of agent communications. In recent years, web services have played a major role in computer applications. Web services are essential, as the design model of applications is dedicated to electronic business. This model aims to become one of the major formalisms for the design of distributed and cooperative applications in an open environment (the Internet). Current commercial and research-based efforts are reviewed and positioned within these two fields. A web service is a software system designed to support interoperable machine-to-machine interaction over a network. It has an interface described in a machine-processable format (specifically, the Web Services Description Language, WSDL). Other systems interact with the web service in a manner prescribed by its description using SOAP messages, typically conveyed using HTTP with an XML serialization in conjunction with other Web-related standards. Particular attention is given to the application of AI techniques to the important issue of WS composition. Within the range of AI technologies considered, we focus on the work of the Semantic Web and agent-based communities to provide web services with semantic descriptions and intelligent behavior and reasoning capabilities. Re-composition of web services is also considered, and a number of adaptive agent approaches are introduced and implemented in the publication domain with three-dimensional databases; one of the areas of work is eCommerce.
Efficient and Reliable Hybrid Cloud Architecture for Big Database, by ijccsa
The objective of our paper is to propose a Cloud computing framework which is feasible and necessary for handling huge data. In our prototype system, we considered the national ID database structure of Bangladesh, which is prepared by the Election Commission of Bangladesh. Using this database, we propose an interactive graphical user interface for Bangladeshi People Search (BDPS) that uses a hybrid structure of cloud computing handled by Apache Hadoop, where the database is implemented in HiveQL. The infrastructure is divided into two parts: a locally hosted cloud based on Eucalyptus, and a remote cloud implemented on the well-known Amazon Web Services (AWS). Some common problems in the Bangladesh context, including data traffic congestion, server timeouts, and server downtime, are also discussed.
apidays LIVE LONDON - A Decentralized Reference Architecture for Cloud-native Applications, by apidays
apidays LIVE LONDON - The Road to Embedded Finance, Banking and Insurance with APIs
A Decentralized Reference Architecture for Cloud-native Applications
Asanka Abeysinghe, Chief Technology Evangelist at WSO2
Data Virtualization: Introduction and Business Value (UK), by Denodo
Watch full webinar here: https://bit.ly/30mHuYH
Having started out as the most agile and real-time enterprise data integration approach, data virtualization is proving to go beyond its initial promise and is becoming one of the most important enterprise big data fabrics. Denodo's vision is to provide a unified data delivery layer as a logical data fabric, to bridge the gap between IT and the business, hiding the underlying complexity and creating a semantic layer that exposes data in a business-friendly manner.
Attend this webinar to learn:
- What data virtualization really is
- How it differs from other enterprise data integration technologies
- Why data virtualization is finding enterprise-wide deployment inside some of the largest organizations
- Business Value of data virtualization and customer use cases
- Highlights of the newly launched Denodo Platform 8.0
API Enablement on Mainframes: how to API-enable mainframe applications and services, and how to integrate mainframe services and applications with mobile, cloud, and external apps. This white paper covers a couple of patterns for API-enabling mainframe-based applications and services.
Software Design Patterns, by arorastores
Software Design Patterns:
Consider a company migrating to a third-party cloud-based solution from an internally maintained ecosystem of applications utilizing one current-generation database system, as well as a legacy system for older data. They plan to migrate all data to the cloud-based solution in time. But, for now, they are going to transition to the new cloud-based applications and the cloud-based database for new data, while relying upon the existing and legacy database for older data. The databases have approximately the same functionality, but different interfaces and languages.
What design pattern highlights the most significant challenge associated with integrating the different databases (as well as one way of addressing it)?
What is that challenge?
Briefly, and in English, describe how the pattern teaches us to approach this problem. In other words, what is the pattern we should follow for the solution?
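One answer commonly given for this kind of scenario is the Adapter pattern: wrap the legacy database's differing interface behind the interface the new cloud database exposes, so application code targets a single interface regardless of which backend serves a record. A minimal illustrative sketch; the class and method names are assumptions, not taken from the question:

```python
class LegacyDB:
    """Legacy backend with its own interface."""
    def fetch_record(self, key):
        return f"legacy:{key}"

class CloudDB:
    """New cloud backend with the interface the applications target."""
    def get(self, key):
        return f"cloud:{key}"

class LegacyAdapter:
    """Adapter: exposes LegacyDB through the CloudDB-style interface."""
    def __init__(self, db):
        self.db = db
    def get(self, key):
        # Translate the common call into the legacy interface.
        return self.db.fetch_record(key)

def lookup(db, key):
    # Application code is written once, against the common `get` interface.
    return db.get(key)

print(lookup(CloudDB(), "a1"), lookup(LegacyAdapter(LegacyDB()), "a1"))
```

The challenge the pattern highlights is exactly the one in the question: equivalent functionality hidden behind incompatible interfaces and query languages.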
Solution
Design patterns like the Factory pattern, Singleton pattern, etc. basically provide solutions to general problems which are faced by software developers during the development phase. These patterns do not play any role in data migration.
There are four stages in data migration:
1. Semantic data models, which comprise the dimensional models, semantic models, and the mapping to semantic building blocks.
2. Data mapping specifications, which are used to translate source data to target data.
3. KPIs and data lineage, which are useful in establishing the data lineage for the organization and other rightful requirements.
4. End-to-end scope of data models, used to standardise the data that is loaded into the data warehouse.
Please follow the list of steps provided below while migrating data to the cloud:
1. Assess the requirements and then plan.
2. Disintegrate the dependencies after the initial assessments.
3. Redesign, re-program, and reintegrate.
4. Test the newly migrated components.
5. Fine-tune and train.
However, there can be technical issues during data migration. Many firms which migrate data to the cloud proceed in a hybrid model, keeping key elements of their infrastructure in-house and under their control while they outsource less sensitive or non-core components.
Cloud vendors would always expect the customers to jointly provide or develop a virtual image that specifies their basic server configuration, which is offered as a service after being built inside the cloud. It is also required that the IT team have the skill set to create a VM template which includes the infrastructure, application, and security required by the enterprise.
Similar to Compositional AI: Fusion of AI/ML Services
Constraints Enabled Autonomous Agent Marketplace: Discovery and Matchmaking, by Debmalya Biswas
The recent advances in Generative AI have renewed the discussion around Auto-GPT, a form of autonomous agent that can execute complex tasks, e.g., make a sale, plan a trip, etc. We focus on the discovery aspect of agents, i.e., identifying the agent(s) capable of executing a given task. This implies that there exists a marketplace with a registry of agents, with a well-defined description of the agent capabilities and constraints. In this paper, we outline a constraints-based model to specify agent services. We show how the constraints of a composite agent can be derived and described in a manner consistent with the constraints of its component agents. Finally, we discuss approximate matchmaking, and show how the notion of bounded inconsistency can be exploited to discover agents more efficiently.
The growing adoption of Gen AI, especially LLMs, has re-ignited the discussion around AI regulations, to ensure that AI/ML systems are responsibly trained and deployed. Unfortunately, this effort is complicated by multiple governmental organizations and regulatory bodies releasing their own guidelines and policies, with little to no agreement on the definition of terms.
In this talk, we will provide an overview explaining the key Responsible AI aspects: explainability, bias, and accountability. We will then outline the Gen AI usage patterns and show how the three aspects can be integrated at different stages of the LLMOps (MLOps for LLMs) pipeline. We summarize the learnings in the form of Gen AI design patterns that can be readily applied to enterprise use cases.
Reinforcement Learning (RL) refers to a branch of Artificial Intelligence (AI) that is able to achieve complex goals by maximizing a reward function in real-time. Given that RL-based approaches can basically be applied to any optimization problem, their enterprise adoption is picking up fast. In this talk, we will focus on Industrial Control Systems, and show why RL is the 'best fit' for many control optimization problems, from controlling combustion engines, to robotic arms cutting metals, to air conditioning systems in buildings.
Regulating Generative AI - LLMOps pipelines with Transparency - Debmalya Biswas
The growing adoption of Gen AI, esp. LLMs, has re-ignited the discussion around AI Regulations — to ensure that AI/ML systems are responsibly trained and deployed. Unfortunately, this effort is complicated by multiple governmental organizations and regulatory bodies releasing their own guidelines and policies with little to no agreement on the definition of terms.
Rather than trying to understand and regulate all types of AI, we recommend a different (and practical) approach in this talk based on AI Transparency —
to transparently outline the capabilities of the AI system based on its training methodology and set realistic expectations with respect to what it can (and cannot) do.
We outline LLMOps architecture patterns and show how the proposed approach can be integrated at different stages of the LLMOps pipeline capturing the model's capabilities. In addition, the AI system provider also specifies scenarios where (they believe that) the system can make mistakes, and recommends a ‘safe’ approach with guardrails for those scenarios.
Edge AI Framework for Healthcare Applications - Debmalya Biswas
Edge AI enables intelligent solutions to be deployed on edge devices, reducing latency, allowing offline execution, and providing strong privacy guarantees. Unfortunately, achieving efficient and accurate execution of AI algorithms on edge devices, with limited power and computational resources, raises several deployment challenges. Existing solutions are very specific to a hardware platform/vendor. In this work, we present the MATE framework that provides tools to (1) foster model-to-platform adaptations, (2) enable validation of the deployed models proving their alignment with the originals, and (3) empower engineers and architects to do it efficiently using repeated, but rapid development cycles. We finally show the practical utility of the proposal by applying it on a real-life healthcare body-pose estimation app.
Ethical AI: Establish an AI/ML Governance framework addressing Reproducibility, Explainability, Bias & Accountability for Enterprise AI use-cases.
Presentation on “Open Source Enterprise AI/ML Governance” at Linux Foundation’s Open Compliance Summit, Dec 2020 (https://events.linuxfoundation.org/open-compliance-summit/)
Full article: https://towardsdatascience.com/ethical-ai-its-implications-for-enterprise-ai-use-cases-and-governance-81602078f5db
Abstract. With chatbots gaining traction and their adoption growing in different verticals, e.g., Health, Banking, Dating; and users sharing more and more private information with chatbots — studies have started to highlight the privacy risks of chatbots. In this paper, we propose two privacy-preserving approaches for chatbot conversations. The first approach applies ‘entity’ based privacy filtering and transformation, and can be applied directly on the app (client) side. It however requires knowledge of the chatbot design to be enabled. We present a second scheme based on Searchable Encryption that is able to preserve user chat privacy, without requiring any knowledge of the chatbot design. Finally, we present some experimental results based on a real-life employee Help Desk chatbot that validate both the need and feasibility of the proposed approaches.
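The client-side 'entity' based filtering idea can be sketched as follows (illustrative patterns and placeholder names only; a real deployment would derive the entity list from the chatbot's design, which is exactly the knowledge this first approach requires):

```python
import re

# Minimal sketch of client-side entity-based privacy filtering: sensitive
# entities are detected in the outgoing message and replaced with typed
# placeholders before the message ever reaches the chatbot backend.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\b\+?\d[\d -]{7,}\d\b"),
}

def redact(message):
    """Replace sensitive entities with typed placeholders."""
    for label, pattern in PATTERNS.items():
        message = pattern.sub(f"<{label}>", message)
    return message

redacted = redact("Reach me at jane.doe@example.com or +41 79 123 4567")
```

The typed placeholders (rather than blanks) let the chatbot's dialog logic still recognize *what kind* of entity was supplied, even though the value itself never leaves the client.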
Reinforcement Learning based HVAC Optimization in Factories - Debmalya Biswas
Heating, Ventilation and Air Conditioning (HVAC) units are responsible for maintaining the temperature and humidity settings in a building. Studies have shown that HVAC accounts for almost 50% of the energy consumption in a building and 10% of global electricity usage. HVAC optimization thus has the potential to contribute significantly towards our sustainability goals, reducing energy consumption and CO2 emissions. In this work, we explore ways to optimize the HVAC controls in factories. Unfortunately, this is a complex problem as it requires computing an optimal state considering multiple variable factors, e.g., occupancy, manufacturing schedule, temperature requirements of operating machines, air flow dynamics within the building, external weather conditions, energy savings, etc. We present a Reinforcement Learning (RL) based energy optimization model that has been applied in our factories. We show that RL is a good fit as it is able to learn and adapt to multi-parameterized system dynamics in real-time. It provides around 25% energy savings on top of the previously used Proportional–Integral–Derivative (PID) controllers.
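To make the RL framing concrete, here is a toy tabular Q-learning loop for a single temperature set point. Everything in it (states, actions, reward, dynamics) is invented for the sketch; the production controller described above deals with far richer state (occupancy, schedules, weather) and is not reproduced here:

```python
import random

# Toy Q-learning for one-zone temperature control: the agent learns to heat
# when too cold and cool when too warm, purely from the reward signal.
ACTIONS = [-1, 0, 1]            # cool, hold, heat
TARGET = 22                     # desired set point (degrees C)
STATES = list(range(16, 29))    # discretized room temperatures

def reward(temp):
    return -abs(temp - TARGET)  # penalize deviation from the set point

Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.2
random.seed(0)

for _ in range(2000):                        # episodes from random start temps
    temp = random.choice(STATES)
    for _ in range(25):
        if random.random() < eps:            # epsilon-greedy exploration
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda x: Q[(temp, x)])
        nxt = min(max(temp + a, 16), 28)     # bounded deterministic room dynamics
        target = reward(nxt) + gamma * max(Q[(nxt, b)] for b in ACTIONS)
        Q[(temp, a)] += alpha * (target - Q[(temp, a)])
        temp = nxt

best_at_cold = max(ACTIONS, key=lambda a: Q[(18, a)])
best_at_hot = max(ACTIONS, key=lambda a: Q[(26, a)])
```

The learned greedy policy heats below the set point and cools above it; the point of the sketch is that no explicit control law was programmed, only the reward.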
Delayed Rewards in the context of Reinforcement Learning based Recommender ... - Debmalya Biswas
We present a Reinforcement Learning (RL) based approach to implement Recommender systems. The results are based on a real-life Wellness app that is able to provide personalized health / activity related content to users in an interactive fashion. Unfortunately, current recommender systems are unable to adapt to continuously evolving features, e.g., user sentiment, and scenarios where the RL reward needs to be computed based on multiple and unreliable feedback channels (e.g., sensors, wearables). To overcome this, we propose three constructs: (i) weighted feedback channels, (ii) delayed rewards, and (iii) rewards boosting, which we believe are essential for RL to be used in Recommender Systems.
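Construct (i), weighted feedback channels, can be sketched as follows (channel names and weights are illustrative; the paper's actual formulation may differ): fuse multiple noisy feedback signals into one RL reward, weighting each channel by its reliability and skipping channels that dropped out.

```python
# Reliability-weighted fusion of feedback channels into a single reward.
def fused_reward(feedback, weights):
    """Weighted average over the channels that actually reported a value."""
    present = {ch: v for ch, v in feedback.items() if v is not None}
    total = sum(weights[ch] for ch in present)
    return sum(weights[ch] * v for ch, v in present.items()) / total

weights = {"explicit_rating": 0.6, "wearable": 0.3, "sentiment": 0.1}
# The unreliable wearable channel dropped out (None) for this interaction:
r = fused_reward({"explicit_rating": 1.0, "wearable": None, "sentiment": 0.5},
                 weights)
```

Renormalizing over only the present channels keeps the reward on a stable scale even when a sensor or wearable fails to report.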
Building an enterprise Natural Language Search Engine with ElasticSearch and ... - Debmalya Biswas
Presented at Berlin Buzzwords 2019
https://berlinbuzzwords.de/19/session/building-enterprise-natural-language-search-engine-elasticsearch-and-facebooks-drqa
Personalized services attract high-value customers. Knowing the preferences and habits of an individual customer, it is possible to offer that customer well-customized and adapted services, matching their needs and desires. This is advantageous for the entity offering the service (e.g., a retailer) as well, as it helps in creating additional sales or improving customer retention. The main unsolved problem today is that the profile of each individual customer would be necessary in order to create such services, posing severe risks regarding privacy and data protection. This paper proposes efficient encryption schemes that allow profiling to be outsourced while preserving privacy. The schemes ensure that the customer is always in control of their profile data, while at the same time making shopping data across multiple retailers available to third-party service providers to provide targeted services.
Privacy Policies Change Management for Smartphones - Debmalya Biswas
The ever-increasing popularity of apps stems from their ability to provide highly customized services for the user.
The flip side is that to provide such customized services, apps need access to very sensitive personal user information. This has led to a lot of rogue apps that, e.g., pass personal information to 3rd party Ad servers in the background. Studies have shown that current app vetting processes, which are mainly restricted to install-time verification mechanisms, are incapable of detecting and preventing such attacks. We argue that the missing fundamental aspect here is the inability to capture and control runtime characteristics of apps, e.g., we need to know not only the list of sensors that need to be accessed by an app but also their frequency of access. This leads to the need for an expressive policy language that, in addition to the list of sensors, also allows specifying when, where, and how frequently they can be accessed.
An expressive policy language has the disadvantage of making it more difficult for an average user to set privacy preferences and analyze their consequences. Further, privacy policies evolve over time. Users are likely to change their privacy settings, as a response to a recently discovered vulnerability, or to be able to install that “much desired” app, etc. Such a policy change affects both already installed apps (which may no longer be compliant) and previously rejected apps (which may be compliant now).
In this paper, we propose an integrated privacy add-on that (i) compares the apps' profiles vs. the user's privacy settings, outlining the points of conflict as well as the different ways in which they can be resolved; and (ii) provides efficient change management with respect to any changes in user privacy settings.
Levelwise PageRank with Loop-Based Dead End Handling Strategy : SHORT REPORT ... - Subhajit Sahu
Abstract — Levelwise PageRank is an alternative method of PageRank computation which decomposes the input graph into a directed acyclic block-graph of strongly connected components, and processes them in topological order, one level at a time. This enables calculation of ranks in a distributed fashion without per-iteration communication, unlike the standard method where all vertices are processed in each iteration. It however comes with a precondition of the absence of dead ends in the input graph. Here, the native non-distributed performance of Levelwise PageRank was compared against Monolithic PageRank on a CPU as well as a GPU. To ensure a fair comparison, Monolithic PageRank was also performed on a graph where vertices were split by components. Results indicate that Levelwise PageRank is about as fast as Monolithic PageRank on the CPU, but quite a bit slower on the GPU. The slowdown on the GPU is likely caused by a large submission of small workloads, and is expected to be a non-issue when the computation is performed on massive graphs.
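The levelwise scheme can be sketched on a toy dead-end-free graph. The strongly connected components and their topological levels are assumed precomputed here (the report's implementation also handles the decomposition itself and GPU execution); each level's ranks are iterated to convergence before the next level starts, since earlier levels' ranks are already final:

```python
# Levelwise PageRank sketch: process SCC blocks in topological order.
DAMPING, ITERS = 0.85, 50
edges = {0: [1], 1: [0, 2], 2: [3], 3: [2]}    # two 2-vertex SCCs: {0,1} -> {2,3}
levels = [[0, 1], [2, 3]]                      # SCC blocks in topological order
N = len(edges)
out_deg = {u: len(vs) for u, vs in edges.items()}
rank = {u: 1.0 / N for u in edges}

for level in levels:                 # one level at a time; earlier levels are final
    for _ in range(ITERS):
        contrib = {u: 0.0 for u in level}
        for u, vs in edges.items():
            for v in vs:
                if v in contrib:     # only current-level vertices are updated
                    contrib[v] += rank[u] / out_deg[u]
        for u in level:
            rank[u] = (1 - DAMPING) / N + DAMPING * contrib[u]

total = sum(rank.values())           # matches monolithic PageRank: sums to ~1
```

Because no edges point from later levels back to earlier ones, the per-level fixed points coincide with the monolithic PageRank values, which is why the two methods can be compared head-to-head.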
Explore our comprehensive data analysis project presentation on predicting product ad campaign performance. Learn how data-driven insights can optimize your marketing strategies and enhance campaign effectiveness. Perfect for professionals and students looking to understand the power of data analysis in advertising. For more details visit: https://bostoninstituteofanalytics.org/data-science-and-artificial-intelligence/
Empowering the Data Analytics Ecosystem: A Laser Focus on Value
The data analytics ecosystem thrives when every component functions at its peak, unlocking the true potential of data. Here's a laser focus on key areas for an empowered ecosystem:
1. Democratize Access, Not Data:
Granular Access Controls: Provide users with self-service tools tailored to their specific needs, preventing data overload and misuse.
Data Catalogs: Implement robust data catalogs for easy discovery and understanding of available data sources.
2. Foster Collaboration with Clear Roles:
Data Mesh Architecture: Break down data silos by creating a distributed data ownership model with clear ownership and responsibilities.
Collaborative Workspaces: Utilize interactive platforms where data scientists, analysts, and domain experts can work seamlessly together.
3. Leverage Advanced Analytics Strategically:
AI-powered Automation: Automate repetitive tasks like data cleaning and feature engineering, freeing up data talent for higher-level analysis.
Right-Tool Selection: Strategically choose the most effective advanced analytics techniques (e.g., AI, ML) based on specific business problems.
4. Prioritize Data Quality with Automation:
Automated Data Validation: Implement automated data quality checks to identify and rectify errors at the source, minimizing downstream issues.
Data Lineage Tracking: Track the flow of data throughout the ecosystem, ensuring transparency and facilitating root cause analysis for errors.
5. Cultivate a Data-Driven Mindset:
Metrics-Driven Performance Management: Align KPIs and performance metrics with data-driven insights to ensure actionable decision making.
Data Storytelling Workshops: Equip stakeholders with the skills to translate complex data findings into compelling narratives that drive action.
Benefits of a Precise Ecosystem:
Sharpened Focus: Precise access and clear roles ensure everyone works with the most relevant data, maximizing efficiency.
Actionable Insights: Strategic analytics and automated quality checks lead to more reliable and actionable data insights.
Continuous Improvement: Data-driven performance management fosters a culture of learning and continuous improvement.
Sustainable Growth: Empowered by data, organizations can make informed decisions to drive sustainable growth and innovation.
By focusing on these precise actions, organizations can create an empowered data analytics ecosystem that delivers real value by driving data-driven decisions and maximizing the return on their data investment.
Adjusting primitives for graph : SHORT REPORT / NOTES - Subhajit Sahu
Compressed Sparse Row (CSR) is an adjacency-list based graph representation used by graph algorithms like PageRank.
Multiply with different modes (map)
1. Performance of sequential execution based vs OpenMP based vector multiply.
2. Comparing various launch configs for CUDA based vector multiply.
Sum with different storage types (reduce)
1. Performance of vector element sum using float vs bfloat16 as the storage type.
Sum with different modes (reduce)
1. Performance of sequential execution based vs OpenMP based vector element sum.
2. Performance of memcpy vs in-place based CUDA based vector element sum.
3. Comparing various launch configs for CUDA based vector element sum (memcpy).
4. Comparing various launch configs for CUDA based vector element sum (in-place).
Sum with in-place strategies of CUDA mode (reduce)
1. Comparing various launch configs for CUDA based vector element sum (in-place).
06-04-2024 - NYC Tech Week - Discussion on Vector Databases, Unstructured Data and AI
Round table discussion of vector databases, unstructured data, AI, big data, real-time, robots and Milvus.
A lively discussion with NJ Gen AI Meetup Lead, Prasad and Procure.FYI's Co-Found
Chatty Kathy - UNC Bootcamp Final Project Presentation - Final Version - 5.23... - John Andrews
SlideShare Description for "Chatty Kathy - UNC Bootcamp Final Project Presentation"
Title: Chatty Kathy: Enhancing Physical Activity Among Older Adults
Description:
Discover how Chatty Kathy, an innovative project developed at the UNC Bootcamp, aims to tackle the challenge of low physical activity among older adults. Our AI-driven solution uses peer interaction to boost and sustain exercise levels, significantly improving health outcomes. This presentation covers our problem statement, the rationale behind Chatty Kathy, synthetic data and persona creation, model performance metrics, a visual demonstration of the project, and potential future developments. Join us for an insightful Q&A session to explore the potential of this groundbreaking project.
Project Team: Jay Requarth, Jana Avery, John Andrews, Dr. Dick Davis II, Nee Buntoum, Nam Yeongjin & Mat Nicholas
Data Centers - Striving Within A Narrow Range - Research Report - MCG - May 2... - pchutichetpong
M Capital Group (“MCG”) expects demand to grow and supply to evolve, facilitated by institutional investment rotating out of offices and into work from home (“WFH”), alongside the ever-expanding need for data storage as global internet usage grows, with experts predicting 5.3 billion users by 2023. These market factors will be underpinned by technological changes, such as progressing cloud services and edge sites, allowing the industry to see strong expected annual growth of 13% over the next 4 years.
Whilst competitive headwinds remain, represented through the recent second bankruptcy filing of Sungard, which blames “COVID-19 and other macroeconomic trends including delayed customer spending decisions, insourcing and reductions in IT spending, energy inflation and reduction in demand for certain services”, the industry has seen key adjustments, where MCG believes that engineering cost management and technological innovation will be paramount to success.
MCG reports that the more favorable market conditions expected over the next few years, helped by the winding down of pandemic restrictions and a hybrid working environment will be driving market momentum forward. The continuous injection of capital by alternative investment firms, as well as the growing infrastructural investment from cloud service providers and social media companies, whose revenues are expected to grow over 3.6x larger by value in 2026, will likely help propel center provision and innovation. These factors paint a promising picture for the industry players that offset rising input costs and adapt to new technologies.
According to M Capital Group: “Specifically, the long-term cost-saving opportunities available from the rise of remote managing will likely aid value growth for the industry. Through margin optimization and further availability of capital for reinvestment, strong players will maintain their competitive foothold, while weaker players exit the market to balance supply and demand.”
2. Enterprise AI
Enterprise AI/ML use-cases are
pervasive.
Broadly categorized by the three core
AI/ML capabilities enabling them:
Natural Language Processing (NLP),
Computer Vision and Predictive Analytics
The majority of AI/ML models are still
developed with the goal of solving a
single task, e.g., prediction, classification.
3. Compositional AI Scenario
Consider the online Repair Service of a
luxury goods vendor.
The service consists of a Computer Vision
(CV) model capable of assessing the repairs
needed, given a picture of the product
uploaded by the customer.
[Diagram: Repair Ordering Service = Product Repair Assessment CV Model + Chatbot Ordering App]
The assessment is followed by an Ordering
Chatbot conversation that captures
additional details required to process the
user’s repair request, e.g., damage details,
username, contact details, etc.
4. Compositional AI Scenario (2)
In future, when the enterprise is looking for models
to develop a Product Recommendation service; the
Repair Service is considered.
The data gathered by the Repair Service: state of
products owned by the users (gathered by CV
assessment model) together with their demographics
(gathered by the Ordering Chatbot) - provides
additional training data for the Recommender Service.
Privacy policies may prevent their data from being combined, so that it cannot be used to profile customers: “data used for a different purpose than originally intended”.
[Diagram: Repair Ordering Service (damaged product images + text description, customer demographics) feeding the Product Recommendation Service (products purchased + demographics)]
5. Compositional AI Scenario (3)
Enterprise further wants to develop a CV
App to detect Defective products during
Manufacturing.
The Repair Service can help here as it has
labeled images of damaged products (with
the product damage descriptions provided
to the Chatbot acting as ‘labels’).
[Diagram: Repair Ordering Service (damaged product images + text description, customer demographics) feeding the Manufacturing Defect Detection App (damaged product images + text description)]
Training data is acquired by fusing data
gathered by two different AI/ML Services.
6. Compositionality
Ability to form new
(composite) services by
combining the capabilities
of existing services.
The existing services may
themselves be composite,
leading to a hierarchical
composition.
7. Prior-Art: Web Services Composition
WS-Composition enables reuse and
integration of existing (isolated)
applications in an enterprise.
Composition challenges: Discovery,
Matchmaking, Monitoring, Transactions
BPEL specification to orchestrate Web
Services Compositions (link)
* D. Biswas. Web Services Discovery and Constraints Composition. RR 2007: 73-87
* D. Biswas, K. Vidyasankar. Optimal Compensation for Hierarchical Web services
Compositions under Restricted Visibility. IEEE APSCC 2009: 293-300
8. Prior-Art: Secure Composition
Given a complex task, first partition the task
to several, simpler sub-tasks. Then, design
protocols for securely realizing the sub-tasks.
Universal Composition (UC)*-framework
ensures that the protocol composed from
(secure) sub-protocols, securely realizes the
given task.
UC continues to guarantee security in novel
execution environments, or where other
protocols are running concurrently – essential
to run protocols in complex, unpredictable
and adversarial environments.
* Ran Canetti. 2020. Universally Composable Security. J. ACM 67, 5, Article 28 (October 2020).
9. ML Prior-Art: Ensemble Learning
Ensemble Learning attempts to make the
best use of the predictions from multiple
models catering to the same problem.
Commonly used Ensemble Learning
techniques include: Bagging, Boosting
and Stacking.
[Diagram: the original training data D is split into subsets D1-D4; multiple models are built and then combined]
10. ML Prior-Art: Federated Learning
Federated learning, also known as Collaborative Learning, enables multiple (non-trusting) entities to
collaborate in training an ML model on their combined dataset.
FL-Neural Network training: All nodes agree upon
the same neural network architecture and
task to train a global model.
During each epoch, nodes download the global model
parameters from the coordinator, update them
locally using some variant of gradient descent on
their local datasets, and share the updated values
back with the coordinator.
The coordinator node averages the gathered
parameter values from all child nodes.
* B. McMahan, et. al. Communication-Efficient Learning of Deep Networks from
Decentralized Data. AISTATS 2017: 1273-1282
[Diagram: training data belonging to different organizations (Org1-Org3) is used to train local neural networks; the coordinator (parameter server) averages the gathered parameters into the global model; nodes download the global parameters and share their local updates]
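The download / local-update / average loop can be sketched as follows (toy parameter vectors with the local gradients supplied directly, rather than real network training; function names are illustrative, after the FedAvg scheme of McMahan et al.):

```python
# Federated averaging sketch: each node takes a local gradient step from the
# shared global parameters; the coordinator averages the returned parameters.

def local_update(global_params, local_gradient, lr=0.1):
    """One gradient-descent step on a node's local data (gradient supplied)."""
    return [w - lr * g for w, g in zip(global_params, local_gradient)]

def fed_avg(updates):
    """Coordinator: average the parameter values gathered from all nodes."""
    n = len(updates)
    return [sum(ws) / n for ws in zip(*updates)]

global_params = [0.0, 0.0]
grads = {"org1": [1.0, -1.0], "org2": [3.0, 1.0], "org3": [2.0, 0.0]}
updates = [local_update(global_params, g) for g in grads.values()]
global_params = fed_avg(updates)
```

Only parameters cross organizational boundaries; the raw datasets of Org1-Org3 never leave their owners, which is the point of the collaborative setup.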
11. ML Prior-Art: Stacking Neural Networks
In the context of OCR, the CNN is used
only as the feature extractor, with the
features provided as input to the LSTM.
The LSTM is able to take into account
both the preceding and following set of
output characters - to output the most
probable character at each time step.
[Diagram: Sequential Composition ("fusion"): input image -> CNN -> image features -> LSTM]
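A minimal sketch of the sequential-composition pattern itself, with stand-in functions in place of trained networks (the names, features, and decoding rule below are invented for illustration; a real OCR pipeline would plug in actual CNN and LSTM models):

```python
# Sequential composition: a 'CNN' stage extracts per-column features, a
# 'sequence model' stage decodes characters using preceding AND following
# context, mimicking what a bidirectional LSTM does with real features.

def cnn_features(image_columns):
    """Stand-in feature extractor: one scalar feature per image column."""
    return [sum(col) / len(col) for col in image_columns]

def sequence_decode(features, threshold=0.5):
    """Stand-in sequence model: emits a character per time step using context."""
    chars = []
    for i in range(len(features)):
        context = features[max(0, i - 1): i + 2]   # previous, current, next
        chars.append("x" if sum(context) / len(context) > threshold else ".")
    return "".join(chars)

def ocr(image_columns):
    return sequence_decode(cnn_features(image_columns))   # CNN -> LSTM fusion

text = ocr([[1, 1], [1, 0], [0, 0], [0, 0]])
```

The composition is just function chaining: the first model's output space is the second model's input space, which is what makes the fused service reusable as a single unit.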
12. AI Service Basics
AI Service: Data + Model + API
[Diagram: (labeled) data -> (trained) ML model -> API endpoint, managed by DataOps, MLOps and APIOps (API Mesh / API Management) respectively]
13. DataOps – Data Fusion
Integration/fusion tools for AI Services are lacking - a key part of Compositional AI
“DataOps is an automated,
process-oriented methodology,
used by analytic and data teams, to
improve the quality and reduce the
cycle time of data analytics.”
- Wikipedia
Data Processing
NiFi: Data movements and transformations
Spark: Complex data transformations
Data Integration
PrestoDB: Federate queries over multiple data sources
Hive + LLAP (Data Warehouse): Central repository of integrated
data from one or more data sources
Neo4j: Use graph structures to understand relationships and
perform semantic queries
Data Access
Tableau, PowerBI: Dashboard, Reports
WSO2: Expose data and ML services as APIs
Data Ingestion
Kafka: Millions of events per second
HDFS: Hadoop File System
[Diagram labels: Federated Queries, Data Marts, Knowledge Graphs]
14. MLOps
MLOps, also known as ModelOps,
combines DevOps with ML to
manage ML models in production.
End-to-end ML lifecycle: Data and
(Serving) API aspects are also
considered.
Manages model versions and
parameters; however, the model
fusion aspect is missing.
* D. Sculley, et al. Hidden Technical Debt in Machine Learning Systems. NIPS 2015: 2503-2511
15. APIOps - API Management - API Mesh
Cloud ML APIs provide core AI/ML
capabilities, e.g., Computer Vision, NLP,
Chatbots, Speech, Video, etc.
(Black-box APIs) Good for prototyping,
but difficult to use for strategic use-cases
without any knowledge of the
underlying models and data.
16. Data Governance
Data Governance includes: Data
catalog, Data dictionary, Data
provenance and lineage tracking,
Data modeling, etc.
We have considered the operational
part: DataOps, MLOps, APIOps.
Does the answer lie in establishing a
governance framework?
[Background diagram: the DataOps toolchain from slide 13 (Data Ingestion, Data Processing, Data Integration, Data Access), framed by Data Governance and Ethical AI Governance: Privacy, Explainability, Bias/Fairness, Accountability]
17. Data Governance: FAIR Principles
FAIR principles provide guidance in terms of
specifying the data lineage and provenance,
maximizing reuse and enabling the users to
decide which data is fit for their purpose.
The software / ML code part - how the
data is transformed - is not considered. This
leads to potentially conflicting Open Data
vs. Open-Source Software frameworks.
*“there are also several significant
differences between data and software as
digital research objects”
Source: https://www.openaire.eu/
* Lamprecht et al., Towards FAIR principles for research software, June 2020
18. Ethical AI Governance
*“Ethical AI, also known as
Responsible AI, is the practice of using
AI with good intention to empower
employees and businesses, and fairly
impact customers and society.”
**Key components of an Ethical AI
Governance Framework include:
Privacy, Explainability, Bias/Fairness
& Accountability.
* R. Porter. Beyond the promise: implementing Ethical AI, 2020.
** D. Biswas. Ethical AI: its implications for Enterprise AI Use-cases and
Governance. Linux Foundation Open Compliance Summit, 2020 (Article)
[Background diagram: the DataOps toolchain from slide 13 (Data Ingestion, Data Processing, Data Integration, Data Access), framed by Data Governance and Ethical AI Governance: Privacy, Explainability, Bias/Fairness, Accountability]
19. Ethical AI Governance: Privacy
Black-box attacks are still possible when
the attacker only has access to the APIs:
invoke the model and observe the
relationships between inputs and outputs.
Two broad categories of inference attacks:
membership inference (if a specific user
data item was present in the training
dataset) and property inference
(reconstruct properties of a participant’s
dataset) attacks.
* M. Rigaki and S. Garcia. A Survey of Privacy Attacks in Machine Learning. 2020.
* A. Ilyas, L. Engstrom, A. Athalye, and J. Lin. Black-box Adversarial Attacks with Limited Queries and Information. ICML 2018, pages 2137–2146.
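The intuition behind black-box membership inference can be shown with a toy sketch (the confidence scores below are synthetic, chosen purely to illustrate the attack's premise that models tend to be more confident on their own training members):

```python
# Toy membership inference: threshold the model's observed output confidence.
# A real attack would calibrate the threshold, e.g., with shadow models.

def infer_membership(confidence, threshold=0.9):
    """Guess 'member' when the model is unusually confident on the input."""
    return confidence >= threshold

observations = [
    ("train_record_1", 0.97), ("train_record_2", 0.95),   # seen in training
    ("fresh_record_1", 0.71), ("fresh_record_2", 0.64),   # never seen
]
guesses = {name: infer_membership(c) for name, c in observations}
```

Note the attacker only invokes the model and observes input-output pairs; no access to parameters or training data is needed, which is exactly what makes API-only exposure insufficient as a privacy boundary.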
20. ML Privacy
A trained model may leak insights related
to its training dataset.*
This is because (during backpropagation) gradients of a given
layer of a neural network are computed using the layer's
feature values and the error from the next layer.
For example, in the case of sequential fully connected layers,
the gradient of the error E with respect to W_l is:
  dE/dW_l = e_{l+1} . h_l^T
where e_{l+1} is the error propagated back from the next layer
and h_l the layer's input features.
That is, the gradients of W_l are inner products of the error
from the next layer and the features h_l; and hence the
correlation between the gradients and features. This is esp.
true if certain weights in the weight matrix are sensitive to
specific features or values in the participants' dataset.
* M. Nasr, et al. Comprehensive Privacy Analysis of Deep Learning: Passive
and Active White-box Inference Attacks against Centralized and Federated
Learning. IEEE Symposium on Security and Privacy (SP), 2019, 739–753.
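The gradient-feature correlation can be checked numerically for a single fully connected layer (illustrative values; plain Python lists stand in for tensors):

```python
# For one fully connected layer z = W h, the gradient dE/dW is the outer
# product of the back-propagated error e with the input features h. Every
# gradient row is therefore a scaled copy of h, so observing gradients
# (e.g., in federated learning) leaks the features themselves.
h = [0.5, -1.2, 2.0, 0.3]                        # features entering the layer
e = [0.8, -0.4, 1.5]                             # error from the next layer
grad_W = [[ei * hj for hj in h] for ei in e]     # dE/dW = e . h^T
recovered = [g / e[0] for g in grad_W[0]]        # rescale row 0 -> recovers h
```

Dividing any gradient row by its (unknown but constant) scale factor reconstructs h up to that scalar, which is the mechanism behind the white-box inference attacks cited above.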
21. Privacy implications in Compositional AI
A recent FTC ruling* stated that it is no longer
sufficient to just delete data when a user opts
out; the organization will need to delete
models/algorithms trained on that data as
well.
Enforcing this in a compositional setting
requires capturing the (higher) level
composite services that have directly or
indirectly accessed the underlying (affected)
training data.
*FTC. California Company Settles FTC Allegations It Deceived
Consumers about use of Facial Recognition in Photo Storage App, 2021.
Privacy policies, e.g., the FTC FIPPs,** recommend that
data is only used for specific purposes (for which
the user has provided explicit opt-in), and not
combined with other datasets to reveal additional
insights that can be used to profile the user.
Such data aggregations can be very difficult to
detect in a compositional setting, as (higher) level
composite services can (via intermediate services)
aggregate data belonging to different services -
without their explicit approval.
**R. Gellman. Fair Information Practices: A Basic History -
Version 2.20. 2021. doi: 10.2139/ssrn.2415020.
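Capturing which composite services directly or indirectly touched an affected dataset is a transitive-closure problem over the composition lineage. A sketch with hypothetical service names (the ruling itself prescribes no algorithm):

```python
# Lineage sketch: service -> upstream datasets/services its training consumed.
consumes = {
    "repair_cv":      ["repair_images"],
    "repair_chat":    ["chat_logs"],
    "repair_service": ["repair_cv", "repair_chat"],
    "recommender":    ["repair_service", "purchase_history"],
}

def affected_by(dataset):
    """Transitively collect services whose training touched the dataset."""
    hit = set()
    changed = True
    while changed:
        changed = False
        for svc, deps in consumes.items():
            if svc not in hit and any(d == dataset or d in hit for d in deps):
                hit.add(svc)
                changed = True
    return hit

to_retrain = affected_by("chat_logs")   # every model needing deletion/retraining
```

When a user's chat logs must be forgotten, the closure surfaces not just the chatbot model but every composite service built on top of it, which is precisely the propagation the FTC-style deletion requirement demands.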
23. Conclusion
Compositional AI envisions seamless composition of existing
AI/ML services, to provide a new (composite) AI/ML service,
capable of addressing complex multi-domain use-cases.
Data Fusion --> Compositional AI
Comprehensive framework integrating
DataOps + MLOps + APIOps
Critical to enable enterprise
reuse: reduce re-work (80%)
in Data Engineering
Non-functional aspects will also
need to be addressed, e.g.,
lineage, privacy.