Based on our discussion, it seems you've been feeling down lately due to being away from family. While distance can be difficult, trying to visit more or video chat may help you feel less isolated. Remember that these feelings will pass. Would you like to discuss strategies to lift your mood until you can see them again? I'm here to help however I can.
Patient: Yes, please provide some strategies.
https://maison-workshop.com/prof-amit-sheth/
Video: https://youtu.be/pRUXTuxm3as
Keynote at the 7th International Workshop on Mining Actionable Insights from Social Networks (MAISoN 2021) – Special Edition on Responsible, held August 21, 2021, and co-located with the 15th International Joint Conference on Artificial Intelligence (IJCAI 2021) and with ASONAM 2021.
With the increasing legalization of medical and recreational use of substances, more research is needed to understand the association between mental health and user behavior related to drug consumption. Specifically, drug overdose and substance use-related mental health issues have become two major topics that are widely discussed on social media platforms. Big social media data has the potential to provide public health analysts with deeper insights about these associations for making policy decisions. Multiple national population surveys have found that about half of those who experience a mental illness during their lives will also experience a substance use disorder, and vice versa. Communications related to addiction and mental health are complex to process and understand given their language and contextual characteristics. Surface-level data analysis alone is not sufficient to understand the complex nature of the relationships between addiction and mental health. Moreover, dark web vendors have been using social media as a new marketplace for drugs. Social media users also discuss the novel drugs emerging in dark web marketplaces and the associated side effects and health conditions. These communications become complex when researchers try to annotate them or link them to a specific mental health entity. Given the significant sensitivity of such communications and the need to protect user privacy on social media, a potential solution requires reliable algorithms for modeling them. We demonstrate the value of incorporating domain-specific knowledge in natural language understanding to identify the relationship between mental health and drug addiction. We discuss end-to-end knowledge-infused deep learning frameworks that leverage a pre-trained language representation model and a domain-specific declarative knowledge source to extract entities and their relationships jointly.
Our model is further tailored to focus on the entities mentioned in a sentence, where an ontology is used to locate the target entity's position. We also demonstrate how including knowledge-aware representations alongside language models improves extraction of drug–mental health condition associations.
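The core idea above can be illustrated with a minimal sketch. This is not the authors' system: a tiny hand-written lexicon stands in for the domain ontology, and simple sentence-level co-occurrence stands in for the learned joint extractor; all entries and the example sentence are illustrative.

```python
# Hypothetical mini-ontology: surface form -> entity type.
ONTOLOGY = {
    "fentanyl": "DRUG",
    "cannabis": "DRUG",
    "anxiety": "MENTAL_HEALTH",
    "depression": "MENTAL_HEALTH",
}

def extract_entities(sentence):
    """Locate ontology entities and their token positions in a sentence."""
    tokens = sentence.lower().replace(".", "").split()
    return [(tok, ONTOLOGY[tok], i) for i, tok in enumerate(tokens) if tok in ONTOLOGY]

def extract_associations(sentence):
    """Pair each drug mention with each co-occurring mental-health mention."""
    ents = extract_entities(sentence)
    drugs = [e for e in ents if e[1] == "DRUG"]
    conditions = [e for e in ents if e[1] == "MENTAL_HEALTH"]
    return [(d[0], c[0]) for d in drugs for c in conditions]

print(extract_associations("Using cannabis daily made my anxiety worse."))
# -> [('cannabis', 'anxiety')]
```

In the actual frameworks discussed in the talk, the lexicon lookup is replaced by a pre-trained language model and a declarative knowledge source, but the division of labor is the same: the ontology anchors entity positions, and the model scores the associations.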
Acknowledgments: Usha Lokala, Raminta Daniulaityte, Francois Lamy, Manas Gaur, Jyotishman Pathak, and collaborators on NIDA/NIH and NSF funded projects on Addiction and Mental Health.
http://wiki.aiisc.ai/index.php/Public_Health_Addictions_Research_at_AIISC
http://wiki.aiisc.ai/index.php/Modeling_Social_Behavior_for_Healthcare_Utilization_in_Depression
Don't Handicap AI without Explicit Knowledge – Amit Sheth
Keynote at IEEE Services 2021: Abstract: https://conferences.computer.org/services/2021/keynotes/sheth.html
Video: https://lnkd.in/d-r3YXaC
Video of the same keynote given at DEXA2021: https://www.youtube.com/watch?v=u-06kK9TysA
September 9, 2021 15:00 - 16:20 UTC
ABSTRACT
Knowledge representation as expert system rules or using frames and a variety of logics played a key role in capturing explicit knowledge during the heyday of AI in the past century. Such knowledge, aligned with planning and reasoning, is part of what we refer to as Symbolic AI. The resurgent AI of this century, in the form of Statistical AI, has benefitted from massive data and computing. On some tasks, deep learning methods have even exceeded human performance levels. This gave the false sense that data alone is enough, and explicit knowledge is not needed. But as we start chasing machine intelligence that is comparable with human intelligence, there is an increasing realization that we cannot do without explicit knowledge. Neuroscience (the role of long-term memory, strong interactions between different specialized regions of the brain on tasks such as multimodal sensing), cognitive science (bottom brain versus top brain, perception versus cognition), brain-inspired computing, behavioral economics (system 1 versus system 2), and other disciplines point to the need for furthering AI to neuro-symbolic AI (i.e., a hybrid of Statistical AI and Symbolic AI, also referred to as the third wave of AI). As we make this progress, the role of explicit knowledge becomes more evident. I will specifically look at our endeavor to support human-like intelligence, our desire for AI systems to interact with humans naturally, and our need to explain the path and reasons for AI systems’ workings. Nevertheless, the knowledge needed to support understanding and intelligence is varied and complex. Using the example of progressing from NLP to NLU, I will demonstrate the dimensions of explicit knowledge, which may include linguistic (language syntax), common sense, general (world model), specialized (e.g., geographic), and domain-specific (e.g., mental health) knowledge.
I will also argue that despite this complexity, such knowledge can be scalably created and maintained (even dynamically or continually). Finally, I will describe our work on knowledge-infused learning as an example strategy for fusing statistical and symbolic AI in a variety of ways.
I have framed this talk to encourage Pharmacy students to embrace computing in general, and data science and artificial intelligence techniques in particular. The reason is that data-driven science has overtaken traditional lab science; chemistry and biology that underlie pharmacy have become data-driven sciences, and a significant majority of the new jobs in pharma industries demand data analysis skills. Increasingly, traditional bioinformatics approaches are being complemented or replaced by machine learning or deep learning algorithms, especially for cases that have large data sets. I will provide a few examples (e.g., drug discovery, finding adverse drug reactions and broadly pharmacovigilance, and selecting patients for clinical trials) to demonstrate how big data and/or AI are indispensable to pharma research and industry today.
Knowledge Graphs and their central role in big data processing: Past, Present... – Amit Sheth
Keynote at CODS-COMAD 2020, Hyderabad, India, 06 Jan 2020: https://cods-comad.in/keynotes.html
Abstract: Early use of knowledge graphs, before the start of this century, involved building a knowledge graph manually or semi-automatically and applying it to semantic applications such as search, browsing, personalization, and advertisement. Taalee/Semagix Semantic Search in 2000 had a KG that covered many domains and supported search with an equivalent of today’s infobox. Along with the growth of big data, machine learning became the preferred technique for searching, analyzing, and deriving insights from such data. We observed the complementary nature of bottom-up (machine learning-driven) and top-down (semantic, knowledge graph and planning based) techniques. Recently we have seen growing efforts involving the shallow use of a knowledge graph to improve the semantic and conceptual processing of data. The future promises deeper and congruent incorporation of knowledge graphs into learning techniques (which we call knowledge-infused learning), where knowledge graphs combining statistical AI (bottom-up) and symbolic AI (top-down) techniques play a critical role in hybrid and integrated intelligent systems. Throughout this talk, we will provide real-world examples, products, and applications where the knowledge graph played a pivotal role.
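The infobox-style use of a knowledge graph mentioned above can be sketched in a few lines: a KG stored as subject-predicate-object triples answers "tell me everything about this entity" queries. The entities and facts below are illustrative, not drawn from any real KG.

```python
# Toy knowledge graph as (subject, predicate, object) triples.
TRIPLES = [
    ("Hyderabad", "country", "India"),
    ("Hyderabad", "type", "City"),
    ("CODS-COMAD", "heldIn", "Hyderabad"),
]

def infobox(entity):
    """Collect all known facts about an entity, like a search-result infobox."""
    return {p: o for s, p, o in TRIPLES if s == entity}

print(infobox("Hyderabad"))  # -> {'country': 'India', 'type': 'City'}
```

A production KG stores millions of such triples in an indexed store and supports graph queries (e.g., SPARQL), but the data model is the same.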
ON EXPLOITING MULTIMODAL INFORMATION FOR MACHINE INTELLIGENCE AND NATURAL IN... – Amit Sheth
Keynote: Second International Workshop on Multimedia Pragmatics (MMPrag 2019), San Jose, California, 28-30 March 2019
http://mipr.sigappfr.org/19/keynote-speakers/
The Holy Grail of machine intelligence is the ability to mimic the human brain. In computing, we have created silos in dealing with each modality (text/language processing, speech processing, image processing, video processing, etc.). However, the human brain’s cognitive and perceptual capability to seamlessly consume (listen and see) and communicate (writing/typing, voice, gesture) multimodal (text, image, video, etc.) information challenges machine intelligence research. Emerging chatbots for demanding health applications present the requirements for these capabilities. To support the corresponding data analysis and reasoning needs, we have to explore a pedagogical framework consisting of semantic computing, cognitive computing, and perceptual computing (http://bit.ly/w-SCP). In particular, we have been motivated by the brain’s amazing perceptive power that abstracts massive amounts of multimodal data by filtering and processing them into a few concepts (representable by a few bits) to act upon. From the information processing perspective, this requires moving from syntactic and semantic big data processing to actionable information that can be woven naturally into human activities and experience (http://bit.ly/w-CHE). Exploration of the above research agenda, including powerful use cases, is afforded by a growing number of emerging technologies and their applications, such as chatbots and robotics. In this talk, I will provide these examples and share the early progress we have made towards building health chatbots (http://bit.ly/H-Chatbot) that consume contextually relevant multimodal data and support different forms/modalities of interactions to achieve various alternatives for digital health (http://bit.ly/k-APH). I will also discuss the indispensable role of domain knowledge and personalization using domain and personalized knowledge graphs as part of various reasoning and learning techniques.
The recent series of innovations in deep learning has shown enormous potential to impact individuals and society, both positively and negatively. Deep learning models utilizing massive computing power and enormous datasets have significantly outperformed prior benchmarks on increasingly difficult, well-defined research tasks across technology domains such as computer vision, natural language processing, signal processing, and human-computer interaction. However, the black-box nature of deep learning models and their over-reliance on massive amounts of data condensed into labels and dense representations pose challenges for interpretability and explainability. Furthermore, deep learning methods have not yet proven able to effectively utilize the relevant domain knowledge and experience critical to human understanding. This aspect is missing in early data-focused approaches and has necessitated knowledge-infused learning and other strategies to incorporate computational knowledge. Rapid advances in our ability to create and reuse structured knowledge as knowledge graphs make this task viable. In this talk, we will outline how knowledge, provided as a knowledge graph, is incorporated into deep learning methods using knowledge-infused learning. We then discuss how this makes a fundamental difference in the interpretability and explainability of current approaches and illustrate it with examples relevant to a few domains.
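A minimal sketch of what shallow knowledge infusion can look like: features derived from a knowledge graph (here, a single indicator) are concatenated with the model's text features before a downstream classifier. All vectors and weights below are toy values, and a one-layer linear scorer stands in for the deep model.

```python
def fuse(text_vec, kg_vec):
    """Shallow knowledge infusion: concatenate text and KG feature vectors."""
    return text_vec + kg_vec  # list concatenation

def linear_score(features, weights, bias=0.0):
    """A one-layer scorer standing in for the downstream deep model."""
    return sum(f * w for f, w in zip(features, weights)) + bias

text_vec = [0.2, 0.7]   # e.g., pooled output of a language-model encoder
kg_vec = [1.0]          # e.g., 1.0 if the mention resolves to a KG entity
weights = [0.5, 0.5, 1.0]

print(round(linear_score(fuse(text_vec, kg_vec), weights), 2))  # -> 1.45
```

Deeper forms of infusion modify intermediate layers or the loss function rather than just the input features, but concatenation at the feature level is the simplest entry point and already makes the KG's contribution to a prediction inspectable.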
Current trends in cognitive science and brain computing research, 18th June 2020 – Dr G R Sinha
Medical image processing is the study of the acquisition, processing, and analysis of various types of medical image modalities. Biomedical imaging is one such area and mainly includes EEG, EMG, fMRI, and MEG signals and their analysis for numerous applications such as diagnosis of mental disorders, sleep analysis, assessment of cognitive ability, and the study of memory and attention. Cognitive science research exploits biomedical modalities related to the human brain and makes use of these signals in decoding and understanding brain commands. This is very important in brain-computer interfaces (BCI) and the assessment of cognitive abilities. With the help of EEG signals, the abilities of the human brain can be described, decoded, and used to perform desired tasks in numerous applications such as robotics and driverless cars. EEG records brain activity, especially the electrical activity that arises from psychological, physiological, and other changes in the human brain. This lecture highlights an overview of cognitive science and brain computing research along with its challenges and opportunities.
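A basic step in the EEG-decoding pipelines described above is estimating band power, e.g., the share of signal power in the alpha band (8-12 Hz). The sketch below uses a synthetic 10 Hz sine wave in place of real EEG; sampling rate and duration are chosen for illustration.

```python
import numpy as np

fs = 256                            # sampling rate in Hz
t = np.arange(0, 2, 1 / fs)         # 2 seconds of samples
eeg = np.sin(2 * np.pi * 10 * t)    # synthetic 10 Hz "alpha rhythm"

spectrum = np.abs(np.fft.rfft(eeg)) ** 2   # power spectrum of the signal
freqs = np.fft.rfftfreq(len(eeg), 1 / fs)  # matching frequency axis

# Fraction of total power that falls in the alpha band (8-12 Hz).
alpha = spectrum[(freqs >= 8) & (freqs <= 12)].sum()
total = spectrum.sum()
print(f"alpha-band share of power: {alpha / total:.2f}")  # -> 1.00
```

Real BCI systems apply this per channel over sliding windows (often via Welch's method for a smoother estimate) and feed the band powers into a classifier, but the frequency-domain abstraction is the same.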
Artificial Intelligence (AI) has revolutionized information technology. AI is a subfield of computer science concerned with creating intelligent machines and software that work and react like human beings. AI and its applications are used in many areas of human life, as expert systems solve complex problems in fields such as science, engineering, business, medicine, video games, and advertising. But do any traffic lights use Artificial Intelligence? I thought about this a lot while waiting at a red light. This paper gives an overview of Artificial Intelligence and its applications in human life. It explores the current use of AI technologies in network intrusion detection for protecting computer and communication networks from intruders; in medicine, to improve hospital inpatient care and for medical image classification; in accounting databases, to mitigate their problems; in computer games; and in advertising. It also shows the principles of artificial intelligence and how they are applied in traffic signal control, and how they solve real traffic problems. The paper introduces a self-learning system based on an RBF neural network and shows how the system can simulate a traffic police officer’s experience. It focuses on how to evaluate the effect of the control as traffic changes and how to adjust the signal with different Artificial Intelligence techniques.
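An RBF (radial basis function) network of the kind mentioned above has a simple structure: Gaussian hidden units centered on representative traffic states, followed by a linear output layer. The sketch below is illustrative, not the paper's controller; the features, data, and width parameter are invented for the example.

```python
import numpy as np

def rbf_features(X, centers, width=1.0):
    """Gaussian activation of each input row against each RBF center."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2 * width ** 2))

# Toy training data: [queue length, arrival rate] -> green duration (seconds).
X = np.array([[2.0, 0.1], [10.0, 0.5], [20.0, 0.9]])
y = np.array([10.0, 30.0, 60.0])

centers = X.copy()                           # one center per training point
H = rbf_features(X, centers, width=5.0)      # hidden-layer activations
w, *_ = np.linalg.lstsq(H, y, rcond=None)    # fit linear read-out weights

# Query an unseen traffic state; the output interpolates the training cases.
pred = rbf_features(np.array([[12.0, 0.6]]), centers, width=5.0) @ w
print(f"suggested green time: {pred[0]:.1f} s")
```

A self-learning controller would additionally update the centers and weights online as new traffic observations arrive, which is where the "simulating the traffic police officer's experience" framing comes in.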
Modern signal processing is dead without machine learning! 5th July 2020 – Dr G R Sinha
This lecture highlights the role of machine learning in modern signal processing applications such as driverless cars, robotics, smart environment monitoring, and healthcare.
Computational Social Science as the Ultimate Web Intelligence – Amit Sheth
Panel at Web Intelligence, Dec 4-6, 2018, Santiago Chile
Funding Acknowledgement: Research supported in part by:
NSF Award#: CNS 1513721 TWC SBE: Medium: Context-Aware Harassment Detection on Social Media.
Views represented are those of the speaker/author, and not of the sponsor.
Big Data and Artificial Intelligence in Critical Care
Anesthesia and Intensive Care
San Raffaele Hospital, Milan, Italy
Vita-Salute San Raffaele University, Milan, Italy
Follow us on Twitter, Facebook and Instagram @SRAnesthesiaICU
Cognitive Computing by Professor Gordon Pipa
Professor Dr. Gordon Pipa, University of Osnabrueck, Germany is making this presentation for the Cognitive Systems Institute Speaker Series on May 26, 2016.
The age of artificial intelligence, with deep dives on machine learning and deep learning. Machine perception and applications. How companies use AI in their businesses. Case study: Netflix. Basic tools for data manipulation and data visualization.
Video at: https://www.linkedin.com/video/live/urn:li:ugcPost:6705141260845412352/
In this talk, we will review some of the challenges related to Industry 4.0, or the Factory of the Future, and how Artificial Intelligence can help address them.
Examples include the use of semantic interoperability and integration to support the use of sensor collected data in decision making, the use of computer vision to identify deviations in the process and manage quality, and the use of predictive algorithms for device maintenance.
Smart Data for you and me: Personalized and Actionable Physical Cyber Social ... – Amit Sheth
Featured Keynote at Worldcomp'14, July 2014: http://www.world-academy-of-science.org/worldcomp14/ws/keynotes/keynote_sheth
Video of the talk at: http://youtu.be/2991W7OBLqU
Big Data has captured a lot of interest in industry, with the emphasis on the challenges of the four Vs of Big Data: Volume, Variety, Velocity, and Veracity, and their applications to drive value for businesses. Recently, there has been rapid growth in situations where a big data challenge relates to making individually relevant decisions. A key example is human health, fitness, and well-being. Consider, for instance, understanding the reasons for and avoiding an asthma attack based on Big Data in the form of personal health signals (e.g., physiological data measured by devices/sensors or the Internet of Things around, on, and inside humans), public health signals (information coming from the healthcare system, such as hospital admissions), and population health signals (such as tweets by people related to asthma occurrences and allergens, Web services providing pollen and smog information, etc.). However, no individual has the ability to process all these data without the help of appropriate technology, and each human has a different set of relevant data!
In this talk, I will put forward the concept of Smart Data, realized by extracting value from Big Data to benefit not just large companies but each individual. If I am an asthma patient, for all the data relevant to me with the four V-challenges, what I care about is simply, “How is my current health, and what is the risk of having an asthma attack in my personal situation, especially if that risk has changed?” As I will show, Smart Data that gives such personalized and actionable information will need to utilize metadata, use domain-specific knowledge, employ semantics and intelligent processing, and go beyond traditional reliance on ML and NLP.
For harnessing volume, I will discuss the concept of Semantic Perception, that is, how to convert massive amounts of data into information, meaning, and insight useful for human decision-making. For dealing with Variety, I will discuss experience in using agreement represented in the form of ontologies, domain models, or vocabularies, to support semantic interoperability and integration. For Velocity, I will discuss somewhat more recent work on Continuous Semantics, which seeks to use dynamically created models of new objects, concepts, and relationships, using them to better understand new cues in the data that capture rapidly evolving events and situations.
Smart Data applications in development at Kno.e.sis come from the domains of personalized health, energy, disaster response, and smart city. I will present examples from a couple of these.
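The Semantic Perception idea above, abstracting many raw signals into one actionable concept, can be sketched with a simple rule layer. The thresholds, signal names, and risk levels below are invented for illustration and are not clinical guidance.

```python
# Sketch: fuse personal, public, and population health signals into a single
# qualitative risk concept a patient can act on. Thresholds are illustrative.

def asthma_risk(pollen_grains_m3, aqi, wheeze_reported):
    """Abstract three raw signals into one actionable concept."""
    score = 0
    score += 1 if pollen_grains_m3 > 90 else 0    # population-level signal
    score += 1 if aqi > 100 else 0                # public/environmental signal
    score += 1 if wheeze_reported else 0          # personal signal
    return ["low", "moderate", "elevated", "high"][score]

print(asthma_risk(pollen_grains_m3=120, aqi=80, wheeze_reported=True))
# -> elevated
```

In the Kno.e.sis work, such rules are drawn from domain knowledge (ontologies and clinical models) rather than hard-coded thresholds, and the abstraction runs continuously over streaming sensor data; the sketch shows only the shape of the data-to-concept step.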
Pistoia Alliance Webinar Demystifying AI: Centre of Excellence for AI Webina... – Pistoia Alliance
Pistoia Alliance launched its Centre of Excellence for Artificial Intelligence (AI) in Life Sciences where we hope to bring together best practice, adoption strategy and hackathons covering a range of challenges.
Over the coming months we will be hosting a series of topics and speakers giving their perspectives on the role of Artificial & Augmented Intelligence in Life Sciences and Healthcare.
The topics will cover some of the current challenges, user stories, and the value of using AI in life sciences. If you want to get involved in this series as a speaker, or to suggest topics, please get in touch.
Webinar 1 focused on the following:
A Brief History
Big Data/ML/DL/AI - fundamentals and concepts
Data Fidelity importance
Some best practices
How are machine learning and artificial intelligence revolutionizing insurance?
This presentation explains it briefly, including current trends and effects on the business.
Identical Users in Different Social Media Provides Uniform Network Structure ...IJMTST Journal
The primary aim of this project is to secure user login and information sharing across social networks such as Gmail and Facebook, and to detect anonymous users on these systems. If the original user is not active on the network but a friend or an anonymous user knows their login details, it is possible to misuse their chats. The problem to be overcome in this project is the anonymous user who operates on the network without the original user's knowledge: an unauthorized user who logs in to chat, share pictures or videos, and so on, and who may then delete the chat or data. Users therefore first register their details along with a secured question and answer. Using these secured questions, the system can recover the unauthorized user's chat history and sharing details, together with the corresponding IP address and MAC address. In this way, the project provides an approach to prevent anonymous users from misusing the original user's login.
A Novel Approach of Data Driven Analytics for Personalized Healthcare through...IJMTST Journal
Although big data technologies appear to be overhyped, they hold extraordinary potential in the pharmaceutical space: if development happens in an integrated environment, in combination with other modeling techniques, it can ensure continuous improvement of in-silico medicine and lead to positive clinical adoption. This proposed research is intended to investigate the key issues involved in achieving an effective integration of big data analytics and effective modeling in healthcare.
Presentation at the AAAI 2013 Fall Symposium on Semantics for Big Data, Arlington, Virginia, November 15-17, 2013
Additional related material at: http://wiki.knoesis.org/index.php/Smart_Data
Related paper at: http://www.knoesis.org/library/resource.php?id=1903
Abstract: We discuss the nature of Big Data and address the role of semantics in analyzing and processing Big Data that arises in the context of Physical-Cyber-Social Systems. We organize our research around the five V's of Big Data, where four of the Vs are harnessed to produce the fifth V - value. To handle the challenge of Volume, we advocate semantic perception that can convert low-level observational data to higher-level abstractions more suitable for decision-making. To handle the challenge of Variety, we resort to the use of semantic models and annotations of data so that much of the intelligent processing can be done at a level independent of heterogeneity of data formats and media. To handle the challenge of Velocity, we seek to use continuous semantics capability to dynamically create event or situation specific models and recognize new concepts, entities and facts. To handle Veracity, we explore the formalization of trust models and approaches to glean trustworthiness. The above four Vs of Big Data are harnessed by the semantics-empowered analytics to derive Value for supporting practical applications transcending physical-cyber-social continuum.
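The Veracity point, gleaning trustworthiness from sources of varying reliability, can be illustrated with a minimal, hypothetical trust model. The reliability-weighted voting scheme below is an invented simplification, not the paper's formalization:

```python
# Toy trust model: a claim's trustworthiness is the reliability-weighted
# fraction of sources supporting it. Reliability priors are assumptions.
def trust_score(reports):
    """reports: list of (source_reliability, claim_is_positive) pairs."""
    total = sum(r for r, _ in reports)
    if total == 0:
        return 0.0
    return sum(r for r, positive in reports if positive) / total

# Two reliable sources agree the event happened; one weak source disagrees.
print(trust_score([(0.9, True), (0.8, True), (0.2, False)]))  # ~0.89
```

A real trust model would also account for source correlation and claim history; this sketch only shows the weighting idea.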
TRANSFORMING BIG DATA INTO SMART DATA: Deriving Value via Harnessing Volume, ...Amit Sheth
Keynote given at ICDE2014, April 2014. Details at: http://ieee-icde2014.eecs.northwestern.edu/keynotes.html
A video of a version of this talk is available here: http://youtu.be/8RhpFlfpJ-A
(download to see many hidden slides).
Two versions of this talk, targeted at Smart Energy and Personalized Digital Health domains/apps at: http://wiki.knoesis.org/index.php/Smart_Data
Previous (older) version replaced by this version: http://www.slideshare.net/apsheth/big-data-to-smart-data-keynote
Towards Smart Chatbots for Enhanced Health: Using Multisensory Sensing & Sem...Amit Sheth
https://sites.google.com/view/deep-dial-2019/keynotes
Understanding and managing health is complex. Throughout the last few decades of modern medicine, we have relied on clinicians for most health-related decision making. New technologies have enabled a growing involvement of patients in their own health management, aided by an increasing variety and amount of patient-generated health data. The augmented personalized health [http://bit.ly/k-APH, http://bit.ly/APH-HI] strategy has outlined a broad variety of patient and clinician engagement in devising increasingly sophisticated and powerful health management solutions - from self-monitoring, self-appraisal, self-management, and intervention to the prediction of disease progression and planning. Chatbots could play a pivotal role throughout the unfolding data-driven, AI-supported ecosystem [http://bit.ly/H-Chatbot] that engages patients and clinicians in collecting data, driving their actions, informing their choices, and even delivering part of the clinical care (e.g., Cognitive Behavioral Therapy (CBT) for mental health patients). Nevertheless, this will require quite a few advances toward a more intelligent technology. In this talk, we will share experience and observations based on our ongoing collaborative projects, which usually involve clinicians and patients and target pediatric asthma management, pre- and post-bariatric surgery care regimens, depression and other mental health issues, and nutrition. Using use cases and prototypes, we will elucidate the need for, support for, and use of domain- and user-specific knowledge graphs, Natural Language Processing (NLP), machine learning, and conversational AI for:
- multimodal interactions including text, voice, and other media, along with the use of diverse devices and software platforms for “natural” communication
- context enabled by deep relevant medical/healthcare knowledge including clinical protocols
- personalization by collecting and using the history of the individual patient from IoT health devices, open data, and Electronic Medical Record (EMR)
- abstraction by aggregating and correlating diverse data streams to draw plausible explanation(s) based on public (cohort-level) data (for example, the percentage of asthmatic patients who get symptoms when exposed to certain triggers) and personal data
- smart dialogue (intent) management and response generation using causal relations and inference of associations
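The knowledge-grounding theme in the points above can be sketched with a toy example: a tiny "knowledge graph" constrains what the dialogue manager may say, instead of free-form generation. The entities, relations, and reply wording are invented for illustration, not taken from the kHealth system:

```python
# Hypothetical knowledge-guided response selection for a health chatbot.
# The graph entries below are illustrative assumptions.
KG = {
    ("pollen", "triggers"): "asthma symptoms",
    ("albuterol", "treats"): "asthma symptoms",
}

def respond(user_text):
    """Ground the reply in the knowledge graph rather than free generation."""
    for (entity, relation), obj in KG.items():
        if entity in user_text.lower():
            return f"{entity} {relation} {obj}; consider discussing this with your clinician."
    return "I don't have knowledge about that yet."

print(respond("The pollen count is high today"))
```

Grounding replies in curated domain knowledge is one way to keep a clinical chatbot's answers auditable.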
May 2021 snapshot of some of the Research and Collaborations in dHealth/personalized health, public health, epidemiology, biomedicine at the AI Institute of the University of South Carolina [AIISC]
Artificial Intelligence (AI) has revolutionized information technology. AI is a subfield of computer science concerned with the creation of intelligent machines and software that work and react like human beings. AI and its applications are used in many areas of human life; for example, expert systems solve complex problems in science, engineering, business, medicine, video games, and advertising. But "do any traffic lights use Artificial Intelligence?" I thought a lot about this while waiting at a red light. This paper gives an overview of Artificial Intelligence and its applications in human life. It explores the current use of AI technologies in network intrusion detection, for protecting computer and communication networks from intruders; in medicine, to improve hospital inpatient care and for medical image classification; in accounting databases, to mitigate their problems; in computer games; and in advertising. It also shows the principles of artificial intelligence and how they are applied in traffic signal control to solve real traffic problems. The paper introduces a self-learning system based on an RBF neural network and shows how the system can simulate a traffic police officer's experience. It focuses on how to evaluate the effect of the control as traffic changes and how to adjust the signal with different Artificial Intelligence techniques.
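The RBF-neural-network idea mentioned for traffic signal control can be illustrated with a minimal Gaussian radial basis function network. The centers, weights, and the queue-length-to-green-time mapping below are invented for illustration, not the paper's trained controller:

```python
import math

# Toy Gaussian RBF network: output is a weighted sum of basis functions
# centered on prototype inputs. All parameters here are assumptions.
def gaussian_rbf(x, center, width=1.0):
    return math.exp(-((x - center) ** 2) / (2 * width ** 2))

def rbf_net(x, centers, weights, width=1.0):
    """Weighted sum of Gaussian basis functions, one per prototype."""
    return sum(w * gaussian_rbf(x, c, width) for c, w in zip(centers, weights))

# E.g., map a queue length (cars waiting) to an extra-green-time score.
centers, weights = [0.0, 10.0, 20.0], [0.0, 5.0, 12.0]
print(rbf_net(10.0, centers, weights))  # ~5.0, dominated by the middle basis
```

In a self-learning controller, the weights would be fitted from observed traffic outcomes rather than fixed by hand.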
Modern signal processing is dead without machine learning! 5th July 2020Dr G R Sinha
This lecture highlights the role of Machine Learning in modern signal processing applications such as driverless cars, robotics, smart environment monitoring, healthcare, etc.
Computational Social Science as the Ultimate Web IntelligenceAmit Sheth
Panel at Web Intelligence, Dec 4-6, 2018, Santiago Chile
Funding Acknowledgement: Research supported in part by:
NSF Award#: CNS 1513721 TWC SBE: Medium: Context-Aware Harassment Detection on Social Media.
Views represented are those of the speaker/author, and not of the sponsor.
Big Data and Artificial Intelligence in Critical Care
Anesthesia and Intensive Care
San Raffaele Hospital, Milan, Italy
Vita-Salute San Raffaele University, Milan, Italy
Follow us on Twitter, Facebook and Instagram @SRAnesthesiaICU
Cognitive Computing by Professor Gordon Pipadiannepatricia
Professor Dr. Gordon Pipa, University of Osnabrueck, Germany is making this presentation for the Cognitive Systems Institute Speaker Series on May 26, 2016.
The age of artificial intelligence: deep dives on machine learning and deep learning. Machine perception and applications. How companies use AI in their businesses. Case study: Netflix. Basic tools for data manipulation and data visualization.
Video at: https://www.linkedin.com/video/live/urn:li:ugcPost:6705141260845412352/
In this talk, we will review some of the challenges related to Industry 4.0, or the Factory of the Future, and how Artificial Intelligence can help address them.
Examples include the use of semantic interoperability and integration to support the use of sensor collected data in decision making, the use of computer vision to identify deviations in the process and manage quality, and the use of predictive algorithms for device maintenance.
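The predictive-maintenance example can be sketched with a simple rolling-mean drift monitor over sensor readings. The window size, threshold, and vibration values are invented for illustration; real predictive maintenance would use trained models rather than a fixed threshold:

```python
from collections import deque

# Hedged sketch: flag a device when the rolling mean of a sensor signal
# drifts past a threshold. Window and threshold are assumptions.
def drift_monitor(readings, window=3, threshold=5.0):
    """Yield True for each reading whose rolling mean exceeds threshold."""
    buf = deque(maxlen=window)
    for r in readings:
        buf.append(r)
        yield sum(buf) / len(buf) > threshold

vibration = [1.0, 1.2, 1.1, 6.0, 7.5, 8.0]
print(list(drift_monitor(vibration)))  # the final readings trip the flag
```

A maintenance ticket would typically be raised only after several consecutive flags, to suppress transient spikes.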
Smart Data for you and me: Personalized and Actionable Physical Cyber Social ...Amit Sheth
Featured Keynote at Worldcomp'14, July 2014: http://www.world-academy-of-science.org/worldcomp14/ws/keynotes/keynote_sheth
Video of the talk at: http://youtu.be/2991W7OBLqU
Augmented Personalized Health: using AI techniques on semantically integrated...Amit Sheth
Keynote @ 2018 AAAI Joint Workshop on Health Intelligence (W3PHIAI 2018), 2 February 2018, New Orleans, LA [Video: https://youtu.be/GujvoWRa0O8]
Related article: https://ieeexplore.ieee.org/document/8355891/
Abstract
Healthcare as we know it is in the process of going through a massive change - from episodic to continuous, from disease-focused to wellness and quality of life focused, from clinic centric to anywhere a patient is, from clinician controlled to patient empowered, and from being driven by limited data to 360-degree, multimodal personal-public-population physical-cyber-social big data-driven. While the ability to create and capture data is already here, the upcoming innovations will be in converting this big data into smart data through contextual and personalized processing such that patients and clinicians can make better decisions and take timely actions for augmented personalized health. In this talk, we will discuss how use of AI techniques on semantically integrated patient-generated health data (PGHD), environmental data, clinical data, and public social data is exploited to achieve a range of augmented health management strategies that include self-monitoring, self-appraisal, self-management, intervention, and Disease Progression Tracking and Prediction. We will review examples and outcomes from a number of applications, some involving patient evaluations, including asthma in children, bariatric surgery/obesity, mental health/depression, that are part of the Kno.e.sis kHealth personalized digital health initiative.
Background: http://bit.ly/k-APH, http://bit.ly/kAsthma, http://j.mp/PARCtalk
A quick review of the subset of Kno.e.sis’ research on knowledge-enhanced learning, with personal and public health, wellbeing, and social good applications.
kHealth: Semantic Multi-sensory Mobile Approach to Personalized Asthma CareAmit Sheth
P7: A New Paradigm for Health Care in the 21st Century
Scientific Session at AAAS2019 Annual Meeting
https://aaas.confex.com/aaas/2019/meetingapp.cgi/Session/21133
https://cra.org/ccc/ccc-at-aaas/2019-sessions/
Asthma is a chronic multifactorial disease, and traditional clinical practice requires patients to meet their clinician at infrequent appointments, scheduled once every 3-6 months depending on the patient’s condition. The clinical diagnosis relies on the patient’s description of their current health condition. That description may not always be accurate and may lack important aspects needed for an accurate diagnosis. We at Kno.e.sis work with clinicians and their pediatric asthma patients at Dayton Children's Hospital to evaluate IoT/mobile-app enabled personalized digital health management. We built the kHealth system for continuous monitoring and improved tracking of 30 parameters, including the child’s symptoms, activities, sleep, and treatment adherence. It allows precise determination of asthma triggers and a reliable assessment of medication compliance and effectiveness.
More at: https://aaas.confex.com/aaas/2019/meetingapp.cgi/Paper/23000
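The continuous-monitoring idea behind kHealth can be sketched as a reduction of daily event records into a simple clinician-facing summary. The field names and summary metrics below are invented for illustration, not the kHealth schema:

```python
# Illustrative sketch: reduce daily monitoring events to a summary of
# symptom burden and medication adherence. Field names are assumptions.
def daily_summary(events):
    """events: list of dicts like {'symptom': bool, 'medication_taken': bool}."""
    days = len(events)
    symptom_days = sum(e["symptom"] for e in events)
    adherent_days = sum(e["medication_taken"] for e in events)
    return {
        "symptom_days": symptom_days,
        "adherence_rate": adherent_days / days if days else 0.0,
    }

week = [
    {"symptom": True,  "medication_taken": True},
    {"symptom": False, "medication_taken": True},
    {"symptom": False, "medication_taken": False},
]
print(daily_summary(week))
```

Correlating such summaries with environmental signals (pollen, air quality) is what enables trigger determination.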
Big Data, CEP and IoT : Redefining Healthcare Information Systems and AnalyticsTauseef Naquishbandi
Big Data is a term encompassing the use of techniques to capture, process, analyze and visualize potentially large datasets in a reasonable time frame not accessible to standard technologies.
It refers to the ability to crunch vast collections of information, analyze it instantly, and draw from it sometimes profoundly surprising conclusions.
Big data solutions can help stakeholders personalize care, engage patients, reduce variability and costs, and improve quality of health delivery.
Big data analytics can also contribute to providing a rich context to shape many areas of health care like analysis of effects, side-effects of drugs, genome analysis etc.
Abstract—With the advent of the technologicAbbyWhyte974
Abstract—With the advent of the technological world, technology is getting more advanced day by day. Artificial Intelligence (AI) can potentially affect nearly every part of medical care, from detection to prediction and prevention. The adoption of new technologies in medical services, however, lags far behind the emergence of those technologies, so an elementary understanding of developing AI methods is essential for healthcare professionals. These advancements include expert systems, robotic process automation, natural language processing, and deep learning. In this research article, different technologies for the detection of different health diseases are reviewed. First, background knowledge is considered; after that, diseases such as diabetes, Alzheimer’s disease, and heart disease are discussed. It is found that these technologies provide efficient results with a high level of accuracy, which shows that the discussed technologies are contributing at their best level. The methods proposed for the discussed diseases in different research articles are also evaluated and highlighted. Every technology has its own benefits. The article illustrates how Artificial Intelligence contributes to healthcare and to the detection of different health diseases.
Index Terms—Expert System, Decision-making Support, Artificial Intelligence, Clinical Decision Support System, Magnetic Resonance Imaging (MRI), Alzheimer’s Disease
I. INTRODUCTION
A. Artificial Intelligence
Artificial intelligence concerns how machines exhibit intelligence, in contrast with the natural intelligence of humans and animals. In simple words, it is the theory and development of computer systems able to perform tasks that usually need human intelligence, for instance, visual perception, decision making, translation of languages, and speech recognition (Fei Jang, 2017). It is the capability of a digital computer, or a computer-controlled robot, to execute tasks usually associated with intelligence. The term AI is applied to projects that develop systems endowed with aspects of human intellectual processes, for example, the ability to reason, generalize, abstract, learn from past experience, or discover meaning. Digital computers came into existence in the 1940s, and since then computers have been designed to perform complicated and complex tasks, for instance, proving advanced mathematical theorems and playing chess. Despite continued advances in processing speed and memory capacity, there is still a gap in programming: computers cannot be as flexible as human beings. This system is ...
Overview of Health Informatics: a survey of the fundamentals of health information technology, the forces behind health informatics, and educational and career opportunities in health informatics.
Exploiting Multimodal Information for Machine Intelligence and Natural Interactions
1. EXPLOITING MULTIMODAL INFORMATION FOR MACHINE INTELLIGENCE AND NATURAL INTERACTIONS
Dr. Amit Sheth amit.aiisc.ai
Director of AI Institute #AIISC aiisc.ai
University of South Carolina
International Workshop on Multimedia Applications (IWMA 2021), 3 March 2021
LNM Institute of Information Technology, Jaipur, India.
3. 3
OUTLINE
● Human perception of the real world is multimodal: our brain seamlessly
processes data in various modalities (text, speech, and visual).
● Multimodal information is essential; together, the modalities provide nuances that a single
modality can't. Human communication is intrinsically multimodal, e.g., speech +
expression + gestures.
● For a machine to attain intelligence, it requires a comprehensive understanding of the
environment it is in. And to develop natural interactions with humans, a machine
needs to understand the data it consumes.
● This talk will focus on different data modalities and examples of how a machine
(chatbot) can use such information to provide intelligent assistance and natural
communication in the health domain.
Credit
https://aiisc.ai/people
Revathy
Joey
Kaushik
4. 4
Machine-centric to Human-centric Computing
Artificial Intelligence (John McCarthy) | Ambient Intelligence (Mark Weiser) |
Augmenting Human Intellect (Douglas Engelbart) | Human-Computer Symbiosis
(Joseph C.R. Licklider) | Computing for Human Experience
Machine-centric → Human-centric
Figure: Views along the spectrum of machine-centric to human-centric computing.
At the far right is our work on Computing for Human Experience, which explores paradigms such as
Semantic, Cognitive, and Perceptual Computing. http://bit.ly/SCP-Magazine
AI Institute
http://bit.ly/k-Che, http://slidesha.re/k-che
5. 5
Using Chatbots to Go Beyond Traditional Patient-Doctor Consultation
Factors in the decision to consult: socio-economic, demographic, family &
social, psychological, environment, genetic susceptibility.
Source: Why do people consult the doctor?
- Stephen M Campbell and Martin O Roland
Decision Making
Can voice assistant (chatbot) technology substantially improve monitoring of
patients' conditions and needs?
Simple Tasks
● Appointment scheduling
● Information retrieval
● Scripted automation
Complex & Demanding Tasks
● Multimodal input and output
● Natural communication
● Augmented Personalized Health (serving different levels of health needs)
Contextualization, Personalization, Abstraction
Different modalities of data: Images, Text, Speech, Videos, IoTs
7. Figure source: https://www.aarp.org/health/conditions-treatments/info-2017/bronchitis-and-pneumonia-symptoms.html
A machine may recognize the picture as "a woman is coughing".
As humans, we immediately conjecture and relate it to many phenomena in
different contexts.
Semantic Association (label the picture as coughing)
Cognitive (look at additional background information & interpret in different
contexts, i.e., cough vs. wheezing cough)
Perception (Has the patient's condition worsened? How well is the patient doing?)
Paradigms that Shape Human Experience
8. AUGMENTED PERSONALIZED HEALTH
EXPLOITING MULTIMODAL INFORMATION FOR:
SELF-MONITORING - Constant and remote monitoring of disease-specific health indicators for any given patient
SELF-APPRAISAL - Interpretation of the collected data with respect to the disease context, so patients can evaluate themselves
SELF-MANAGEMENT - Identify deviations from normal and assist patients in getting back to the prescribed care plan
INTERVENTION - Change in the care plan: with the smart data produced by APH, provide decision support for treatment adjustments
DISEASE PROGRESSION AND TRACKING - Longitudinal data collection and analysis to improve patient health over time
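The Self-Management function above, spotting a deviation from the prescribed care plan, can be sketched as a simple range check over monitored indicators. The parameter names and thresholds below are illustrative assumptions, not values from the kHealth study.

```python
def check_deviation(readings, care_plan):
    """Flag any monitored indicator that falls outside its prescribed range."""
    deviations = {}
    for indicator, value in readings.items():
        low, high = care_plan[indicator]
        if not (low <= value <= high):
            deviations[indicator] = value
    return deviations

# Illustrative care plan: ranges are assumptions, not clinical guidance.
care_plan = {"peak_expiratory_flow": (300, 600),   # L/min
             "heart_rate": (60, 100)}              # bpm
today = {"peak_expiratory_flow": 250, "heart_rate": 88}

print(check_deviation(today, care_plan))  # -> {'peak_expiratory_flow': 250}
```

A real APH pipeline would run this over the longitudinal record and feed flagged deviations to the chatbot for follow-up questions or intervention.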
9. “
9
"The Holy Grail of machine intelligence is the ability to mimic the human
brain. However, the human brain's cognitive and perceptual capability to
seamlessly consume and abstract massive amounts of multimodal data, and to
communicate information, challenges machine intelligence research. A growing
number of emerging technologies such as chatbots & robotics present the
requirements for these capabilities."
10. What is Modality
GENERAL: A particular mode in which something exists or is experienced or
expressed. A particular form of sensory perception: 'the visual and auditory
modalities'.
HEALTHCARE: Modality (medical imaging), a type of equipment used to acquire
structural or functional images of the body, such as radiography, ultrasound,
nuclear medicine, computed tomography, magnetic resonance imaging and visible
light.
IN HCI: A modality is the classification of a single independent channel of
sensory input/output between a computer and a human. Multiple modalities can
be used in combination to provide complementary methods that may be redundant
but convey information more effectively.
10
11. 11
Machine Intelligence for Chatbot:
Incorporating Multiple Streams
& Modalities
Figure: Chatbot exploiting multimodal
information for machine intelligence
and natural interactions
From simple informational
interface (text, speech) to
intelligent assistant
12. USE CASES & PROTOTYPES
Examples of collaborative projects
@ AI Institute
13. 13
Health Related Studies at AI Institute [Overview]
Health Challenges (also Dementia, Obesity, Parkinson's, Liver Cirrhosis, ADHF)
across Public Policy/Population, Epidemiology, and Personalized Health:
● kHealth Asthma in Children: PCS + EMR + Multimodal (Speech + Image)
● Bariatric Surgery, Nutrition: Physical (IoT)/Cyber/Social (PCS) + EMR
● Marijuana: Social
● Drug Abuse: Social
● Mental Health (Depression & Suicide): Social + Public + EMR
● Health Knowledge Graph Services: Social + Clinical Data
...and infrastructure technologies: Context-aware KR (SP), KG Development,
Smart Data from PCS Big Data, Twitris
14. 3 Chatbots (Alpha Stage)
1. NOURICH: A Google Assistant based Conversational Nutrition Management System
2. Knowledge-enabled (kHealth) Personalized ChatBot for Asthma: Contextualized
& Personalized Conversations involving Multimodal data (IoT & Devices)
3. ReaCTrack: Personalized Adverse Reaction Conversation-based Tracker for
Clinical Depression
14
HCI: Applications & Chatbots
@ AI Institute
kHealth
Asthma
kHealth
Nutrition
Mental Health
Active (Subset)
Healthcare Projects
@ KNO.E.SIS with
mApps/chatbot
kHealth Framework: a knowledge-enabled semantic platform that
captures the data and analyzes it to produce actionable information.
15. 15
Physical-Cyber-Social (PCS) Data
Active Healthcare Projects at AI Inst. (Subset) and their modalities of data:
kHealth Asthma (for monitoring asthma control and predicting vulnerability):
Mobile app Q/A (tablet), forced exhaled volume in 1 sec (FEV1), peak
expiratory flow (PEF), indoor temperature, indoor humidity, particulate
matter, volatile organic compound, carbon dioxide, air quality index, pollen
level, outdoor temperature, outdoor humidity, number of steps, heart rate and
number of hours of sleep. Also clinical notes.
Nutrition (for nutrition tracking and diet monitoring): Q/A, diet, food
profile, food images, nutrition knowledge bases, user knowledge graph.
Mental Health (Modeling Social Behavior for Healthcare Utilization in Mental
Health): Q/A, social media profile (Twitter, Reddit).
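One day of the asthma use case's PCS record could be typed roughly as below. The field names are assumptions derived from the parameter list on this slide, not the kHealth schema itself.

```python
from dataclasses import dataclass, asdict

@dataclass
class PCSRecord:
    """One day's Physical-Cyber-Social record (illustrative fields)."""
    fev1: float            # forced exhaled volume in 1 sec (L)
    pef: float             # peak expiratory flow (L/min)
    indoor_temp_c: float   # indoor temperature
    pollen_level: int      # outdoor pollen index
    steps: int             # number of steps
    heart_rate: int        # bpm
    sleep_hours: float     # hours of sleep
    questionnaire: dict    # mobile-app Q/A answers

rec = PCSRecord(fev1=2.1, pef=310.0, indoor_temp_c=21.5,
                pollen_level=7, steps=5400, heart_rate=92,
                sleep_hours=7.5, questionnaire={"cough_last_night": "yes"})
print(asdict(rec)["pef"])  # -> 310.0
```

Typing the record this way makes downstream integration (EMR linkage, knowledge-graph population) explicit about which modality each field comes from.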
17. 17
Use Case 1: ASTHMA
Many Sources of Highly Diverse Data
(& collection methods: Active + Passive):
Up to 1852 data points/patient/day
kBot with screen interface
for conversation
Images
Text
Speech
*(Asthma-Obesity)
★ Episodic to Continuous Monitoring
★ Clinician-centric to Patient-centric
★ Clinician controlled to Patient-empowered
★ Disease Focused to Wellness-focused
★ Sparse data to Multimodal Big Data
18. Data Collection
>150
patients
29
parameters
1852
data points per
patient per day
63%
kit compliance
● Data Collection: Since Dec 2016
● Active sensing: 18 data points/day
(Peak flow meter and Tablet)
● Passive sensing: 1834 data points/
day (Foobot, Fitbit, Outdoor
environmental data)
5-17
years of age
1 or 3
months of
monitoring
18
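The figures on this slide can be sanity-checked: active sensing (18 points/day, peak flow meter and tablet) plus passive sensing (1834 points/day, Foobot, Fitbit, outdoor environment) gives the reported 1852 data points per patient per day. The monthly volume below assumes a 30-day month, which is an illustration, not a figure from the study.

```python
active_per_day = 18      # peak flow meter and tablet
passive_per_day = 1834   # Foobot, Fitbit, outdoor environmental data
total_per_day = active_per_day + passive_per_day
assert total_per_day == 1852   # matches the slide's total

points_per_month = total_per_day * 30   # one month of monitoring (assumed 30 days)
print(points_per_month)  # -> 55560
```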
19. 19
Utkarshani Jaimini, Krishnaprasad Thirunarayan, Maninder Kalra, Revathy
Venkataramanan, Dipesh Kadariya, Amit Sheth, "'How Is My Child's Asthma?'
Digital Phenotype and Actionable Insights for Pediatric Asthma", JMIR Pediatr
Parent 2018;1(2):e11988, DOI: 10.2196/11988.
20. Revathy Venkataramanan, Krishnaprasad Thirunarayan, Utkarshani Jaimini,
Dipesh Kadariya, Hong Yung Yip, Maninder Kalra, Amit Sheth, "Determination of
Personalized Asthma Triggers from Multimodal Sensing and a Mobile App", JMIR
Pediatr Parent 2018;1(2):e11988, DOI: 10.2196/11988.
24. 24
Use Case 3: kBot Elder Care, an intelligent assistant to support elderly
patients with Heart Failure (HF), Chronic Obstructive Pulmonary Disease
(COPD), or Type 2 Diabetes Mellitus (T2DM).
26. “ To support the corresponding (chatbot) data analysis and reasoning
needs, we have to explore a pedagogical framework consisting of Semantic
computing, Cognitive computing, and Perceptual computing. This requires moving
from syntactic and semantic big data processing to actionable information
that can be woven naturally into human activities and experience.
26
28. 28
[Figure: Open Health Knowledge Graph architecture. Data sources (personal
sensor data, de-identified EMRs, blog posts) feed extraction of entities,
relations, events, and severity, including complex entities such as aberrant
drug-related behaviour, neuro-cognitive symptoms, and adverse drug reactions,
followed by data integration and interlinking into a Health Knowledge Graph.
Context representation and relevant-subgraph selection then support semantic
search, semantic browsing, intent detection, a disease-specific chatbot, and
visualization.]
30. 30
Evolving Patient Health Knowledge Graph (PHKG)
Figure: A healthcare assistant bot interacts with the patient via various conversational interfaces (voice, text,
and visual) to acquire and disseminate information, and provide recommendations (validated by a physician).
The core functionalities of the chatbot (Component C, boxed in blue) are extended with a background HKG
(Component A, boxed in green) and an evolving PKG (Component B, boxed in orange).
★ Smarter & more engaging agent
★ Minimize active sensing (questions to be asked)
★ Ask only informed & intelligent questions
★ Relevant & contextualized conversations
★ Personalized & human-like
31. 31
How the PHKG Evolves Over Time
Inputs: personal sensor data, electronic medical records (EMR), kHealth
project (IoT) datasets (e.g., asthma, obesity, Parkinson's), AI Inst Alchemy API.
Reasoning mechanisms that enrich the KG:
● In-built rule-based inference engine (executing reasoning)
● Machine learning (analyzing datasets)
Both update the KG with more triples.
Ontology Catalogs:
● BioPortal
● Linked Open Vocabularies (LOV)
● Linked Open Vocabularies for Internet of Things (LOV4IoT)
Linked Open Data (LOD):
● UMLS
● SNOMED-CT
● ICD-10
● Clinical Trials
● Sider
Personalized Health Knowledge Graph (PHKG)
Figure: How a PHKG evolves with multimodal information
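The rule-based enrichment step in the figure can be sketched minimally: the PHKG as a set of triples, plus one inference rule that adds a new triple. Both the triples and the rule are illustrative assumptions; the real system draws on sources such as UMLS, SNOMED-CT, and ICD-10.

```python
# A toy PHKG: (subject, predicate, object) triples for one patient.
phkg = {
    ("patient1", "reports_symptom", "wheezing"),
    ("patient1", "exposed_to", "high_pollen"),
}

def infer(triples):
    """One illustrative rule: wheezing + high pollen exposure -> possible pollen trigger."""
    enriched = set(triples)
    for s, p, o in triples:
        if (p, o) == ("reports_symptom", "wheezing") and \
           (s, "exposed_to", "high_pollen") in triples:
            enriched.add((s, "possible_trigger", "pollen"))
    return enriched

enriched = infer(phkg)
print(("patient1", "possible_trigger", "pollen") in enriched)  # -> True
```

A production system would run such rules (and ML-derived extractions) repeatedly as new sensor and EMR data arrive, which is what makes the PHKG "evolving".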
32. GENERIC CHATBOT VS
INTELLIGENT CHATBOT
Needed for Machine Intelligence and Natural Interactions:
Contextualization, Personalization, and Abstraction
33. 33
Contextualization and Personalization
kBot initiates a greeting conversation. It understands the patient's health
condition (allergic reaction to a high ragweed pollen level) via the
personalized patient knowledge graph generated from EMR, PGHD, and prior
interactions with kBot. It generates predictions or recommended courses of
action, with inference based on the patient's historical records and a
background health knowledge graph containing contextualized (domain-specific)
knowledge.
Figure: Example kBot conversation which utilizes the background health
knowledge graph and the patient's knowledge graph to infer and generate
recommendations to patients.
★ Conversing only about information relevant to the patient
Context is enabled by relevant healthcare knowledge, including clinical protocols.
34. 34
Contextualization
refers to data interpretation in terms of knowledge (context).
Without Domain Knowledge With Domain Knowledge
Chatbot with domain (drug) knowledge
is potentially more natural and able to
deal with variations.
35. 35
Personalization
refers to determining a future course of action by taking into account
contextual factors such as the user's health history, physical
characteristics, environmental factors, activity, and lifestyle.
Without
Contextualized Personalization
With
Contextualized Personalization
Chatbot with contextualized
(asthma) knowledge is
potentially more
personalized and engaging.
37. 37
Smarter Chatbot with Semantically-Abstracted Information
Smarter data ← Data Sophistication
Smart (semantically-abstracted) data should answer:
★ What causes my disease severity?
★ How well am I doing with respect to the prescribed care plan?
★ Am I deviating from the care plan? (I am following the care plan but my
disease is not well controlled.)
★ Do I need treatment adjustments?
★ How well controlled is my disease over time?
Example of Abstraction
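One of the questions above, "how well controlled is my disease over time?", shows what abstraction means concretely: collapsing raw daily readings into a single high-level answer. The control rule below (a day counts as well controlled when PEF is at least 80% of the patient's personal best) follows common asthma guidance but is an assumption here, not the talk's algorithm.

```python
def control_over_time(daily_pef, personal_best):
    """Fraction of days on which peak expiratory flow was >= 80% of personal best."""
    good_days = sum(1 for pef in daily_pef if pef >= 0.8 * personal_best)
    return good_days / len(daily_pef)

week = [420, 410, 300, 430, 415, 280, 425]   # illustrative PEF readings (L/min)
score = control_over_time(week, personal_best=500)
print(f"well-controlled on {score:.0%} of days")  # -> well-controlled on 71% of days
```

The chatbot can then report the abstraction ("your asthma was well controlled on 5 of the last 7 days") instead of overloading the patient with 1852 raw data points per day.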
38. 38
Semantic, Cognitive, Perceptual Computing:
Paradigms That Shape Human Experience
http://bit.ly/SCPComputing
Humans are interested in high-level
concepts (phenotypic characteristics).
Semantic Computing: Assign labels and
associate meanings (representation &
contextualization).
Cognitive Computing: Interpretation of data with
respect to perspectives, constraints, domain
knowledge, and personal context.
Perceptual Computing: A cyclical process of
semantic-cognitive computing for higher level of
perception and reasoning (abstraction & action).
39. Knowledge-Infused Learning with the Semantic, Cognitive, Perceptual
Computing Framework
39
THE BABY STEPS:
MACHINE / DEEP LEARNING INFUSED WITH A PERSONALIZED HEALTH KNOWLEDGE GRAPH
Knowledge: Domain (Ontology), Personalized HKG
Multisensory Sensing & Multimodal Data Interactions: Images, Text, Speech,
Videos, IoTs
Natural Language Processing, Machine with Deep Learning
AUGMENTED PERSONALIZED HEALTH (APH): Modeling broader disease context and
personalized user behavior
Reasoning & decision-making framework: to achieve ABSTRACTION and minimize
data overload, assist in making choices, appraisal, recommendations
43. Figure labels: patient1 lives in Los Angeles; family1 is from Denver;
patient1 moves between the two locations with high frequency.
Expert-designed schema for the PKG:
lives(Patient, ?)
has_family(Patient, Family, ?)
family_location(Patient, Family, ?)
visit_frequency(Patient, Family, ?)
+
Relational facts from the PKG:
lives(patient1, "Los Angeles")
has_family(patient1, family1, "True")
family_location(patient1, family1, "Denver")
visit_frequency(patient1, family1, "high")
Knowledge Infused Reinforcement Learning:
Knowledge + Patient context + Patient feedback
Context knowledge: depression, sadness, suffering.
High-level recommendation types:
➢ a) Reminding-clarification
➢ b) Information-gathering
➢ c) Appraisal
➢ d) Symptom check
➢ e) Facilitate communication with healthcare provider / connect to a professional
Caption: The relational context is derived from the PKG along with the schema;
from this, in combination with the patient's feedback and domain knowledge,
the Knowledge Infused Reinforcement Learning algorithm outputs a high-level
recommendation.
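The caption's flow, PKG facts plus observed context selecting one of the high-level task types (a)-(e), can be sketched as below. The selection rule is an illustrative assumption; the talk uses knowledge-infused reinforcement learning with patient feedback, which this sketch does not model.

```python
# Relational facts from the slide's PKG example, stored as predicate -> arguments.
pkg = {
    "lives": ("patient1", "Los Angeles"),
    "has_family": ("patient1", "family1", "True"),
    "family_location": ("patient1", "family1", "Denver"),
    "visit_frequency": ("patient1", "family1", "high"),
}

def recommend(pkg, observed_context):
    """Pick a high-level chatbot task from PKG facts + observed context (toy rule)."""
    patient_city = pkg["lives"][1]
    family_city = pkg["family_location"][2]
    if "sadness" in observed_context and family_city != patient_city:
        # Patient reports sadness and lives away from family: gather more
        # information before escalating to a care provider.
        return "information-gathering"
    return "reminding-clarification"

print(recommend(pkg, {"sadness", "suffering"}))  # -> information-gathering
```

In the full system this decision would come from a learned policy whose rewards are the patient's answers/feedback, with the PKG supplying the relational context.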
44.
45. 45
In short,
❖ Multimodal information is essential and can be exploited for machine
intelligence and natural interactions.
❖ Knowledge-infused learning could give us the power needed to match complex
requirements.
❖ Semantic-Cognitive-Perceptual Computing enables contextualization,
personalization, and abstraction for Augmented Personalized Health.
46. 46
5 faculty, >12 PhD students, a few Master's students, >5 undergrads, 2
post-docs, >10 research interns
Alumni in/as
Industry: IBM T.J. Watson, Almaden, Amazon, Samsung America, LinkedIn,
Facebook, Bosch
Start-ups: AppZen, AnalyticsFox, Cognovi Labs
Faculty: George Mason, University of Kentucky, Case Western Reserve, North
Carolina State University, University of Dayton
Core AI: Neuro-symbolic computing/Hybrid AI, Knowledge Graph Development,
Deep Learning, Reinforcement Learning, Natural Language Processing,
Knowledge-infused Learning (for deep learning and NLP), Multimodal AI
(including IoT/sensor data streams, images), Collaborative Assistants,
Multiagent Systems (incl. coordinating systems of decision-making agents
including humans, robots, sensors), Semantic-Cognitive-Perceptual Computing,
Brain-inspired computing, Interpretation/Explainability/Trust/Ethics in AI
systems, Search, Gaming
Interdisciplinary AI and application domains: Medicine/Clinical, Biomedicine,
Social Good/Harm, Public Health (mental health, addiction), Education,
Manufacturing, Disaster Management
Editor's Notes
Slide 3: Inner circle: talks about our research areas and strengths
Convey the progression from simple tasks to complicated ones; it is not simple: there are many issues (data, different modalities, context, personalization)
Growing ecosystem of chatbot
Chatbot as intermediary patient <-> doctor
Take the example of elderly care: rather than serving as just a basic voice interface, a chatbot should consume (like a human)
different streams and modalities of data (textual data, voice & speech data, images, and background knowledge of the patient) to be able to assist an elderly person intelligently.
JMIR Paper
voice by libertetstudio from the Noun Project
text by Vectorstall from the Noun Project
Dye info -
Doritos
https://ndb.nal.usda.gov/ndb/foods/show/45366963?fgcd=&manu=&format=&count=&max=25&offset=&sort=default&order=asc&qlookup=doritos&ds=&qt=&qp=&qa=&qn=&q=&ing=
Vanilla frosting - https://ndb.nal.usda.gov/ndb/foods/show/45122774?fgcd=&manu=&format=&count=&max=25&offset=&sort=default&order=asc&qlookup=DUNCAN+HINES%2C+WHIPPED+FROSTING%2C+VANILLA%2C+UPC%3A+644209405923&ds=&qt=&qp=&qa=&qn=&q=&ing=
Step 1: Personalized information from clinician visit in the discharge summary and target expert designed initial set of questions, compiled into a personalized knowledge graph stored on a cloud.
Step 2: The knowledge from the PKG stored in the cloud is infused into the RL method to predict high-level chatbot tasks. The cloud is monitored for safety by the clinician; the patient's answers/feedback act as rewards.
Step 3: The high-level task is used to generate dialogue with the patient and updates to the PKG are appropriately made and this process continues during the length of their interactions.
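The three steps in these notes can be sketched as a loop: (1) a PKG seeded from the discharge summary, (2) a policy that predicts a high-level task from the PKG, with patient answers acting as rewards, and (3) PKG updates from the dialogue. Everything below (the task names, the toy policy, the reward handling) is an illustrative assumption, not the deployed system.

```python
def dialogue_loop(pkg, policy, turns):
    """Run the step-2/step-3 cycle over a sequence of (patient_answer, reward) turns."""
    history = []
    for patient_answer, reward in turns:
        task = policy(pkg)                                    # step 2: predict high-level task
        history.append((task, reward))                        # feedback would update the policy
        pkg.setdefault("answers", []).append(patient_answer)  # step 3: update the PKG
    return history

# Toy policy: greet first, then switch to symptom checking once answers exist.
policy = lambda pkg: "symptom-check" if pkg.get("answers") else "greeting"

pkg = {"discharge_summary": "asthma, ragweed allergy"}        # step 1: seeded PKG
log = dialogue_loop(pkg, policy, [("I feel wheezy", +1), ("Better today", +1)])
print([task for task, _ in log])  # -> ['greeting', 'symptom-check']
```

A real knowledge-infused RL agent would replace the lambda with a learned policy and use the rewards to update it, but the control flow is the same.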