This document discusses how knowledge graphs and graph analytics can be used for anomaly detection in financial services. It describes building time-sequenced graph data models from a base knowledge graph to model customer behavior over time. Champion models are applied to each time window to learn a statistical distribution, and outliers in that distribution that are hard to reproduce can indicate anomalous financial behavior worthy of investigation, such as money laundering. Scaling the graph snapshots by collections of nodes and edges allows analyzing behavior at different levels from micro to macro.
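The learn-a-distribution-per-window idea can be sketched in a few lines. The per-window feature here (weekly transaction counts) and the robust z-score threshold are illustrative assumptions, not the talk's actual graph-derived features:

```python
# Sketch: learn a statistical distribution over per-window activity and
# flag hard-to-reproduce outliers. A robust z-score (median/MAD) keeps
# the single burst from inflating the learned spread.
import statistics

def flag_outlier_windows(counts, z_threshold=3.5):
    """Return indices of windows whose activity deviates strongly
    from the distribution learned over all windows."""
    med = statistics.median(counts)
    mad = statistics.median(abs(c - med) for c in counts)
    if mad == 0:
        return []
    return [i for i, c in enumerate(counts)
            if 0.6745 * abs(c - med) / mad > z_threshold]

# A customer with steady weekly activity and one burst of transfers:
windows = [12, 11, 13, 12, 14, 11, 95, 12, 13]
print(flag_outlier_windows(windows))  # the burst at index 6 stands out
```

In practice the same scoring could be run at each scale of snapshot, from a single account up to a community of accounts.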
Human in the loop: Bayesian Rules Enabling Explainable AI - Pramit Choudhary

The document provides an overview of a presentation on enabling explainable artificial intelligence through Bayesian rule lists. Some key points:
- The presentation will cover challenges with model opacity, defining interpretability, and how Bayesian rule lists can be used to build naturally interpretable models through rule extraction.
- Bayesian rule lists work well for tabular datasets and generate human-understandable "if-then-else" rules. They aim to optimize over pre-mined frequent patterns to construct an ordered set of conditional statements.
- There is often a tension between model performance and interpretability. Bayesian rule lists can achieve accuracy comparable to more opaque models like random forests on benchmark datasets while maintaining interpretability.
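A hand-written sketch of the kind of ordered rule list such a learner produces; the rules, thresholds, and probabilities below are hypothetical, chosen only to illustrate the if-then-else structure, not mined from any dataset:

```python
# An ordered "if-then-else" rule list: walk the rules in order and let
# the first matching antecedent fire; fall back to a default otherwise.
def predict(record, rule_list, default):
    """Return the probability attached to the first matching rule."""
    for antecedent, prob in rule_list:
        if antecedent(record):
            return prob
    return default

# Hypothetical rules a Bayesian rule-list learner might select from
# pre-mined frequent patterns:
rules = [
    (lambda r: r["age"] < 25 and r["income"] < 30_000, 0.85),
    (lambda r: r["prior_defaults"] > 0,                0.70),
]
print(predict({"age": 40, "income": 50_000, "prior_defaults": 1}, rules, 0.10))
# the second rule fires -> 0.7
```

The ordering matters: once a rule fires, later rules are never consulted, which is what makes the whole model readable top to bottom.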
Introduction to Machine Learning - WeCloudData
WeCloudData offers data science training programs and customized corporate training. They have 21 part-time instructors and 2 full-time instructors with expertise in tools like Python, Spark, and AWS. WeCloudData organizes data science meetup events and conferences, and provides workshops at various conferences. Their Applied Machine Learning course teaches tools and techniques over 12 sessions, includes a hands-on project, and helps with interview preparation.
Introduction to Machine Learning - WeCloudData
In this talk, WeCloudData introduces the lifecycle of machine learning and its tools/ecosystems. For more detail about WeCloudData's machine learning course please visit: https://weclouddata.com/data-science/
Deep Credit Risk Ranking with LSTM with Kyle Grove - Databricks
This document discusses techniques for developing interpretable deep learning models for credit risk prediction. It begins by describing the goals of credit risk models, including calibration, accuracy, and interpretability. It then discusses the tradeoff between accuracy and interpretability in models, with more complex models gaining accuracy at the cost of interpretability. The document proposes using techniques like augmented data proxy models, directly fit proxy models, model scoring, and model distillation to develop interpretable deep learning models for credit risk prediction. It also presents a case study applying an ensemble of LSTM and gradient boosted trees to a large credit dataset, achieving higher accuracy than traditional models. Finally, it discusses tools for explaining individual predictions from these complex models.
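The model-distillation technique mentioned above can be shown in miniature: train a simple, interpretable "student" on the predictions of a complex "teacher". The teacher function and data below are toy stand-ins, not the talk's credit models:

```python
# Distillation sketch: the student (an ordinary least-squares line) is
# fit to the teacher's outputs ("soft labels") rather than raw labels.
def fit_line(xs, ys):
    """Least-squares fit of y = intercept + slope * x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return my - slope * mx, slope  # intercept, slope

teacher = lambda x: 2 * x + 1           # stand-in for a complex model
xs = [0.0, 1.0, 2.0, 3.0]
soft_labels = [teacher(x) for x in xs]  # distillation targets
intercept, slope = fit_line(xs, soft_labels)
print(intercept, slope)  # recovers 1.0 and 2.0 for this linear teacher
```

With a genuinely nonlinear teacher the student would only approximate it, and the gap between the two is exactly the interpretability cost the talk describes.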
The document discusses model interpretation and the Skater library. It begins with defining model interpretation and explaining why it is needed, particularly for understanding model behavior and ensuring fairness. It then introduces Skater, an open-source Python library that provides model-agnostic interpretation tools. Skater uses techniques like partial dependence plots and LIME explanations to interpret models globally and locally. The document demonstrates Skater's functionality and discusses its ability to interpret a variety of model types.
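The partial-dependence technique that Skater (among other tools) implements can be sketched without the library: fix one feature at each grid value, average the model's predictions over the rest of the data, and trace the curve. The toy model below is an assumption for illustration, not Skater's API:

```python
# Partial dependence by hand: override one feature across a grid while
# marginalizing over the other features via the dataset.
def partial_dependence(model, X, feature_idx, grid):
    curve = []
    for v in grid:
        preds = []
        for row in X:
            row = list(row)
            row[feature_idx] = v  # fix just this feature
            preds.append(model(row))
        curve.append(sum(preds) / len(preds))
    return curve

# Toy model whose prediction depends linearly on feature 0:
model = lambda row: 2 * row[0] + row[1]
X = [[1, 5], [2, 3], [3, 1]]
print(partial_dependence(model, X, 0, [0, 1, 2]))  # [3.0, 5.0, 7.0]
```

The slope of the resulting curve is the global effect of the feature, which is the "global" half of the global/local split the document describes (LIME supplying the local half).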
Choosing a Machine Learning technique to solve your need - GibDevs
This document discusses choosing a machine learning technique to solve a problem. It begins with an overview of machine learning and popular approaches like linear regression, logistic regression, decision trees, k-means clustering, principal component analysis, support vector machines, and neural networks. It then discusses important considerations like knowing your data, cleaning your data, categorizing the problem, understanding constraints, choosing an algorithm, and evaluating models. Programming languages like Python and libraries, datasets, and cloud support resources are also mentioned.
This document provides an overview of getting started with data science using Python. It discusses what data science is, why it is in high demand, and the typical skills and backgrounds of data scientists. It then covers popular Python libraries for data science like NumPy, Pandas, Scikit-Learn, TensorFlow, and Keras. Common data science steps are outlined including data gathering, preparation, exploration, model building, validation, and deployment. Example applications and case studies are discussed along with resources for learning including podcasts, websites, communities, books, and TV shows.
Part of the ongoing effort with Skater for enabling better Model Interpretation for Deep Neural Network models presented at the AI Conference.
https://conferences.oreilly.com/artificial-intelligence/ai-ny/public/schedule/detail/65118
This document provides an introduction to machine learning concepts. It begins with an overview of the book's organization and topics to be covered, including descriptive statistics, algebra, linear regression, classification, clustering, decision trees, and neural networks. It then discusses requisite skills like basic Python and software needed. The document provides definitions of machine learning and describes common problem types it can solve. It also outlines popular machine learning tools and frameworks.
Andrii Belas "Modern approaches to working with categorical data in machine l... - Lviv Startup Club
- Andrii Belas is an AI Solution Architect at SMART business, an expert in machine learning, and a public speaker
- He created and mentors the SMART Data Science Academy and is responsible for the technical development of the data science team and architecture of all data science projects at SMART business
- He has Microsoft certifications in areas like Big Data and Advanced Analytics, Cloud Data Science with Azure Machine Learning, and Developing SQL Data Models
- He has experience in domains like Deep Learning, Computer Vision, AI in Forecasting, AI in Marketing, and Risk Management
In this presentation I list and try to answer some useful questions about machine learning, and large-scale machine learning in particular.
I talk about things like what we can and cannot do with ML, do I need a cluster for large-scale ML, what are common problems with ML systems and future directions.
The Price is Wrong - Quantitative Finance - TerminusDB
This talk focused on graph databases as applied to quantitative finance and pricing. Graphs are mathematical structures for studying relationships between objects and entities, and they provide a natural way of dealing with abstract concepts like relationships and interactions.
To incorporate real world complexity, data management needs to be built around relationships and connections. With tools like the DataChemist Knowledge Graph we can price effectively and enjoy the benefits of deeper understanding.
Building AI Applications using Knowledge Graphs - Andre Freitas
This document provides an overview of building AI applications using knowledge graphs. It discusses the goals of the tutorial: to provide a broad view of multiple perspectives on knowledge graphs and to show how knowledge graphs can form the foundation for building AI systems. The tutorial focuses on contemporary and emerging perspectives through exemplar approaches and infrastructures; it is not a standard academic tutorial, taking a big-picture view rather than offering an exhaustive survey.
Explainable AI - making ML and DL models more interpretable - Aditya Bhattacharya
The document discusses explainable AI (XAI) and making machine learning and deep learning models more interpretable. It covers the necessity and principles of XAI, popular model-agnostic XAI methods for ML and DL models, frameworks like LIME, SHAP, ELI5 and SKATER, and research questions around evolving XAI to be understandable by non-experts. The key topics covered are model-agnostic XAI, surrogate models, influence methods, visualizations and evaluating descriptive accuracy of explanations.
The slides cover the following points:
1. Introduction to Machine Learning
2. What are the challenges in acceptance of Machine Learning in Banks
3. How to overcome the challenges in adoption of Machine Learning in Banks
4. How to find new use cases of Machine Learning
5. Few current interesting use cases of Machine Learning
Please contact me (shekup@gmail.com) or connect with me on LinkedIn (https://www.linkedin.com/in/shekup/) for more explanation on ML and how it may help your business.
The slides are inspired by:
Survey & interviews done by me with Bankers & Technology Professionals
Presentation from Google NEXT 2017
Presentation by DATUM on Youtube
Royal Society Machine Learning
Big Data & Social Analytics Course from MIT & GetSmarter
DutchMLSchool. Logistic Regression, Deepnets, Time Series - BigML, Inc
DutchMLSchool. Logistic Regression, Deepnets, and Time Series (Supervised Learning II) - Main Conference: Introduction to Machine Learning.
DutchMLSchool: 1st edition of the Machine Learning Summer School in The Netherlands.
A Comprehensive Learning Path to Become a Data Science 2021.pptx - RajSingh512965
The 2021 data science learning path provides a comprehensive curriculum to become a data scientist. It includes extended skills in storytelling, model deployment, unsupervised learning, exercises, and projects. The path covers key skills and tools like Python, R, machine learning algorithms, deep learning, natural language processing, and model deployment. It consists of monthly modules that progress from the data science toolkit to advanced topics, with hands-on training and real-world projects.
Deciphering AI - Unlocking the Black Box of AIML with State-of-the-Art Techno... - Analytics India Magazine
Most organizations understand the predictive power and the potential gains from AI/ML, but AI and ML remain a black-box technology for them. While deep learning and neural networks can provide excellent inputs to businesses, leaders are reluctant to use them because of the blind faith required to ‘trust’ AI. In this talk we will use the latest technological developments from researchers, the US defense department, and industry to unbox the black box and give businesses a clear understanding of the policy levers they can pull, why, and by how much, to make effective decisions.
All in AI: LLM Landscape & RAG in 2024 with Mark Ryan (Google) & Jerry Liu (L... - Daniel Zivkovic
Serverless Toronto's 6th-anniversary event helps IT pros understand and prepare for the #GenAI tsunami ahead. You'll gain situational awareness of the LLM Landscape, receive condensed insights, and actionable advice about RAG in 2024 from Google AI Lead Mark Ryan and LlamaIndex creator Jerry Liu. We chose #RAG (Retrieval-Augmented Generation) because it is the predominant paradigm for building #LLM (Large Language Model) applications in enterprises today - and that's where the jobs will be shifting. Here is the recording: https://youtu.be/P5xd1ZjD-Os?si=iq8xibj5pJsJ62oW
The document describes the Like2Vec recommender system model. It transforms sparse user-item rating matrices into a graph representation, and then uses the DeepWalk algorithm to learn embeddings of the nodes in the graph. These embeddings are trained with the Skip-Gram language model on random walks generated through the graph. Like2Vec is evaluated on the Netflix dataset and is shown to outperform baselines on Recall-at-N, which, unlike RMSE, directly measures the quality of the top recommendations. Recall-at-N is argued to be a superior evaluation metric for recommender systems.
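The DeepWalk step can be sketched as truncated random walks over an adjacency structure; the resulting walk sequences would then feed a Skip-Gram model as if they were sentences. The tiny user-item graph below is illustrative, not the Netflix data:

```python
# Generate truncated random walks over a graph, the corpus-building
# step of DeepWalk. Each node seeds a fixed number of walks.
import random

def random_walks(adjacency, walk_len, walks_per_node, seed=0):
    rng = random.Random(seed)
    walks = []
    for start in adjacency:
        for _ in range(walks_per_node):
            walk = [start]
            while len(walk) < walk_len:
                neighbors = adjacency[walk[-1]]
                if not neighbors:
                    break  # dead end: truncate the walk early
                walk.append(rng.choice(neighbors))
            walks.append(walk)
    return walks

# Toy bipartite user-item graph (users u*, movies m*):
graph = {"u1": ["m1", "m2"], "u2": ["m1"], "m1": ["u1", "u2"], "m2": ["u1"]}
walks = random_walks(graph, walk_len=5, walks_per_node=2)
print(len(walks))  # 4 nodes x 2 walks each = 8 walk "sentences"
```

Nodes that co-occur often within these walks end up with nearby embeddings, which is what makes the result usable for recommendation.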
This document provides an overview of machine learning interpretability. It defines interpretability as the ability to explain model decisions in understandable terms. Not all systems and models require interpretability. The document discusses the goals of interpretability like building trustworthy models and ensuring fairness. Interpretability can examine models globally or locally. Popular interpretability techniques discussed include LIME, LRP, DeepLIFT and SHAP. LIME approximates a model's behavior locally with an interpretable model. LRP, DeepLIFT and SHAP attribute importance to input features for a model's predictions.
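LIME's local-approximation idea can be shown in miniature for a single feature: perturb the instance, weight samples by proximity, and fit a weighted linear surrogate whose slope is the local explanation. The black-box model and kernel width below are toy assumptions, not the LIME library's defaults:

```python
# A one-feature LIME sketch: sample around x0, query the black box,
# and fit a proximity-weighted least-squares line y = a + b*x.
import math
import random

def lime_slope(black_box, x0, n_samples=500, width=1.0, seed=0):
    rng = random.Random(seed)
    xs = [x0 + rng.gauss(0, width) for _ in range(n_samples)]
    ys = [black_box(x) for x in xs]
    # Proximity kernel: samples near x0 dominate the fit.
    ws = [math.exp(-((x - x0) ** 2) / width ** 2) for x in xs]
    sw = sum(ws)
    mx = sum(w * x for w, x in zip(ws, xs)) / sw
    my = sum(w * y for w, y in zip(ws, ys)) / sw
    return (sum(w * (x - mx) * (y - my) for w, x, y in zip(ws, xs, ys))
            / sum(w * (x - mx) ** 2 for w, x in zip(ws, xs)))

# For the nonlinear model y = x^2, the local slope at x0 = 2 should
# come out near the true derivative, 4.
print(lime_slope(lambda x: x * x, x0=2.0))
```

The surrogate is only trustworthy in the neighborhood the kernel defines, which is the "local" caveat the document attaches to LIME.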
Scaling the mirrorworld with knowledge graphs - Alan Morrison
After registration at https://www.brighttalk.com/webcast/9273/364148, you can view the full recording, which begins with Scott Abel's intro for a few minutes, then my talk for 20 minutes, and then Sebastian Gabler's. First presented on October 23 at an SWC webinar.
Conclusions:
(1) The mirrorworld (a world of digital twins, which will be 25 years in the making, according to Kevin Kelly) will require semantic knowledge graphs for interaction and interoperability.
(2) This fact implies massive future demand for knowledge graph technology and other new data infrastructure innovations, comparable to the scale of oil & gas industry infrastructure development over 150 years.
(3) Conceivably, knowledge graphs could be used to address a $205 billion market demand by 2021 for graph databases, information management, digital twins, conversational AI, virtual assistants, and knowledge bases/accelerated training for deep learning. The problem is that awareness of the tech is low, and the semantics community that understands it is still quite small.
(4) Over the next decades, knowledge graphs promise both scalability and substantial efficiencies in enterprises. But lack of awareness of their potential and of how to harness them will continue to be a stumbling block to adoption.
Unified Approach to Interpret Machine Learning Model: SHAP + LIME - Databricks
For companies that solve real-world problems and generate revenue from data science products, understanding why a model makes a certain prediction can be as crucial as achieving high prediction accuracy. However, as data scientists pursue higher accuracy by implementing complex algorithms such as ensemble or deep learning models, the algorithm itself becomes a black box, creating a trade-off between the accuracy and the interpretability of a model’s output.
To address this problem, a unified framework SHAP (SHapley Additive exPlanations) was developed to help users interpret the predictions of complex models. In this session, we will talk about how to apply SHAP to various modeling approaches (GLM, XGBoost, CNN) to explain how each feature contributes and extract intuitive insights from a particular prediction. This talk is intended to introduce the concept of general purpose model explainer, as well as help practitioners understand SHAP and its applications.
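What SHAP approximates efficiently, exact Shapley values, can be computed by brute force for a tiny model. Replacing "missing" features with a fixed baseline, as done below, is a common simplification assumed for this sketch:

```python
# Exact Shapley values by enumerating all feature subsets: each
# feature's contribution is the weighted average of its marginal
# effect on the model across every coalition of the other features.
from itertools import combinations
from math import factorial

def shapley(model, x, baseline):
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for S in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)

                def value(subset):
                    z = list(baseline)      # absent features -> baseline
                    for j in subset:
                        z[j] = x[j]         # present features -> actual
                    return model(z)

                phi[i] += weight * (value(S + (i,)) - value(S))
    return phi

model = lambda z: 3 * z[0] + 2 * z[1]            # simple linear model
print(shapley(model, x=[1, 1], baseline=[0, 0]))  # [3.0, 2.0]
```

For a linear model the attributions reduce to coefficient times feature offset, which makes a handy sanity check; the exponential subset loop is why real SHAP implementations need cleverer estimators.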
Relationships Matter: Using Connected Data for Better Machine Learning - Neo4j
Relationships are highly predictive of behavior, yet most data science models overlook this information because it's difficult to extract network structure for use in machine learning (ML).
With graphs, relationships are embedded in the data itself, making it practical to add these predictive capabilities to your existing practices.
That’s why we’re presenting and demoing the use of graph-native ML to make breakthrough predictions. This will cover:
- Different approaches to graph feature engineering, from queries and algorithms to embeddings
- How ML techniques leverage everything from classical network science to deep learning and graph convolutional neural networks
- How to generate representations of your graph using graph embeddings, create ML models for link prediction or node classification, and apply these models to add missing information to an existing graph/incoming data
- Why no-code visualization and prototyping is important
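One of the simplest graph features used for link prediction, the common-neighbors score, illustrates the kind of feature engineering listed above. The adjacency data is a toy assumption:

```python
# Common-neighbors score: two nodes that share many neighbors are more
# likely to form a link, a feature read directly off the graph rather
# than from flattened rows.
def common_neighbors(adj, a, b):
    return len(set(adj[a]) & set(adj[b]))

adj = {"A": ["B", "C"], "B": ["A", "C", "D"], "C": ["A", "B"], "D": ["B"]}
print(common_neighbors(adj, "A", "D"))  # both connect to B -> 1
```

Scores like this can feed a downstream classifier as features, or be replaced wholesale by learned graph embeddings as the bullet list suggests.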
PyData SF 2016 --- Moving forward through the darkness - Chia-Chi Chang
This document discusses various types of "blindness" that can occur when applying machine learning modeling procedures and techniques. It notes that modeling procedures often focus on decomposing problems and data in a way that can lose important connections or information. Specific issues highlighted include the gap between problems and available data, information loss when converting data to vectors, disconnects between mathematical concepts and real-world applications, limitations of individual ML techniques, and challenges with new data and labels. The document advocates thinking more from both data-driven and problem-driven perspectives, and considering alternative techniques that can bridge gaps, such as metric learning and one-versus-all classifiers.
Workshop - Graph Application Architecture - GraphSummit Paris - Neo4j
Workshop - Graph Application Architecture
Join this hands-on workshop led by Neo4j experts, who will guide you in discovering contextual intelligence. Using a real dataset, we will build a graph solution step by step, from designing the graph data model to running queries and visualizing the data. The approach applies to many use cases and industries.
Workshop - Innovating with Generative AI and Knowledge Graphs - Neo4j
Workshop - Innovating with Generative AI and Knowledge Graphs
Go beyond the AI hype and discover practical techniques for using AI responsibly across your organization's data. Explore how knowledge graphs can be used to increase accuracy, transparency, and explainability in generative AI systems. You will leave with hands-on experience combining data relationships and LLMs to bring domain-specific context and improve reasoning.
Bring your laptop and we will walk you through setting up your own generative AI stack, with practical, coded examples to get you started in minutes.
This document provides an overview of getting started with data science using Python. It discusses what data science is, why it is in high demand, and the typical skills and backgrounds of data scientists. It then covers popular Python libraries for data science like NumPy, Pandas, Scikit-Learn, TensorFlow, and Keras. Common data science steps are outlined including data gathering, preparation, exploration, model building, validation, and deployment. Example applications and case studies are discussed along with resources for learning including podcasts, websites, communities, books, and TV shows.
Part of the ongoing effort with Skater for enabling better Model Interpretation for Deep Neural Network models presented at the AI Conference.
https://conferences.oreilly.com/artificial-intelligence/ai-ny/public/schedule/detail/65118
This document provides an introduction to machine learning concepts. It begins with an overview of the book's organization and topics to be covered, including descriptive statistics, algebra, linear regression, classification, clustering, decision trees, and neural networks. It then discusses requisite skills like basic Python and software needed. The document provides definitions of machine learning and describes common problem types it can solve. It also outlines popular machine learning tools and frameworks.
Andrii Belas "Modern approaches to working with categorical data in machine l...Lviv Startup Club
- Andrey Belas is an AI Solution Architect at SMART business and expert in machine learning and public speaker
- He created and mentors the SMART Data Science Academy and is responsible for the technical development of the data science team and architecture of all data science projects at SMART business
- He has Microsoft certifications in areas like Big Data and Advanced Analytics, Cloud Data Science with Azure Machine Learning, and Developing SQL Data Models
- He has experience in domains like Deep Learning, Computer Vision, AI in Forecasting, AI in Marketing, and Risk Management
In this presentation I list and try to answer some useful questions about machine learning, and large-scale machine learning in particular.
I talk about things like what we can and cannot do with ML, do I need a cluster for large-scale ML, what are common problems with ML systems and future directions.
The Price is Wrong - Quantative Finance TerminusDB
This talk focused on graph databases as applied to quantitative finance and pricing. Graphs are mathematical structures used to study relationships between objects and entities and provide a better way of dealing with abstract concepts like relationships and interactions.
To incorporate real world complexity, data management needs to be built around relationships and connections. With tools like the DataChemist Knowledge Graph we can price effectively and enjoy the benefits of deeper understanding.
Building AI Applications using Knowledge GraphsAndre Freitas
This document provides an overview of building AI applications using knowledge graphs. It discusses the goals of the tutorial, which are to provide a broad view of multiple perspectives on knowledge graphs and show how knowledge graphs can form the foundation for building AI systems. The tutorial focuses on contemporary and emerging perspectives through exemplar approaches and infrastructures, rather than providing an exhaustive survey. It also notes that the tutorial is not a standard academic tutorial and takes a big picture view rather than being a comprehensive survey.
Explainable AI - making ML and DL models more interpretableAditya Bhattacharya
The document discusses explainable AI (XAI) and making machine learning and deep learning models more interpretable. It covers the necessity and principles of XAI, popular model-agnostic XAI methods for ML and DL models, frameworks like LIME, SHAP, ELI5 and SKATER, and research questions around evolving XAI to be understandable by non-experts. The key topics covered are model-agnostic XAI, surrogate models, influence methods, visualizations and evaluating descriptive accuracy of explanations.
The slide has details on below points:
1. Introduction to Machine Learning
2. What are the challenges in acceptance of Machine Learning in Banks
3. How to overcome the challenges in adoption of Machine Learning in Banks
4. How to find new use cases of Machine Learning
5. Few current interesting use cases of Machine Learning
Please contact me (shekup@gmail.com) or connect with me on LinkedIn (https://www.linkedin.com/in/shekup/) for more explanation on ML and how it may help your business.
The slides are inspired by:
Survey & interviews done by me with Bankers & Technology Professionals
Presentation from Google NEXT 2017
Presentation by DATUM on Youtube
Royal Society Machine Learning
Big Data & Social Analytics Course from MIT & GetSmarter
DutchMLSchool. Logistic Regression, Deepnets, Time SeriesBigML, Inc
DutchMLSchool. Logistic Regression, Deepnets, and Time Series (Supervised Learning II) - Main Conference: Introduction to Machine Learning.
DutchMLSchool: 1st edition of the Machine Learning Summer School in The Netherlands.
A Comprehensive Learning Path to Become a Data Science 2021.pptxRajSingh512965
The 2021 data science learning path provides a comprehensive curriculum to become a data scientist. It includes extended skills in storytelling, model deployment, unsupervised learning, exercises, and projects. The path covers key skills and tools like Python, R, machine learning algorithms, deep learning, natural language processing, and model deployment. It consists of monthly modules that progress from the data science toolkit to advanced topics, with hands-on training and real-world projects.
Deciphering AI - Unlocking the Black Box of AIML with State-of-the-Art Techno...Analytics India Magazine
Most organizations understand the predictive power and the potential gains from AIML, but AI and ML are still now a black box technology for them. While deep learning and neural networks can provide excellent inputs to businesses, leaders are challenged to use them because of the complete blind faith required to ‘trust’ AI. In this talk we will use the latest technological developments from researchers, the US defense department, and the industry to unbox the black box and provide businesses a clear understanding of the policy levers that they can pull, why, and by how much, to make effective decisions?
All in AI: LLM Landscape & RAG in 2024 with Mark Ryan (Google) & Jerry Liu (L...Daniel Zivkovic
Serverless Toronto's 6th-anniversary event helps IT pros understand and prepare for the #GenAI tsunami ahead. You'll gain situational awareness of the LLM Landscape, receive condensed insights, and actionable advice about RAG in 2024 from Google AI Lead Mark Ryan and LlamaIndex creator Jerry Liu. We chose #RAG (Retrieval-Augmented Generation) because it is the predominant paradigm for building #LLM (Large Language Model) applications in enterprises today - and that's where the jobs will be shifting. Here is the recording: https://youtu.be/P5xd1ZjD-Os?si=iq8xibj5pJsJ62oW
The document describes the Like2Vec recommender system model. It transforms sparse user-item rating matrices into a graph representation, and then uses the DeepWalk algorithm to learn embeddings of nodes in the graph. These embeddings are trained with the Skip-Gram language model on random walks generated through the graph. Like2Vec is evaluated on the Netflix dataset and is shown to outperform baselines in Recall-at-N, which directly measures the quality of top recommendations compared to RMSE which does not. Recall-at-N is argued to be a superior evaluation metric for recommender systems.
This document provides an overview of machine learning interpretability. It defines interpretability as the ability to explain model decisions in understandable terms. Not all systems and models require interpretability. The document discusses the goals of interpretability like building trustworthy models and ensuring fairness. Interpretability can examine models globally or locally. Popular interpretability techniques discussed include LIME, LRP, DeepLIFT and SHAP. LIME approximates a model's behavior locally with an interpretable model. LRP, DeepLIFT and SHAP attribute importance to input features for a model's predictions.
Scaling the mirrorworld with knowledge graphsAlan Morrison
After registration at https://www.brighttalk.com/webcast/9273/364148, you can view the full recording, which begins with Scott Abel's intro for a few minutes, then my talk for 20 minutes, and then Sebastian Gabler's. First presented on October 23 at an SWC webinar.
Conclusions:
(1) The mirrorworld (a world of digital twins, which will be 25 years in the making, according to Kevin Kelly) will require semantic knowledge graphs for interaction and interoperability.
(2) This fact implies massive future demand for knowledge graph technology and other new data infrastructure innovations, comparable to the scale of oil & gas industry infrastructure development over 150 years.
(3) Conceivably, knowledge graphs could be used to address a $205 billion market demand by 2021 for graph databases, information management, digital twins, conversational AI, virtual assistants and as knowledge bases/accelerated training for deep learning, etc. but the problem is that awareness of the tech is low, and the semantics community that understands the tech is still quite small.
(4) Over the next decades, knowledge graphs promise both scalability and substantial efficiencies in enterprises. But lack of awareness of their potential and of how to harness them will continue to be a stumbling block to adoption.
Unified Approach to Interpret Machine Learning Model: SHAP + LIMEDatabricks
For companies that solve real-world problems and generate revenue from data science products, being able to understand why a model makes a certain prediction can be as crucial as achieving high prediction accuracy. However, as data scientists pursue higher accuracy by implementing complex algorithms such as ensemble or deep learning models, the algorithm itself becomes a black box, creating a trade-off between the accuracy and the interpretability of a model’s output.
To address this problem, a unified framework SHAP (SHapley Additive exPlanations) was developed to help users interpret the predictions of complex models. In this session, we will talk about how to apply SHAP to various modeling approaches (GLM, XGBoost, CNN) to explain how each feature contributes and extract intuitive insights from a particular prediction. This talk is intended to introduce the concept of general purpose model explainer, as well as help practitioners understand SHAP and its applications.
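SHAP builds on the game-theoretic Shapley value. As a rough illustration of the underlying definition (not the shap library itself, and with an invented toy model), here is an exact computation for three features:

```python
from itertools import combinations
from math import factorial

def shapley_values(value_fn, n_features):
    """Exact Shapley values: each feature's marginal contribution averaged
    over all subsets of the other features, weighted by the number of
    orderings producing that subset. Exponential in n_features, so only
    viable for tiny models."""
    n = n_features
    phi = []
    for i in range(n):
        others = [f for f in range(n) if f != i]
        total = 0.0
        for r in range(n):
            for S in combinations(others, r):
                weight = factorial(r) * factorial(n - r - 1) / factorial(n)
                total += weight * (value_fn(set(S) | {i}) - value_fn(set(S)))
        phi.append(total)
    return phi

# Toy additive "model": the prediction is the sum of present features' effects.
contrib = {0: 2.0, 1: -1.0, 2: 0.5}
phi = shapley_values(lambda S: sum(contrib[f] for f in S), 3)
# For an additive model the Shapley values recover the per-feature effects.
```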
Relationships Matter: Using Connected Data for Better Machine LearningNeo4j
Relationships are highly predictive of behavior, yet most data science models overlook this information because it's difficult to extract network structure for use in machine learning (ML).
With graphs, relationships are embedded in the data itself, making it practical to add these predictive capabilities to your existing practices.
That’s why we’re presenting and demoing the use of graph-native ML to make breakthrough predictions. This will cover:
- Different approaches to graph feature engineering, from queries and algorithms to embeddings
- How ML techniques leverage everything from classical network science to deep learning and graph convolutional neural networks
- How to generate representations of your graph using graph embeddings, create ML models for link prediction or node classification, and apply these models to add missing information to an existing graph/incoming data
- Why no-code visualization and prototyping is important
PyData SF 2016 --- Moving forward through the darknessChia-Chi Chang
This document discusses various types of "blindness" that can occur when applying machine learning modeling procedures and techniques. It notes that modeling procedures often focus on decomposing problems and data in a way that can lose important connections or information. Specific issues highlighted include the gap between problems and available data, information loss when converting data to vectors, disconnects between mathematical concepts and real-world applications, limitations of individual ML techniques, and challenges with new data and labels. The document advocates thinking more from both data-driven and problem-driven perspectives, and considering alternative techniques that can bridge gaps, such as metric learning and one-versus-all classifiers.
Workshop - Architecting Graph Applications - GraphSummit ParisNeo4j
Workshop - Architecting Graph Applications
Take part in this hands-on workshop led by Neo4j experts, who will guide you in discovering contextual intelligence. Using a real-life dataset, we will build a graph solution step by step, from designing the graph data model to running queries and visualizing the data. The approach will be applicable across multiple use cases and industries.
Workshop - Innovating with Generative AI and Knowledge GraphsNeo4j
Workshop - Innovating with Generative AI and Knowledge Graphs
Look beyond the media hype around AI and discover practical techniques for using AI responsibly across your organization's data. Explore how to use knowledge graphs to increase accuracy, transparency, and explainability in generative AI systems. You will leave with hands-on experience combining data relationships and LLMs to bring domain-specific context and improve reasoning.
Bring your laptop and we will guide you through setting up your own generative AI stack, providing practical, coded examples to get you started in minutes.
Neo4j - Product Vision and Knowledge Graphs - GraphSummit ParisNeo4j
Dr. Jesús Barrasa, Head of Solutions Architecture for EMEA, Neo4j
Discover the latest Neo4j innovations, including the newest cloud integrations and product enhancements that make Neo4j an essential choice for developers building applications with interconnected data and generative AI.
SOPRA STERIA - GraphRAG: pushing past the limits of RAG through the use of ...Neo4j
Romain CAMPOURCY – Architecte Solution, Sopra Steria
Patrick MEYER – Architecte IA Groupe, Sopra Steria
Retrieval-Augmented Generation (RAG) enables answering users' questions about a business domain with large language models. The technique works well when the documentation is simple, but runs into limitations as soon as the sources are complex. Drawing on a project we delivered, we will present GraphRAG, a new approach that uses a generated Neo4j database to improve document understanding and information synthesis. This method outperforms the RAG approach by providing more holistic and precise answers.
ADEO - Knowledge Graphs for e-commerce, between challenges and opportunities ...Neo4j
Charles Gouwy, Business Product Leader, Adeo Services (Groupe Leroy Merlin)
With their Knowledge Graph already integrated across all the purchase experiences of their e-commerce platform for more than 3 years, we will see what new opportunities and challenges are still opening up for them thanks to their use of a graph database and the emergence of AI.
GraphSummit Paris - The art of the possible with Graph TechnologyNeo4j
Sudhir Hasbe, Chief Product Officer, Neo4j
Join us as we explore breakthrough innovations enabled by interconnected data and AI. Discover firsthand how organizations use relationships in data to uncover contextual insights and solve our most pressing challenges – from optimizing supply chains, detecting fraud, and improving customer experiences to accelerating drug discoveries.
Dr. Sean Tan, Head of Data Science, Changi Airport Group
Discover how Changi Airport Group (CAG) leverages graph technologies and generative AI to revolutionize their search capabilities. This session delves into the unique search needs of CAG’s diverse passengers and customers, showcasing how graph data structures enhance the accuracy and relevance of AI-generated search results, mitigating the risk of “hallucinations” and improving the overall customer journey.
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor...Neo4j
Leonard Jayamohan, Partner & Generative AI Lead, Deloitte
This keynote will reveal how Deloitte leverages Neo4j’s graph power for groundbreaking digital twin solutions, achieving a staggering 100x performance boost. Discover the essential role knowledge graphs play in successful generative AI implementations. Plus, get an exclusive look at an innovative Neo4j + Generative AI solution Deloitte is developing in-house.
Maruthi Prithivirajan, Head of ASEAN & IN Solution Architecture, Neo4j
Get an inside look at the latest Neo4j innovations that enable relationship-driven intelligence at scale. Learn more about the newest cloud integrations and product enhancements that make Neo4j an essential choice for developers building apps with interconnected data and generative AI.
GraphSummit Singapore | The Art of the Possible with Graph - Q2 2024Neo4j
Neha Bajwa, Vice President of Product Marketing, Neo4j
Join us as we explore breakthrough innovations enabled by interconnected data and AI. Discover firsthand how organizations use relationships in data to uncover contextual insights and solve our most pressing challenges – from optimizing supply chains, detecting fraud, and improving customer experiences to accelerating drug discoveries.
Sudheer Mechineni, Head of Application Frameworks, Standard Chartered Bank
Discover how Standard Chartered Bank harnessed the power of Neo4j to transform complex data access challenges into a dynamic, scalable graph database solution. This keynote will cover their journey from initial adoption to deploying a fully automated, enterprise-grade causal cluster, highlighting key strategies for modelling organisational changes and ensuring robust disaster recovery. Learn how these innovations have not only enhanced Standard Chartered Bank’s data infrastructure but also positioned them as pioneers in the banking sector’s adoption of graph technology.
GraphAware - Transforming policing with graph-based intelligence analysisNeo4j
Petr Matuska, Sales & Sales Engineering Lead, GraphAware
Western Australia Police Force’s adoption of Neo4j and the GraphAware Hume graph analytics platform marks a significant advancement in data-driven policing. Facing the challenges of growing volumes of valuable data scattered in disconnected silos, the organisation successfully implemented Neo4j database and Hume, consolidating data from various sources into a dynamic knowledge graph. The result was a connected view of intelligence, making it easier for analysts to solve crime faster. The partnership between Neo4j and GraphAware in this project demonstrates the transformative impact of graph technology on law enforcement’s ability to leverage growing volumes of valuable data to prevent crime and protect communities.
GraphSummit Stockholm - Neo4j - Knowledge Graphs and Product UpdatesNeo4j
David Pond, Lead Product Manager, Neo4j
Get an inside look at the latest Neo4j innovations that enable relationship-driven intelligence at scale. Learn more about the newest cloud integrations and product enhancements that make Neo4j an essential choice for developers building apps with interconnected data and generative AI.
Shirley Bacso, Data Architect, Ingka Digital
“Linked Metadata by Design” represents the integration of the outcomes from human collaboration, starting from the design phase of data product development. This knowledge is captured in the Data Knowledge Graph. It not only enables data products to be robust and compliant but also well-understood and effectively utilized.
Your enemies use GenAI too - staying ahead of fraud with Neo4jNeo4j
Delivered by Michael Down at Gartner Data & Analytics Summit London 2024 - Your enemies use GenAI too: Staying ahead of fraud with Neo4j.
Fraudsters exploit the latest technologies like generative AI to stay undetected. Static applications can’t adapt quickly enough. Learn why you should build flexible fraud detection apps on Neo4j’s native graph database combined with advanced data science algorithms. Uncover complex fraud patterns in real-time and shut down schemes before they cause damage.
BT & Neo4j _ How Knowledge Graphs help BT deliver Digital Transformation.pptxNeo4j
Delivered by Sreenath Gopalakrishna, Director of Software Engineering at BT, and Dr Jim Webber, Chief Scientist at Neo4j, at Gartner Data & Analytics Summit London 2024 this presentation examines how knowledge graphs and GenAI combine in real-world solutions.
BT Group has used the Neo4j Graph Database to enable impressive digital transformation programs over the last 6 years. By re-imagining their operational support systems to adopt self-serve and data-led principles, they have substantially reduced the number of applications and the complexity of their operations. The result has been a substantial reduction in risk and costs while improving time to value, innovation, and process automation. Future innovation plans include exploring uses of EKG + Generative AI.
Workshop: Enabling GenAI Breakthroughs with Knowledge Graphs - GraphSummit MilanNeo4j
Look beyond the hype and unlock practical techniques to responsibly activate intelligence across your organization’s data with GenAI. Explore how to use knowledge graphs to increase accuracy, transparency, and explainability within generative AI systems. You’ll depart with hands-on experience combining relationships and LLMs for increased domain-specific context and enhanced reasoning.
Workshop 1. Architecting Innovative Graph Applications
Join this hands-on workshop for beginners led by Neo4j experts guiding you to systematically uncover contextual intelligence. Using a real-life dataset we will build step-by-step a graph solution; from building the graph data model to running queries and data visualization. The approach will be applicable across multiple use cases and industries.
UI5con 2024 - Boost Your Development Experience with UI5 Tooling ExtensionsPeter Muessig
The UI5 tooling is the development and build tooling of UI5. It is built in a modular and extensible way so that it can easily be extended to your needs. This session showcases various tooling extensions that can greatly boost your development experience: truly working offline, transpiling the code in your project to use even newer ECMAScript versions (beyond the 2022 currently supported by the UI5 tooling), consuming any npm package of your choice in your project, using different kinds of proxies, and even stitching UI5 projects together during development to mimic your target environment.
UI5con 2024 - Bring Your Own Design SystemPeter Muessig
How do you combine the OpenUI5/SAPUI5 programming model with a design system that makes its controls available as Web Components? Since OpenUI5/SAPUI5 1.120, the framework supports the integration of any Web Components. This makes it possible, for example, to natively embed own Web Components of your design system which are created with Stencil. The integration embeds the Web Components in a way that they can be used naturally in XMLViews, like with standard UI5 controls, and can be bound with data binding. Learn how you can also make use of the Web Components base class in OpenUI5/SAPUI5 to also integrate your Web Components and get inspired by the solution to generate a custom UI5 library providing the Web Components control wrappers for the native ones.
Zoom is a comprehensive platform designed to connect individuals and teams efficiently. With its user-friendly interface and powerful features, Zoom has become a go-to solution for virtual communication and collaboration. It offers a range of tools, including virtual meetings, team chat, VoIP phone systems, online whiteboards, and AI companions, to streamline workflows and enhance productivity.
8 Best Automated Android App Testing Tool and Framework in 2024.pdfkalichargn70th171
When it comes to mobile operating systems, two major players dominate the market: Android and iOS. With Android in the lead, software development companies are focused on delivering apps compatible with this OS. Ensuring an app's functionality across various Android devices, OS versions, and hardware specifications is critical, making Android app testing essential.
Software Engineering, Software Consulting, Tech Lead, Spring Boot, Spring Cloud, Spring Core, Spring JDBC, Spring Transaction, Spring MVC, OpenShift Cloud Platform, Kafka, REST, SOAP, LLD & HLD.
E-commerce Development Services- Hornet DynamicsHornet Dynamics
For any business hoping to succeed in the digital age, having a strong online presence is crucial. We offer Ecommerce Development Services that are customized according to your business requirements and client preferences, enabling you to create a dynamic, safe, and user-friendly online store.
Hand Rolled Applicative User ValidationCode KataPhilip Schwarz
Could you use a simple piece of Scala validation code (granted, a very simplistic one too!) that you can rewrite, now and again, to refresh your basic understanding of Applicative operators <*>, <*, *>?
The goal is not to write perfect code showcasing validation, but rather to provide a small, rough-and-ready exercise to reinforce your muscle memory.
Despite its grandiose-sounding title, this deck consists of just three slides showing the Scala 3 code to be rewritten whenever the details of the operators begin to fade away.
The code is my rough and ready translation of a Haskell user-validation program found in a book called Finding Success (and Failure) in Haskell - Fall in love with applicative functors.
Flutter is a popular open source, cross-platform framework developed by Google. In this webinar we'll explore Flutter and its architecture, delve into the Flutter Embedder and Flutter’s Dart language, discover how to leverage Flutter for embedded device development, learn about Automotive Grade Linux (AGL) and its consortium and understand the rationale behind AGL's choice of Flutter for next-gen IVI systems. Don’t miss this opportunity to discover whether Flutter is right for your project.
E-Invoicing Implementation: A Step-by-Step Guide for Saudi Arabian CompaniesQuickdice ERP
Explore the seamless transition to e-invoicing with this comprehensive guide tailored for Saudi Arabian businesses. Navigate the process effortlessly with step-by-step instructions designed to streamline implementation and enhance efficiency.
The most important new features of Oracle 23c for DBAs and Developers. You can get more detail from my YouTube channel video: https://youtu.be/XvL5WtaC20A
How Can Hiring A Mobile App Development Company Help Your Business Grow?ToXSL Technologies
ToXSL Technologies is an award-winning Mobile App Development Company in Dubai that helps businesses reshape their digital possibilities with custom app services. As a top app development company in Dubai, we offer highly engaging iOS & Android app solutions. https://rb.gy/necdnt
Graspan: A Big Data System for Big Code AnalysisAftab Hussain
We built a disk-based parallel graph system, Graspan, that uses a novel edge-pair centric computation model to compute dynamic transitive closures on very large program graphs.
We implement context-sensitive pointer/alias and dataflow analyses on Graspan. An evaluation of these analyses on large codebases such as Linux shows that their Graspan implementations scale to millions of lines of code and are much simpler than their original implementations.
These analyses were used to augment the existing checkers; these augmented checkers found 132 new NULL pointer bugs and 1308 unnecessary NULL tests in Linux 4.4.0-rc5, PostgreSQL 8.3.9, and Apache httpd 2.2.18.
- Accepted in ASPLOS ‘17, Xi’an, China.
- Featured in the tutorial, Systemized Program Analyses: A Big Data Perspective on Static Analysis Scalability, ASPLOS ‘17.
- Invited for presentation at SoCal PLS ‘16.
- Invited for poster presentation at PLDI SRC ‘16.
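The transitive-closure computation at the heart of such analyses can be sketched with a simple worklist algorithm. This toy version is in-memory Python over a tiny invented edge set; Graspan's contribution is performing this kind of edge-pair join out-of-core over partitions of very large program graphs:

```python
from collections import defaultdict

def transitive_closure(edges):
    """Worklist fixed point: join each new edge (a, b) with its neighbours,
    adding (a, c) for every b->c and (p, b) for every p->a, until stable."""
    succ, pred = defaultdict(set), defaultdict(set)
    closure, worklist = set(), []

    def add(a, b):
        if (a, b) not in closure:
            closure.add((a, b))
            succ[a].add(b)
            pred[b].add(a)
            worklist.append((a, b))

    for a, b in edges:
        add(a, b)
    while worklist:
        a, b = worklist.pop()
        for c in list(succ[b]):
            add(a, c)
        for p in list(pred[a]):
            add(p, b)
    return closure

closure = transitive_closure({(1, 2), (2, 3), (3, 4)})
# closure now also contains the derived edges (1, 3), (1, 4) and (2, 4).
```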
KLARNA - Language Models and Knowledge Graphs: A Systems Approach
1. Unlocking the Potential of Knowledge
Graphs and LLMs: A Systems Approach
Lucas Smedley | Applied AI at Klarna
2. LLMs and knowledge graphs are currently
in a state of purgatory
[Diagram: "KGs x LLMs" suspended between Heaven* and Hell]
* heaven = production grade applications and systems
3. Rebuild our knowledge graph stack from the ground up with LLMs embedded
throughout, in order to maximise their potential.
We must take a systems approach to integrating
LLMs with knowledge graphs
4. ● Knowledge is patterns
○ LLMs and patterns
○ Knowledge graphs and patterns
● Current state of integration between knowledge graphs and LLMs
○ The challenges of production use cases
● The future and how we can get there
○ Taking a systems approach
○ Helpful tips
Today we will cover:
6. How do we know a dog is a dog?
Knowledge | Current State | Future State
7. Pattern: Four legs, a tail, barking, a wet nose, arguing with a cat.
How do we know a dog is a dog?
8. How do we know when someone likes us?
9. Pattern: Eye contact, making time for you and showing interest. Laughing at our bad
jokes.
How do we know when someone likes us?
10. How do we detect fraud in financial transactions?
11. Pattern: Unusual spending patterns; transactions across multiple geographies in a short space of time, transactions at unusual times, transactions on unexpected goods or services.
How do we detect fraud in financial transactions?
12. For LLMs, pattern prediction is grounded in predicting the next token accurately. In learning to do this, they in turn develop intelligence.
LLMs are just large scale pattern recognition and
prediction engines
13. Knowledge graphs are our opinionated version of the
patterns in the world we care about
Our knowledge graph schemas, ontologies and taxonomies are effectively the data
points and patterns in the world we care about within a certain context.
14. Pattern in a regular DB:
Dog Cat
15. Graphs are much richer:
Dog -[Eats]-> Cat
16. Graphs are much richer:
Dog -[Loves]-> Cat
17. By focusing on patterns, we can design systems where LLMs and knowledge graphs are
interoperable and intrinsically connected.
Both technologies are complementary. LLMs are very good at synthesizing large
quantities of unstructured data. Knowledge graphs provide frameworks for structured,
easily navigable representations of that information.
Knowledge Graphs and LLMs are similar in their focus
on patterns
18. Knowledge Graphs and LLMs:
The Current State of Play
19. Typically involves turning a natural language query into a structured Cypher query, which is executed against the graph. The result is then turned into a natural language answer.
Use Case 1: Using LLMs to query graphs
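A skeletal version of that loop might look as follows. Here `call_llm` and `run_cypher` are placeholders for a chat-completion API and a Neo4j driver session, and the schema, generated query, and supplier name are invented for illustration:

```python
# `call_llm` and `run_cypher` are stand-ins: in a real system they would wrap
# an LLM API and `session.run(query)` against a live Neo4j graph.

SCHEMA = "(:Supplier {name, location})-[:PRODUCES]->(:Product {category})"

def call_llm(prompt):
    # Canned response standing in for LLM-generated Cypher.
    return ("MATCH (s:Supplier)-[:PRODUCES]->(p:Product {category: 'knitwear'}) "
            "RETURN s.name")

def run_cypher(query):
    # Canned rows standing in for execution against a live graph.
    return [{"s.name": "Firenze Inc"}]

def answer(question):
    # Ground the LLM in the graph schema so the generated Cypher is valid,
    # execute it, then verbalise the rows (here: plain formatting).
    cypher = call_llm(f"Schema: {SCHEMA}\nWrite a Cypher query for: {question}")
    rows = run_cypher(cypher)
    return f"{question} -> {rows}"

result = answer("Which suppliers produce knitwear?")
```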
20. Involves taking unstructured information and processing it into the more structured form of a knowledge graph.
Use Case 2: Using LLMs to build knowledge graphs
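A minimal sketch of that extraction step, with `extract_triples` standing in for a schema-constrained LLM call (the text and the triples it returns are invented):

```python
def extract_triples(text):
    # Placeholder for an LLM extraction call; returns canned triples.
    return [("Firenze Inc", "LOCATED_IN", "Italy"),
            ("Firenze Inc", "PRODUCES", "knitwear")]

def build_graph(texts):
    """Fold extracted (subject, relation, object) triples into a node/edge set."""
    graph = {"nodes": set(), "edges": set()}
    for text in texts:
        for s, rel, o in extract_triples(text):
            graph["nodes"].update([s, o])
            graph["edges"].add((s, rel, o))
    return graph

g = build_graph(["Firenze Inc is an Italian producer of high quality knitwear."])
```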
21. ● Results often don't hold up at production-scale graph size and complexity
But we are encountering a lot of issues
22. ● Results often don't hold up at production-scale graph size and complexity
● Querying graphs
○ LLMs don’t receive enough information to generate correct cypher queries
○ Schemas have not been designed with querying in mind
○ Infrastructure is not optimised for LLMs
But we are encountering a lot of issues
23. ● Results often don't hold up at production-scale graph size and complexity
● Querying graphs
○ LLMs don’t receive enough information to generate correct cypher queries
○ Schemas have not been designed with querying in mind
○ Infrastructure is not optimised for LLMs
● Building Graphs
○ Examples are often shown with ‘schemaless’ graphs
○ Unable to handle edge cases
○ Unreliable output when populating graphs
○ Generated schemas are of low quality
But we are encountering a lot of issues
24. Current solution space for KGs and LLMs
Building | Querying
25. Building -[Informs]-> Querying
To reach production we need to have a greater focus
on building the system
26. So how do we get there?
Solutions -> Systems
27. Objective: building schemas effectively
We should treat our LLM like a new employee in terms of the context we provide
Building and populating graphs
28. Building and populating graphs with the system in mind
Query: find me a jumper supplier in italy?
Node 1:
* Supplier name: Firenze Inc
* Location: Italy
Node 2:
* Supplier name: Firenze Inc
* Location: Italy
* Desc: Producer of high quality knitwear and wool garments
29. Objective: turn messy data into structured data
Lean into better documentation and structured frameworks like Pydantic to do better extraction.
Populating graphs
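One way to sketch that idea with only the standard library, using a dataclass plus a manual field check in place of a real Pydantic `BaseModel` (which would add type coercion, validation errors, and a JSON schema to embed in the prompt). The supplier data is invented:

```python
import json
from dataclasses import dataclass

# Stdlib stand-in for a Pydantic model.

@dataclass
class Supplier:
    name: str
    location: str
    products: list

def parse_supplier(llm_output):
    """Validate the LLM's JSON output before it is written into the graph."""
    data = json.loads(llm_output)
    missing = {"name", "location", "products"} - data.keys()
    if missing:
        raise ValueError(f"LLM output missing fields: {sorted(missing)}")
    return Supplier(**data)

raw = ('{"name": "Firenze Inc", "location": "Italy", '
       '"products": ["knitwear", "wool garments"]}')
supplier = parse_supplier(raw)
```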
30. Objective: use queries and responses as feedback to improve the system
Cache questions and their Cypher, and pass them to the LLM as few-shot examples
Adjust schemas or add descriptions and relationships based on querying difficulties.
Querying and improving the system
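The caching step might be sketched like this; the cached question and Cypher query are invented examples:

```python
# Cache (question, Cypher) pairs that executed successfully and prepend the
# most recent ones to later prompts as few-shot examples.

few_shot_cache = []

def record_success(question, cypher):
    few_shot_cache.append((question, cypher))

def build_prompt(question, k=3):
    examples = "\n".join(f"Q: {q}\nCypher: {c}" for q, c in few_shot_cache[-k:])
    return f"{examples}\nQ: {question}\nCypher:"

record_success("Which suppliers are in Italy?",
               "MATCH (s:Supplier {location: 'Italy'}) RETURN s.name")
prompt = build_prompt("Which suppliers are in France?")
```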
31. Today we covered:
● Knowledge is patterns
○ LLMs and patterns
○ Knowledge graphs and patterns
● Current state of integration between knowledge graphs and LLMs
○ The challenges of production use cases
● The future and how we can get there
○ Taking a systems approach
○ Helpful tips
32. ● Intro to LLMs
● Pydantic is all you need
● Ilya on Dwarkesh podcast
Essential reading: