Relationships Matter: Using Connected Data for Better Machine Learning (Neo4j)
Relationships are highly predictive of behavior, yet most data science models overlook this information because it's difficult to extract network structure for use in machine learning (ML).
With graphs, relationships are embedded in the data itself, making it practical to add these predictive capabilities to your existing practices.
That’s why we’re presenting and demoing the use of graph-native ML to make breakthrough predictions. This will cover:
- Different approaches to graph feature engineering, from queries and algorithms to embeddings
- How ML techniques leverage everything from classical network science to deep learning and graph convolutional neural networks
- How to generate representations of your graph using graph embeddings, create ML models for link prediction or node classification, and apply these models to add missing information to an existing graph/incoming data
- Why no-code visualization and prototyping is important
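The bullets above cover a lot of machinery, but the simplest graph features are easy to make concrete. The sketch below is not Neo4j's Graph Data Science library; it is a minimal, hypothetical illustration of two classic link-prediction features (common neighbors and Jaccard similarity) computed over a plain adjacency dict, of the kind a downstream ML model could consume.

```python
# Hypothetical toy graph as an adjacency dict; in practice these
# neighborhoods would come from a graph database query.
graph = {
    "alice": {"bob", "carol"},
    "bob": {"alice", "carol", "dan"},
    "carol": {"alice", "bob"},
    "dan": {"bob"},
}

def common_neighbors(g, u, v):
    """Number of shared neighbors: a classic link-prediction feature."""
    return len(g[u] & g[v])

def jaccard(g, u, v):
    """Neighborhood overlap, normalized by total neighborhood size."""
    union = g[u] | g[v]
    return len(g[u] & g[v]) / len(union) if union else 0.0

# Score a candidate (not-yet-existing) edge; higher suggests a likely link.
features = (common_neighbors(graph, "alice", "dan"),
            jaccard(graph, "alice", "dan"))
```

Graph embeddings generalize this idea, replacing hand-picked features like these with learned vectors per node.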
A Connections-first Approach to Supply Chain Optimization (Neo4j)
Supply chain optimization is an unusual balancing act that requires finesse, skill and timely data. For every supply chain, the key questions to answer are:
What to buy? -- What factors determine your optimal product mix and set of suppliers?
How much to buy? -- What are the most and least popular items in any given time interval?
When to buy? -- Long lags in delivery timing may limit your flexibility and influence your inventory management practices.
We will illustrate an API-based solution that uses a graph database platform to add demonstrable value to supply planning.
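As a toy illustration of the "when to buy" question, a classic reorder-point calculation ties purchase timing directly to delivery lag. The numbers below are hypothetical and not from the talk:

```python
def reorder_point(daily_demand, lead_time_days, safety_stock=0):
    """Inventory level at which to place a new order, so that stock
    lasts through the supplier's delivery lag plus a safety buffer."""
    return daily_demand * lead_time_days + safety_stock

# Hypothetical numbers: 40 units/day demand, a 5-day delivery lag,
# and 60 units of safety stock.
rp = reorder_point(40, 5, safety_stock=60)  # reorder at 260 units on hand
```

Longer lead times push the reorder point up, which is exactly how delivery lag constrains flexibility and inventory practice.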
Maximize the Value of Your Data: Neo4j Graph Data Platform (Neo4j)
In this 60-minute conversation with IDC, we will highlight the momentum and reasons why a graph data platform is a breakthrough solution for businesses in need of a flexible data model.
Please join Mohit Sagar, Group Managing Director of CIO Network, as he hosts the conversation with Dr. Christopher Lee Marshall, Associate VP at IDC, and Nik Vora, Vice President of APAC at Neo4j. During this very exciting discussion, you'll discover the insights and knowledge unlocked with the graph data platform.
Data is both our most valuable asset and our biggest ongoing challenge. As data grows in volume, variety and complexity, across applications, clouds and siloed systems, traditional ways of working with data no longer work.
Unlike traditional databases, which arrange data in rows, columns and tables, Neo4j has a flexible structure defined by stored relationships between data records.
We'll discuss:
- The primary use cases for graph databases
- The properties of Neo4j that make those use cases possible
- Visualisation of graphs
- How to write queries
Webinar, 23 July 2020
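To make the "structure defined by stored relationships" idea concrete, here is a minimal sketch in plain Python (not Cypher and not the Neo4j API): nodes with properties, typed relationships, and a traversal playing the role that a query such as `MATCH (:Person {name: 'Ann'})-[:KNOWS]->(p) RETURN p.name` would play in Neo4j.

```python
# Minimal property-graph sketch: labeled nodes and typed relationships,
# mirroring the "stored relationships" model described above.
nodes = {
    1: {"label": "Person", "name": "Ann"},
    2: {"label": "Person", "name": "Ben"},
    3: {"label": "Company", "name": "Acme"},
}
rels = [  # (start_node_id, relationship_type, end_node_id)
    (1, "KNOWS", 2),
    (1, "WORKS_AT", 3),
    (2, "WORKS_AT", 3),
]

def expand(node_id, rel_type):
    """Follow stored relationships of one type from a node --
    the graph analogue of a join, without a join-table scan."""
    return [nodes[end]["name"] for start, t, end in rels
            if start == node_id and t == rel_type]

known = expand(1, "KNOWS")       # whom Ann knows
employer = expand(1, "WORKS_AT")  # where Ann works
```

In a relational model the same question would go through foreign keys and join tables; here the relationship is itself a first-class record to follow.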
Graphs & the Police: How Law Enforcement Analyze Connected Data at Scale (Neo4j)
Law enforcement agencies are trailblazers in using graph analysis to understand connections. Manual and partially automated link analysis tools have been crucial in an investigation and situational awareness capacity for several decades.
Meanwhile, the global explosion in data volumes and sources hasn't been limited to the private sector. Law enforcement agencies, departments and fusion centers draw on a vast array of databases, including Records Management Systems (RMS), Computer-Aided Dispatch (CAD) and countless other sources.
In this webinar, Christian Miles of Cambridge Intelligence (makers of KeyLines) will introduce the benefits of graph technologies for law enforcement. He will show how to use Neo4j with compelling graph visualization techniques to improve performance and analytics when working with large volumes of law enforcement data.
The Five Graphs of Government: How Federal Agencies Can Utilize Graph Technology (Greta Workman)
In this talk from the Neo4j Government GraphDay in DC, Philip Rathle discusses how government agencies are leveraging graph technology to power their applications.
A comparison of relational and graph model theories, with an eye towards DataStax's implementation of Graph. Note: I'm working on a concise, formal mathematical definition of relational, based on Codd's 1970 paper. (Thanks to Artem Chebotko for suggesting this.)
AI, ML and Graph Algorithms: Real-Life Use Cases with Neo4j (Ivan Zoratti)
I gave this presentation at DataOps 19 in Barcelona.
You will find information about Neo4j and how to use it with Graph Algorithms for Machine Learning and Artificial Intelligence.
Software Analytics with Jupyter, Pandas, jQAssistant, and Neo4j [Neo4j Online...] (Markus Harrer)
Let’s tackle problems in software development in an automated, data-driven and reproducible way!
As developers, we often feel that there might be something wrong with the way we develop software. Unfortunately, a gut feeling alone isn’t sufficient for the complex, interconnected problems in software systems.
We need solid, understandable arguments to win budgets for improvement projects or to defend ourselves against political decisions. Still, we can help ourselves: every step in the development or use of software leaves valuable digital traces. With clever analysis, this data can show us the root causes of problems in our software and deliver new insights – understandable for everybody.
If concrete problems and their impact are known, developers and managers can create solutions and take sustainable actions aligned to existing business goals.
In this meetup, I talk about the analysis of software data by using a digital notebook approach. This allows you to express your gut feelings explicitly with the help of hypotheses, explorations and visualizations step by step.
I show the collaboration of open source analysis tools (Jupyter, Pandas, jQAssistant and, of course, Neo4j) to inspect problems in Java applications and their environment. We have a look at performance hotspots, knowledge loss and worthless code parts – completely automated from raw data up to visualizations for management.
Participants learn how they can translate vague gut feelings into solid evidence for obtaining budgets for dedicated improvement projects with the help of data analysis.
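One of the simplest analyses of this kind is a change-frequency hotspot list built from version-control history. The sketch below uses a hypothetical, already-parsed commit log rather than jQAssistant or a real Git repository:

```python
from collections import Counter

# Hypothetical parsed commit log: one entry per (commit, touched file).
# In a real setup this would come from `git log --name-only` output.
commits = [
    ("c1", "src/Billing.java"), ("c2", "src/Billing.java"),
    ("c3", "src/Billing.java"), ("c3", "src/Util.java"),
    ("c4", "src/Report.java"),
]

# Change frequency per file: files changed most often are candidate
# hotspots worth inspecting first.
change_counts = Counter(path for _, path in commits)
hotspots = change_counts.most_common(2)
```

Combined with a complexity or ownership measure per file, a ranking like this turns a gut feeling ("Billing is a mess") into a chart management can act on.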
Massive-Scale Analytics Applied to Real-World Problems (inside-BigData.com)
In this deck from PASC18, David Bader from Georgia Tech presents: Massive-Scale Analytics Applied to Real-World Problems.
"Emerging real-world graph problems include: detecting and preventing disease in human populations; revealing community structure in large social networks; and improving the resilience of the electric power grid. Unlike traditional applications in computational science and engineering, solving these social problems at scale often raises new challenges because of the sparsity and lack of locality in the data, the need for research on scalable algorithms and development of frameworks for solving these real-world problems on high performance computers, and for improved models that capture the noise and bias inherent in the torrential data streams. In this talk, Bader will discuss the opportunities and challenges in massive data-intensive computing for applications in social sciences, physical sciences, and engineering."
Watch the video: https://wp.me/p3RLHQ-iPk
Learn more: https://pasc18.pasc-conference.org/
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
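The coarsest form of the community structure Bader mentions is connected components, which already fits in a few lines. This toy sketch (plain breadth-first search over a hypothetical edge list) is nothing like the scalable frameworks the talk is about, but it shows the kind of structure those frameworks compute:

```python
from collections import deque

# Toy undirected network with two obvious communities.
edges = [("a", "b"), ("b", "c"), ("x", "y")]
adj = {}
for u, v in edges:
    adj.setdefault(u, set()).add(v)
    adj.setdefault(v, set()).add(u)

def components(adj):
    """Breadth-first search for connected components: the coarsest
    notion of community structure in a network."""
    seen, comps = set(), []
    for start in adj:
        if start in seen:
            continue
        comp, queue = set(), deque([start])
        while queue:
            n = queue.popleft()
            if n in comp:
                continue
            comp.add(n)
            queue.extend(adj[n] - comp)
        seen |= comp
        comps.append(comp)
    return comps

groups = components(adj)
```

At the scales the talk describes, the sparsity and lack of locality in such data are exactly what makes this simple traversal hard to run efficiently on real hardware.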
Abstract: Knowledge has played a significant role in human activities since the dawn of civilization. Data mining is the process of knowledge discovery, in which knowledge is gained by analyzing the data stored in very large repositories from various perspectives and summarizing the results into useful information. Given the importance of extracting knowledge from large data repositories, data mining has become an important branch of engineering, affecting human life in various spheres directly or indirectly. The purpose of this paper is to survey many of the future trends in the field of data mining, with a focus on those thought to have the most promise and applicability to future data mining applications.
Keywords: Current and Future of Data Mining, Data Mining, Data Mining Trends, Data mining Applications.
BIMCV, Banco de Imagen Medica de la Comunidad Valenciana (María de la Iglesia)
According to Hal Varian (an expert in microeconomics and the economics of information and, since 2002, Chief Economist at Google): "In the coming years, the most attractive job will be that of the statistician: the ability to collect data, understand it, process it, extract its value, visualize it and communicate it will all be important skills in the coming decades. We now have free and ubiquitous data. What is still missing is the ability to understand that data."
Data Science in 2016: Moving up by Paco Nathan at Big Data Spain 2015 (Big Data Spain)
The term 'Data Science' was first described in scientific literature about 15 years ago. It started to become a major trend in industry about 7 years ago.
O'Reilly Media surveys the industry extensively each year. In addition we get a good birds-eye view of industry trends through our conference programs and publications, working closely with some of the best practitioners in Data Science.
By now, the field has evolved far beyond its origins, eclipsing an earlier generation of Business Intelligence and Data Warehousing approaches. Data Science is moving up, into the business verticals and government spheres of influence where it has true global impact.
This talk considers Data Science trends from the past three years in particular. What is emerging? Which parts are evolving? Which seem cluttered and poised for consolidation or other change?
Session presented at Big Data Spain 2015 Conference
15th Oct 2015
Kinépolis Madrid
http://www.bigdataspain.org
Event promoted by: http://www.paradigmatecnologico.com
Abstract: http://www.bigdataspain.org/program/thu/slot-2.html
International Journal of Data Mining & Knowledge Management Process (IJDKP)
Data mining and knowledge discovery in databases have been attracting a significant amount of research, industry, and media attention of late. There is an urgent need for a new generation of computational theories and tools to assist researchers in extracting useful information from the rapidly growing volumes of digital data.
This Journal provides a forum for researchers who address this issue and to present their work in a peer-reviewed open access forum. Authors are solicited to contribute to the Journal by submitting articles that illustrate research results, projects, surveying works and industrial experiences that describe significant advances in the following areas, but are not limited to these topics only.
Topics of interest include, but are not limited to, the following:
Data mining foundations
Parallel and Distributed Data Mining Algorithms, Data Streams Mining, Graph Mining, Spatial Data Mining, Text, Video and Multimedia Data Mining, Web Mining, Pre-Processing Techniques, Visualization, Security and Information Hiding in Data Mining
Data mining Applications
Databases, Bioinformatics, Biometrics, Image Analysis, Financial Modeling, Forecasting, Classification, Clustering, Social Networks, Educational Data Mining
Knowledge Processing
Data and Knowledge Representation, Knowledge Discovery Framework and Process, Including Pre- and Post-Processing, Integration of Data Warehousing, OLAP and Data Mining, Integrating Constraints and Knowledge in the KDD Process, Exploratory Data Analysis, Inference of Causes, Prediction, Evaluating, Consolidating and Explaining Discovered Knowledge, Statistical Techniques for Generating a Robust, Consistent Data Model, Interactive Data Exploration/Visualization and Discovery, Languages and Interfaces for Data Mining, Mining Trends, Opportunities and Risks, Mining from Low-Quality Information Sources
Important Dates
Submission Deadline : August 23, 2020
Notification : September 23, 2020
Final Manuscript Due : October 01, 2020
Publication Date : Determined by the Editor-in-Chief
This talk presents areas of investigation underway at the Rensselaer Institute for Data Exploration and Applications. First presented at Flipkart, Bangalore India, 3/2015.
Similar to Using Graphs to Enable National-Scale Analytics:
SOPRA STERIA - GraphRAG: pushing back the limitations of RAG through the use of ... (Neo4j)
Romain CAMPOURCY, Solution Architect, Sopra Steria
Patrick MEYER, Group AI Architect, Sopra Steria
Retrieval-Augmented Generation (RAG) makes it possible to answer user questions about a business domain with the help of large language models. This technique works well when the documentation is simple, but runs into limitations as soon as the sources are complex. Drawing on a project we carried out, we will present GraphRAG, a new approach that uses a generated Neo4j database to improve document understanding and information synthesis. This method outperforms the RAG approach by providing more holistic and precise answers.
ADEO - A Knowledge Graph for e-commerce: challenges and opportunities ... (Neo4j)
Charles Gouwy, Business Product Leader, Adeo Services (Groupe Leroy Merlin)
While their Knowledge Graph has already been integrated across all the purchase experiences of their e-commerce platform for more than 3 years, we will look at the new opportunities and challenges that are still opening up to them thanks to their use of a graph database and the emergence of AI.
GraphSummit Paris - The Art of the Possible with Graph Technology (Neo4j)
Sudhir Hasbe, Chief Product Officer, Neo4j
Join us as we explore breakthrough innovations enabled by interconnected data and AI. Discover firsthand how organizations use relationships in data to uncover contextual insights and solve our most pressing challenges – from optimizing supply chains, detecting fraud, and improving customer experiences to accelerating drug discoveries.
Dr. Sean Tan, Head of Data Science, Changi Airport Group
Discover how Changi Airport Group (CAG) leverages graph technologies and generative AI to revolutionize their search capabilities. This session delves into the unique search needs of CAG’s diverse passengers and customers, showcasing how graph data structures enhance the accuracy and relevance of AI-generated search results, mitigating the risk of “hallucinations” and improving the overall customer journey.
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor... (Neo4j)
Leonard Jayamohan, Partner & Generative AI Lead, Deloitte
This keynote will reveal how Deloitte leverages Neo4j’s graph power for groundbreaking digital twin solutions, achieving a staggering 100x performance boost. Discover the essential role knowledge graphs play in successful generative AI implementations. Plus, get an exclusive look at an innovative Neo4j + Generative AI solution Deloitte is developing in-house.
Maruthi Prithivirajan, Head of ASEAN & IN Solution Architecture, Neo4j
Get an inside look at the latest Neo4j innovations that enable relationship-driven intelligence at scale. Learn more about the newest cloud integrations and product enhancements that make Neo4j an essential choice for developers building apps with interconnected data and generative AI.
GraphSummit Singapore | The Art of the Possible with Graph - Q2 2024 (Neo4j)
Neha Bajwa, Vice President of Product Marketing, Neo4j
Join us as we explore breakthrough innovations enabled by interconnected data and AI. Discover firsthand how organizations use relationships in data to uncover contextual insights and solve our most pressing challenges – from optimizing supply chains, detecting fraud, and improving customer experiences to accelerating drug discoveries.
Sudheer Mechineni, Head of Application Frameworks, Standard Chartered Bank
Discover how Standard Chartered Bank harnessed the power of Neo4j to transform complex data access challenges into a dynamic, scalable graph database solution. This keynote will cover their journey from initial adoption to deploying a fully automated, enterprise-grade causal cluster, highlighting key strategies for modelling organisational changes and ensuring robust disaster recovery. Learn how these innovations have not only enhanced Standard Chartered Bank’s data infrastructure but also positioned them as pioneers in the banking sector’s adoption of graph technology.
GraphAware - Transforming Policing with Graph-Based Intelligence Analysis (Neo4j)
Petr Matuska, Sales & Sales Engineering Lead, GraphAware
Western Australia Police Force’s adoption of Neo4j and the GraphAware Hume graph analytics platform marks a significant advancement in data-driven policing. Facing the challenges of growing volumes of valuable data scattered in disconnected silos, the organisation successfully implemented Neo4j database and Hume, consolidating data from various sources into a dynamic knowledge graph. The result was a connected view of intelligence, making it easier for analysts to solve crime faster. The partnership between Neo4j and GraphAware in this project demonstrates the transformative impact of graph technology on law enforcement’s ability to leverage growing volumes of valuable data to prevent crime and protect communities.
GraphSummit Stockholm - Neo4j - Knowledge Graphs and Product Updates (Neo4j)
David Pond, Lead Product Manager, Neo4j
Get an inside look at the latest Neo4j innovations that enable relationship-driven intelligence at scale. Learn more about the newest cloud integrations and product enhancements that make Neo4j an essential choice for developers building apps with interconnected data and generative AI.
Shirley Bacso, Data Architect, Ingka Digital
“Linked Metadata by Design” represents the integration of the outcomes from human collaboration, starting from the design phase of data product development. This knowledge is captured in the Data Knowledge Graph. It not only enables data products to be robust and compliant but also well-understood and effectively utilized.
Your Enemies Use GenAI Too - Staying Ahead of Fraud with Neo4j (Neo4j)
Delivered by Michael Down at Gartner Data & Analytics Summit London 2024 - Your enemies use GenAI too: Staying ahead of fraud with Neo4j.
Fraudsters exploit the latest technologies like generative AI to stay undetected. Static applications can’t adapt quickly enough. Learn why you should build flexible fraud detection apps on Neo4j’s native graph database combined with advanced data science algorithms. Uncover complex fraud patterns in real-time and shut down schemes before they cause damage.
BT & Neo4j: How Knowledge Graphs Help BT Deliver Digital Transformation (Neo4j)
Delivered by Sreenath Gopalakrishna, Director of Software Engineering at BT, and Dr Jim Webber, Chief Scientist at Neo4j, at Gartner Data & Analytics Summit London 2024 this presentation examines how knowledge graphs and GenAI combine in real-world solutions.
BT Group has used the Neo4j Graph Database to enable impressive digital transformation programs over the last 6 years. By re-imagining their operational support systems to adopt self-serve and data lead principles they have substantially reduced the number of applications and complexity of their operations. The result has been a substantial reduction in risk and costs while improving time to value, innovation, and process automation. Future innovation plans include the exploration of uses of EKG + Generative AI.
Workshop: Enabling GenAI Breakthroughs with Knowledge Graphs - GraphSummit Milan (Neo4j)
Look beyond the hype and unlock practical techniques to responsibly activate intelligence across your organization’s data with GenAI. Explore how to use knowledge graphs to increase accuracy, transparency, and explainability within generative AI systems. You’ll depart with hands-on experience combining relationships and LLMs for increased domain-specific context and enhanced reasoning.
Workshop 1. Architecting Innovative Graph Applications
Join this hands-on workshop for beginners led by Neo4j experts guiding you to systematically uncover contextual intelligence. Using a real-life dataset we will build step-by-step a graph solution; from building the graph data model to running queries and data visualization. The approach will be applicable across multiple use cases and industries.
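As a minimal sketch of how a knowledge graph can ground a generative AI system (my illustration, not the workshop's material): facts around an entity are retrieved from a hypothetical triple store and rendered as plain-text context a language model is asked to stay consistent with, which is one way relationships add domain-specific context and reduce hallucination.

```python
# Hypothetical mini knowledge graph: (subject, predicate, object) triples.
triples = [
    ("Acme", "HEADQUARTERED_IN", "Berlin"),
    ("Acme", "SUPPLIES", "WidgetCo"),
    ("WidgetCo", "HEADQUARTERED_IN", "Oslo"),
]

def graph_context(entity, triples):
    """Collect the facts touching an entity and render them as plain
    sentences an LLM prompt can cite, grounding its answer."""
    facts = [f"{s} {p.replace('_', ' ').lower()} {o}"
             for s, p, o in triples if entity in (s, o)]
    return "\n".join(facts)

prompt_context = graph_context("Acme", triples)
# prompt_context would be prepended to the user's question before
# calling the model, so answers can be checked against stated facts.
```

Because every sentence in the context traces back to a graph record, the approach also gives the transparency and explainability the workshop description emphasizes.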
LARUS - Galileo.XAI and Gen-AI: LARUS's new perspective on the future of... (Neo4j)
Roberto Sannino, Larus Business Automation
In the increasingly complex landscape of graph-based projects, LARUS has built solid, multi-year experience, establishing a relationship of trust and collaboration with Neo4j. Through LARUS Labs, it has developed components and connectors that enrich the Neo4j ecosystem, contributing to its continuous evolution. All of this know-how has been channeled into LARUS's innovative Galileo.XAI solution, a cutting-edge product that, integrated with Generative AI, offers a new perspective on Explainable Artificial Intelligence applied to graphs. This talk will explore LARUS's growth in this field, highlighting the potential of Galileo.XAI to drive innovation and digital transformation.
GraphSummit Milan - Neo4j product vision and roadmap (Neo4j)
Ivan Zoratti, VP of Product Management, Neo4j
Discover the latest Neo4j innovations that enable relationship-driven intelligence at scale. Learn about the newest cloud integrations and product enhancements that make Neo4j an essential choice for developers building applications with interconnected data and generative AI.
GraphSummit Milan & Stockholm - Neo4j: The Art of the Possible with Graph (Neo4j)
Dr Jesús Barrasa, Head of Solutions Architecture for EMEA, Neo4j
Join us as we explore breakthrough innovations enabled by interconnected data and AI. Discover firsthand how organizations use relationships in data to uncover contextual insights and solve our most pressing challenges – from optimizing supply chains, detecting fraud, and improving customer experiences to accelerating drug discoveries.
Connector Corner: Automate dynamic content and events by pushing a button (DianaGray10)
Here is something new! In our next Connector Corner webinar, we will demonstrate how you can use a single workflow to:
Create a campaign using Mailchimp with merge tags/fields
Send an interactive Slack channel message (using buttons)
Have the message received by managers and peers along with a test email for review
But there’s more:
In a second workflow supporting the same use case, you’ll see:
Your campaign sent to target colleagues for approval
If the “Approve” button is clicked, a Jira/Zendesk ticket is created for the marketing design team
But—if the “Reject” button is pushed, colleagues will be alerted via Slack message
Join us to learn more about this new, human-in-the-loop capability, brought to you by Integration Service connectors.
Speakers:
Akshay Agnihotri, Product Manager
Charlie Greenberg, Host
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024Tobias Schneck
As AI technology pushes into IT, I asked myself, as an "infrastructure container Kubernetes guy": how does this fancy AI technology get managed from an infrastructure operations point of view? Is it possible to apply our beloved cloud-native principles as well? What benefits could the two technologies bring to each other?
Let me take these questions and guide you on a short journey through existing deployment models and use cases for AI software. Using practical examples, we will discuss what cloud/on-premise strategy we may need to apply them to our own infrastructure and make them work from an enterprise perspective. I will give an overview of the infrastructure requirements and technologies that could benefit or limit your AI use cases in an enterprise environment. An interactive demo will give you some insights into the approaches I have already gotten working for real.
UiPath Test Automation using UiPath Test Suite series, part 4DianaGray10
Welcome to part 4 of the UiPath Test Automation using UiPath Test Suite series. In this session, we will cover a Test Manager overview along with the SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimize testing processes in SAP environments using heatmap visualization techniques.
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered QualityInflectra
In this insightful webinar, Inflectra explores how artificial intelligence (AI) is transforming software development and testing. Discover how AI-powered tools are revolutionizing every stage of the software development lifecycle (SDLC), from design and prototyping to testing, deployment, and monitoring.
Learn about:
• The Future of Testing: How AI is shifting testing towards verification, analysis, and higher-level skills, while reducing repetitive tasks.
• Test Automation: How AI-powered test case generation, optimization, and self-healing tests are making testing more efficient and effective.
• Visual Testing: Explore the emerging capabilities of AI in visual testing and how it's set to revolutionize UI verification.
• Inflectra's AI Solutions: See demonstrations of Inflectra's cutting-edge AI tools like the ChatGPT plugin and Azure Open AI platform, designed to streamline your testing process.
Whether you're a developer, tester, or QA professional, this webinar will give you valuable insights into how AI is shaping the future of software delivery.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
Generating a custom Ruby SDK for your web service or Rails API using Smithyg2nightmarescribd
Have you ever wanted a Ruby client API to communicate with your web service? Smithy is a protocol-agnostic language for defining services and SDKs. Smithy Ruby is an implementation of Smithy that generates a Ruby SDK using a Smithy model. In this talk, we will explore Smithy and Smithy Ruby to learn how to generate custom feature-rich SDKs that can communicate with any web service, such as a Rails JSON API.
GraphRAG is All You need? LLM & Knowledge GraphGuy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
Neuro-symbolic is not enough, we need neuro-*semantic*Frank van Harmelen
Neuro-symbolic (NeSy) AI is on the rise. However, simply doing machine learning on just any symbolic structure is not sufficient to really reap the gains of NeSy. Those gains only materialize when the symbolic structures have an actual semantics. I give an operational definition of semantics as "predictable inference".
All of this illustrated with link prediction over knowledge graphs, but the argument is general.
Essentials of Automations: Optimizing FME Workflows with ParametersSafe Software
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024Albert Hoitingh
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview. Including the concepts of Customer Key and Double Key Encryption.
Elevating Tactical DDD Patterns Through Object CalisthenicsDorra BARTAGUIZ
After immersing yourself in the blue book and its red counterpart, attending DDD-focused conferences, and applying tactical patterns, you're left with a crucial question: How do I ensure my design is effective? Tactical patterns within Domain-Driven Design (DDD) serve as guiding principles for creating clear and manageable domain models. However, achieving success with these patterns requires additional guidance. Interestingly, we've observed that a set of constraints initially designed for training purposes remarkably aligns with effective pattern implementation, offering a more ‘mechanical’ approach. Let's explore together how Object Calisthenics can elevate the design of your tactical DDD patterns, offering concrete help for those venturing into DDD for the first time!
Using Graphs to Enable National-Scale Analytics
1. Using Graphs to Enable National-Scale Analytics
Connections: Graphs in Government
http://www.cs.njit.edu/~bader
2. David A. Bader
Distinguished Professor and Director, Institute for Data Science
• IEEE Fellow, SIAM Fellow, AAAS Fellow
• Recent Service:
• White House's National Strategic Computing Initiative (NSCI) panel
• Computing Research Association Board
• NSF Advisory Committee on Cyberinfrastructure
• Council on Competitiveness HPC Advisory Committee
• IEEE Computer Society Board of Governors
• IEEE IPDPS Steering Committee
• Editor-in-Chief, ACM Transactions on Parallel Computing
• Editor-in-Chief, IEEE Transactions on Parallel and Distributed Systems
• Over $184M of research awards
• 250+ publications, ≥ 9,700 citations, h-index ≥ 57
• National Science Foundation CAREER Award recipient
• Directed: Facebook AI Systems
• Directed: NVIDIA GPU Center of Excellence, NVIDIA AI Lab (NVAIL)
• Directed: Sony-Toshiba-IBM Center for the Cell/B.E. Processor
• Founder: Graph500 List benchmarking "Big Data" platforms
• Recognized as a "RockStar" of High Performance Computing by InsideHPC in 2012 and as HPCwire's People to Watch in 2012 and 2014.
15 September 2020 David Bader 2
3. Solving real-world challenges
• Urban sustainability
• Healthcare analytics
• Trustworthy, Free and Fair Elections
• Insider threat detection
• Utility infrastructure protection
• Cyberattack defense
• Disease outbreak and epidemic monitoring
4. Strategic Intelligence
"Advances in communications and the democratization of other technologies have also generated an ability to create and share vast and exponentially growing amounts of information farther and faster than ever before. This abundance of data provides significant opportunities for the IC, including new avenues for collection and the potential for greater insight, but it also challenges the IC's ability to collect, process, evaluate, and analyze such enormous volumes of data quickly enough to provide relevant and useful insight to its customers."
→ "Develop and maintain capabilities to acquire and evaluate data to obtain a deep understanding of the global political, diplomatic, military, economic, security, and informational environment."
5. Data Science: Discovery and Innovation
The National Strategic Computing Initiative (NSCI) was launched by Executive Order (EO) 13702 in July 2015 to advance U.S. leadership in high performance computing (HPC).
McKinsey predicts that data-driven technologies will bring an additional $300 billion of value to the U.S. health care sector alone, and by 2020, 1.5 million more "data-savvy managers" will be needed to capitalize on the potential of data, "big" and otherwise.
Manyika, J. et al. (2011). Big data: The next frontier for innovation, competition, and productivity. McKinsey Global Institute. Retrieved from http://www.mckinsey.com/business-functions/business-technology/our-insights/big-data-the-next-frontier-for-innovation
The ability to manipulate data and understand Data Science is becoming increasingly critical to current and future discovery and innovation.
Realizing the Potential of Data Science: Final Report from the National Science Foundation Computer and Information Science and Engineering Advisory Committee Data Science Working Group. Francine Berman and Rob Rutenbar, co-Chairs; Henrik Christensen, Susan Davidson, Deborah Estrin, Michael Franklin, Brent Hailpern, Margaret Martonosi, Padma Raghavan, Victoria Stodden, Alex Szalay. December 2016.
6. National Strategic Computing Initiative (NSCI) Update, 14 Nov 2019
In recognition of the fast-changing computing landscape, the updated plan places new emphasis on the following areas as compared to the 2016 plan:
• Computing hardware, with a focus on the 10-year horizon and beyond;
• Software infrastructure that will enable effective and sustainable use of new computing;
• Overall infrastructure, from data usage and management to cybersecurity, foundries, and prototypes;
• And the development of new real-world applications, systems, and opportunities for future computing.
7. The Reality
• This image is a visualization of a personal Friendster network (circa February 2004) to 3 hops out. The network consists of 47,471 people connected by 432,430 edges.
Credit: Jeffrey Heer, UC Berkeley
8. Advantages of Graph Analytics
• Much smaller than the raw data; can fit in the memory of a large computer
• Fast response to queries
• Pre-join of database
• Combine data from different sources and of different types
• Some common intelligence and law enforcement queries are naturally posed on graphs
• Particularly for the terrorist threat
10. Query Example II: Motif Finding
Image Source: T. Coffman, S. Greenblatt, S. Marcus, Graph-based technologies for intelligence analysis, CACM, 47(3), March 2004, pp. 45-47
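Motif finding searches a large network for occurrences of a small pattern graph. As a minimal illustration (not from the original slides, and using the simplest possible motif, a triangle), a brute-force matcher over a toy adjacency structure might look like:

```python
from itertools import combinations

# Toy undirected graph as an adjacency-set dict (hypothetical example data).
adj = {
    "a": {"b", "c"},
    "b": {"a", "c", "d"},
    "c": {"a", "b"},
    "d": {"b"},
}

def find_triangles(adj):
    """Return every 3-clique (triangle) in the graph as a frozenset of vertices."""
    triangles = set()
    for u, v, w in combinations(adj, 3):
        # A triple is a triangle iff all three edges are present
        if v in adj[u] and w in adj[u] and w in adj[v]:
            triangles.add(frozenset((u, v, w)))
    return triangles

print(find_triangles(adj))  # {frozenset({'a', 'b', 'c'})}
```

Real intelligence-analysis motifs are richer (typed nodes, directed edges, attributes), and practical systems use subgraph-isomorphism algorithms rather than this O(n³) scan, but the pattern-matching idea is the same.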
11. The Big Picture
[Diagram: an analyst issues fast graph queries against a graph that resides in memory; the graph is an extracted "window" obtained from massive databases via a high-latency query.]
13. Graph Data Science: Real-world challenges
All involve exascale streaming graphs:
• Health care → disease spread, detection and prevention of epidemics/pandemics (e.g. SARS, Avian flu, H1N1 "swine" flu)
• Massive social networks → understanding communities, intentions, population dynamics, pandemic spread, transportation and evacuation
• Intelligence → business analytics, anomaly detection, security, knowledge discovery from massive data sets
• Systems Biology → understanding complex life systems, drug design, microbial research, unraveling the mysteries of the HIV virus; understanding life and disease
• Electric Power Grid → communication, transportation, energy, water, food supply
• Modeling and Simulation → perform full-scale economic-social-political simulations
REQUIRES PREDICTING / INFLUENCING CHANGE IN REAL-TIME AT SCALE
14. Graphs are pervasive in large-scale data analysis
• Sources of massive data: peta- and exa-scale simulations, experimental devices, the Internet, scientific applications.
• New challenges for analysis: data sizes, heterogeneity, uncertainty, data quality.
Astrophysics. Problem: outlier detection. Challenges: massive datasets, temporal variations. Graph problems: clustering, matching.
Bioinformatics. Problem: identifying drug target proteins. Challenges: data heterogeneity, quality. Graph problems: centrality, clustering.
Social Informatics. Problem: discover emergent communities, model spread of information. Challenges: new analytics routines, uncertainty in data. Graph problems: clustering, shortest paths, flows.
Image sources: (1) http://physics.nmt.edu/images/astro/hst_starfield.jpg (2,3) www.visualComplexity.com
16. Massive Data Analytics: Infrastructure
• The U.S. high-voltage transmission grid has >150,000 miles of line.
• Real-time detection of changes and anomalies in the grid is a large-scale problem.
• May mitigate the impact of widespread blackouts due to equipment failure or intentional damage.
17. Network Analysis for Intelligence and Surveillance
• [Krebs '04] Post-9/11 terrorist network analysis from public-domain information
• Plot masterminds correctly identified from interaction patterns: centrality
• A global view of entities is often more insightful
• Detect anomalous activities by exact/approximate graph matching
Image Source: http://www.orgnet.com/hijackers.html
Image Source: T. Coffman, S. Greenblatt, S. Marcus, Graph-based technologies for intelligence analysis, CACM, 47(3), March 2004, pp. 45-47
18. Massive Data Analytics: Public Health
• CDC/national-scale surveillance of public health
• Cancer genomics and drug design
• Computed betweenness centrality of the human proteome
[Figure: degree vs. betweenness centrality (log-log scale) for the human genome core protein-interaction network, highlighting ENSG00000145332.2, a Kelch-like protein implicated in breast cancer.]
19. Massive Streaming Graph Analytics
[Diagram: a stream of timestamped edge events flows into a graph with billions of nodes, which analysts query continuously.]
(A, B, t1, poke)
(A, C, t2, msg)
(A, D, t3, view wall)
(A, D, t4, post)
(B, A, t2, poke)
(B, A, t3, view wall)
(B, A, t4, msg)
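The edge stream on this slide can be sketched as a toy ingest loop. This is an illustrative assumption, not the deck's actual system: the event shape simply mirrors the slide's (source, target, timestamp, action) tuples, and a real streaming graph engine would handle billions of such events with far more sophisticated data structures.

```python
from collections import defaultdict

# Hypothetical event stream mirroring the slide's (source, target, timestamp, action) tuples
events = [
    ("A", "B", 1, "poke"),
    ("A", "C", 2, "msg"),
    ("B", "A", 2, "poke"),
    ("A", "D", 3, "view wall"),
]

# Temporal adjacency structure, updated incrementally as events arrive
adj = defaultdict(list)  # node -> [(neighbor, timestamp, action), ...]

def ingest(event):
    src, dst, t, action = event
    adj[src].append((dst, t, action))

for e in events:
    ingest(e)

# A simple query an analyst might pose against the live graph:
# who has A interacted with, in time order?
contacts = sorted(adj["A"], key=lambda c: c[1])
print([c[0] for c in contacts])  # ['B', 'C', 'D']
```

The point of the streaming model is that queries like this run against the continuously updated in-memory graph rather than re-scanning the raw event log.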
20. Centrality in Massive Social Network Analysis
• Centrality metrics: quantitative measures to capture the importance of a person in a social network
• Betweenness is a global index related to the shortest paths that traverse through the person
• Can be used for community detection as well
• Identifying central nodes in large complex networks is the key metric in several applications:
• Biological networks, protein-protein interactions
• Sexual networks and AIDS
• Identifying key actors in terrorist networks
• Organizational behavior
• Supply chain management
• Transportation networks
21. Betweenness Centrality (BC)
• Key metric in social network analysis [Freeman '77, Goh '02, Newman '03, Brandes '03]
• $\sigma_{st}$: number of shortest paths between vertices s and t
• $\sigma_{st}(v)$: number of shortest paths between vertices s and t passing through v
$$BC(v) = \sum_{s \neq v \neq t \in V} \frac{\sigma_{st}(v)}{\sigma_{st}}$$
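As a concrete illustration (not part of the original slides), here is a minimal pure-Python sketch of Brandes' algorithm, the standard way to compute this quantity, for an unweighted, undirected graph given as an adjacency dict:

```python
from collections import deque

def betweenness_centrality(adj):
    """Brandes' algorithm for an unweighted, undirected graph.
    adj: dict mapping each vertex to an iterable of its neighbors."""
    bc = dict.fromkeys(adj, 0.0)
    for s in adj:
        # BFS from s, recording shortest-path counts and predecessors
        stack = []
        pred = {v: [] for v in adj}
        sigma = dict.fromkeys(adj, 0)   # sigma[v] = number of shortest s-v paths
        dist = dict.fromkeys(adj, -1)
        sigma[s], dist[s] = 1, 0
        queue = deque([s])
        while queue:
            v = queue.popleft()
            stack.append(v)
            for w in adj[v]:
                if dist[w] < 0:               # w discovered for the first time
                    dist[w] = dist[v] + 1
                    queue.append(w)
                if dist[w] == dist[v] + 1:    # shortest path to w runs through v
                    sigma[w] += sigma[v]
                    pred[w].append(v)
        # Accumulate pair dependencies in reverse BFS order
        delta = dict.fromkeys(adj, 0.0)
        while stack:
            w = stack.pop()
            for v in pred[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    # Each undirected s-t pair was counted from both endpoints
    return {v: c / 2 for v, c in bc.items()}

# Path graph a - b - c: only b lies between the others
bc = betweenness_centrality({"a": {"b"}, "b": {"a", "c"}, "c": {"b"}})
print(bc["b"])  # 1.0
```

This runs in O(VE) time per the original paper, which is why the slides emphasize parallel and streaming implementations for massive networks.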
22. Mining Twitter for Social Good
ICPP 2010
Image credit: bioethicsinstitute.org
23. Conclusions
• Graph Data Science is an important technique for solving real-world grand challenges
• Graphs are a natural abstraction for Big Data and connect people, places, and things
• Graphs are useful in problems such as Data Tagging, Triage, Exploratory Data Analysis, Anomaly Detection, Finding Patterns, Insider Threats, Fraud Detection, and Advanced Analytics
• Graph technologies such as Neo4j provide Enterprise-class performance
• Getting started with Graph Databases is easy
24. Graph500 Benchmark, www.graph500.org
Defining a new set of benchmarks to guide the design of hardware architectures and software systems intended to support such applications, and to help procurements. Graph algorithms are a core part of many analytics workloads.
Executive Committee: D.A. Bader, R. Murphy, M. Snir, A. Lumsdaine
• Five Business Area Data Sets:
• Cybersecurity: 15 billion log entries/day (for large enterprises); full data scan with end-to-end join required
• Medical Informatics: 50M patient records, 20-200 records/patient, billions of individuals; entity resolution important
• Social Networks: e.g. Facebook, Twitter; nearly unbounded dataset size
• Data Enrichment: easily PB of data; example: maritime domain awareness, with hundreds of millions of transponders, tens of thousands of cargo ships, tens of millions of pieces of bulk cargo; may involve additional data (images, etc.)
• Symbolic Networks: e.g. the human brain, with 25B neurons and 7,000+ connections/neuron