The Linked Data Benchmark Council (LDBC) is a non-profit organization dedicated to establishing benchmarks, benchmark practices and benchmark results for graph data management software. The Graph Query Language (GraphQL) task force of LDBC is studying query languages for graph data management systems, specifically those storing so-called Property Graph data. The goals of the GraphQL task force are to:
Devise a list of desired features and functionalities of a graph query language.
Evaluate a number of existing languages (e.g. Cypher, Gremlin, PGQL, SPARQL, SQL), and identify possible issues.
Provide a better understanding of the design space and state-of-the-art.
Develop proposals for changes to existing query languages or even a new graph query language.
This query language should cover the needs of the most important use-cases for such systems, such as social network and Business Intelligence workloads.
This talk will present an update on the work accomplished by the LDBC GraphQL task force. We also welcome input from the graph community.
Virtualizing Relational Databases as Graphs: a multi-model approach
Juan Sequeda
Talk given at Smart Data 2017
Relational Databases are inflexible due to the rigid constraints of the relational data model. If you have new data that doesn’t fit your schema, you need to alter the schema (add a column or a new table). That is not always possible: IT departments may not have the time, or may not allow it, and workarounds often mean more nulls, which can degrade query performance.
A goal of graph databases is to address this problem with their schema-less graph data model. However, many businesses have large investments in commercial RDBMSs and their associated applications and can't expect to move all of their data to a graph database.
In this talk, I will present a multi-model graph/relational architecture solution. Keep your relational data where it is, virtualize it as a graph, and then connect it with additional data stored in a graph database. This way, both graph and relational technologies can seamlessly interact together.
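The virtualization idea can be sketched in a few lines. In this toy example (the `person` and `knows` tables, the `Person` label, and the `KNOWS` edge type are all invented for illustration), rows and foreign keys are exposed as graph nodes and edges at access time, while the data itself stays in the relational store:

```python
import sqlite3

# Hypothetical relational schema: person(id, name) and knows(person_id, friend_id).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE person (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE knows  (person_id INTEGER, friend_id INTEGER);
    INSERT INTO person VALUES (1, 'Ann'), (2, 'Bob');
    INSERT INTO knows  VALUES (1, 2);
""")

def virtual_nodes():
    """Expose each person row as a graph node: (id, label, properties)."""
    for pid, name in conn.execute("SELECT id, name FROM person"):
        yield (f"person/{pid}", "Person", {"name": name})

def virtual_edges():
    """Expose each row of the foreign-key table `knows` as a KNOWS edge."""
    for src, dst in conn.execute("SELECT person_id, friend_id FROM knows"):
        yield (f"person/{src}", "KNOWS", f"person/{dst}")

nodes = list(virtual_nodes())
edges = list(virtual_edges())
```

Because the node and edge views are computed from SQL at access time, nothing is copied: a graph query layer on top of these generators sees a graph, while the relational database remains the single source of truth.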
Integrating Semantic Web in the Real World: A Journey between Two Cities
Juan Sequeda
Keynote at The 9th International Conference on Knowledge Capture (KCAP2017), Austin, Texas, Dec 2017
An early vision in Computer Science has been to create intelligent systems capable of reasoning over large amounts of data. Today, this vision can be delivered by integrating Relational Databases with the Semantic Web using the W3C standards: a graph data model (RDF), ontology language (OWL), mapping language (R2RML) and query language (SPARQL). The research community has successfully shown how intelligent systems can be created with Semantic Web technologies, now dubbed Knowledge Graphs.
However, where is the mainstream industry adoption? What are the barriers to adoption? Are these engineering and social barriers or are they open scientific problems that need to be addressed?
This talk will chronicle our journey of deploying Semantic Web technologies with real world users to address Business Intelligence and Data Integration needs, describe technical and social obstacles that are present in large organizations, and scientific challenges that require attention.
Presentation at Data/Graph Day Texas Conference.
Austin, Texas
January 14, 2017
This talk grew out of Juan Sequeda's office hours following the Seattle Graph Meetup. Some of the questions posed were: How do I recognize a problem best solved with a graph solution? How do I determine the best type of graph to solve the problem? How do I manage data on which both graph and relational operations will be performed? Juan did such a great job of explaining the options that we asked him to develop his responses into a formal talk.
Integrating Semantic Web with the Real World - A Journey between Two Cities
Juan Sequeda
(The original version of this talk was a Keynote at KCAP2017. This is the final version of the slides after giving this talk 14 times in 2018)
An early vision in Computer Science has been to create intelligent systems capable of reasoning over large amounts of data. Today, this vision can be delivered by integrating Relational Databases with the Semantic Web using the W3C standards: a graph data model (RDF), ontology language (OWL), mapping language (R2RML) and query language (SPARQL). The research community has successfully shown how intelligent systems can be created with Semantic Web technologies, now dubbed Knowledge Graphs.
However, where is the mainstream industry adoption? What are the barriers to adoption? Are these engineering and social barriers or are they open scientific problems that need to be addressed?
This talk will chronicle our journey of deploying Semantic Web technologies with real world users to address Business Intelligence and Data Integration needs, describe technical and social obstacles that are present in large organizations, and scientific and engineering challenges that require attention.
Integrating Relational Databases with the Semantic Web: A Reflection
Juan Sequeda
This is a lecture given at the 2017 Reasoning Web Summer School
It has been clear from the beginning that the success of the Semantic Web hinges on integrating the vast amount of data stored in Relational Databases. In 2007, the W3C organized a workshop on RDF Access to Relational Databases. In 2012, two standards were ratified that map relational data to RDF: Direct Mapping and R2RML.
In this lecture, I will reflect on the last 10 years of research results and systems that integrate Relational Databases with the Semantic Web. I will answer the following question: how, and to what extent, can Relational Databases be integrated with the Semantic Web? I will review how these standards and systems are used in practice for data integration and discuss open challenges.
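The flavor of the Direct Mapping standard mentioned above can be conveyed with a small sketch. This is not a conforming implementation (a real Direct Mapping emits proper IRIs and typed literals), and the base IRI and the `People` table are invented for the example, but the row-to-subject and column-to-predicate pattern follows the W3C convention:

```python
def direct_map(base, table, pk, rows):
    """Toy Direct Mapping: one subject per row (keyed by its primary key),
    one predicate per column, plus an rdf:type triple naming the table."""
    triples = []
    for row in rows:
        subj = f"<{base}{table}/{pk}={row[pk]}>"
        triples.append((subj, "rdf:type", f"<{base}{table}>"))
        for col, val in row.items():
            triples.append((subj, f"<{base}{table}#{col}>", repr(val)))
    return triples

rows = [{"ID": 1, "fname": "Juan"}]
triples = direct_map("http://example.com/db/", "People", "ID", rows)
# One of the generated triples:
# <http://example.com/db/People/ID=1> rdf:type <http://example.com/db/People>
```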
Database is the new black. Ever the backbone of information architectures, database technology continually evolves to meet growing and changing business needs. New types of data and applications make the database more important than ever, and understanding which technology best serves your use case is paramount to building durable systems. These days, the choices are many, so users should be careful when deciding which direction to go. Register for this Exploratory Webcast to hear veteran database analyst Dr. Robin Bloor explain why the database market has exploded in recent years. He'll outline the current database landscape, and provide insights about which kinds of technologies are suitable for the growing variety of business needs today. He'll also focus on key auxiliary technologies that enable modern databases to perform efficiently.
Should a Graph Database Be in Your Next Data Warehouse Stack?
Cambridge Semantics
In this webinar, AnzoGraph’s graph database guru Barry Zane (former co-founder of Netezza) and data governance author Steve Sarsfield talk about how graph databases fit into the data warehouse modernization trend. They also explore how certain workloads can be better served with an analytical graph database and how today’s technology stacks offer new paradigms for deployment like the cloud, containers and graph analytics.
Scaling up business value with real-time operational graph analytics
Connected Data World
Graph-based solutions have been in the market for over a decade, with deployments in financial services, healthcare, retail, and manufacturing. The graph technology of the past restricted these solutions to simple queries (1 or 2 hops) and modest data sizes, or suffered slow response times, which limited their value.
A new generation of fast, scalable graph databases, led by TigerGraph, is opening up a new world of business insight and performance. Join us as we explore some exciting new use cases powered by a native parallel graph database with storage and computation capability at each node:
A large financial services payment provider is using graph-based pattern detection (7 to 11 hop queries) to detect more fraud and money laundering in real time, handling a peak volume of 256,000 transactions per second.
IceKredit, an innovative FinTech, is transforming the near-prime and sub-prime credit market in the United States, China and South Asian countries with customer 360 analytics for credit approval and ongoing monitoring.
A biotech and pharmaceutical giant is building a prescriber and patient 360 graph and using multi-hop exploratory and analytic queries to understand the most efficient ways of launching a new drug for maximum return.
Wish.com is delivering real-time personalized recommendations to increase eCommerce revenue.
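The multi-hop queries these use cases rely on can be pictured as bounded traversals. Here is a minimal sketch (the toy payment graph is made up) that finds every account reachable within k hops of a flagged account:

```python
from collections import deque

def within_k_hops(graph, start, k):
    """Breadth-first search returning all nodes reachable from `start`
    in at most k hops (excluding `start` itself)."""
    depth = {start: 0}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        if depth[node] == k:          # don't expand past the hop limit
            continue
        for nbr in graph.get(node, ()):
            if nbr not in depth:
                depth[nbr] = depth[node] + 1
                queue.append(nbr)
    return set(depth) - {start}

# Hypothetical payment graph: account -> accounts it has paid.
payments = {"a": ["b"], "b": ["c"], "c": ["d"], "d": []}
reachable = within_k_hops(payments, "a", 2)   # {"b", "c"}
```

A production fraud system runs this kind of traversal inside the graph engine rather than in application code, which is what makes 7- to 11-hop queries at high transaction rates feasible.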
The most profitable insurance organizations will outperform competitors in key areas such as personalized customer service, claims processing, subrogation recovery, fraud detection and product innovation. This requires thinking beyond the traditional data warehouse to the data fabric - an emerging data management architecture.
In this webinar Andy Sohn, Senior Advisor at NewVantage Partners, and Bob Parker, Senior Director for Insurance at Cambridge Semantics, explore the role of the data discovery and integration layer in an enterprise data fabric for the Insurance industry. These are their slides.
How to Reveal Hidden Relationships in Data and Risk Analytics
Ontotext
Imagine a risk analysis manager or compliance officer who can easily discover relationships like this: Big Bucks Café out of Seattle controls My Local Café in NYC through an offshore company. Such a discovery can be a game changer if My Local Café pretends to be an independent small enterprise while Big Bucks has recently been experiencing financial difficulties.
Risk Analytics Using Knowledge Graphs / FIBO with Deep Learning
Cambridge Semantics
This EDM Council webinar, sponsored by Cambridge Semantics Inc. and featuring FI Consulting, explores the challenges common to a risk analytics pipeline, application of graph analytics to mortgage loan data and use cases in adjacent areas including customer service, collections, fraud and AML.
How Graphs Continue to Revolutionize The Prevention of Financial Crime & Fraud
Connected Data World
Financial crime prevention is something that affects everyone in one way or another. From the Deutsche Banks of the world to small and medium online merchants, regulations for anti-money laundering, know your customer, and customer due diligence apply.
Failing to comply with such regulations can bring on substantial fines. Even more importantly, it can hurt the bottom line and reputation of businesses, having far-reaching side effects. Complying with such regulations, and actively cracking down on financial crime, however, is not easy.
Cross-referencing interconnected data across various datasets, applying detection rules, and discovering patterns in the data is complicated. It takes expertise, effort, and the right technology to do this efficiently.
A natural and efficient way of looking for patterns and applying rules in troves of interconnected data is to model and view that data as a graph. By modeling data as a graph, and applying graph-based algorithms such as PageRank or Centrality, traversing paths, discovering connections and getting insights becomes possible.
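As a rough illustration of the algorithms mentioned above, here is a minimal PageRank over a hypothetical three-node graph (the node names are invented for the example, and dangling nodes are not handled):

```python
def pagerank(graph, damping=0.85, iterations=50):
    """Iterative PageRank over an adjacency-list graph.
    Assumes every node has at least one outgoing edge."""
    n = len(graph)
    rank = {node: 1.0 / n for node in graph}
    for _ in range(iterations):
        new = {node: (1.0 - damping) / n for node in graph}
        for node, targets in graph.items():
            share = damping * rank[node] / len(targets)
            for target in targets:
                new[target] += share
        rank = new
    return rank

# Toy graph: "b" is pointed to by both "a" and "c", so it ranks highest.
graph = {"a": ["b"], "b": ["c"], "c": ["a", "b"]}
ranks = pagerank(graph)
```

In a graph database these algorithms run natively over the stored graph; the point of the sketch is only the shape of the computation.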
Graphs and graph databases are the fastest-growing area of data management technology for a number of reasons. One reason is that they are a perfect match for use cases involving interconnected data.
Queries that would be very complicated to express and very slow to execute using relational databases or other NoSQL database technology are feasible using graph databases. With the rise in complexity of modern financial markets, detecting financial crimes requires going 4 to 11 levels deep into the account-payment graph: this requires a different solution than either relational or NoSQL databases.
How are organizations such as Alibaba, OpenCorporates, and Visa using graph database technology to not just stay on top of regulation, but be one step ahead in the race against financial crime?
Is it possible to do this in real time?
What do graph query languages have to do with this?
In this webinar, data analytics gurus Sathish Thyagarajan and Steve Sarsfield introduce AnzoGraph™, our graph OLAP database, demonstrate the different types of analyses you can perform with it, and show how it complements Neo4j, AWS Neptune and other OLTP systems. Finally, they'll show how you can get it up and running on your laptop in about 5 minutes.
How Semantics Solves Big Data Challenges
DATAVERSITY
Today, organizations want both IT simplicity and innovation, but reliance on traditional databases only leads to more complexity, longer development cycles, and more silos. In fact, organizations report that the #1 impediment to big data success is having too many silos. In this webinar, we will discuss how a new database technology, semantics, solves this problem by providing a new approach to modeling data that focuses on relationships and context, making it easier for data to be understood, searched, and shared. With semantics, world-leading organizations are integrating disparate data faster and easier and building smarter applications with richer analytic capabilities—benefits that we look forward to diving into during the webinar.
Knowledge graph generation is outpacing the ability to intelligently use the information that graphs contain. Octavian's work is pioneering Graph Artificial Intelligence to provide the brains to make knowledge graphs useful.
Our neural networks can take questions and knowledge graphs and return answers. Imagine:
a Google assistant that reads your own knowledge graph (and actually works)
a BI tool that reads your business's knowledge graph
a legal assistant that reads the graph of your case
Taking a neural network approach is important because neural networks deal better with the noise in data and variety in schema. Using neural networks allows people to ask questions of the knowledge graph in their own words, not via code or query languages.
Octavian's approach is to develop neural networks that can learn to manipulate graph knowledge into answers. This approach is radically different from using networks to generate graph embeddings. We believe this approach could transform how we interact with databases.
Graph intelligence: the future of data-driven investigations
Connected Data World
RDF and graph databases are on the rise. The performance, flexibility, and scalability of these systems are attracting a large number of organizations struggling with complex and connected data. While the graph approach offers several advantages, finding insights in the enormous volume of data remains a challenge.
In this presentation, we will introduce Graph Intelligence, an advanced combination of human and computer-based intelligence to find insights faster in complex connected datasets. We will explain why we believe this approach is the future for teams of investigators fighting financial crime, national security threats or cyber attacks.
From this presentation, you will learn:
The nature and benefits of the Graph Intelligence approach
How to build a platform leveraging graph technology
Real-life examples of money laundering and financial crimes detection and investigation
GraphDB Cloud: Enterprise Ready RDF Database on Demand
Ontotext
GraphDB Cloud is an enterprise-grade RDF graph database providing high-performance querying over large volumes of RDF data. In this webinar, Ontotext demonstrates how to instantly create and deploy a fully managed graph database, then import and query data with the (OpenRDF) GraphDB Workbench, and finally explore and visualize data with the built-in visualization tools.
Robin Bloor and Mark Madsen offer their theories on where the rapidly-changing database market stands today: What’s new? What’s standard? What is the trajectory of this evolving market? Each analyst will present for 10-15 minutes, then will engage in a dialogue with the moderator and attendees.
The webcast audio and video archive can be found at https://bloorgroup.webex.com/bloorgroup/lsr.php?AT=pb&SP=EC&rID=4695777&rKey=4b284990a1db4ec0
Supporting GDPR Compliance through effectively governing Data Lineage and Data Provenance
Connected Data World
The General Data Protection Regulation (GDPR) is a set of EU rules governing how organisations handle personal data, replacing the previous Data Protection Act (DPA); it has been enforced since May 2018. With GDPR in place, organizations need to process personal data lawfully, maintain it accurately for no longer than necessary, and keep it secure.
They should be able to report on the purposes of processing and the categories of personal data they control, and to demonstrate compliance with GDPR policies. The challenge organizations face under GDPR (recording every point where personal data is processed and demonstrating accountability for that activity) has made data governance even more critical in its data lineage and data provenance aspects.
Governing data lineage enables an organization to understand its data flow activities and to identify and document the legal justification for each type of activity. In addition, GDPR requires evidence of records for the processing of personal data, which implies the need to effectively record and govern data provenance.
In this talk we showcase how effectively governing data lineage and data provenance makes it possible to verify that the processing of private data within an organization complies with GDPR regulatory requirements.
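The kind of provenance record this implies can be sketched as a small data structure. The field names below are illustrative (loosely modeled on GDPR Article 30 "records of processing activities"), not a compliance-ready schema:

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class ProcessingRecord:
    """One lineage/provenance entry: what was processed, why, and from where."""
    activity: str            # e.g. "payroll export"
    data_categories: list    # categories of personal data involved
    legal_basis: str         # legal justification, e.g. "contract"
    source_system: str       # provenance: where the data came from
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

record = ProcessingRecord(
    activity="payroll export",
    data_categories=["name", "bank account"],
    legal_basis="contract",
    source_system="hr_db",
)
audit_row = asdict(record)   # ready to append to an audit log
```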
Webinar: MongoDB and Analytics: Building Solutions with the MongoDB BI Connector
MongoDB
MongoDB is known for being a developer's database of choice, but what about data analysts? MongoDB 3.2 introduced the MongoDB BI Connector, which allows users to connect to an instance using their analytics tool of choice. Now users of Tableau, QlikView, Excel, Cognos, and countless others can connect to MongoDB and immediately begin building reporting solutions. In this webinar, we will cover the architecture needed to use the BI Connector with MongoDB. We will also demonstrate how to build reports with your data.
Every year the financial industry loses billions because of fraud, while fraudsters come up with more and more sophisticated patterns.
Financial institutions have to balance fraud protection against a negative customer experience. Fraudsters bury their patterns in lots of data, but traditional technologies are not designed to detect fraud in real time or to see patterns beyond the individual account.
Analyzing relations with graph databases helps uncover these larger complex patterns and speeds up suspicious behavior identification.
Furthermore, graph databases enable fast and effective real-time link queries and passing context to machine learning models.
The earlier a fraud pattern or network is identified, the faster the activity is blocked. As a result, losses and fines are minimized.
Should a Graph Database Be in Your Next Data Warehouse Stack?Cambridge Semantics
In this webinar, AnzoGraph’s graph database guru Barry Zane (former co-founder of Netezza) and data governance author Steve Sarsfield talk about how graph databases fit into the data warehouse modernization trend. They also explore how certain workloads can be better served with an analytical graph database and how today’s technology stacks offer new paradigms for deployment like the cloud, containers and graph analytics.
Scaling up business value with real-time operational graph analyticsConnected Data World
Graph-based solutions have been in the market for over a decade with deployments in financial services, healthcare, retail, and manufacturing. The graph technology of the past limited them to simple queries (1 or 2 hops), modest data sizes, or slow response times, which limited their value.
A new generation of fast, scalable graph databases, led by TigerGraph, is opening up a new world of business insight and performance. Join us, as we explore some new exciting use cases powered by native parallel graph database with storage and computation capability for each node:
A large financial services payment provider is using graph-based pattern detection (7 to 11 hop queries) to detect more fraud and money laundering in real time, handling peak volume of 256,000 transactions per second.
IceKredit, an innovative FinTech is transforming the near-prime and sub-prime credit market in United States, China and South Asian countries with customer 360 analytics for credit approval and ongoing monitoring.
A biotech and pharmaceutical giant is building a prescriber and patient 360 graph and using multi-hop exploratory and analytic queries to understand the most efficient ways of launching a new drug for maximum return.
Wish.com is delivering real-time personalized recommendations to increase eCommerce revenue.
The most profitable insurance organizations will outperform competitors in key areas as personalized customer service, claims processing, subrogation recovery, fraud detection and product innovation. This requires thinking beyond the traditional data warehouse to the data fabric - an emerging data management architecture.
In this webinar Andy Sohn, Senior Advisor at NewVantage Partners, and Bob Parker, Senior Director for Insurance at Cambridge Semantics, explore the role of the data discovery and integration layer in an enterprise data fabric for the Insurance industry. These are their slides.
How to Reveal Hidden Relationships in Data and Risk AnalyticsOntotext
Imagine risk analysis manager or compliance officer who can discover easily relationships like this: Big Bucks Café out of Seattle controls My Local Café in NYC through an offshore company. Such discovery can be a game changer if My Local Café pretends to be an independent small enterprise, while recently Big Bucks experiences financial difficulties.
Risk Analytics Using Knowledge Graphs / FIBO with Deep LearningCambridge Semantics
This EDM Council webinar, sponsored by Cambridge Semantics Inc. and featuring FI Consulting, explores the challenges common to a risk analytics pipeline, application of graph analytics to mortgage loan data and use cases in adjacent areas including customer service, collections, fraud and AML.
How Graphs Continue to Revolutionize The Prevention of Financial Crime & Frau...Connected Data World
Financial crime prevention is something that affects everyone in one way or another. From the Deutsche Banks of the world to small and medium online merchants, regulations for anti-money laundering, know your customer, and customer due diligence apply.
Failing to comply with such regulations can bring on substantial fines. Even more importantly, it can hurt the bottom line and reputation of businesses, having far-reaching side effects. Complying with such regulations, and actively cracking down on financial crime, however, is not easy.
Cross-referencing interconnected data across various datasets, and trying to apply detection rules and to discover patterns in the data is complicated. It takes expertise, effort, and the right technology to be able to do this efficiently.
A natural and efficient way of looking for patterns and applying rules in troves of interconnected data is to model and view that data as a graph. By modeling data as a graph, and applying graph-based algorithms such as PageRank or Centrality, traversing paths, discovering connections and getting insights becomes possible.
Graphs and graph databases are the fastest-growing area of data management technology for a number of reasons. One of the reasons is because they are a perfect match for use cases involving interconnected data.
Queries that would be very complicated to express and very slow to execute using relational databases or other NoSQL database technology, are feasible using graph databases. With the rise in complexity of modern financial markets, financial crimes require going 4 to 11 levels deep into the account – payment graph: this requires a different solution than either relational or NoSQL databases.
How are organizations such as Alibaba, OpenCorporates, and Visa using graph database technology to not just stay on top of regulation, but be one step ahead in the race against financial crime?
Is it possible to do this in real time?
What do graph query languages have to do with this?
In this webinar, data analytics gurus Sathish Thyagarajan and Steve Sarsfield introduce AnzoGraph™, our graph OLAP database, demonstrate the different types of analyses you can perform with it and how it complements Neo4j, AWS Neptune and other OLTP systems. Finally, they’ll show how you can get it up and running on your laptop in about 5 minutes.
How Semantics Solves Big Data ChallengesDATAVERSITY
Today, organizations want both IT simplicity and innovation, but reliance on traditional databases only leads to more complexity, longer development cycles, and more silos. In fact, organizations report that the #1 impediment to big data success is having too many silos. In this webinar, we will discuss how a new database technology, semantics, solves this problem by providing a new approach to modeling data that focuses on relationships and context, making it easier for data to be understood, searched, and shared. With semantics, world-leading organizations are integrating disparate data faster and easier and building smarter applications with richer analytic capabilities—benefits that we look forward to diving into during the webinar.
Knowledge graphs generation is outpacing the ability to intelligently use the information that they contain. Octavian's work is pioneering Graph Artificial Intelligence to provide the brains to make knowledge graphs useful.
Our neural networks can take questions and knowledge graphs and return answers. Imagine:
a google assistant that reads your own knowledge graph (and actually works)
a BI tool reads your business' knowledge graph
a legal assistant that reads the graph of your case
Taking a neural network approach is important because neural networks deal better with the noise in data and variety in schema. Using neural networks allows people to ask questions of the knowledge graph in their own words, not via code or query languages.
Octavian's approach is to develop neural networks that can learn to manipulate graph knowledge into answers. This approach is radically different from using networks to generate graph embeddings. We believe it could transform how we interact with databases.
Graph intelligence: the future of data-driven investigations - Connected Data World
RDF and graph databases are on the rise. The performance, flexibility, and scalability of these systems are attracting a large number of organizations struggling with complex and connected data. While the graph approach offers several advantages, finding insights in the enormous volume of data remains a challenge.
In this presentation, we will introduce Graph Intelligence, an advanced combination of human and computer-based intelligence to find insights faster in complex connected datasets. We will explain why we believe this approach is the future for teams of investigators fighting financial crime, national security threats or cyber attacks.
From this presentation, you will learn:
The nature and benefits of the Graph Intelligence approach
How to build a platform leveraging graph technology
Real-life examples of money laundering and financial crimes detection and investigation
GraphDB Cloud: Enterprise Ready RDF Database on Demand - Ontotext
GraphDB Cloud is an enterprise-grade RDF graph database providing high-performance querying over large volumes of RDF data. In this webinar, Ontotext demonstrates how to instantly create and deploy a fully managed graph database, then import and query data with the (OpenRDF) GraphDB Workbench, and finally explore and visualize data with the built-in visualization tools.
Robin Bloor and Mark Madsen offer their theories on where the rapidly changing database market stands today: What’s new? What’s standard? What is the trajectory of this evolving market? Each analyst will present for 10-15 minutes, then engage in a dialogue with the moderator and attendees.
The webcast audio and video archive can be found at https://bloorgroup.webex.com/bloorgroup/lsr.php?AT=pb&SP=EC&rID=4695777&rKey=4b284990a1db4ec0
Supporting GDPR Compliance through effectively governing Data Lineage and Dat... - Connected Data World
The General Data Protection Regulation (GDPR) is a set of EU rules governing how organisations handle personal data. It replaces the earlier Data Protection Act (DPA) and has been enforced since May 2018. With GDPR in place, organizations need to process personal data lawfully, keep it accurate, retain it for no longer than necessary, and store it securely.
They should be able to report on the purposes of processing and the categories of personal data they control, and to demonstrate compliance with GDPR policies. The challenge organizations face, namely recording every point where personal data is processed and demonstrating accountability for that activity, has made data governance even more critical in its data lineage and data provenance aspects.
Governing data lineage makes it possible to understand the organization’s data flows and to identify and document the legal justification for each type of activity. In addition, GDPR requires evidence of records for the processing of personal data, which implies the need to effectively record and govern data provenance.
In this talk we will showcase how effectively governing data lineage and data provenance makes it possible to verify that the processing of private data within an organization complies with GDPR regulatory requirements.
Webinar: MongoDB and Analytics: Building Solutions with the MongoDB BI Connector - MongoDB
MongoDB is known for being a developer’s database of choice, but what about data analysts? MongoDB 3.2 introduced the MongoDB BI Connector, which allows users to connect to an instance using their analytics tool of choice. Now users of Tableau, QlikView, Excel, Cognos, and countless others can connect to MongoDB and immediately begin building reporting solutions. In this webinar, we will cover the architecture needed to use the BI Connector with MongoDB. We will also demonstrate how to build reports with your data.
Every year the financial industry loses billions to fraud, while fraudsters devise increasingly sophisticated patterns.
Financial institutions have to find a balance between fraud protection and a negative customer experience. Fraudsters bury their patterns in large volumes of data, but traditional technologies are not designed to detect fraud in real time or to see patterns beyond the individual account.
Analyzing relations with graph databases helps uncover these larger complex patterns and speeds up suspicious behavior identification.
Furthermore, graph databases enable fast and effective real-time link queries and passing context to machine learning models.
The earlier a fraud pattern or network is identified, the faster the activity is blocked. As a result, losses and fines are minimized.
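The cross-account patterns described above can be sketched with a tiny pure-Python graph traversal (the account names and shared-identifier links are invented; a real deployment would run link queries in a graph database):

```python
from collections import deque

# Accounts linked by shared identifiers (phone, device, address).
# Invented example data, stored as an undirected adjacency list.
links = {
    "acct1": {"acct2"},           # shares a phone number with acct2
    "acct2": {"acct1", "acct3"},  # also shares a device with acct3
    "acct3": {"acct2"},
    "acct4": set(),               # no shared identifiers
}

def connected_ring(start, links):
    """BFS: all accounts reachable from `start` via shared identifiers."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for nbr in links.get(node, ()):
            if nbr not in seen:
                seen.add(nbr)
                queue.append(nbr)
    return seen

# acct1 and acct3 are indirectly linked even though they share nothing directly,
# the kind of beyond-the-individual-account pattern the text describes.
print(connected_ring("acct1", links))
```

Row-oriented queries see each account in isolation; the traversal surfaces the ring only because the relationships themselves are first-class data.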
EgoSystem: Presentation to LITA, American Library Association, Nov 8 2014 - James Powell
The Internet represents the connections among computers and devices, the World Wide Web is a network of interconnected documents, and the Semantic Web is the closest thing we have today to a network of interconnected facts. Noticeably absent from these global networks is any sort of open, formal representation for an online global social network. Each user's online presence, and its immediate social network, are isolated and typically only available within the confines of the social networking site that hosts it. Discovery across explicit online social networks and implicit social networks, such as those that can be inferred from co-authorship relationships and affiliations, is, for all practical purposes, impossible. And yet there are practical and non-nefarious reasons why an organization might be interested in exploring portions of such a network. Outreach is one such interest. Los Alamos National Laboratory (LANL) prototyped EgoSystem to harvest and explore the professional social networks of postdoctoral students. The project's goal is to enlist past students and other Lab alumni as ambassadors and advocates for LANL's ongoing mission. During this talk we will discuss the various technologies that support EgoSystem and demonstrate some of its capabilities.
Multi-Model Data Query Languages and Processing Paradigms - Jiaheng Lu
Specifying users' interests with a formal query language is typically a challenging task, which becomes even harder in the context of multi-model data management because we have to deal with data variety. Such data usually lacks a unified schema to help users issue their queries, or has an incomplete schema because the data comes from disparate sources. Multi-Model DataBases (MMDBs) have emerged as a promising approach for dealing with this task, as they are capable of accommodating and querying multi-model data in a single system. This tutorial aims to offer a comprehensive presentation of a wide range of query languages for MMDBs and to compare their properties from multiple perspectives. We will discuss the essence of cross-model query processing and provide insights on the research challenges and directions for future work. The tutorial will also offer participants hands-on experience in applying MMDBs to issue multi-model data queries.
Application development with Oracle NoSQL Database 3.0 - Anuj Sahni
Oracle announced Oracle NoSQL Database 3.0 on April 2, 2014. This release offers increased security, simplified data modeling, secondary indices, and multi-datacenter performance enhancement.
For audio/video presentation visit: http://bit.ly/1qLEZW9
Making NumPy-style and Pandas-style code faster and running it in parallel. Continuum has been working on scaled versions of NumPy and Pandas for four years. This talk describes how Numba and Dask provide scaled Python today.
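The chunk-and-schedule pattern behind Dask can be sketched with the standard library alone (this is not the Dask or Numba API; it only illustrates splitting an array-style computation into independent tasks whose results are combined at the end):

```python
from concurrent.futures import ThreadPoolExecutor

def chunk_sum_of_squares(chunk):
    """Work on one chunk independently (the unit Dask would schedule as a task)."""
    return sum(x * x for x in chunk)

def parallel_sum_of_squares(data, n_chunks=4):
    """Split the data into chunks, process them concurrently, combine the results."""
    size = max(1, len(data) // n_chunks)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    # Threads keep this sketch portable; Numba and Dask achieve true
    # parallelism by compiling the kernel or scheduling across processes/machines.
    with ThreadPoolExecutor() as pool:
        return sum(pool.map(chunk_sum_of_squares, chunks))

print(parallel_sum_of_squares(list(range(10_000))))
```

The key design idea, which Dask generalizes, is that the per-chunk function never needs to see the whole dataset, so the same code scales from one core to a cluster.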
In social networks, where users send messages to each other, the issue of what triggers communication between unrelated users arises: does communication between previously unrelated users depend on friend-of-a-friend relationships, common interests, or other factors? In this work, we study the problem of predicting directed communication intention between two users. Link prediction is similar to communication intention in that it uses network structure for prediction. However, these two problems exhibit fundamental differences that originate from their focus. Link prediction uses evidence to predict network structure evolution, whereas our focal point is directed communication initiation between users who are not previously structurally connected. To address this problem, we employ topological evidence in conjunction with transactional information in order to predict communication intention. It is not intuitive whether methods that work well for link prediction would work well in this case. In fact, we show in this work that network and content evidence, when considered separately, are not sufficiently accurate predictors. Our novel approach, which jointly considers the local structural properties of users in a social network together with their generated content, captures numerous interactions, direct and indirect, social and contextual, which to date have been considered independently. We performed an empirical study to evaluate our method using an extracted network of directed @-messages sent between users of a corporate microblogging service, which resembles Twitter. We find that our method outperforms state-of-the-art techniques for link prediction. Our findings have implications for a wide range of social web applications, such as contextual expert recommendation for Q&A, creation of new friendship relationships, and targeted content delivery.
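The idea of blending structural and content evidence can be illustrated in miniature (the users, posts, and `alpha` weighting below are invented for illustration; the paper's actual model is more sophisticated):

```python
def common_neighbors(graph, u, v):
    """Structural evidence: number of mutual contacts of u and v."""
    return len(graph.get(u, set()) & graph.get(v, set()))

def content_similarity(posts, u, v):
    """Content evidence: Jaccard overlap of the words two users post."""
    a, b = set(posts[u].split()), set(posts[v].split())
    return len(a & b) / len(a | b) if a | b else 0.0

def intention_score(graph, posts, u, v, alpha=0.5):
    """Blend structure and content; alpha is an arbitrary illustrative weight."""
    return (alpha * common_neighbors(graph, u, v)
            + (1 - alpha) * content_similarity(posts, u, v))

graph = {"ann": {"bob", "cat"}, "dan": {"bob", "cat"},
         "bob": {"ann", "dan"}, "cat": {"ann", "dan"}}
posts = {"ann": "graph databases query", "dan": "graph query language",
         "bob": "lunch", "cat": "weather"}

# ann and dan are not connected, but they share neighbors and vocabulary,
# so the combined score flags a likely communication initiation.
print(intention_score(graph, posts, "ann", "dan"))
```

Either signal alone can mislead (shared neighbors with nothing in common, or shared vocabulary across strangers); combining them is the point the abstract argues empirically.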
Databases have been around for decades and were highly optimised for data aggregation during that time. Big data has changed the database landscape massively in recent years; nowadays we can find many open source projects among the most popular databases.
After this talk you will be able to decide whether a database can make your work more efficient and which direction to look in.
How Are Graph Databases Used in Police Departments? - Samet KILICTAS
This presentation delivers the basics of graph concepts and graph databases to the audience. It explains how graph databases are used, with sample use cases from industry, and how they can be applied in police departments. Questions like "When should I use a graph DB?" and "Should I solve this problem with a graph DB?" are answered.
Beyond Collaborative Filtering: Learning to Rank Research Articles - Maya Hristakeva
At Elsevier we work on recommender systems to help researchers connect to their research and to collaborators (e.g. Mendeley Suggest, Science Direct, Funding Opportunities and Evise Reviewer recommenders). This talk focused on the recent improvements the team has made to the Science Direct research articles recommender by deploying ranking models in production.
I gave this presentation at the 7th RecSys London Meetup - https://www.meetup.com/RecSys-London/events/255362180/
Transforming AI with Graphs: Real World Examples using Spark and Neo4j - Databricks
Graphs, or information about the relationships, connections, and topology of data points, are transforming machine learning. We’ll walk through real-world examples of how to transform your tabular data into a graph and how to get started with graph AI. This talk will provide an overview of how to incorporate graph-based features into traditional machine learning pipelines, create graph embeddings to better describe your graph topology, and give you a preview of approaches for graph-native learning using graph neural networks. We’ll talk about relevant, real-world case studies in financial crime detection, recommendations, and drug discovery. This talk is intended to introduce the concept of graph-based AI to beginners, as well as help practitioners understand new techniques and applications. Key takeaways: how graph data can improve machine learning, when graphs are relevant to data science applications, and what graph-native learning is and how to get started.
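As a toy, stdlib-only illustration of turning graph topology into tabular features for a traditional ML pipeline (the talk itself uses Spark and Neo4j; the graph below is invented):

```python
# Turn each node's position in a graph into tabular features that a
# conventional ML model can consume: degree and triangle count.

graph = {
    "a": {"b", "c"},
    "b": {"a", "c", "d"},
    "c": {"a", "b"},
    "d": {"b"},
}

def node_features(graph, node):
    """Two simple structural features for one node."""
    nbrs = graph[node]
    degree = len(nbrs)
    # A triangle exists for each pair of neighbors that are themselves linked.
    triangles = sum(1 for u in nbrs for v in nbrs
                    if u < v and v in graph[u])
    return {"degree": degree, "triangles": triangles}

features = {n: node_features(graph, n) for n in graph}
print(features["b"])
```

Each row of `features` can be joined back onto the original tabular data, which is the "graph-based features in a traditional pipeline" step the talk describes before moving on to learned embeddings.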
Transforming AI with Graphs: Real World Examples using Spark and Neo4j - Fred Madrid
Data Science Keys to Open Up OpenNASA Datasets - PyData
By Noemi Derzsy
PyData New York City 2017
Open source data has enabled society to engage in community-based research, and has provided government agencies with more visibility and trust from individuals. I will briefly introduce the openNASA platform, which hosts over 32,000 open NASA datasets, present an analysis of open NASA metadata, and show tools for applying NLP/topic-modeling techniques to understand associations among open government datasets.
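The kind of metadata keyword analysis described can be sketched with the standard library alone (the "dataset titles" below are invented, not openNASA data, and real topic modeling would use a method such as LDA; this only shows the underlying TF-IDF idea):

```python
import math
from collections import Counter

# Toy dataset titles standing in for open-dataset metadata.
docs = [
    "mars rover imagery dataset",
    "mars atmosphere measurements",
    "earth climate measurements dataset",
]

def tf_idf(docs):
    """Score each word per document: frequent here, rare elsewhere."""
    n = len(docs)
    tokenized = [d.split() for d in docs]
    # Document frequency: in how many titles each word appears.
    df = Counter(w for toks in tokenized for w in set(toks))
    scores = []
    for toks in tokenized:
        tf = Counter(toks)
        scores.append({w: (tf[w] / len(toks)) * math.log(n / df[w]) for w in tf})
    return scores

scores = tf_idf(docs)
# Distinctive words ("rover") outscore widely shared ones ("dataset").
print(scores[0])
```

Words that dominate a title but appear in few others become that dataset's candidate topic keywords, which is the building block for grouping related datasets.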
My Linked Data tutorial presentation, given at SemTech 2012.
http://semtechbizsf2012.semanticweb.com/sessionPop.cfm?confid=65&proposalid=4724
Consuming Linked Data by Humans - WWW2010 - Juan Sequeda
These are the Consuming Linked Data by Humans slides that we presented at the Consuming Linked Data tutorial at WWW2010 in Raleigh, NC on April 26, 2010
Consuming Linked Data by Machines - WWW2010 - Juan Sequeda
These are the Consuming Linked Data by Machines slides that we presented at the Consuming Linked Data tutorial at WWW2010 in Raleigh, NC on April 26, 2010. These slides are originally by Patrick Sinclair of the BBC.
These are the Linked Data Applications slides that we presented at the Consuming Linked Data tutorial at WWW2010 in Raleigh, NC on April 26, 2010.
This slide set was not part of our tutorial that was presented at ISWC2009
Open Research Problems in Linked Data - WWW2010 - Juan Sequeda
These are the Open Research Problems of Linked Data slides that we presented at the Consuming Linked Data tutorial at WWW2010 in Raleigh, NC on April 26, 2010
Neuro-symbolic is not enough, we need neuro-*semantic* - Frank van Harmelen
Neuro-symbolic (NeSy) AI is on the rise. However, machine learning over just any symbolic structure is not sufficient to really harvest the gains of NeSy. Those gains will only be realized when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this illustrated with link prediction over knowledge graphs, but the argument is general.
Epistemic Interaction - tuning interfaces to provide information for AI support - Alan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
Key Trends Shaping the Future of Infrastructure.pdf - Cheryl Hung
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
The keynote covers the key trends across hardware, cloud, and open source, exploring how these areas are likely to mature and develop over the short and long term, and considering how organisations can position themselves to adapt and thrive.
Generating a custom Ruby SDK for your web service or Rails API using Smithy - g2nightmarescribd
Have you ever wanted a Ruby client API to communicate with your web service? Smithy is a protocol-agnostic language for defining services and SDKs. Smithy Ruby is an implementation of Smithy that generates a Ruby SDK using a Smithy model. In this talk, we will explore Smithy and Smithy Ruby to learn how to generate custom feature-rich SDKs that can communicate with any web service, such as a Rails JSON API.
Securing your Kubernetes cluster: a step-by-step guide to success! - KatiaHIMEUR1
Today, after several years of existence, with an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024 - Albert Hoitingh
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview. Including the concepts of Customer Key and Double Key Encryption.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -... - DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses.
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.