RDF and OWL are powerful tools for making data smart. RDF uses a simple triple format to represent metadata and link data using unique identifiers, allowing for data integration. OWL builds on RDF by adding more formal semantics and defining concepts, properties, and relationships to allow for automated reasoning and inference over data. Combining OWL and RDF results in smart data that computers can understand, enabling intelligent automation and decision making.
RDF and OWL: the powerful duo | Tara Raafat
1. RDF and OWL: the powerful Duo
Presented by: Tara Raafat
2. Questions
• Why go semantic?
• Should I use RDF or OWL?
• What is the difference, and what is the link?
• Did you say smart data?
3. The Semantic Web
[Pyramid diagram: Data, Information, Knowledge, Wisdom]
• Wisdom: taking decisions using knowledge and reasoning (AI)
• Knowledge: information used with experience and understanding (the Semantic Web goes here)
• Information: data enriched with context & history (ERP / CRM / MDM)
• Data: traditional applications
The Semantic Web formalizes knowledge in a way that improves decisioning today, and can form the basis for autonomous reasoning in the future.
6. RDF
• Simple triple-based data model: Subject, Predicate, Object
• Graph-based formalism for representing metadata
• Example: Kafka is the author of the book "Trial"
• Individual things, and not just files, are given unique identifiers
• XML serialization (RDF/XML) for ease of data exchange
• Various textual representations for ease of human understanding
• No schema
[Diagram] Kafka --isAuthorOf--> Trial, with unique identifiers:
  subject: http://www.mybookstore.com/Kafka
  predicate: http://www.mybookstore.com/isAuthorOf
  object: http://www.mybookstore.com/Trial
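As a concrete illustration, here is how that single triple could be built and serialized in Python with the rdflib library. This is a minimal sketch, not part of the original slides; rdflib is simply one common choice of RDF toolkit.

```python
# Minimal sketch: the slide's example triple in rdflib (pip install rdflib).
from rdflib import Graph, Namespace

# Namespace taken from the URIs shown on the slide
BOOKS = Namespace("http://www.mybookstore.com/")

g = Graph()
# One triple: (subject, predicate, object)
g.add((BOOKS.Kafka, BOOKS.isAuthorOf, BOOKS.Trial))

# Serialize in Turtle, one of the human-friendly textual representations
print(g.serialize(format="turtle"))
```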
9. English books database: export data as RDF
[Graph diagram] The book http://…isbn/000651409X ("The Glass Palace", 2000, London, Harper Collins) has an a:author link to a node with a:name "Ghosh, Amitav" and a:homepage http://www.amitavghosh.com.
10. French books database: export data as RDF
[Graph diagram] The French edition http://…isbn/202038662 has f:nom "Le palais des miroirs", an f:auteur link to a node with f:nom "Ghosh, Amitav", and an f:traducteur link to a node with f:nom "Besse, Christianne". The English edition's URI http://…isbn/000651409X appears in this graph as well.
11. Merge your data
[Graph diagram] Loading the English and French exports together joins the two graphs automatically: both describe the resource http://…isbn/000651409X. Same URI!
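The merge step can be sketched with rdflib: loading both exports into one graph is all it takes, because statements about the same URI automatically attach to the same node. The namespace URIs below are assumptions; the slides show only the prefixes a: and f:.

```python
from rdflib import Graph

# Hypothetical Turtle versions of the two exports
english = """
@prefix a: <http://example.org/english/> .
<http://example.org/isbn/000651409X>
    a:author <http://example.org/person/ghosh> .
<http://example.org/person/ghosh>
    a:name "Ghosh, Amitav" ;
    a:homepage <http://www.amitavghosh.com> .
"""

french = """
@prefix f: <http://example.org/french/> .
<http://example.org/isbn/2020386682>
    f:nom "Le palais des miroirs" ;
    f:original <http://example.org/isbn/000651409X> .
"""

g = Graph()
g.parse(data=english, format="turtle")  # merging is just loading both
g.parse(data=french, format="turtle")   # sources into the same graph

# Statements from both sources now hang off the shared book URI.
print(len(g))  # 5 triples, connected through …isbn/000651409X
```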
12. [Graph diagram] The merged graph adds an f:original link from the French edition http://…isbn/2020386682 back to the English original http://…isbn/000651409X, and records that the property a:author is the "same as" f:auteur.
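In OWL terms, the slide's "same as" link between two properties can be read as owl:equivalentProperty (owl:sameAs is the analogous link between individuals). A minimal, hypothetical sketch, using the owlrl reasoner (pip install owlrl) to materialize the consequence:

```python
# Declaring a:author and f:auteur equivalent, then letting an OWL RL
# reasoner draw the conclusion. All namespace URIs are assumptions.
import owlrl
from rdflib import Graph, Namespace
from rdflib.namespace import OWL

A = Namespace("http://example.org/english/")   # assumed URI for prefix a:
F = Namespace("http://example.org/french/")    # assumed URI for prefix f:
ISBN = Namespace("http://example.org/isbn/")
PERSON = Namespace("http://example.org/person/")

g = Graph()
g.add((ISBN["000651409X"], A.author, PERSON.ghosh))
g.add((A.author, OWL.equivalentProperty, F.auteur))

# Materialize OWL 2 RL consequences in-place.
owlrl.DeductiveClosure(owlrl.OWLRL_Semantics).expand(g)

# The a:author statement now also holds as f:auteur.
print((ISBN["000651409X"], F.auteur, PERSON.ghosh) in g)  # True
```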
13. Merge with external data: Wikipedia
[Graph diagram] The author node, typed (r:type) as http://…foaf/Person and carrying foaf:name "Ghosh, Amitav", gains a w:reference link to http://dbpedia.org/../Amitav_Ghosh. Through it the graph picks up w:author_of links to http://dbpedia.org/../The_Glass_Palace (with w:isbn), http://dbpedia.org/../The_Hungry_Tide and http://dbpedia.org/../The_Calcutta_Chromosome, plus w:born_in pointing to http://dbpedia.org/../Kolkata (with w:long and w:lat). The French edition http://…isbn/2020386682 stays connected via f:original, f:nom "Le palais des miroirs" and f:traducteur "Besse, Christianne".
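Once the bookstore data and the DBpedia extract sit in one graph, it can be queried across sources with SPARQL. A hypothetical sketch (the a: and w: namespace URIs are assumptions, as before; the DBpedia resource URIs are the real ones):

```python
from rdflib import Graph

# Tiny stand-in for the merged graph
data = """
@prefix a: <http://example.org/english/> .
@prefix w: <http://example.org/wiki/> .
<http://example.org/isbn/000651409X> a:author <http://example.org/person/ghosh> .
<http://example.org/person/ghosh> w:reference <http://dbpedia.org/resource/Amitav_Ghosh> .
<http://dbpedia.org/resource/Amitav_Ghosh>
    w:author_of <http://dbpedia.org/resource/The_Hungry_Tide> ,
                <http://dbpedia.org/resource/The_Glass_Palace> .
"""
g = Graph()
g.parse(data=data, format="turtle")

# Which works did the author of our book write, according to DBpedia?
for row in g.query("""
    PREFIX a: <http://example.org/english/>
    PREFIX w: <http://example.org/wiki/>
    SELECT ?work WHERE {
        <http://example.org/isbn/000651409X> a:author ?person .
        ?person w:reference ?dbp .
        ?dbp w:author_of ?work .
    }
"""):
    print(row.work)
```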
15. Ontology (according to Tom Gruber (1992))
"An ontology is a formal, explicit specification of a shared conceptualization."
• Formal: machine readable
• Explicit specification: concepts, properties, relations, functions, constraints and axioms are explicitly defined
• Shared: consensual knowledge
• Conceptualization: an abstract model and simplified view of some phenomenon in the world that we want to represent
16. Different Teams: Different Languages…
[Diagram] Across a data project's stages (data consumers, design, dev: data source, dev: data integration, dev: data consumer, code, test, dev ops) sit different roles: the business user with a data need, the business data analyst, the business-technology liaison, the data architect, the data scientist, the application owner, developers and testers. Each speaks its own language: the problem that we are trying to solve; architectural concerns and enterprise vision; semantics and the data dictionary; the source system's data terminology; the target system's data terminology; the functional solution; timing & priorities; dependencies.
17. Ontology
• A knowledge model which defines a set of concepts and the relationships between those concepts within a specific domain
• Supports automated reasoning and inference over data using logical rules
• Provides knowledge sharing and reuse among people or software agents
18. OWL (Web Ontology Language)
• RDF-based
• The de facto standard for ontology development
• Main components include (see the sketch after this list):
  Classes: define concepts in a domain
  Properties, which are of two types:
    Object properties: define relationships between concepts
    Datatype properties: define relationships between a concept and a literal
  Individuals: instances of classes
  Restrictions: allow the definition of cardinality restrictions as well as existential and universal quantifications
• Has three levels: OWL Full, OWL DL, OWL Lite
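To make the component list concrete, here is a minimal rdflib sketch declaring one of each kind of component. The names reuse the bookstore example from the earlier slides; everything beyond the standard OWL vocabulary is hypothetical.

```python
from rdflib import BNode, Graph, Literal, Namespace
from rdflib.namespace import OWL, RDF, RDFS, XSD

EX = Namespace("http://www.mybookstore.com/")  # the slide's example namespace
g = Graph()
g.bind("ex", EX)

# Classes: concepts in the domain
g.add((EX.Author, RDF.type, OWL.Class))
g.add((EX.Book, RDF.type, OWL.Class))

# Object property: a relationship between two concepts
g.add((EX.isAuthorOf, RDF.type, OWL.ObjectProperty))
g.add((EX.isAuthorOf, RDFS.domain, EX.Author))
g.add((EX.isAuthorOf, RDFS.range, EX.Book))

# Datatype property: a relationship between a concept and a literal
g.add((EX.title, RDF.type, OWL.DatatypeProperty))
g.add((EX.title, RDFS.domain, EX.Book))
g.add((EX.title, RDFS.range, XSD.string))

# Restriction: every Author is the author of at least one thing
r = BNode()
g.add((r, RDF.type, OWL.Restriction))
g.add((r, OWL.onProperty, EX.isAuthorOf))
g.add((r, OWL.minCardinality, Literal(1)))
g.add((EX.Author, RDFS.subClassOf, r))

# Individuals: instances of the classes
g.add((EX.Kafka, RDF.type, EX.Author))
g.add((EX.Trial, RDF.type, EX.Book))
g.add((EX.Kafka, EX.isAuthorOf, EX.Trial))
g.add((EX.Trial, EX.title, Literal("The Trial")))

print(g.serialize(format="turtle"))
```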
25. Smart Data for Trade-Based Anti-Money Laundering
[Ontology diagram] Upper-level concepts: Legal Contract, Legal Document, Legal Person, Legal Entity, Item. Domain concepts and relationships: a Trade Transaction has an invoice (Invoice, for a Customer), a buyer, a Bill of Lading (with Consignee/Shipper), a Letter of Credit (with an Issuing Bank; the Bank holds an Account), and a Freight Forwarder (a Company); Items have a Price ("has price") and a Total Price; a Vessel has a Vessel ID, a Journey, and a vessel profile (Trade Profile); Ports appear as Port of Loading / Port of Discharge, along with a Date; a Business is "posted in" a Business Sector. Data sources: CIS systems (transaction data), trade documents, and the Marine Traffic website.
26. [The same ontology populated with instances] Trade Transaction #76873 has invoice Invoice #JS54; the Item is "Cotton shirts" (price 2$, total price 54$, total number 27); the buyer is Customer ABCCorp; the Bill of Lading is BOLLOC453 (Consignee/Shipper); the Letter of Credit is ABCCorpLOC453, issued by BankUSA, which holds Account #89222; the Freight Forwarder is the company ShippersStar; the Vessel is BSLE9 (Vessel ID V245, Journey Route548, Trade Profile JSmithProfile45); the ports are Bangladesh-Jaipur-Amsterdam (Port of Loading / Port of Discharge: Bangladesh-Amsterdam); the date is 7/08/2016; posted in Business NAICSCode767 (business sector).
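As an illustration of how a fragment of this instance data could be expressed as triples, here is a hypothetical rdflib sketch; the slide shows only node labels, so every namespace, class and property name below is invented for the example.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, XSD

AML = Namespace("http://example.org/aml/")  # hypothetical namespace
g = Graph()

tx = AML["transaction/76873"]
g.add((tx, RDF.type, AML.TradeTransaction))
g.add((tx, AML.buyer, AML["customer/ABCCorp"]))
g.add((tx, AML.hasInvoice, AML["invoice/JS54"]))
g.add((AML["invoice/JS54"], AML.totalPrice,
       Literal("54", datatype=XSD.decimal)))
g.add((tx, AML.vessel, AML["vessel/BSLE9"]))
g.add((AML["vessel/BSLE9"], AML.journey, AML["journey/Route548"]))

# Screening logic can now traverse transaction -> vessel -> journey
# across documents that originally lived in separate systems.
```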
27. Brief Answers
• Why go semantic?
  To present knowledge about your data, to allow data integration, and to bring intelligence to your system.
• Should I use RDF or OWL?
  If you just want to link or annotate your data -> use RDF. If you want to make your data smart and apply reasoning and inference -> use OWL + RDF.
• What is the difference, and what is the link?
  RDF presents data in triple format and gives it structure and unique identifiers, so that data can be easily linked. OWL provides a rich vocabulary to add semantics and context, and to allow reasoning and inference.
• Did you say smart data?
  Yes! Data that can be understood by the computer, and therefore allows for intelligent automation.
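To make "reasoning and inference" concrete, a tiny demonstration with the owlrl reasoner (one possible choice of tool; the namespace reuses the bookstore example):

```python
# Schema plus one plain data triple; the reasoner infers a fact
# that was never asserted.
import owlrl
from rdflib import Graph, Namespace
from rdflib.namespace import RDF, RDFS

EX = Namespace("http://www.mybookstore.com/")
g = Graph()

g.add((EX.isAuthorOf, RDFS.domain, EX.Author))  # schema: authors write things
g.add((EX.Kafka, EX.isAuthorOf, EX.Trial))      # data: a single triple

owlrl.DeductiveClosure(owlrl.RDFS_Semantics).expand(g)

# Never asserted, but inferred: Kafka is an Author.
print((EX.Kafka, RDF.type, EX.Author) in g)  # True
```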