This document discusses how to formalize and express completeness information about RDF data sources so that the completeness of query answers can be assessed. It presents a framework for expressing completeness statements and for using them to check whether a data source can fully answer a query. Specifically:
1. Completeness statements use SPARQL-like syntax to express the patterns for which a data source claims to be complete.
2. Query completeness is checked by testing whether the query's results can be reconstructed from the patterns of the completeness statements.
3. An example shows that DBpedia is incomplete for a query about Tarantino's movies and actors, while LinkedMDB is complete, since its statements cover both the movies and their actors.
Poster: Completeness Statements about RDF Data Sources and Their Use for Query Answering
Fariz Darari
joint work with Werner Nutt, Giuseppe Pirrò, and Simon Razniewski
KRDB, Free University of Bozen-Bolzano, Italy
Context
Thousands of RDF data sources are today available on the Web. Machine-readable qualitative descriptions of their content are crucial. We focus on data completeness, an important aspect of data quality.

Problem
How to formalize and express completeness information about RDF data sources in a machine-readable way? How to leverage such completeness information?

Contributions
1. A formal framework for expressing completeness information.
2. A study of query completeness from completeness information in various settings.

Completeness statement on the Web
[Figure: an example of a completeness statement published on an ordinary Web page, e.g., a listing marked as verified as complete.] However, such a completeness statement is only human readable!
Completeness statement on the Semantic Web
lv:lmdbdataset rdf:type void:Dataset .
lv:lmdbdataset c:hasComplStmt lv:st1 .
lv:st1 c:hasPattern
  [ c:subject [ spin:varName "m" ] ; c:predicate schema:actor ; c:object [ spin:varName "a" ] ] .
lv:st1 c:hasCondition
  [ c:subject [ spin:varName "m" ] ; c:predicate rdf:type ; c:object schema:Movie ] .
lv:st1 c:hasCondition
  [ c:subject [ spin:varName "m" ] ; c:predicate schema:director ; c:object dbp:Tarantino ] .
Semantics of completeness statements
For each completeness statement, all the triple patterns defined via hasPattern are collected into a set P1, and all the triple patterns defined via hasCondition are collected into a set P2. A completeness statement is then interpreted as the query: CONSTRUCT { P1 } WHERE { P1 . P2 }
When a data source has a completeness statement (attached via hasComplStmt), this means that if the query above were evaluated over an "ideal" graph containing all facts that hold in the real world, then all of its results would already be in the data source. Users visiting this source can prefer it to other sources.
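As a concrete illustration (a sketch following the semantics above, not shown verbatim on the poster), the statement lv:st1 given earlier, with its actor pattern and its two conditions, corresponds to the following CONSTRUCT query; the variable names ?m and ?a come from the spin:varName values:

CONSTRUCT { ?m schema:actor ?a }
WHERE {
  ?m schema:actor ?a .
  ?m rdf:type schema:Movie .
  ?m schema:director dbp:Tarantino .
}

Read as a completeness claim, this says: the data source contains every actor triple of every movie directed by Tarantino.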
Checking query completeness
Given a query Q and a data source with a set S of completeness statements:
1. Create a template answer graph GQ of Q (a prototypical graph obtained by freezing the variables of Q).
2. Over GQ, evaluate all the CONSTRUCT queries derived from S.
3. Check whether GQ can be obtained back from the evaluation.
If yes, the query is guaranteed to be complete; otherwise it might be incomplete.
Query completeness in a single data source scenario
@prefix c:      <http://inf.unibz.it/ontologies/completeness#> .
@prefix rdf:    <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix spin:   <http://spinrdf.org/sp#> .
@prefix void:   <http://rdfs.org/ns/void#> .
@prefix dv:     <http://dbpedia.org/void/> .
@prefix lv:     <http://linkedmdb.org/void/> .
@prefix dbp:    <http://dbpedia.org/resource/> .
@prefix schema: <http://schema.org> .
DBpedia (SPARQL endpoint IRI: DBPe):
dv:dbpdataset rdf:type void:Dataset .
dv:dbpdataset c:hasComplStmt dv:st1 .
dv:st1 c:hasPattern [ c:subject [ spin:varName "m" ] ;
  c:predicate rdf:type ; c:object schema:Movie ] .
dv:st1 c:hasPattern [ c:subject [ spin:varName "m" ] ;
  c:predicate schema:director ; c:object dbp:Tarantino ] .
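Since dv:st1 declares two patterns and no conditions, the CONSTRUCT query derived from it under the semantics above would be (a sketch, not shown on the poster):

CONSTRUCT { ?m rdf:type schema:Movie . ?m schema:director dbp:Tarantino }
WHERE { ?m rdf:type schema:Movie . ?m schema:director dbp:Tarantino }

That is, DBpedia claims to contain every movie directed by Tarantino, but says nothing about actors.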
LinkedMDB (SPARQL endpoint IRI: LMDBe):
lv:lmdbdataset rdf:type void:Dataset .
lv:lmdbdataset c:hasComplStmt lv:st1 .
lv:st1 c:hasPattern [ c:subject [ spin:varName "m" ] ;
  c:predicate rdf:type ; c:object schema:Movie ] .
lv:st1 c:hasPattern [ c:subject [ spin:varName "m" ] ;
  c:predicate schema:director ; c:object dbp:Tarantino ] .
lv:lmdbdataset c:hasComplStmt lv:st2 .
lv:st2 c:hasPattern [ c:subject [ spin:varName "m" ] ;
  c:predicate schema:actor ; c:object [ spin:varName "a" ] ] .
lv:st2 c:hasCondition [ c:subject [ spin:varName "m" ] ;
  c:predicate rdf:type ; c:object schema:Movie ] .
lv:st2 c:hasCondition [ c:subject [ spin:varName "m" ] ;
  c:predicate schema:director ; c:object dbp:Tarantino ] .
Query Q: select all the movies for which Tarantino is the director and also an actor.

SELECT ?m
WHERE { ?m rdf:type schema:Movie .
        ?m schema:director dbp:Tarantino .
        ?m schema:actor dbp:Tarantino }

Posed against the DBpedia SPARQL endpoint (DBPe): DBpedia is complete for all Tarantino's movies, but the answer is incomplete.
Posed against the LinkedMDB SPARQL endpoint (LMDBe): LinkedMDB is complete for all Tarantino's movies and their actors, and the answer is complete.
Extensions
- SPARQL queries with OPT
- Completeness with RDFS inference
- Federated query completeness

Work in Progress
- SPARQL queries with negations and comparisons
- Live, Web-based CoRner
- Empirical evaluation of query completeness checking
Why is DBpedia not complete for the query?
The completeness statement in DBpedia says that it is complete for Tarantino's movies (dv:st1). However, the query asks about all movies for which Tarantino is the director and also an actor. It is not stated that DBpedia includes all the actors of Tarantino's movies. Therefore, DBpedia is possibly not complete for this query.

Why is LinkedMDB complete?
The completeness statements in LinkedMDB say that it is complete for Tarantino's movies (lv:st1) and also for their actors (lv:st2).
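To see this in terms of the checking procedure (a sketch; ex:m is the hypothetical placeholder constant from the template answer graph above), evaluating the CONSTRUCT queries derived from the statements over GQ gives:

# From dv:st1 (DBpedia): only two of the three triples of GQ
ex:m rdf:type schema:Movie .
ex:m schema:director dbp:Tarantino .

# From lv:st1 and lv:st2 (LinkedMDB): all three triples of GQ
ex:m rdf:type schema:Movie .
ex:m schema:director dbp:Tarantino .
ex:m schema:actor dbp:Tarantino .

DBpedia's statements cannot reproduce the actor triple, so GQ is not obtained back and the query might be incomplete; LinkedMDB's statements reproduce GQ entirely, so the query is guaranteed to be complete.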
Implementation
CoRner: Completeness Reasoner
http://rdfcorner.wordpress.com