This book explains the Linked Data domain by adopting a bottom-up approach: it introduces the fundamental Semantic Web technologies and building blocks, which are then combined into methodologies and end-to-end examples for publishing datasets as Linked Data, and use cases that harness scholarly information and sensor data. It presents how Linked Data is used for web-scale data integration, information management and search. Special emphasis is given to the publication of Linked Data from relational databases as well as from real-time sensor data streams. The authors also trace the transformation from the document-based World Wide Web into a Web of Data. Materializing the Web of Linked Data is addressed to researchers and professionals studying software technologies, tools and approaches that drive the Linked Data ecosystem, and the Web in general.
This chapter provides an overview of the methodologies and technologies that support Linked Data design and publishing. More specifically, the chapter starts with a presentation of the rationale and a discussion of how data can be opened up (i.e. published under an open license). Basic principles are first introduced regarding the cases in which content can be opened up, and the most common approaches to accomplishing this are presented. Next, we discuss how data can be modeled, authored, serialized and stored. In this chapter we also provide an overview of the most common technical solutions and widely used software tools that can serve this purpose. Overall, the chapter aims to provide an analysis of the sub-problems into which the Linked Open Data publishing task is to be broken down, namely opening, modeling, linking, processing, and visualizing content, followed by a presentation of the most representative software solutions.
In this chapter, we introduce and discuss the problems that Linked Data solves and the concepts that are related to these problems. We introduce and analyze the basic concepts that are related to the generation of Linked Data and the Semantic Web in general. We provide a brief history of the Semantic Web and the associated evolution of concepts, problem frameworks and solution approaches, all targeted at offering efficient and intelligent solutions to information representation, management and exploitation. More specifically, we introduce the main reasons for the creation of the Semantic Web and the problems that it addresses. Next, we discuss the distinctions between basic terms such as data, information, knowledge, metadata, ontologies, semantic annotations, etc. We introduce the notions of interoperability, integration, merging and mapping, and continue by introducing ontologies, reasoners and knowledge bases, all fundamental concepts in the Linked Data ecosystem.
In this Chapter, we consider relational databases as a data source for the generation of Linked Data, given that they constitute one of the most popular data storage media, containing huge data volumes that feed the vast majority of information systems worldwide. In this context, we review the related literature and reveal the main motivations that fuel the relevant approaches, and the benefits that arise from their application. We present a categorization of approaches that map relational databases to the Semantic Web and identify tool implementations that extract RDF graphs from relational database instances. We also sketch a proof-of-concept use case scenario regarding how a repository with scholarly information can be converted to a Linked Data endpoint. The Chapter ends with a discussion of the open issues and future outlook for the problem of RDF generation from relational databases.
In this Chapter, we summarize and discuss the material presented throughout this book. We recapitulate what is presented and discussed in each Chapter. We discuss the most interesting aspects of the Web of Data landscape, highlighting its main contributions, and then continue with a discussion, mentioning our most important observations, including domain-specific benefits in the LOD domain. We conclude the Chapter with a discussion of open research challenges in the Linked Data domain.
Efficient Data Retrieval From Cloud Storage Using Data Minin... - IJCI Journal
Cloud computing is an emerging technology that allows users to perform data processing and to use storage and data-access services from around the world through the Internet. Cloud service providers charge depending on the user's usage. Imposing confidentiality and scalability on cloud data increases the complexity of cloud computing. As sensitive information is centralized into the cloud, this information must be encrypted before being uploaded, for the sake of data privacy and efficient data utilization. As the data becomes more complex and the number of users increases, searching the files must be possible through multiple keywords of the end users' interest. Traditional searchable encryption schemes allow users to search the encrypted cloud data through keywords, but they support only Boolean search, i.e., whether a keyword exists in a file or not, without any notion of relevance between the data files and the queried keyword. Searching the data in the cloud using single-keyword ranked search yields too coarse an output, and data privacy is compromised when server-side ranking is based on order-preserving encryption (OPE).
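As a toy illustration of the Boolean keyword search described above, the sketch below indexes encrypted files under keyed keyword digests ("trapdoors"), so the server can answer exists/not-exists queries without ever seeing plaintext keywords. All names (the key, the file ids) are invented, and this is only the shape of such a scheme, not a real searchable-encryption construction:

```python
import hmac
import hashlib

SECRET_KEY = b"client-side-secret"  # held by the data owner, not the cloud

def trapdoor(keyword: str) -> str:
    """Deterministic keyed digest of a keyword, computable only by key holders."""
    return hmac.new(SECRET_KEY, keyword.lower().encode(), hashlib.sha256).hexdigest()

def build_index(files: dict) -> dict:
    """Map each keyword trapdoor to the set of (encrypted) file ids containing it."""
    index = {}
    for file_id, keywords in files.items():
        for kw in keywords:
            index.setdefault(trapdoor(kw), set()).add(file_id)
    return index

def boolean_search(index, keywords):
    """AND-search: files containing every queried keyword, with no ranking."""
    result = None
    for kw in keywords:
        ids = index.get(trapdoor(kw), set())
        result = ids if result is None else result & ids
    return result or set()

index = build_index({
    "f1.enc": ["cloud", "storage"],
    "f2.enc": ["cloud", "privacy"],
})
print(boolean_search(index, ["cloud"]))             # both files match
print(boolean_search(index, ["cloud", "privacy"]))  # only f2.enc
```

Note that this is exactly the limitation the abstract criticizes: the index can only say whether a keyword occurs, not how relevant a file is to the query.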
For the Biodiversity Informatics workshop in Stockholm, Friday September 13. Describing some of the tools in mx, a collaborative web-based content management system for evolutionary systematists, particularly those working on descriptive taxonomy.
Yoder, M.J., Dole, K., Seltmann, K., and Deans, A. 2006-Present. Mx, a collaborative web-based content management system for biological systematists.
Data deduplication is a technique for eliminating duplicate copies of data, and it has been widely used in cloud storage to reduce storage space and upload bandwidth. However, only one copy of each file is stored in the cloud, even if the file is owned by a huge number of users. As a result, the system improves storage utilization while reducing reliability.
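The deduplication idea can be sketched in a few lines: identical uploads map to a single stored copy, keyed by a content hash, while each user keeps only a reference to it. Class and field names below are illustrative:

```python
import hashlib

class DedupStore:
    """Toy content-addressed store: one copy per unique byte string."""

    def __init__(self):
        self.blocks = {}   # digest -> the single stored copy
        self.owners = {}   # digest -> set of users referencing it

    def upload(self, user: str, data: bytes) -> str:
        digest = hashlib.sha256(data).hexdigest()
        self.blocks.setdefault(digest, data)          # keep only the first copy
        self.owners.setdefault(digest, set()).add(user)
        return digest

store = DedupStore()
d1 = store.upload("alice", b"report-2016.pdf contents")
d2 = store.upload("bob", b"report-2016.pdf contents")
print(d1 == d2, len(store.blocks))  # same digest, one stored copy
```

This also makes the reliability trade-off visible: losing the single stored block loses the file for every owner at once.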
Linked Data for the Masses: The approach and the Software - IMC Technologies
Title: Linked Data for the Masses: The approach and the Software
@ EELLAK (GFOSS) Conference 2010
Athens, Greece
15/05/2010
Creator: George Anadiotis (R&D Director)
LDCache - a cache for linked data-driven web applications - MetaSolutions AB
Presentation of LDCache at the Developers Workshop at the International Semantic Web Conference 2014. See http://ceur-ws.org/Vol-1268/paper12.pdf for the full paper and http://entrystore.org/ldcache/ for the project's website.
Linked Data Notifications Distributed Update Notification and Propagation on ... - AKSW Group
Distributed Update Notification and Propagation on the Web of Data with: Linked Data Notifications, PubSubHubbub, Semantic Pingback and Structured Feedback
Drupal, CKAN and Public Data. DrupalGov, 08 February 2016 - Steven De Costa
Main points from this presentation are:
DKAN is not CKAN
CKAN is owning Australian Government
Data.Vic, Data.NSW, Data.SA and Data.Brisbane use Drupal and CKAN together
Single Sign on – https://github.com/ckan/ckanext-drupal7
Taxonomies and CKAN - pulling from CKAN into Drupal to enhance content for Government websites.
Webforms to CKAN - for an 'open data' form collection process.
Resource Views for Drupal - configured for a CKAN portal and organisation.
Telling stories with data...
2016 BE Final Year Projects in Chennai - 1 Crore Projects
IEEE PROJECTS 2016 - 2017
1 Crore Projects is a leading guide and provider of IEEE projects and real-time project work.
It has provided a lot of guidance to thousands of students and made them more proficient in all technology training.
Project Domain list 2016
1. IEEE based on datamining and knowledge engineering,
2. IEEE based on mobile computing,
3. IEEE based on networking,
4. IEEE based on Image processing,
5. IEEE based on Multimedia,
6. IEEE based on Network security,
7. IEEE based on parallel and distributed systems
ECE IEEE Projects 2016
1. Matlab project
2. Ns2 project
3. Embedded project
4. Robotics project
5. IOT Projects
Eligibility
Final Year students of
1. BSc (C.S)
2. BCA/B.E(C.S)
3. B.Tech IT
4. BE (C.S)
5. MSc (C.S)
6. MSc (IT)
7. MCA
8. MS (IT)
9. ME(ALL)
10. BE(ECE)(EEE)(E&I)
TECHNOLOGY USED AND FOR TRAINING IN
1. DOT NET
2. C sharp
3. ASP
4. VB
5. SQL SERVER
6. JAVA
7. J2EE
8. STRINGS
9. ORACLE
10. VB dotNET
11. EMBEDDED
12. MAT LAB
13. LAB VIEW
14. Multi Sim
CONTACT US:-
1 CRORE PROJECTS
Door No: 66 ,Ground Floor,
No. 172, Raahat Plaza, (Shopping Mall) ,Arcot Road, Vadapalani, Chennai,
Tamil Nadu, INDIA - 600 026
Email id: 1croreprojects@gmail.com
website:1croreprojects.com
Phone : +91 97518 00789 / +91 7708150152
Secure Data Sharing For Dynamic Groups in Multi-Attorney Manner Using Cloud - paperpublications3
Abstract: Cloud computing provides an economical and efficient solution for sharing data among cloud users in a group. However, sharing data in a multi-attorney manner while preserving data and identity privacy from an untrusted cloud is still a challenging issue, due to the frequent change of membership in the group. In this paper, we propose a multi-attorney data sharing scheme for dynamic groups in the cloud. By combining group signature and Triple DES encryption techniques, any cloud user can anonymously share data with others. In addition, we analyze the security of our scheme with rigorous proofs, and demonstrate its efficiency in experiments.
Keywords: cloud computing, data sharing, privacy-preserving, access control, dynamic groups.
Title: Secure Data Sharing For Dynamic Groups in Multi-Attorney Manner Using Cloud
Author: Vijaya Kumar Patil C, Manjunath H
International Journal of Recent Research in Mathematics Computer Science and Information Technology
ISSN 2350-1022
Paper Publications
Incremental Export of Relational Database Contents into RDF Graphs - Nikolaos Konstantinou
In addition to tools offering RDF views over databases, a variety of tools exist that allow exporting database contents into RDF graphs; tools that have, in many cases, been shown to demonstrate better performance than the former. However, in cases when database contents are exported into RDF, it is not always optimal or even necessary to dump the whole database contents every time. In this paper, the problem of incremental generation and storage of the resulting RDF graph is investigated. An implementation of the R2RML standard is used in order to express mappings that associate tuples from the source database to triples in the resulting RDF graph. Next, a methodology is proposed that enables incremental generation and storage of an RDF graph based on a source relational database, and it is evaluated through a set of performance measurements. Finally, a discussion is presented regarding the authors' most important findings and conclusions.
An Approach for the Incremental Export of Relational Databases into RDF Graphs - Nikolaos Konstantinou
Several approaches have been proposed in the literature for offering RDF views over databases. In addition to these, a variety of tools exist that allow exporting database contents into RDF graphs. The approaches in the latter category have often been shown to demonstrate better performance than the ones in the former. However, when database contents are exported into RDF, it is not always optimal or even necessary to export, or dump as this procedure is often called, the whole database contents every time. This paper investigates the problem of incremental generation and storage of the RDF graph that results from exporting relational database contents. In order to express mappings that associate tuples from the source database to triples in the resulting RDF graph, an implementation of the R2RML standard is put to the test. Next, a methodology is proposed and described that enables incremental generation and storage of the RDF graph that originates from the source relational database contents. The performance of this methodology is assessed through an extensive set of measurements. The paper concludes with a discussion regarding the authors' most important findings.
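The incremental-export idea in the two abstracts above can be sketched as follows: keep a digest per source tuple and regenerate triples only for tuples whose digest has changed since the last run. The `map_row` function below is a toy stand-in for an R2RML triples map, and the namespace and property URIs are assumptions made for illustration:

```python
import hashlib

BASE = "http://example.org/person/"  # hypothetical URI namespace

def tuple_digest(row: dict) -> str:
    """Stable digest of a database tuple, used to detect changes between runs."""
    return hashlib.sha256(repr(sorted(row.items())).encode()).hexdigest()

def map_row(row: dict) -> list:
    """Toy stand-in for an R2RML mapping: one subject per primary key."""
    s = BASE + str(row["id"])
    return [(s, "http://xmlns.com/foaf/0.1/name", row["name"])]

def incremental_export(rows, seen_digests):
    """Return triples for new/changed rows only, plus the updated digest store."""
    triples, digests = [], {}
    for row in rows:
        d = tuple_digest(row)
        digests[row["id"]] = d
        if seen_digests.get(row["id"]) != d:
            triples.extend(map_row(row))  # unchanged tuples are skipped
    return triples, digests

rows_v1 = [{"id": 1, "name": "Ada"}, {"id": 2, "name": "Alan"}]
t1, store = incremental_export(rows_v1, {})
rows_v2 = [{"id": 1, "name": "Ada"}, {"id": 2, "name": "Alan Turing"}]
t2, store = incremental_export(rows_v2, store)
print(len(t1), len(t2))  # full export on the first run, one changed row after
```

A real implementation would also have to handle deleted tuples and persist the digest store alongside the RDF graph, but the skip-unchanged logic is the core of the saving the papers measure.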
This chapter introduces the semantic modeling procedure, detailing its technical characteristics, possibilities and limitations. First, we present the languages that are used for semantic description. We present RDF, RDFS and OWL, describe their expressiveness in terms of describing Web resources, and the abilities they provide in order to describe, query, administer and manage resources at a semantic layer. Next, we present the vocabularies that are used in order to provide common grounds in understanding and communicating ideas and concepts. These technologies, together with the vocabularies used, comprise the modern landscape of Semantic Web/Linked Data applications and serve as the basis for maintaining and analyzing datasets and building applications on top of them.
Transient and persistent RDF views over relational databases in the context o... - Nikolaos Konstantinou
As far as digital repositories are concerned, numerous benefits emerge from making their contents available as Linked Open Data (LOD). This leads more and more repositories in this direction. However, several factors need to be taken into account in doing so, among which is whether the transition needs to be materialized in real time or at asynchronous time intervals. In this paper we provide the problem framework in the context of digital repositories, discuss the benefits and drawbacks of both approaches, and draw our conclusions after evaluating a set of performance measurements. Overall, we argue that in contexts with infrequent data updates, as is the case with digital repositories, persistent RDF views are more efficient than real-time SPARQL-to-SQL rewriting systems in terms of query response times, especially when expensive SQL queries are involved.
Presentation created for the CILIP Cataloguing Interest Group event on Linked Data, 25th November 2013 (http://www.cilip.org.uk/cataloguing-and-indexing-group/events/linked-data-what-cataloguers-need-know-cig-event)
Linked Open Data Principles, benefits of LOD for sustainable development - Martin Kaltenböck
Presentation held on 18.09.2013 at the OKCon 2013 in Geneva, Switzerland in the course of the workshop: How Linked Open data supports Sustainable Development and Climate Change Development by Martin Kaltenböck (SWC), Florian Bauer (REEEP) and Jens Laustsen (GBPN).
In this Chapter, we focus on dealing with data originating from sensor data streams, in order to materialize an intelligent, semantically enabled data layer. First, we introduce the concepts that are covered in this Chapter: real-time, context-awareness, windowing, information fusion. Next, we outline the difficulties associated with creating a semantic sensor network, and note our architectural concerns by presenting a number of issues that have to be dealt with when designing a system for real-time information integration from distributed data sources and sensors. Finally, the anatomy of a system for end-to-end multi-sensor data fusion and semantic enrichment is illustrated, and the end-to-end information flow and the respective steps are analyzed.
Entity Linking in Queries: Tasks and Evaluation - Faegheh Hasibi
Slides for the ICTIR 2015 paper "Entity Linking in Queries: Tasks and Evaluation"
Annotating queries with entities is one of the core problem areas in query understanding. While seeming similar, the task of entity linking in queries is different from entity linking in documents and requires a methodological departure due to the inherent ambiguity of queries. We differentiate between two specific tasks, semantic mapping and interpretation finding, discuss current evaluation methodology, and propose refinements. We examine publicly available datasets for these tasks and introduce a new manually curated dataset for interpretation finding. To further deepen the understanding of task differences, we present a set of approaches for effectively addressing these tasks and report on experimental results.
My Linked Data tutorial presentation that I presented at Semtech 2012.
http://semtechbizsf2012.semanticweb.com/sessionPop.cfm?confid=65&proposalid=4724
In search of lost knowledge: joining the dots with Linked Data - jonblower
These slides are from my seminar to the University of Reading Department of Meteorology, November 2013. They contain a (hopefully not very technical) introduction to the concepts of Linked Data and how we are applying them in the CHARMe project (http://www.charme.org.uk). In CHARMe we are using Open Annotation to connect users of climate data with community-generated "commentary information" that helps them to understand a dataset's strengths and weaknesses.
The slide notes contain some helpful context, so you might like to download the PPT file!
The slides are licensed as "Creative Commons Attribution 3.0", meaning that you can do what you like with these slides provided that you credit the University of Reading for their creation. See http://creativecommons.org/licenses/by/3.0/.
Buildvoc Introduction to Linked Data, Digital Construction Week 2018 - Phil Stacey ICIOB
Learning outcomes
• structured data how can it be used in the building industry
• controlled vocabulary benefits
• standard naming convention (SMP)
• linked data vocabulary with document repository
• live demonstration of linked data knowledge library
Slides from our tutorial on Linked Data generation in the energy domain, presented at the Sustainable Places 2014 conference on October 2nd in Nice, France
This talk was given at SEMANTiCS 2014 in Leipzig. It gives an overview of how to develop an enterprise linked data strategy around controlled vocabularies based on SKOS. It discusses how knowledge graphs based on SKOS can be extended step by step according to the needs of the organization.
Keynote given at the workshop for Artificial Intelligence meets the Web of Data on Pragmatic Semantics.
In this keynote I argue that the Web of Data is a Complex System or Marketplace of Ideas rather than a classical Database, and that the model theory on which classical semantics are based is not appropriate in all situations, and propose an alternative "Pragmatic Semantics" based on optimisation of possible interpretations.
In this PPT, you will learn:
• The difference between data and information
• What a database is, the various types of databases, and why they are valuable assets for
decision making
• The importance of database design
• How modern databases evolved from file systems
• About flaws in file system data management
• The main components of the database system
• The main functions of a database management system (DBMS)
Web Content Mining Based on Dom Intersection and Visual Features Conceptijceronline
Structured Data extraction from deep Web pages is a challenging task due to the underlying complex structures of such pages. Also website developer generally follows different web page design technique. Data extraction from webpage is highly useful to build our own database from number applications. A large number of techniques have been proposed to address this problem, but all of them have inherent limitations because they present different limitations and constraints for extracting data from such webpages. This paper presents two different approaches to get structured data extraction. The first approach is non-generic solution which is based on template detection using intersection of Document Object Model Tree of various webpages from the same website. This approach is giving better result in terms of efficiency and accurately locating the main data at the particular webpage. The second approach is based on partial tree alignment mechanism based on using important visual features such as length, size, and position of web table available on the webpages. This approach is a generic solution as it does not depend on one particular website and its webpage template. It is perfectly locating the multiple data regions, data records and data items within a given web page. We have compared our work's result with existing mechanism and found our result much better for number webpage
CLARIAH Toogdag 2018: A distributed network of digital heritage informationEnno Meijers
Slides of my keynote at the CLARIAH Toogdag 2018 on 9 March at the National Library of the Netherlands. The main topics were the development of the distributed digital heritage network and the alignment to and cooperation with the CLARIAH infrastructure and data. It also points at some of the current limitations of the semantic web technology.
The slides are all about the description of database, database management system , different levels of abstraction in databases, limitations of traditional file systems and the advantages over the database.
Unlock Your Data for ML & AI using Data VirtualizationDenodo
How Denodo Complement’s Logical Data Lake in Cloud
● Denodo does not substitute data warehouses, data lakes,
ETLs...
● Denodo enables the use of all together plus other data
sources
○ In a logical data warehouse
○ In a logical data lake
○ They are very similar, the only difference is in the main
objective
● There are also use cases where Denodo can be used as data
source in a ETL flow
The FAIR data movement and 22 Feb 2023.pdfAlan Morrison
To realize the promise of FAIR data, companies must be data mature. They must adopt data-centric architecture and the #FAIR (findable, accessible, interoperable and reusable) principles. When they do, the data they need will be linked and self-describing. The data when queried will tell you where it is.
A desiloed, #semantic graph data abstraction--the only feasible means behind creating FAIR data at this point--is not only the means to data discovery, but also a path to model-driven development and data sharing at scale, both of which will break an organization's habit of duplicating data and logic.
This webinar highlights fresh enterprise case studies that are starting to realize the dream of #FAIRdata, as well as how these companies are succeeding:
- Zero copy integration: How to think about eliminating #dataduplication and stop the application buying binge that only exacerbates the problem.
- Dynamic, unified data model: Standard graphs provide a means of modeling once, use anywhere, for conceptual, logical and physical purposes all at once.
- Persuasion and teamwork: The #graph approach provides an ideal way to loop business units and domain experts in and empower them to recommend model changes that are easily implemented.
The whole process is bringing #enterprises like Walmart, Uber, Goldman Sachs and Nokia into the age of #contextualcomputing. Learn how to be a fast follower by thinking big, but starting small.
Similar to Materializing the Web of Linked Data (20)
Exposing Bibliographic Information as Linked Open Data using Standards-based ...Nikolaos Konstantinou
The Linked Open Data (LOD) movement is constantly gaining worldwide acceptance. In this paper we describe how LOD is generated in the case of digital repositories that contain bibliographic information, adopting international standards. The available options and respective choices are presented and justified while we also provide a technical description, the methodology we followed, the possibilities and difficulties in the way, and the respective benefits and drawbacks. Detailed examples are provided regarding the implementation and query capabilities, and the paper concludes after a discussion over the results and the challenges associated with our approach, and our most important observations and future plans.
Priamos: A Middleware Architecture for Real-Time Semantic Annotation of Conte...Nikolaos Konstantinou
This paper proposes a middleware architecture for the automated, real-time, unsupervised annotation of low-level context features and their mapping to high-level semantics. The distinguishing characteristic of this architecture is that both low level components such as sensors, feature extraction algorithms and data sources, and high level components such as application-specific ontologies are pluggable to the middleware architecture thus facilitating application development and system configuration to different real-world scenarios. A prototype implementation based on Semantic Web tools is presented in depth, while the benefits and drawbacks of this approach are underlined. We argue that the use of Semantic Web provides powerful answers to context awareness challenges. Furthermore, it enables the composition of simple rules through human-centric interfaces, which may launch a context-aware system that will annotate content without the need for user technical expertise. A test case of system operation in a laboratory environment is presented. Emphasis is given, along with the theoretical justification, to practical issues that arise in real-world scenarios.
From Sensor Data to Triples: Information Flow in Semantic Sensor NetworksNikolaos Konstantinou
Sensor networks constitute a technological approach of increasing popularity when it comes to monitoring an area, offering context-aware solutions. This shift from Desktop computing to Ubiquitous computing entails numerous options and challenges in designing, implementing and shaping the behavior of systems that consume, integrate, fuse and exploit sensor data. Things tend to be more complicated when, in order to extract meaning from the collected information, the Semantic Web paradigm is adopted. In this talk, we discuss the information flow in systems that collect sensor data into semantically annotated repositories. Specifically, we analyze the journey that information makes, from its capture as electromagnetic pulses by the sensors, to its storage as a semantic web triples, along with its semantics in the system’s knowledge base. We introduce the main related concepts, we analyze the main components that such systems comprise, the choices that can be made, and the respective benefits, drawbacks, and effect to the overall system properties.
A rule-based approach for the real-time semantic annotation in context-aware ...Nikolaos Konstantinou
Typically, a context-aware system is able to collect vast amounts of information coming from data collected by sensors. The problem that occurs lies mainly in how this information can be integrated and used at a semantic level, without a significant reduction in system performance. In the scope of this talk, we analyse a middleware-based pilot system, in order to study problems that concern context-aware systems that incorporate and exploit semantic information in real time. We analyze the data flow in the system and, more specifically, we present how with the use of a middleware, rules, and web services, (experimental) data can flow into the system and form a semantic Knowledge Base, able to answer semantic queries. Particular reference is made to the real-time processing of the results but also to the synchronous and asynchronous procedures that can take place in order to assure system operation and scalability.
Welocme to ViralQR, your best QR code generator.ViralQR
Welcome to ViralQR, your best QR code generator available on the market!
At ViralQR, we design static and dynamic QR codes. Our mission is to make business operations easier and customer engagement more powerful through the use of QR technology. Be it a small-scale business or a huge enterprise, our easy-to-use platform provides multiple choices that can be tailored according to your company's branding and marketing strategies.
Our Vision
We are here to make the process of creating QR codes easy and smooth, thus enhancing customer interaction and making business more fluid. We very strongly believe in the ability of QR codes to change the world for businesses in their interaction with customers and are set on making that technology accessible and usable far and wide.
Our Achievements
Ever since its inception, we have successfully served many clients by offering QR codes in their marketing, service delivery, and collection of feedback across various industries. Our platform has been recognized for its ease of use and amazing features, which helped a business to make QR codes.
Our Services
At ViralQR, here is a comprehensive suite of services that caters to your very needs:
Static QR Codes: Create free static QR codes. These QR codes are able to store significant information such as URLs, vCards, plain text, emails and SMS, Wi-Fi credentials, and Bitcoin addresses.
Dynamic QR codes: These also have all the advanced features but are subscription-based. They can directly link to PDF files, images, micro-landing pages, social accounts, review forms, business pages, and applications. In addition, they can be branded with CTAs, frames, patterns, colors, and logos to enhance your branding.
Pricing and Packages
Additionally, there is a 14-day free offer to ViralQR, which is an exceptional opportunity for new users to take a feel of this platform. One can easily subscribe from there and experience the full dynamic of using QR codes. The subscription plans are not only meant for business; they are priced very flexibly so that literally every business could afford to benefit from our service.
Why choose us?
ViralQR will provide services for marketing, advertising, catering, retail, and the like. The QR codes can be posted on fliers, packaging, merchandise, and banners, as well as to substitute for cash and cards in a restaurant or coffee shop. With QR codes integrated into your business, improve customer engagement and streamline operations.
Comprehensive Analytics
Subscribers of ViralQR receive detailed analytics and tracking tools in light of having a view of the core values of QR code performance. Our analytics dashboard shows aggregate views and unique views, as well as detailed information about each impression, including time, device, browser, and estimated location by city and country.
So, thank you for choosing ViralQR; we have an offer of nothing but the best in terms of QR code services to meet business diversity!
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
Transcript: Selling digital books in 2024: Insights from industry leaders - T...BookNet Canada
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do...UiPathCommunity
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
Key Trends Shaping the Future of Infrastructure.pdfCheryl Hung
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
The key trends across hardware, cloud and open-source; exploring how these areas are likely to mature and develop over the short and long-term, and then considering how organisations can position themselves to adapt and thrive.
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova...Ramesh Iyer
In today's fast-changing business world, Companies that adapt and embrace new ideas often need help to keep up with the competition. However, fostering a culture of innovation takes much work. It takes vision, leadership and willingness to take risks in the right proportion. Sachin Dev Duggal, co-founder of Builder.ai, has perfected the art of this balance, creating a company culture where creativity and growth are nurtured at each stage.
Securing your Kubernetes cluster_ a step-by-step guide to success !KatiaHIMEUR1
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
Le nuove frontiere dell'AI nell'RPA con UiPath Autopilot™UiPathCommunity
In questo evento online gratuito, organizzato dalla Community Italiana di UiPath, potrai esplorare le nuove funzionalità di Autopilot, il tool che integra l'Intelligenza Artificiale nei processi di sviluppo e utilizzo delle Automazioni.
📕 Vedremo insieme alcuni esempi dell'utilizzo di Autopilot in diversi tool della Suite UiPath:
Autopilot per Studio Web
Autopilot per Studio
Autopilot per Apps
Clipboard AI
GenAI applicata alla Document Understanding
👨🏫👨💻 Speakers:
Stefano Negro, UiPath MVPx3, RPA Tech Lead @ BSP Consultant
Flavio Martinelli, UiPath MVP 2023, Technical Account Manager @UiPath
Andrei Tasca, RPA Solutions Team Lead @NTT Data
Generative AI Deep Dive: Advancing from Proof of Concept to ProductionAggregage
Join Maher Hanafi, VP of Engineering at Betterworks, in this new session where he'll share a practical framework to transform Gen AI prototypes into impactful products! He'll delve into the complexities of data collection and management, model selection and optimization, and ensuring security, scalability, and responsible use.
3. Contents
1. Introduction: Linked Data and the Semantic Web
2. Technical Background
3. Deploying Linked Open Data: Methodologies and Software Tools
4. Creating Linked Data from Relational Databases
5. Generating Linked Data in Real-time from Sensor Data Streams
6. Conclusions: Summary and Outlook
Materializing the Web of Linked Data
4. Chapter 1
Introduction
Linked Data and the Semantic Web
NIKOLAOS KONSTANTINOU
DIMITRIOS-EMMANUEL SPANOS
Materializing the Web of Linked Data
6. The Origin of the Semantic Web
Semantic Web
◦ Term primarily coined by Tim Berners-Lee
◦ For the Web to be understandable both by humans and software, it should incorporate its meaning
◦ Its semantics
Linked Data implements the Semantic Web vision
Chapter 1 Materializing the Web of Linked Data 6
7. Why a Semantic Web? (1)
Information management
◦ Sharing, accessing, retrieving, consuming information
◦ Increasingly tedious task
Search engines
◦ Rely on Information Retrieval techniques
◦ Keyword-based searches
◦ Do not unlock the full potential of the information
◦ A large quantity of existing data on the Web is not stored in HTML
◦ The Deep Web
8. Why a Semantic Web? (2)
Content meaning should be taken into account
◦ Bring structure to the meaningful content of Web pages
◦ Enable software agents to
◦ Roam from page to page
◦ Carry out sophisticated tasks for users
Semantic Web
◦ Complement, not replace current Web
◦ Transform content into an exploitable source of knowledge
9. Why a Semantic Web? (3)
Web of Data
◦ An emerging web of interconnected datasets, published in the form of Linked Data
◦ Implements the Semantic Web vision
10. The Need for Adding Semantics (1)
Semantics
◦ From the Greek word σημαντικός
◦ Pronounced si̱mantikós, means significant
◦ Term typically used to denote the study of meaning
◦ Using semantics
◦ Capture the interpretation of a formal or natural language
◦ Enables entailment, i.e. the application of logical consequence
◦ Concerns relationships between the statements expressed in this language
11. The Need for Adding Semantics (2)
Syntax
◦ The study of the principles and processes by which sentences can be formed
◦ Also of Greek origin
◦ συν and τάξις
◦ Pronounced sin and taxis
◦ Mean together and ordering, respectively
12. The Need for Adding Semantics (3)
Two statements can be syntactically different but semantically equivalent
◦ Their meaning (semantics) is the same; their syntax is different
◦ There is more than one way to state the same assertion
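The distinction can be illustrated with a small sketch. The two ad hoc syntaxes below are hypothetical, invented purely for illustration: two statements written differently reduce to the same canonical triple, i.e. the same meaning.

```python
# Two syntactically different statements expressing the same fact.
# Each string encodes one subject-predicate-object assertion in a
# different (hypothetical) ad hoc syntax.
stmt_a = "capitalOf(Athens, Greece)"          # function-style syntax
stmt_b = "Athens --capitalOf--> Greece"       # arrow-style syntax

def parse_function_style(s: str) -> tuple:
    """Parse 'pred(subj, obj)' into a canonical (subj, pred, obj) triple."""
    pred, rest = s.split("(", 1)
    subj, obj = rest.rstrip(")").split(", ")
    return (subj, pred, obj)

def parse_arrow_style(s: str) -> tuple:
    """Parse 'subj --pred--> obj' into the same canonical triple form."""
    subj, rest = s.split(" --", 1)
    pred, obj = rest.split("--> ", 1)
    return (subj, pred, obj)

# Different syntax, identical semantics: both reduce to the same triple.
assert parse_function_style(stmt_a) == parse_arrow_style(stmt_b)
```

Once both forms are normalized to the same triple representation, semantic equivalence becomes a simple equality check.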
13. The Need for Adding Semantics (4)
Search engines rely on keyword matches
◦ Keyword-based technologies
◦ Have come a long way since the dawn of the Internet
◦ Return accurate results based on keyword matches
◦ Extraction and indexing of keywords contained in web pages
◦ But
◦ More complex queries involving semantics are still only partially covered
◦ Most of the information will never be queried
14. Traditional search engines
Rely on keyword matches
Low recall
◦ Important results may not be fetched, or
◦ May be ranked low
◦ When there are no exact keyword matches
15. Keyword-based searches
Do not usually return the desired results
Despite the vast amounts of available information
Typical user behavior
◦ Change the query rather than navigate beyond the first page of the search results
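The recall problem above can be sketched in a few lines. The documents and query are invented for illustration: a relevant document that uses a synonym is never retrieved by exact keyword matching.

```python
# Minimal sketch of why pure keyword matching yields low recall:
# a relevant document that uses a synonym is never retrieved.
docs = {
    1: "buying a car in Germany",
    2: "how to purchase an automobile abroad",   # relevant, but no shared keyword
}

def keyword_search(query: str, docs: dict) -> list:
    """Return ids of documents containing every query keyword exactly."""
    keywords = query.lower().split()
    return [doc_id for doc_id, text in docs.items()
            if all(kw in text.lower().split() for kw in keywords)]

print(keyword_search("car", docs))  # → [1]; document 2 is missed entirely
```

A semantics-aware search would know that "car" and "automobile" denote the same concept and would return both documents.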
16. The Semantic Web (1)
Tackle these issues
◦ By offering ways to describe Web resources
Enrich existing information
◦ Add semantics that specify the resources
◦ Understandable both by humans and computers
Information easier to discover, classify and sort
Enable semantic information integration
Increase serendipitous discovery of information
Allow inference
17. The Semantic Web (2)
Example
◦ A researcher would not have to visit every result page and distinguish the relevant from the irrelevant results
◦ Instead, they would retrieve a list of more relevant, machine-processable information
◦ Ideally containing links to further relevant information
19. Data-Information-Knowledge (1)
Data
◦ “Information, especially facts or numbers, collected to be examined and considered and used to help decision-making, or information in an electronic form that can be stored and used by a computer”
Information
◦ “facts about a situation, person, event, etc.”
Smallest information particle
◦ The bit
◦ Replies with a yes or no (1 or 0)
◦ Can carry data but, at the same time, it can carry the result of a process
◦ For instance, whether an experiment had a successful conclusion or not
20. Data-Information-Knowledge (2)
E.g. “temperature” in a relational database could be
◦ Information produced after statistical analysis over numerous measurements
◦ By a researcher wishing to extract the average value in a region over the years
◦ A sensor measurement
◦ Part of the data that will contribute to drawing conclusions regarding climate change
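The first reading of "temperature" above, information derived from raw measurements, can be sketched as follows. The sensor readings are hypothetical values chosen for illustration.

```python
# The same term "temperature" can denote raw data or derived information.
# Sketch: raw sensor measurements (data) are aggregated into an average
# value (information) that can support conclusions about a region.
from statistics import mean

# Hypothetical raw sensor readings, in degrees Celsius
readings = [18.2, 19.1, 17.8, 18.9, 19.4]

average_temperature = mean(readings)  # derived information
print(round(average_temperature, 2))  # → 18.68
```

The raw list is data; the aggregated average is information; a conclusion drawn from many such averages over the years (e.g. about climate change) would be knowledge.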
21. Data-Information-Knowledge (3)
Knowledge
◦ “Understanding of or information about a subject that you get by experience or study, either known by one person or by people generally”
◦ Key term: “by experience or study”
◦ Implies processing of the underlying information
22. Data-Information-Knowledge (4)
No clear lines between them
◦ The term used depends on the respective point of view
When we process collected data in order to extract meaning, we create new information
Addition of semantics
◦ Indispensable in generating new knowledge
◦ Semantic enrichment of the information
◦ Makes it unambiguously understood by any interested party (“by people generally”)
Data → Information → Knowledge
23. Heterogeneity
Distributed data sources
◦ Interconnected, or not
◦ Different models and schemas
◦ Differences in the vocabulary
◦ Syntactic mismatch
◦ Semantic mismatch
24. Interoperability (1)
Interoperable
◦ Two systems able to successfully exchange information
Three non-mutually exclusive approaches
◦ Mapping among the concepts of each source
◦ Intermediation in order to translate queries
◦ Query-based
Protocols and standards (recommendations) are crucial
◦ E.g. SOAP, WSDL, microformats
25. Interoperability (2)
Key concept: Schema
◦ The word comes from the Greek σχήμα
◦ Pronounced schíma
◦ Means the shape, the outline
◦ Can be regarded as a common agreement regarding the interchanged data
◦ Data schema
◦ Defines how the data is to be structured
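A data schema as a common agreement can be sketched as follows. The record fields and types below are hypothetical, chosen for illustration: two systems can interoperate because both agree on what a "sensor reading" record must contain.

```python
# A schema as a common agreement on interchanged data: a minimal sketch,
# assuming both systems agree that a "sensor reading" record must carry
# exactly these fields with these types (hypothetical schema).
SENSOR_READING_SCHEMA = {"sensor_id": str, "value": float, "unit": str}

def conforms(record: dict, schema: dict) -> bool:
    """Check that a record has exactly the agreed fields and types."""
    return (record.keys() == schema.keys() and
            all(isinstance(record[k], t) for k, t in schema.items()))

good = {"sensor_id": "t-01", "value": 18.7, "unit": "C"}
bad  = {"sensor_id": "t-01", "value": "warm"}   # wrong type, missing field

assert conforms(good, SENSOR_READING_SCHEMA)
assert not conforms(bad, SENSOR_READING_SCHEMA)
```

A record that violates the agreement is rejected before exchange, which is exactly the role a shared schema plays between interoperating systems.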
26. Information Integration (1)
Combine information from heterogeneous systems and sources of storage and processing
◦ Ability to process and handle it as a whole
Global-As-View
◦ Every element of the global schema is expressed as a query/view over the schemas of the sources
◦ Preferable when the source schemas are not subject to frequent changes
27. Information Integration (2)
Local-As-View
◦ Every element of the local schemas is expressed as a query/view over the global schema
P2P
◦ Mappings among the sources exist, but no common schema
28. Information Integration Architecture (1)
A source Π1 with schema S1
A source Π2 with schema S2
…
A source Πν with schema Sν
A global schema S in which the higher-level queries are posed
29. Information Integration Architecture (2)
The goal in the information integration problem
◦ Submit queries to the global schema S
◦ Receive answers from S1, S2, …, Sν
◦ Without having to deal with, or even be aware of, the heterogeneity of the information sources
30. Data Integration (1)
A data integration system
◦ A triple I = ⟨G, S, M⟩
◦ G is the global schema,
◦ S is the source schema, and
◦ M is a set of mappings between G and S
31. Data Integration (2)
Local-As-View
◦ Each declaration in M maps an element of the source schema S to a query (a view) over the global schema G
Global-As-View
◦ Each declaration in M maps an element of the global schema G to a query (a view) over the source schema S
The global schema G is a unified view over the heterogeneous set of data sources
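A minimal Global-As-View sketch, with hypothetical sources and field names: each element of the global schema is defined as a query (view) over the source schemas, and a query posed against the global schema is answered from all sources.

```python
# Global-As-View sketch: each element of the global schema G is defined
# as a query (view) over the source schemas. Sources and field names are
# hypothetical; "queries" are plain Python over lists of dicts.
source1 = [{"name": "Alice", "temp_c": 18.5}]    # schema S1
source2 = [{"person": "Bob", "temp_f": 68.0}]    # schema S2

def global_readings():
    """The global relation readings(person, celsius) as a view over S1, S2."""
    view = [{"person": r["name"], "celsius": r["temp_c"]} for r in source1]
    view += [{"person": r["person"], "celsius": (r["temp_f"] - 32) * 5 / 9}
             for r in source2]
    return view

# A query against the global schema is answered from both sources,
# without the user dealing with (or being aware of) their heterogeneity.
for row in global_readings():
    print(row["person"], round(row["celsius"], 1))
```

The view absorbs the schema differences (field names, Fahrenheit vs. Celsius), so users of the global schema see one unified relation.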
32. Data Integration (3)
Same goal
◦ Unifying the data sources under the same common schema
Semantic information integration
◦ Also incorporates the semantics of the data in the resulting integration scheme
33. Mapping
Using the definition of data integration systems, we can define the concept of a mapping
◦ A mapping m (member of M) from a schema S to a schema T
◦ A declaration of the form QS ⇝ QT
◦ QS is a query over S
◦ QT is a query over T
34. Mapping vs. Merging
(Data) mapping
◦ A mapping is the specification of a mechanism
◦ The members of a model are transformed into members of another model
◦ The meta-models can be the same or different
◦ A mapping can be declared as a set of relationships, constraints, rules, templates or parameters
◦ Defined during the mapping process, or through other forms that have not yet been defined
Merging
◦ Implies unifying the information at the implementation/storage layer
35. Annotation (1)
Addition of metadata to the data
Data can be encoded in any standard
Especially important in cases where the data is not human-understandable in its primary form
◦ E.g. multimedia
36. Annotation (2)
(Simple) annotation
◦ Uses keywords or some other ad hoc serialization
◦ Impedes further processing of the metadata
Semantic annotation
◦ Describes the data using a common, established way
◦ Adds the semantics to the annotation (i.e. the metadata)
◦ In a way that makes the annotation machine-processable
◦ In order to be able to infer additional knowledge
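A minimal sketch of the difference: annotations expressed as statements over a shared vocabulary remain machine-processable. The vocabulary, property names and resource names below are hypothetical.

```python
# Semantic annotation sketch: instead of free-text keywords, metadata is
# expressed as (subject, property, value) statements over a shared,
# agreed vocabulary, so that it stays machine-processable.
VOCABULARY = {"depicts", "createdBy", "recordedAt"}   # hypothetical terms

def annotate(resource: str, prop: str, value: str, annotations: set):
    """Add one machine-processable annotation, enforcing the shared vocabulary."""
    if prop not in VOCABULARY:
        raise ValueError(f"unknown property: {prop}")
    annotations.add((resource, prop, value))

annotations = set()
annotate("video42.mp4", "depicts", "a laboratory experiment", annotations)
annotate("video42.mp4", "createdBy", "sensor-lab-3", annotations)

# Uniform structure makes the metadata queryable:
print(sorted(v for s, p, v in annotations if p == "depicts"))
```

Ad hoc keyword tags offer no such structure: they can be stored but not reliably queried or reasoned over.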
37. Problems with (Semantic) Annotation
Time-consuming
◦ Users simply do not have enough time, or do not consider it important enough, to invest time in annotating their content
Familiarity required with both the conceptual and technical parts of the annotation
◦ The user performing the annotation must be familiar with both the technical and the conceptual part
Outdated annotations
◦ A risk, especially in rapidly changing environments that lack automation in the annotation process
38. Automated Annotation
Incomplete annotation is preferable to no annotation at all
◦ E.g. in video streams
Limitations of systems performing automated annotation
◦ Limited recall and precision (lost or inaccurate annotations)
39. Metadata (1)
Data about data
Efficiently materialized with the use of ontologies
Ontologies can be used to describe Web resources
◦ Adding descriptions of their semantics and their relations
◦ Aims to make resources machine-understandable
◦ Each (semantic) annotation can correspond to a piece of information
◦ It is important to follow a common standard
40. Metadata (2)
Annotation is commonly referred to as “metadata”
Usually in a semi-structured form
◦ Semi-structured data sources
◦ Structure accompanies the metadata
◦ E.g. XML, JSON
◦ Structured data sources
◦ Structure is stored separately
◦ E.g. relational databases
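The semi-structured case can be illustrated with a small JSON fragment, where the structure travels with the metadata itself. The field names are chosen for illustration.

```python
# Semi-structured metadata: the structure accompanies the data itself
# (here as JSON), rather than being stored separately as in a relational
# schema. Field names are hypothetical.
import json

record = json.loads("""
{
  "title": "Materializing the Web of Linked Data",
  "authors": ["Nikolaos Konstantinou", "Dimitrios-Emmanuel Spanos"],
  "topics": {"primary": "Linked Data", "secondary": "Semantic Web"}
}
""")

# The nesting itself encodes the structure; no external schema is needed
# to navigate the record.
print(record["topics"]["primary"])  # → Linked Data
```

In a relational database, by contrast, the same information would be spread across tables whose structure lives in a separately stored schema.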
41. Metadata (3)
Powerful languages
◦ Practically unlimited hierarchical structure
◦ Covers most of the description requirements that may occur
◦ Storage in separate files allows collaboration with communication protocols
◦ Files are independent from the environment in which they reside
◦ Resilience to technological evolutions
Negatives
◦ Limited expressivity of the description model
◦ Limited means of structuring the information
42. Metadata (4)
Inclusion of semantics
◦ Need for more expressive capabilities and terminology
Ontologies
◦ Offer a richer way of describing information
◦ E.g. defining relationships among concepts such as subclass/superclass,
mutually disjoint concepts, inverse concepts, etc.
◦ RDF can be regarded as the evolution of XML
◦ OWL enables more comprehensive, precise and consistent
description of Web Resources
43. Ontologies (1)
An additional layer of description, added to an unclear model in order
to further clarify it
Conceptual description model of a domain
◦ Describe its related concepts and their relationships
Can be understood both by human and computer
◦ A shared conceptualization of a given specific domain
Aim at bridging and integrating multiple and heterogeneous
digital content on a semantic level
44. Ontologies (2)
In Philosophy, a systematic recording of “Existence”
For a software system, something that “exists” is something
that can be represented
◦ A set of definitions that associate the names of entities in the
universe of discourse with human-readable text describing the
meaning of the names
◦ A set of formal axioms that constrain the interpretation and well-formed use of these terms
The specification of a conceptualization (Gruber 1995)
45. Ontologies (3)
Can be used to model concepts regardless of how general or specific
these concepts are
According to the degree of generalization
◦ Top-level ontology
◦ Domain ontology
◦ Task ontology
◦ Application ontology
Content is made suitable for machine consumption
◦ Automated increase of the system knowledge
◦ Logical rules to infer implicitly declared facts
46. Reasoners (1)
Software components
Validate consistency of an ontology
◦ Perform consistency checks
◦ Concept satisfiability and classification
Infer implicitly declared knowledge
◦ Apply simple rules of deductive reasoning
47. Reasoners (2)
Basic reasoning procedures
◦ Consistency checking
◦ Concept satisfiability
◦ Concept subsumption
◦ Instance checking (Realization)
Properties of special interest
◦ Termination
◦ Soundness
◦ Completeness
48. Reasoners (3)
Research in Reasoning
◦ Tradeoff between
◦ Expressiveness of ontology definition languages
◦ Computational complexity of the reasoning procedure
◦ Discovery of efficient reasoning algorithms
Commercial
◦ E.g. RacerPro
Free of charge
◦ E.g. Pellet, FaCT++
49. Knowledge Bases
Terminological Box (TBox)
◦ Concept descriptions
◦ Intensional knowledge
◦ Terminology and vocabulary
Assertional Box (ABox)
◦ The real data
◦ Extensional knowledge
◦ Assertions about named individuals in
terms of the TBox vocabulary
[Figure: Knowledge base architecture — a Description Language defines the TBox and ABox that form the Knowledge Base; a Reasoning component operates over both, serving Application Programs and Rules]
50. Knowledge Bases vs. Databases (1)
A naïve approach:
◦ TBox ≡ Schema of the relational database
◦ ABox ≡ Schema instance
However, things are more complex than that
51. Knowledge Bases vs. Databases (2)
Relational model
◦ Supports only untyped relationships among relations
◦ Does not provide enough features to assert complex relationships among data
◦ Used in order to manipulate large and persistent models of relatively simple
data
Ontological scheme
◦ Allows more complex relationships
◦ Can provide answers about the model that have not been explicitly stated to
it, with the use of a reasoner
◦ Contains fewer but more complex data
52. Closed vs. Open World Assumption (1)
Closed world assumption
◦ Relational databases
◦ Everything that has not been stated as true is false
◦ What is not currently known to be true is false
◦ A null value about a subject’s property denotes the non-existence
◦ A NULL value in the isCapital field of a table Cities claims that the city is not a capital
◦ The database answers with certainty
◦ A query “select cities that are capitals” will not return a city with a null value at a supposed
boolean isCapital field
53. Closed vs. Open World Assumption (2)
Open world assumption
◦ Knowledge Bases
◦ A query can return three types of answers
◦ True, false, cannot tell
◦ Information that is not explicitly declared as true is not necessarily false
◦ It can also be unknown
◦ Lack of knowledge does not imply falsity
◦ A question “Is Athens a capital city?” in an appropriate schema will
return “cannot tell” if the knowledge base has no relevant information
◦ A database schema would return false, in the case of a null value
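The contrast between the two assumptions can be sketched in a few lines of Python (the `facts` dictionary and city names are invented for illustration; `None` plays the role of a NULL value):

```python
# The same facts under the closed and open world assumptions.
facts = {
    ("Paris", "isCapital"): True,
    ("Lyon", "isCapital"): False,
    ("Athens", "isCapital"): None,   # nothing is known about Athens
}

def closed_world(city):
    # CWA: whatever is not known to be true is false (NULL counts as false).
    return facts.get((city, "isCapital")) is True

def open_world(city):
    # OWA: lack of knowledge does not imply falsity.
    value = facts.get((city, "isCapital"))
    return "cannot tell" if value is None else value

print(closed_world("Athens"))   # False
print(open_world("Athens"))     # cannot tell
```

A query under the CWA always answers with certainty, while under the OWA the third answer, “cannot tell”, is possible.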
54. Monotonicity
A feature present in Knowledge Bases
A system is considered monotonic when new facts do not
discard existing ones
First version of SPARQL did not include update/delete
functionality
◦ Included in SPARQL 1.1
◦ Recommendation describes that these functions should be supported
◦ Does not describe the exact behavior
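The update operations standardized in SPARQL 1.1 Update look as follows (a minimal sketch; the ex: triple is invented for illustration):

```sparql
PREFIX ex: <http://www.example.org/>

# Adding a fact: consistent with monotonic growth of the graph
INSERT DATA { ex:bradPitt a ex:Actor . }

# Removing a fact: introduced in SPARQL 1.1, going beyond pure monotonicity
DELETE DATA { ex:bradPitt a ex:Actor . }
```

The recommendation requires implementations to support these operations but, as noted above, does not pin down their exact behavior.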
56. The LOD Cloud (1)
Structured data
Open format
Available for everyone to use
Published on the Web and connected using Web
technologies
Related data that was not previously linked
◦ Or was linked using other methods
57. The LOD Cloud (2)
Using URIs and RDF for this goal is very convenient
◦ Data can be interlinked
◦ Create a large pool of data
◦ Ability to search, combine and exploit
◦ Navigate between different data sources, following RDF links
◦ Browse a potentially endless Web of connected data sources
58. The LOD Cloud (3)
Many applications, increasingly found outside academia
Technology maturity
Open state/government data
◦ data.gov (US)
◦ data.gov.uk (UK)
◦ data.gov.au (Australia)
◦ opengov.se (Sweden)
Open does not necessarily mean Linked
59. The LOD Cloud (4)
Published datasets span several domains of human
activities
◦ Much more beyond government data
◦ Form the LOD cloud
◦ Constantly increasing in terms of volume
60. The LOD Cloud (5)
Evolution
[Figure: Evolution of the LOD cloud (2007, 2009, 2014) — datasets grouped by domain: Geographic, Media, Publications, User-Generated Content, Government, Cross-Domain, Life Sciences, Social Networking]
62. Outline
Introduction
RDF and RDF Schema
Description Logics
Querying RDF data with SPARQL
Mapping relational data with R2RML
Other technologies
Ontologies
Datasets
Chapter 2 Materializing the Web of Linked Data 62
63. Introduction
Linked Data
◦ A set of technologies
◦ Focused on the web
◦ Use the Web as a storage and communication layer
◦ Provide meaning to web content
Semantics
◦ Added value when part of a larger context
◦ Data modeled as a graph
Technologies fundamental to the web
64. HTTP – HyperText Transfer Protocol
An application protocol for the management and transfer of
hypermedia documents in decentralized information systems
Defines a number of request types and the expected actions
that a server should carry out when receiving such requests
Serves as a mechanism to
◦ Serialize resources as a stream of bytes
◦ E.g. a photo of a person
◦ Retrieve descriptions about resources that cannot be sent over network
◦ E.g. the person itself
65. URI – Uniform Resource Identifier
URL
◦ Identifies a document location
◦ Address of a document or other entity that can be found online
URI
◦ Provides a more generic means to identify anything that exists in the
world
IRI
◦ Internationalized URI
IRI ⊇ URI ⊇ URL
66. HTML – HyperText Markup Language
A markup language for the composition and presentation of
various types of content into web pages
◦ E.g. text, images, multimedia
Documents delivered through HTTP are usually expressed in
HTML
◦ HTML5
◦ Current recommendation
◦ Additional markup tags
◦ Extends support for multimedia and mathematical content
67. XML – eXtensible Markup Language
Allows for strict definition of the structure of information
◦ Markup tags
The RDF model can also be serialized in an XML syntax (RDF/XML)
68. Outline
Introduction
RDF and RDF Schema
Description Logics
Querying RDF data with SPARQL
Mapping relational data with R2RML
Other technologies
Ontologies
Datasets
69. Modeling Data Using RDF Graphs
Ontologies in the Semantic Web
◦ Model a system’s knowledge
◦ Based on RDF
◦ Model the perception of the world as a graph
◦ OWL builds on top of RDF
RDF
◦ A data model for the Web
◦ A framework that allows representing knowledge
◦ Model knowledge as a directed and labeled graph
70. RDF (1)
The cornerstone of the Semantic Web
A common representation of web resources
First publication in 1999
Can represent data from other data models
◦ Makes it easy to integrate data from multiple heterogeneous sources
Main idea:
◦ Model every resource with respect to its relations (properties) to
other web resources
71. RDF (2)
Relations form triples
◦ First term: subject
◦ Second term: property
◦ Third term: object
RDF statements contain triples, in the form:
(resource, property, resource)
or (subject, property, object)
[Figure: RDF graph — ex:staffMember ex:hasWorkAddress → an address node with ex:hasCity “London”, ex:hasAddress “23, Houghton str.”, ex:hasPostalCode “WC2A 2AE”]
72. Namespaces (1)
Initially designed to prevent confusion in XML names
Allow document elements to be uniquely identified
Declared in the beginning of XML documents
xmlns:prefix="location"
73. Namespaces (2)
Prefix Namespace URI Description
rdf http://www.w3.org/1999/02/22-rdf-syntax-ns# The built-in RDF vocabulary
rdfs http://www.w3.org/2000/01/rdf-schema# RDFS adds structure to RDF
owl http://www.w3.org/2002/07/owl# OWL terms
xsd http://www.w3.org/2001/XMLSchema# The RDF-compatible XML Schema datatypes
dc http://purl.org/dc/elements/1.1/ The Dublin Core standard for digital object description
foaf http://xmlns.com/foaf/0.1/ The FOAF (Friend of a Friend) vocabulary
skos http://www.w3.org/2004/02/skos/core# Simple Knowledge Organization System
void http://rdfs.org/ns/void# Vocabulary of Interlinked Datasets
sioc http://rdfs.org/sioc/ns# Semantically-Interlinked Online Communities
cc http://creativecommons.org/ns# Creative Commons terms for expressing licensing information
rdfa http://www.w3.org/ns/rdfa# RDFa vocabulary
74. RDF Serialization
RDF is mainly destined for machine consumption
Several ways to express RDF graphs in machine-readable
(serialization) formats
◦ N-Triples
◦ Turtle
◦ N-Quads
◦ TriG
◦ RDF/XML
◦ JSON-LD
◦ RDFa
[Figure: RDF serialization formats — the Turtle family (N-Triples, Turtle, TriG, N-Quads), plus RDF/XML, JSON-LD and RDFa; TriG, N-Quads and JSON-LD support multiple graphs]
75. N-Triples Serialization
<http://www.example.org/bradPitt> <http://www.example.org/isFatherOf>
<http://www.example.org/maddoxJoliePitt>.
<http://www.example.org/bradPitt> <http://xmlns.com/foaf/0.1/name> "Brad Pitt".
<http://www.example.org/bradPitt> <http://xmlns.com/foaf/0.1/based_near> _:x.
_:x <http://www.w3.org/2003/01/geo/wgs84_pos#lat> "34.1000".
_:x <http://www.w3.org/2003/01/geo/wgs84_pos#long> "118.3333".
76. Turtle Serialization
@prefix ex: <http://www.example.org/> .
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
@prefix geo: <http://www.w3.org/2003/01/geo/wgs84_pos#> .
ex:bradPitt ex:isFatherOf ex:maddoxJoliePitt;
foaf:name "Brad Pitt";
foaf:based_near _:x.
_:x geo:lat "34.1000";
geo:long "118.3333".
77. TriG Serialization
@prefix ex: <http://www.example.org/> .
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
@prefix geo: <http://www.w3.org/2003/01/geo/wgs84_pos#> .
GRAPH <http://www.example.org/graphs/brad> {
ex:bradPitt ex:isFatherOf ex:maddoxJoliePitt;
foaf:name "Brad Pitt";
foaf:based_near _:x.
_:x geo:lat "34.1000";
geo:long "118.3333".
}
78. RDF/XML Serialization
<?xml version="1.0" encoding="utf-8"?>
<rdf:RDF xmlns:ex="http://www.example.org/"
xmlns:foaf="http://xmlns.com/foaf/0.1/"
xmlns:geo="http://www.w3.org/2003/01/geo/wgs84_pos#">
<rdf:Description rdf:about="http://www.example.org/bradPitt">
<ex:isFatherOf rdf:resource="http://www.example.org/maddoxJoliePitt"/>
<foaf:name>Brad Pitt</foaf:name>
<foaf:based_near rdf:nodeID="A0"/>
</rdf:Description>
<rdf:Description rdf:nodeID="A0">
<geo:lat>34.1000</geo:lat>
<geo:long>118.3333</geo:long>
</rdf:Description>
</rdf:RDF>
80. RDFa Serialization
<html>
<head>
…
</head>
<body vocab="http://xmlns.com/foaf/0.1/">
<div resource="http://www.example.org/bradPitt">
<p>Famous American actor <span property="name">Brad Pitt</span>'s
eldest son is
<a property="http://www.example.org/isFatherOf"
href="http://www.example.org/maddoxJoliePitt">Maddox Jolie-Pitt</a>.
</p>
</div>
</body>
</html>
81. The RDF Schema (1)
RDF
◦ A graph model
◦ Provides basic constructs for defining a graph structure
RDFS
◦ A semantic extension of RDF
◦ Provides mechanisms for assigning meaning to the RDF nodes and
edges
◦ RDF Schema is not to RDF what XML Schema is to XML!
82. The RDF Schema (2)
Classes
◦ A definition of groups of resources
◦ Members of a class are called instances of this class
Class hierarchy
Property hierarchy
Property domain and range
RDFS specification also refers to the RDF namespace
Prefix Namespace
rdfs http://www.w3.org/2000/01/rdf-schema#
rdf http://www.w3.org/1999/02/22-rdf-syntax-ns#
83. The RDF Schema (3)
A Class named Actor
◦ Where ex: stands for http://www.example.org/
ex:Actor rdf:type rdfs:Class.
Class membership example
ex:bradPitt rdf:type ex:Actor.
Containment relationship
ex:Actor rdfs:subClassOf ex:MovieStaff.
Using reasoning, we can infer
ex:bradPitt rdf:type ex:MovieStaff.
◦ In practice, we could store inferred triples
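The inference step above can be sketched in plain Python (a toy forward-chaining rule over triples encoded as tuples, not a real reasoner):

```python
# Toy RDFS inference: rdf:type propagates up rdfs:subClassOf links.
triples = {
    ("ex:Actor", "rdf:type", "rdfs:Class"),
    ("ex:bradPitt", "rdf:type", "ex:Actor"),
    ("ex:Actor", "rdfs:subClassOf", "ex:MovieStaff"),
}

def infer_types(triples):
    """Apply (x type C) + (C subClassOf D) => (x type D) until a fixpoint."""
    inferred = set(triples)
    while True:
        new = {(x, "rdf:type", d)
               for (x, p, c) in inferred if p == "rdf:type"
               for (c2, p2, d) in inferred
               if p2 == "rdfs:subClassOf" and c2 == c}
        if new <= inferred:          # nothing new: fixpoint reached
            return inferred
        inferred |= new

# ("ex:bradPitt", "rdf:type", "ex:MovieStaff") is now among the inferred triples
closure = infer_types(triples)
```

A real RDFS reasoner applies several such entailment rules (for subclasses, subproperties, domains and ranges) in the same fixpoint style.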
84. The RDF Schema (4)
No restrictions on the form of the class hierarchy that can be defined
in an ontology
◦ Not a strict tree-like hierarchy, as in regular taxonomies
◦ More complex graph-like structures are allowed
◦ Classes may have more than one superclass
Allows hierarchies of properties to be defined
◦ Just like class hierarchies
◦ Introduced via the rdfs:subPropertyOf property
ex:participatesIn rdf:type rdf:Property.
ex:starsIn rdf:type rdf:Property.
ex:starsIn rdfs:subPropertyOf ex:participatesIn.
85. The RDF Schema (5)
◦ Inferred triple (via the property hierarchy of the previous slide):
ex:bradPitt ex:starsIn ex:worldWarZ.
ex:bradPitt ex:participatesIn ex:worldWarZ.
Define the classes to which RDF properties can be applied
◦ rdfs:domain and rdfs:range
ex:starsIn rdfs:domain ex:Actor.
Then, the assertion
ex:georgeClooney ex:starsIn ex:idesOfMarch.
Entails
ex:georgeClooney rdf:type ex:Actor.
86. The RDF Schema (6)
Similarly
ex:starsIn rdfs:range ex:Movie.
Produces
ex:worldWarZ rdf:type ex:Movie.
Domain and range axioms
◦ Are not constraints that data need to follow
◦ Function as rules that lead to the production of new triples and knowledge
87. The RDF Schema (7)
Example
ex:georgeClooney ex:starsIn ex:bradPitt.
◦ Infers:
ex:bradPitt rdf:type ex:Movie.
An individual entity can thus be a member of both the ex:Actor and ex:Movie
classes
Restrictions that forbid this are only available in more expressive ontology
languages, such as OWL
88. Reification (1)
Statements about other RDF statements
Ability to treat an RDF statement as an RDF resource
◦ Make assertions about that statement
The statement
<http://www.example.org/person/1> foaf:name "Brad Pitt" .
Becomes:
<http://www.example.org/statement/5> a rdf:Statement;
rdf:subject <http://www.example.org/person/1>;
rdf:predicate foaf:name;
rdf:object "Brad Pitt".
89. Reification (2)
Useful when referring to an RDF triple in order to
◦ Describe properties that apply to it
◦ e.g. provenance or trust
◦ E.g. assign a trust level of 0.8
<http://www.example.org/statement/5> ex:hasTrust "0.8"^^xsd:float.
90. Classes (1)
rdfs:Resource
◦ All things described by RDF are instances of the class rdfs:Resource
◦ All classes are subclasses of this class, which is an instance of
rdfs:Class
rdfs:Literal
◦ The class of all literals
◦ An instance of rdfs:Class
◦ Literals are represented as strings but they can be of any XSD
datatype
91. Classes (2)
rdfs:langString
◦ The class of language-tagged string values
◦ It is a subclass of rdfs:Literal and an instance of rdfs:Datatype
◦ Example: "foo"@en
rdfs:Class
◦ The class of all classes, i.e. the class of all resources that are RDF
classes
92. Classes (3)
rdfs:Datatype
◦ The class of all the data types
◦ An instance but also a subclass of rdfs:Class
◦ Every instance of rdfs:Datatype is also a subclass of rdfs:Literal
rdf:HTML
◦ The class of HTML literal values
◦ An instance of rdfs:Datatype
◦ A subclass of rdfs:Literal
93. Classes (4)
rdf:XMLLiteral
◦ The class of XML literal values
◦ An instance of rdfs:Datatype
◦ A subclass of rdfs:Literal
rdf:Property
◦ The class of all RDF properties
94. Properties (1)
rdfs:domain
◦ Declares the domain of a property P
◦ The class of all the resources that can appear as S in a triple (S, P, O)
rdfs:range
◦ Declares the range of a property P
◦ The class of all the resources that can appear as O in a triple (S, P, O)
95. Properties (2)
rdf:type
◦ A property that is used to state that a resource is an instance of a
class
rdfs:label
◦ A property that provides a human-readable version of a resource’s
name
96. Properties (3)
rdfs:comment
◦ A property that provides a human-readable description of a
resource
rdfs:subClassOf
◦ Relates a class to one of its superclasses
◦ A class can have more than one superclass
rdfs:subPropertyOf
◦ Relates a property to one of its superproperties
◦ A property can have more than one superproperty
97. Container Classes and Properties (1)
rdfs:Container
◦ Superclass of all classes that can contain instances such as rdf:Bag, rdf:Seq and
rdf:Alt
rdfs:member
◦ Superproperty to all the properties that declare that a resource belongs to a
class that can contain instances (container)
rdfs:ContainerMembershipProperty
◦ A property that is used in declaring that a resource is a member of a container
◦ Every instance is a subproperty of rdfs:member
98. Container Classes and Properties (2)
rdf:Bag
◦ The class of unordered containers
◦ A subclass of rdfs:Container
rdf:Seq
◦ The class of ordered containers
◦ A subclass of rdfs:Container
rdf:Alt
◦ The class of containers of alternatives
◦ A subclass of rdfs:Container
99. Collections (1)
rdf:List
◦ The class of RDF Lists
◦ An instance of rdfs:Class
◦ Can be used to build descriptions of lists and other list-like
structures
rdf:nil
◦ An instance of rdf:List that is an empty rdf:List
100. Collections (2)
rdf:first
◦ The first item in the subject RDF list
◦ An instance of rdf:Property
rdf:rest
◦ The rest of the subject RDF list after the first item
◦ An instance of rdf:Property
rdf:_1, rdf:_2, rdf:_3, etc.
◦ Sub-properties of rdfs:member
◦ Instances of the class rdfs:ContainerMembershipProperty
101. Reification (1)
rdf:Statement
◦ The class of RDF statements
◦ An instance of rdfs:Class
rdf:subject
◦ The subject of the subject RDF statement
◦ An instance of rdf:Property
102. Reification (2)
rdf:predicate
◦ The predicate of the subject RDF statement
◦ An instance of rdf:Property
rdf:object
◦ The object of the subject RDF statement
◦ An instance of rdf:Property
103. Utility Properties
rdf:value
◦ Idiomatic property used for describing structured values
◦ An instance of rdf:Property
rdfs:seeAlso
◦ A property that indicates a resource that might provide additional information
about the subject resource
rdfs:isDefinedBy
◦ A property that indicates a resource defining the subject resource
◦ The defining resource may be an RDF vocabulary in which the subject
resource is described
104. Outline
Introduction
RDF and RDF Schema
Description Logics
Querying RDF data with SPARQL
Mapping relational data with R2RML
Other technologies
Ontologies
Datasets
105. Ontologies Based on Description Logics
Language roots are in Description Logics (DL)
DAML, OIL, and DAML + OIL
OWL and OWL 2 are the latest results in this direction
Formal semantics
Syntactically compatible with the RDF serializations
◦ Ontologies in OWL can be queried in the same way as RDF graphs
106. Description Logics (1)
Offers the language for the description and manipulation of
independent individuals, roles, and concepts
There can be many DL languages
◦ Describe the world using formulas
Formulas constructed using
◦ Sets of concepts, roles, individuals
◦ Constructors, e.g. intersection ⊓, union ⊔, existential restriction ∃, universal restriction ∀
107. Description Logics (2)
Variables in DL can represent arbitrary world objects
E.g. a DL formula can declare that a number x greater than
zero exists:
◦ ∃x : greaterThan(x, 0)
Adding more constructors to the basic DL language increases
expressiveness
◦ Possible to describe more complex concepts
108. Description Logics (3)
E.g. concept disjunction (union, C ⊔ D)
◦ Parent ≡ Father ⊔ Mother
Expressiveness used in describing the world varies according
to the subset of the language that is used
Differentiation affects
◦ World description capabilities
◦ Behavior and performance of processing algorithms
109. The Web Ontology Language (1)
OWL language is directly related to DL
◦ Different “flavors” correspond to different DL language subsets
Successor of DAML+OIL
◦ Creation of the OWL language officially began with the initiation of the DAML
project
◦ DAML, combined with OIL, led to the creation of DAML+OIL
◦ An extension of RDFS
◦ OWL is the successor of DAML+OIL
W3C recommendation, currently in version 2
110. The Web Ontology Language (2)
Designed in order to allow applications to process the
information content itself instead of simply presenting the
information
The goal is to provide a schema that will be compatible both
to the Semantic Web and the World Wide Web architecture
Makes information more machine- and human-processable
111. The Web Ontology Language (3)
First version comprised three flavors
◦ Lite, DL, Full
OWL Lite
◦ Designed keeping in mind that it had to resemble RDFS
OWL DL
◦ Guarantees that all reasoning procedures terminate and return a result
OWL Full
◦ The whole wealth and expressiveness of the language
◦ Reasoning is not guaranteed to be finite
◦ Even for small declaration sets
112. The Web Ontology Language (4)
An OWL ontology may also comprise declarations from RDF
and RDFS
◦ E.g. rdfs:subClassOf, rdfs:range, rdf:resource
OWL uses them and relies on them in order to model its
concepts
[Figure: OWL built on RDF(S) — owl:Class as a subclass of rdfs:Class; owl:DatatypeProperty, owl:ObjectProperty and owl:AnnotationProperty as subclasses of rdf:Property; all specializing rdfs:Resource]
113. OWL 2
Very similar overall structure to the first version of OWL
Backwards compatible
◦ Every OWL 1 ontology is also an OWL 2 ontology
114. OWL 2 Additional Features (1)
Additional property and qualified cardinality constructors
◦ Minimum, maximum or exact qualified cardinality restrictions,
object and data properties
◦ E.g. ObjectMinCardinality, DataMaxCardinality
Property chains
◦ Properties as a composition of other properties
◦ E.g. ObjectPropertyChain in SubObjectPropertyOf
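A property chain can be illustrated with the usual textbook example in OWL 2 functional-style syntax (the hasParent/hasGrandparent names are invented for illustration, not from the book): the parent of a parent is a grandparent.

```
SubObjectPropertyOf(
  ObjectPropertyChain( :hasParent :hasParent )
  :hasGrandparent
)
```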
115. OWL 2 Additional Features (2)
Extended datatype support
◦ Support the definition of subsets of datatypes (e.g. integers and
strings)
◦ E.g. state that every person has an age, which is of type integer, and restrict
the range of that datatype value
Simple metamodeling
◦ Relaxed separation between the names of, e.g. classes and
individuals
◦ Allow different uses of the same term
116. OWL 2 Additional Features (3)
Extended annotations
◦ Allow annotations of axioms
◦ E.g. who asserted an axiom or when
Extra syntactic sugar
◦ Easier to write common patterns
◦ Without changing language expressiveness, semantics, or complexity
117. OWL 2 Profiles (1)
OWL 2 EL
◦ Polynomial time algorithms for all standard reasoning tasks
◦ Suitable for very large ontologies
◦ Trades expressive power for performance guarantees
118. OWL 2 Profiles (2)
OWL 2 QL
◦ Conjunctive queries are answered in LOGSPACE using standard
relational database technology
◦ Suitable for
◦ Relatively lightweight ontologies with large numbers of individuals
◦ Useful or necessary to access the data via relational queries (e.g. SQL)
119. OWL 2 Profiles (3)
OWL 2 RL
◦ Enables polynomial time reasoning algorithms using rule-extended
database technologies operating directly on RDF triples
◦ Suitable for
◦ Relatively lightweight ontologies with large numbers of individuals
◦ Necessary to operate directly on data in the form of RDF triples
120. Manchester OWL syntax
A user-friendly syntax for OWL 2 descriptions
Class: VegetarianPizza
EquivalentTo:
Pizza and
not (hasTopping some FishTopping) and
not (hasTopping some MeatTopping)
DisjointWith:
NonVegetarianPizza
121. Outline
Introduction
RDF and RDF Schema
Description Logics
Querying RDF data with SPARQL
Mapping relational data with R2RML
Other technologies
Ontologies
Datasets
122. Querying the Semantic Web with SPARQL
SPARQL
◦ Is for RDF what SQL is for relational databases
SPARQL family of recommendations
◦ SPARQL Update
◦ SPARQL 1.1 Protocol
◦ Graph Store HTTP Protocol
◦ SPARQL 1.1 entailment regimes
◦ Specifications about the serialization of query results
◦ In JSON, CSV and TSV, and XML
123. SELECT Queries (1)
Return a set of bindings of variables to RDF terms
Binding
◦ A mapping from a variable to a URI or an RDF literal
Triple pattern
◦ Resembles an RDF triple
◦ May also contain variables in one or more positions
124. SELECT Queries (2)
An RDF triple matches a triple pattern
◦ There is an appropriate substitution of variables with RDF terms
that makes the triple pattern and the RDF triple equivalent
◦ Example:
matches
ex:bradPitt rdf:type ex:Actor.
?x rdf:type ex:Actor
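The matching step can be sketched in plain Python (variables encoded as strings starting with “?”; a simplification, not a full SPARQL engine):

```python
# Match one triple pattern against a set of triples, returning variable bindings.
triples = [
    ("ex:bradPitt", "rdf:type", "ex:Actor"),
    ("ex:angelinaJolie", "rdf:type", "ex:Actor"),
    ("ex:worldWarZ", "rdf:type", "ex:Movie"),
]

def is_var(term):
    return term.startswith("?")

def match(pattern, triples):
    """Return one {variable: term} binding dict per matching triple."""
    solutions = []
    for triple in triples:
        binding = {}
        for p_term, t_term in zip(pattern, triple):
            if is_var(p_term):
                # A repeated variable must bind to the same term every time.
                if binding.get(p_term, t_term) != t_term:
                    break
                binding[p_term] = t_term
            elif p_term != t_term:
                break
        else:
            solutions.append(binding)
    return solutions

print(match(("?x", "rdf:type", "ex:Actor"), triples))
# [{'?x': 'ex:bradPitt'}, {'?x': 'ex:angelinaJolie'}]
```

Evaluating a basic graph pattern then amounts to matching each triple pattern and joining the compatible bindings.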
125. SELECT Queries (3)
A set of triple patterns builds a basic graph pattern
◦ The heart of a SPARQL SELECT query
SELECT ?x ?date
WHERE {
?x rdf:type ex:Actor .
?x ex:bornIn ?date
}
126. SELECT Queries (4)
Graph example
ex:bradPitt rdf:type ex:Actor;
ex:bornIn "1963"^^xsd:gYear.
ex:angelinaJolie rdf:type ex:Actor;
ex:bornIn "1975"^^xsd:gYear.
ex:jenniferAniston rdf:type ex:Actor.
SPARQL query solution
?x ?date
ex:bradPitt "1963"^^xsd:gYear
ex:angelinaJolie "1975"^^xsd:gYear
127. The OPTIONAL keyword
Resource ex:jenniferAniston not included in the results
◦ No RDF triple matches the second triple pattern
Use of the OPTIONAL keyword
◦ Retrieves all actors and actresses and get their birthdate only if it is specified
in the RDF graph
SELECT ?x ?date
WHERE {
?x rdf:type ex:Actor .
OPTIONAL {?x ex:bornIn ?date}
}
?x ?date
ex:bradPitt "1963"^^xsd:gYear
ex:angelinaJolie "1975"^^xsd:gYear
ex:jenniferAniston
128. The UNION Keyword
Specify alternative graph patterns
Search for resources that are of type ex:Actor or ex:Director,
including cases where a resource may belong to both
classes
SELECT ?x
WHERE {
{?x rdf:type ex:Actor}
UNION
{?x rdf:type ex:Director}
}
129. The FILTER Keyword
Set a condition that limits the result of a SPARQL query
Return all actors that were known to be born before 1975
and their corresponding birth date
SELECT ?x ?date
WHERE{
?x rdf:type ex:Actor .
?x ex:bornIn ?date .
FILTER(?date < "1975"^^xsd:gYear)
}
130. ORDER BY and LIMIT Keywords
SELECT ?x ?date
WHERE {
?x rdf:type ex:Actor .
?x ex:bornIn ?date .
}
ORDER BY DESC(?date)
LIMIT 10
131. CONSTRUCT Queries
Generate an RDF graph based on a graph template using the
solutions of a SELECT query
CONSTRUCT {?x rdf:type ex:HighlyPaidActor}
WHERE {
?x rdf:type ex:Actor .
?x ex:hasSalary ?salary .
FILTER (?salary > 1000000)
}
132. ASK Queries
Respond with a boolean
Whether a graph pattern is satisfied by a subgraph of the
considered RDF graph
Return true if the RDF graph contains an actor named "Cate
Blanchett"
ASK {
?x rdf:type ex:Actor .
?x foaf:name "Cate Blanchett" .
}
133. DESCRIBE Queries
Return a set of RDF statements that contain data about a
given resource
Exact structure of the returned RDF graph depends on the
specific SPARQL engine
DESCRIBE <http://www.example.org/bradPitt>
134. Outline
Introduction
RDF and RDF Schema
Description Logics
Querying RDF data with SPARQL
Mapping relational data with R2RML
Other technologies
Ontologies
Datasets
135. Mapping Relational Data to RDF (1)
Lack of RDF data volumes → Slow uptake of the Semantic Web vision
Large part of available data is stored in relational databases
Several methods and tools were developed for the translation of
relational data to RDF
◦ Each one with its own set of features and, unfortunately, its own mapping
language
The need for the reuse of RDB-to-RDF mappings led to the creation
of R2RML
◦ A standard formalism by W3C for expressing such mappings
136. Mapping Relational Data to RDF (2)
R2RML: RDB to RDF Mapping Language
◦ Provides a vocabulary for the definition of RDF views over
relational schemas
◦ Is database vendor-agnostic
An R2RML mapping is an RDF graph: an R2RML mapping
graph
◦ In Turtle syntax
137. R2RML Overview
More features
◦ Organization of generated triples in named graphs
◦ Definition of blank nodes
◦ Specification of a generated literal’s language
[Diagram: the R2RML model — a TriplesMap combines a LogicalTable, a SubjectMap and PredicateObjectMaps (each pairing a PredicateMap with an ObjectMap, or a RefObjectMap with its Join), plus optional GraphMaps, to produce the Generated Triples of the Generated Output Dataset]
138. R2RML (1)
An R2RML mapping
◦ Associates relational views with functions that generate RDF terms
Logical tables
◦ Relations or (custom) views defined in the relational schema
Term maps
◦ RDF generating functions
◦ Distinguished according to the position of the RDF term in the
generated triple
◦ Subject, predicate, and object maps
139. R2RML (2)
Triples maps
◦ Functions that map relational data to a set of RDF triples
◦ Groups of term maps
◦ R2RML mappings contain one or more triples maps
Graph maps
◦ Generated RDF triples can be organized into named graphs
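A minimal sketch of a graph map, attached to a subject map via rr:graphMap (the graph IRI is hypothetical):

```turtle
rr:subjectMap [
    rr:template "http://data.example.org/film/{ID}";
    rr:graphMap [ rr:constant <http://data.example.org/graph/films> ];
];
```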
140. R2RML Example (1)
A relational instance
FILM
ID Title Year Director
1 The Hunger Games 2012 1
ACTOR
ID Name BirthYear BirthLocation
1 Jennifer Lawrence 1990 Louisville, KY
2 Josh Hutcherson 1992 Union, KY
FILM2ACTOR
FilmID ActorID
1 1
1 2
DIRECTOR
ID Name BirthYear
1 Gary Ross 1956
141. R2RML Example (2)
An R2RML
triples map
@prefix rr: <http://www.w3.org/ns/r2rml#>.
@prefix ex: <http://www.example.org/>.
@prefix dc: <http://purl.org/dc/terms/>.
@prefix xsd: <http://www.w3.org/2001/XMLSchema#>.
<#TriplesMap1>
rr:logicalTable [ rr:tableName "FILM" ];
rr:subjectMap [
rr:template "http://data.example.org/film/{ID}";
rr:class ex:Movie;
];
rr:predicateObjectMap [
rr:predicate dc:title;
rr:objectMap [ rr:column "Title" ];
];
rr:predicateObjectMap [
rr:predicate ex:releasedIn;
rr:objectMap [ rr:column "Year";
rr:datatype xsd:gYear;];
].
142. R2RML Example (3)
The result (in Turtle)
Note the reuse of terms from external ontologies
◦ E.g. dc:title from Dublin Core
<http://data.example.org/film/1> a ex:Movie;
dc:title "The Hunger Games";
ex:releasedIn "2012"^^xsd:gYear.
143. R2RML Example (4)
The R2RML mapping graph of the example
◦ Specifies the generation of a set of RDF triples for every row of the
FILM relation
◦ Is the simplest possible
◦ Contains only one triples map
Every triples map must have
◦ Exactly one logical table
◦ Exactly one subject map
◦ One or more predicate-object maps
144. R2RML Example (5)
A logical table represents an SQL result set
◦ Each row gives rise to an RDF triple
◦ Or quad in the general case, when named graphs are used
A subject map is simply a term map that produces the
subject of an RDF triple
145. R2RML Example (6)
Term maps
◦ Template-valued (rr:template)
◦ The subject map of the example
◦ Constant-valued (rr:predicate)
◦ The predicate map of the example
◦ Column-valued (rr:column)
◦ The object map of the example
Every generated resource will be an instance of ex:Movie
◦ Use of rr:class
146. R2RML Example (7)
Subject maps
◦ Generate subjects
Predicate-object maps
◦ Generate pairs of predicates and objects
◦ Contain at least one predicate map
◦ Contain at least one object map
Typed Literals
◦ Use of rr:datatype
◦ The second object map of the example specifies that the datatype of the RDF
literal that will be generated will be xsd:gYear
147. Foreign Keys (1)
Add another triples map
that operates on the
DIRECTOR relation
Specify the generation of
three RDF triples per row
of the DIRECTOR relation
@prefix rr: <http://www.w3.org/ns/r2rml#>.
@prefix ex: <http://www.example.org/>.
@prefix foaf: <http://xmlns.com/foaf/0.1/>.
@prefix xsd: <http://www.w3.org/2001/XMLSchema#>.
<#TriplesMap2>
rr:logicalTable [ rr:tableName "DIRECTOR" ];
rr:subjectMap [
rr:template "http://data.example.org/director/{ID}";
rr:class ex:Director;
];
rr:predicateObjectMap [
rr:predicate foaf:name;
rr:objectMap [ rr:column "Name" ];
];
rr:predicateObjectMap [
rr:predicate ex:bornIn;
rr:objectMap [ rr:column "BirthYear";
rr:datatype xsd:gYear;];
].
148. Foreign Keys (2)
Add another predicate-object
map, the referencing object
map
Link entities described in
separate tables
Exploit the foreign key
relationship of FILM.Director
and DIRECTOR.ID
<#TriplesMap1>
rr:logicalTable [ rr:tableName "FILM" ];
rr:subjectMap [
rr:template "http://data.example.org/film/{ID}";
rr:class ex:Movie;
];
rr:predicateObjectMap [
rr:predicate dc:title;
rr:objectMap [ rr:column "Title" ];
];
rr:predicateObjectMap [
rr:predicate ex:releasedIn;
rr:objectMap [ rr:column "Year";
rr:datatype xsd:gYear;];
];
rr:predicateObjectMap [
rr:predicate ex:directedBy;
rr:objectMap [
rr:parentTriplesMap <#TriplesMap2>;
rr:joinCondition [
rr:child "Director";
rr:parent "ID";
];
];
].
149. Foreign Keys (3)
TriplesMap1 now contains a referencing object map
◦ Responsible for the generation of the object of a triple
◦ Referencing object maps are a special case of object maps
◦ Follows the generation rules of the subject map of another triples
map
◦ Called parent triples map
◦ Join conditions are also specified in order to select the appropriate
row of the logical table of the parent triples map
150. Foreign Keys (4)
Not necessary for this foreign key relationship to be explicitly defined
as a constraint in the relational schema
R2RML is flexible enough to allow the linkage of any column set
among different relations
TriplesMap1 result
TriplesMap2 result
<http://data.example.org/film/1> ex:directedBy <http://data.example.org/director/1>.
<http://data.example.org/director/1> a ex:Director;
foaf:name "Gary Ross";
ex:bornIn "1956"^^xsd:gYear.
151. Custom Views (1)
R2RML view defined via the
rr:sqlQuery property
Used just as the native logical
tables
Could also have been defined as
an SQL view in the underlying
database system
◦ Not always feasible/desirable
Produced result set must not
contain columns with the same
name
@prefix rr: <http://www.w3.org/ns/r2rml#>.
@prefix ex: <http://www.example.org/>.
@prefix foaf: <http://xmlns.com/foaf/0.1/>.
@prefix xsd: <http://www.w3.org/2001/XMLSchema#>.
<#TriplesMap3>
rr:logicalTable [ rr:sqlQuery """
SELECT ACTOR.ID AS ActorId, ACTOR.Name AS ActorName,
ACTOR.BirthYear AS ActorBirth, FILM.ID AS FilmId
FROM ACTOR, FILM, FILM2ACTOR
WHERE FILM.ID=FILM2ACTOR.FilmID
AND ACTOR.ID=FILM2ACTOR.ActorID """];
rr:subjectMap [
rr:template "http://data.example.org/actor/{ActorId}";
rr:class ex:Actor;
];
rr:predicateObjectMap [
rr:predicate foaf:name;
rr:objectMap [ rr:column "ActorName" ];
];
rr:predicateObjectMap [
rr:predicate ex:bornIn;
rr:objectMap [ rr:column "ActorBirth";
rr:datatype xsd:gYear;];
];
rr:predicateObjectMap [
rr:predicate ex:starsIn;
rr:objectMap [ rr:template "http://data.example.org/film/{FilmId}" ];
].
152. Custom Views (2)
TriplesMap3 generates the following triples
<http://data.example.org/actor/1> a ex:Actor;
foaf:name "Jennifer Lawrence";
ex:bornIn "1990"^^xsd:gYear;
ex:starsIn <http://data.example.org/film/1>.
<http://data.example.org/actor/2> a ex:Actor;
foaf:name "Josh Hutcherson";
ex:bornIn "1992"^^xsd:gYear;
ex:starsIn <http://data.example.org/film/1>.
153. Direct Mapping (1)
A default strategy for translating a relational instance to an
RDF graph
A trivial transformation strategy
A recommendation by W3C since 2012
Defines a standard URI generation policy for all kinds of
relations
154. Direct Mapping (2)
Maps every relation to a new ontology class and every relation
row to an instance of the respective class
The generated RDF graph essentially mirrors the relational
structure
Can be useful as a starting point for mapping authors who can
then augment and customize it accordingly using R2RML
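For the FILM relation of the R2RML example, the W3C Direct Mapping would produce triples along these lines; the base IRI is an assumption, and the ref-Director triple appears only when the foreign key from FILM.Director to DIRECTOR.ID is declared in the schema:

```turtle
@base <http://data.example.org/> .
<FILM/ID=1> a <FILM> ;
    <FILM#ID> 1 ;
    <FILM#Title> "The Hunger Games" ;
    <FILM#Year> 2012 ;
    <FILM#Director> 1 ;
    <FILM#ref-Director> <DIRECTOR/ID=1> .
```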
155. Outline
Introduction
RDF and RDF Schema
Description Logics
Querying RDF data with SPARQL
Mapping relational data with R2RML
Other technologies
Ontologies
Datasets
156. POWDER – Protocol for Web Description Resources (1)
XML-based protocol
Allows provision of information about RDF resources
A POWDER document contains
◦ Its own provenance information
◦ Information about its creator, when it was created, etc.
◦ A list of description resources
157. POWDER – Protocol for Web Description Resources (2)
Description resource
◦ Contains a set of property-value pairs in RDF
◦ For a given resource or group of resources
Allows for quick annotation of large amounts of content with
metadata
Ideal for scenarios involving personalized delivery of content,
trust control and semantic annotation
158. RIF – Rule Interchange Format
An extensible framework for the representation of rule dialects
A W3C recommendation
A family of specifications
◦ Three rule dialects
◦ RIF-BLD (Basic Logic Dialect)
◦ RIF-Core
◦ RIF-PRD (Production Rule Dialect)
◦ RIF-FLD (Framework for Logic Dialects)
◦ Allows the definition of other rule dialects
◦ Specifies the interface of rules expressed in a logic-based RIF dialect with RDF and OWL
159. SPIN – SPARQL Inferencing Notation
RIF uptake not as massive as expected
◦ Complexity, lack of supporting tools
SPIN
◦ SPARQL-based
◦ Production rules
Links RDFS and OWL classes with constraint checks and
inference rules
◦ Similar to class behavior in object-oriented languages
160. GRDDL – Gleaning Resource Descriptions from Dialects of Languages
A W3C recommendation
Mechanisms that specify how to construct RDF
representations
◦ Can be applied to XHTML and XML documents
◦ Defines XSLT transformations that generate RDF/XML
Is the standard way of converting an XML document to RDF
Familiarity with XSLT required
161. LDP – Linked Data Platform
Best practices and guidelines for organizing and accessing LDP resources
◦ Via HTTP RESTful services
◦ Resources can be RDF or non-RDF
A W3C recommendation
◦ Specifies the expected behavior of a Linked Data server when it receives HTTP requests
◦ Creation, deletion, update
◦ Defines the concept of a Linked Data Platform Container
◦ A collection of resources (usually homogeneous)
LDP specification expected to establish a standard behavior for Linked Data
systems
Example: Apache Marmotta
162. Outline
Introduction
RDF and RDF Schema
Description Logics
Querying RDF data with SPARQL
Mapping relational data with R2RML
Other technologies
Ontologies
Datasets
163. Ontologies and Datasets (1)
Standards and technologies (e.g. RDF and OWL)
◦ Machine-consumable serializations
◦ Syntactical groundwork
◦ Assignment of meaning to resources and their relationships
Vocabularies
◦ (Re)used in order for the described information to be commonly,
unambiguously interpreted and understood
◦ Reused by various data producers
◦ Make data semantically interoperable
164. Ontologies and Datasets (2)
Datasets
◦ Collections of facts involving specific individual entities; they
complement ontologies, which provide an abstract, high-level,
terminological description of a domain, containing axioms that
hold for every individual
A great number of ontologies exist nowadays
◦ Subject domains: e.g. geographical, business, life sciences,
literature, media
165. Ontology search engines and specialized directories
Schemapedia
Watson
Swoogle
Linked Open Vocabularies (LOV)
◦ The most accurate and comprehensive source of ontologies used
in the Linked Data cloud
◦ Vocabularies described by appropriate metadata and classified to
domain spaces
◦ Full-text search enabled at vocabulary and element level
166. DC – Dublin Core (1)
A minimal metadata element set
Mainly used for the description of web resources
DC vocabulary
◦ Available as an RDFS ontology
◦ Contains classes and properties
◦ E.g. agent, bibliographic resource, creator, title, publisher, description
167. DC – Dublin Core (2)
DC ontology partitioned into two sets
◦ Simple DC
◦ Contains 15 core properties
◦ Qualified DC
◦ Contains all the classes and the rest of the properties (specializations of
those in simple DC)
168. FOAF – Friend-Of-A-Friend
One of the first lightweight ontologies, developed in
the early years of the Semantic Web
Describes human social networks
Classes such as Person, Agent or Organization
Properties denoting personal details and relationships with
other persons and entities
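A minimal FOAF description might look as follows (the individuals are hypothetical):

```turtle
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
@prefix ex:   <http://www.example.org/> .

ex:alice a foaf:Person ;
    foaf:name "Alice" ;
    foaf:mbox <mailto:alice@example.org> ;
    foaf:knows ex:bob .
```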
169. SKOS – Simple Knowledge Organization System (1)
A fundamental vocabulary in the Linked Data ecosystem
A W3C recommendation since 2009
Contains terms for organizing knowledge
◦ In the form of taxonomies, thesauri, and concept hierarchies
Defines concept schemes as sets of concepts
◦ Concepts can relate to each other through
“broader/narrower than” or equivalence relationships
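A sketch of a tiny concept scheme in SKOS (all identifiers hypothetical):

```turtle
@prefix skos: <http://www.w3.org/2004/02/skos/core#> .
@prefix ex:   <http://www.example.org/> .

ex:filmGenres a skos:ConceptScheme .
ex:drama a skos:Concept ;
    skos:prefLabel "Drama"@en ;
    skos:inScheme ex:filmGenres .
ex:courtroomDrama a skos:Concept ;
    skos:prefLabel "Courtroom drama"@en ;
    skos:broader ex:drama ;
    skos:inScheme ex:filmGenres .
```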
170. SKOS – Simple Knowledge Organization System (2)
Defined as an OWL Full ontology
◦ Has its own semantics
◦ Its own set of entailment rules distinct from those of RDFS and OWL
Widely used in the library and information science domain
171. VoID – Vocabulary of Interlinked Datasets
Metadata for RDF datasets
◦ General metadata
◦ E.g. Name, description, creator
◦ Information on the ways that this dataset can be accessed
◦ E.g. as a data dump, via a SPARQL endpoint
◦ Structural information
◦ E.g. the set of vocabularies that are used or the pattern of a typical URI
◦ Information on the links
◦ E.g. the number and target datasets
Aids users and machines in deciding whether a dataset is
suitable for their needs
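A hedged sketch of a VoID description covering these kinds of metadata (dataset name, creator, and URIs are hypothetical):

```turtle
@prefix void:    <http://rdfs.org/ns/void#> .
@prefix dcterms: <http://purl.org/dc/terms/> .
@prefix ex:      <http://www.example.org/> .

ex:filmDataset a void:Dataset ;
    dcterms:title "Example Film Dataset" ;
    dcterms:creator ex:alice ;
    void:dataDump <http://www.example.org/dumps/films.nt> ;
    void:sparqlEndpoint <http://www.example.org/sparql> ;
    void:vocabulary <http://xmlns.com/foaf/0.1/> ;
    void:uriSpace "http://data.example.org/film/" .
```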
172. SIOC – Semantically-Interlinked Online Communities
An OWL ontology
Describes online communities
◦ E.g. forums, blogs, mailing lists
Main classes
◦ E.g. forum, post, event, group, user
Properties
◦ Attributes of those classes
◦ E.g. the topic or the number of views of a post
173. GoodRelations
Describes online commercial offerings
◦ E.g. Product and business descriptions, pricing and delivery
methods
Great impact in real-life applications
◦ Adoption by several online retailers and search engines
Search engines able to interpret product metadata
expressed in the GoodRelations vocabulary
◦ Offer results better tailored to the needs of end users
174. Outline
Introduction
RDF and RDF Schema
Description Logics
Querying RDF data with SPARQL
Mapping relational data with R2RML
Other technologies
Ontologies
Datasets
175. Datasets
Several “thematic neighborhoods” in the Linked Data cloud
◦ E.g. government, media, geography, life sciences
Single-domain and cross-domain datasets
176. DBpedia (1)
The RDF version of Wikipedia
◦ Perhaps the most popular RDF dataset in the LOD cloud
◦ A large number of incoming links
A cross-domain dataset
◦ Assigns a URI to every resource described by a Wikipedia article
◦ Produces structured information by mining Wikipedia infoboxes
177. DBpedia (2)
A multilingual dataset
◦ Transforms various language editions of Wikipedia
The DBpedia dataset offered as data dump and through a
SPARQL endpoint
A “live” version available, updated whenever a Wikipedia
page is updated
178. Freebase
Openly-licensed, structured dataset, also available as an RDF
graph
Tens of millions of concepts, types and properties
Can be edited by anyone
Gathers information from several structured sources and free
text
Offers an API for developers
Linked to DBpedia through incoming and outgoing links
179. GeoNames
An open geographical database
Millions of geographical places
Can be edited by anyone
Geographical information as the context of descriptions and
facts
Omnipresent in the Linked Data cloud
Several incoming links from other datasets
180. Lexvo
Central dataset for the so-called linguistic Linked Data Cloud
Provides identifiers for thousands of languages
URIs for terms in every language
Identifiers for language characters
182. Outline
Introduction
Modeling Data
Software for Working with Linked Data
Software Tools for Storing and Processing Linked Data
Tools for Linking and Aligning Linked Data
Software Libraries for working with RDF
Chapter 3 Materializing the Web of Linked Data 182
183. Introduction
Today’s Web: Anyone can say anything about any topic
◦ Information on the Web cannot always be trusted
Linked Open Data (LOD) approach
◦ Materializes the Semantic Web vision
◦ A focal point is provided for any given web resource
◦ Referencing (referring to)
◦ De-referencing (retrieving data about)
184. Not All Data Can Be Published Online
Data has to be
◦ Stand-alone
◦ Strictly separated from business logic, formatting, presentation processing
◦ Adequately described
◦ Use well-known vocabularies to describe it, or
◦ Provide de-referenceable URIs with vocabulary term definitions
◦ Linked to other datasets
◦ Accessed simply
◦ HTTP and RDF instead of Web APIs
185. Linked Data-driven Applications (1)
Content reuse
◦ E.g. BBC’s Music Store
◦ Uses DBpedia and MusicBrainz
Semantic tagging and rating
◦ E.g. Faviki
◦ Uses DBpedia
186. Linked Data-driven Applications (2)
Integrated question-answering
◦ E.g. DBpedia mobile
◦ Indicates locations in the user’s vicinity
Event data management
◦ E.g. Virtuoso’s calendar module
◦ Can organize events, tasks, and notes
187. Linked Data-driven Applications (3)
Linked Data-driven data webs are expected to evolve in
numerous domains
◦ E.g. Biology, software engineering
The bulk of Linked Data processing is not done online
Traditional applications use other technologies
◦ E.g. relational databases, spreadsheets, XML files
◦ Data must be transformed in order to be published on the web
188. The O in LOD: Open Data
Open ≠ Linked
◦ Open data is data that is publicly accessible via the Internet
◦ No physical or virtual barriers to accessing it
◦ Linked Data allows relationships to be expressed among these data
RDF is ideal for representing Linked Data
◦ This contributes to the misconception that LOD can only be published in RDF
Definition of openness by www.opendefinition.org
Open means anyone can freely access, use, modify, and share for
any purpose (subject, at most, to requirements that preserve
provenance and openness)
189. Why should anyone open their data?
Reluctance by data owners
◦ Fear of becoming useless by giving away their core value
In practice the opposite happens
◦ Allowing access to content leverages its value
◦ Added-value services and products by third parties and interested audience
◦ Discovery of mistakes and inconsistencies
◦ People can verify content freshness, completeness, accuracy, integrity, overall value
◦ In specific domains, data have to be open for strategic reasons
◦ E.g. transparency in government data
190. Steps in Publishing Linked Open Data (1)
Data should be kept simple
◦ Start small and fast
◦ Not all data is required to be opened at once
◦ Start by opening up just one dataset, or part of a larger dataset
◦ Open up more datasets
◦ Experience and momentum may be gained
◦ Risk of unnecessary spending of resources
◦ Not every dataset is useful
191. Steps in Publishing Linked Open Data (2)
Engage early and engage often
◦ Know your audience
◦ Take its feedback into account
◦ Ensure that the next iteration of the service will be as relevant as it can be
◦ End users will not always be direct consumers of the data
◦ It is likely that intermediaries will come between data providers and end users
◦ E.g. an end user will not find use in an array of geographical coordinates but a company offering
maps will
◦ Engage with the intermediaries
◦ They will reuse and repurpose the data
192. Steps in Publishing Linked Open Data (3)
Deal in advance with common fears and misunderstandings
◦ Opening data is not always looked upon favorably
◦ Especially in large institutions, it will entail a series of
consequences and, respectively, opposition
◦ Identify, explain, and deal with the most important fears and
probable misconceptions from an early stage
193. Steps in Publishing Linked Open Data (4)
It is fine to charge for access to the data via an API
◦ As long as the data itself is provided in bulk for free
◦ Data can be considered as open
◦ The API is considered as an added-value service on top of the data
◦ Fees are charged for the use of the API, not of the data
◦ This opens business opportunities in the data-value chain around
open data
194. Steps in Publishing Linked Open Data (5)
Data openness ≠ data freshness
◦ Opened data does not have to be a real-time snapshot of the
system data
◦ Consolidate data into bulks asynchronously
◦ E.g. every hour or every day
◦ You could offer bulk access to the data dump and access through an API to
the real-time data
195. Dataset Metadata (1)
Provenance
◦ Information about entities, activities and people involved in the
creation of a dataset, a piece of software, a tangible object, a thing
in general
◦ Can be used in order to assess the thing’s quality, reliability,
trustworthiness, etc.
◦ Two related recommendations by W3C
◦ The PROV Data Model (PROV-DM)
◦ The PROV Ontology (PROV-O), expressed in OWL 2
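Using the PROV Ontology, dataset provenance can be stated as, for instance (the resources and timestamp are hypothetical):

```turtle
@prefix prov: <http://www.w3.org/ns/prov#> .
@prefix xsd:  <http://www.w3.org/2001/XMLSchema#> .
@prefix ex:   <http://www.example.org/> .

ex:filmDataset a prov:Entity ;
    prov:wasAttributedTo ex:alice ;
    prov:wasDerivedFrom ex:filmDatabase ;
    prov:generatedAtTime "2014-11-07T12:00:00Z"^^xsd:dateTime .
```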
196. Dataset Metadata (2)
Description of the dataset
W3C recommendation
◦ DCAT (Data Catalog Vocabulary)
◦ An RDF vocabulary
◦ Specifically designed to facilitate interoperability between data catalogs
published on the Web
197. Dataset Metadata (3)
Licensing
◦ A short description regarding the terms of use of the dataset
◦ E.g. for the Open Data Commons Attribution License
This {DATA(BASE)-NAME} is made available under the Open Data Commons
Attribution License: http://opendatacommons.org/licenses/by/{version}
198. Bulk Access vs. API (1)
Offering bulk access is a requirement
◦ Offering an API is not
199. Bulk Access vs. API (2)
Bulk access
◦ Can be cheaper than providing an API
◦ Even an elementary API entails development and maintenance costs
◦ Allows building an API on top of the offered data
◦ Offering an API does not allow clients to retrieve the whole amount of data
◦ Guarantees full access to the data
◦ An API does not
200. Bulk Access vs. API (3)
API
◦ More suitable for large volumes of data
◦ No need to download the whole dataset when a small subset is needed
201. The 5-Star Deployment Scheme
★
Data is made available on the Web (whatever format) but with an open
license to be Open Data
★★
Available as machine-readable structured data: e.g. an Excel spreadsheet
instead of image scan of a table
★★★
As the 2-star approach, in a non-proprietary format:
e.g. CSV instead of Excel
★★★★
All the above plus the use of open standards from W3C (RDF and SPARQL) to
identify things, so that people can point at your stuff
★★★★★
All the above, plus: Links from the data to other people’s data in order to
provide context
202. Outline
Introduction
Modeling Data
Software Tools for Storing and Processing Linked Data
Tools for Linking and Aligning Linked Data
Software Libraries for working with RDF
203. The D in LOD: Modeling Content
Content has to comply with a specific model
◦ A model can be used
◦ As a mediator among multiple viewpoints
◦ As an interfacing mechanism between humans or computers
◦ To offer analytics and predictions
◦ Expressed in RDF(S), OWL
◦ Custom or reusing existing vocabularies
◦ Decide on the ontology that will serve as a model
◦ Among the first decisions when publishing a dataset as LOD
◦ Complexity of the model has to be taken into account, based on the desired
properties
◦ Decide whether RDFS or one of the OWL profiles (flavors) is needed
204. Reusing Existing Works (1)
Vocabularies and ontologies have existed long before the
emergence of the Web
◦ Widespread vocabularies and ontologies in several domains
encode the accumulated knowledge and experience
Highly probable that a vocabulary has already been created
in order to describe the involved concepts
◦ Any domain of interest
205. Reusing Existing Works (2)
Increased interoperability
◦ Use of standards can help content aggregators to parse and
process the information
◦ Without much extra effort per data source
◦ E.g. an aggregator that parses and processes dates from several
sources
◦ More likely to support the standard date formats
◦ Less likely to convert the formatting from each source to a uniform syntax
◦ Much extra effort per data source
◦ E.g. DCMI Metadata Terms
◦ Field dcterms:created, value "2014-11-07"^^xsd:date
206. Reusing Existing Works (3)
Credibility
◦ Shows that the published dataset
◦ Has been well thought of
◦ Is curated
◦ A state-of-the-art survey has been performed prior to publishing the data
Ease of use
◦ Reusing is easier than rethinking and implementing again or replicating
existing solutions
◦ Even more so when vocabularies are published by multidisciplinary consortia with a
potentially more comprehensive view of the domain than yours
207. Reusing Existing Works (4)
In conclusion
◦ Before adding terms in our vocabulary, make sure they do not
already exist
◦ In such case, reuse them by reference
◦ When we need to be more specific, we can create a subclass or a
subproperty of the existing
◦ New terms can be generated, when the existing ones do not
suffice
208. Semantic Web for Content Modeling
Powerful means for system description
◦ Concept hierarchy, property hierarchy, set of individuals, etc.
Beyond description
◦ Model checking
◦ Use of a reasoner assures creation of coherent, consistent models
◦ Semantic interoperability
◦ Inference
◦ Formally defined semantics
◦ Support of rules
◦ Support of logic programming
209. Assigning URIs to Entities
Descriptions can be provided for
◦ Things that exist online
◦ Items/persons/ideas/things (in general) that exist outside of the Web
Example: Two URIs to describe a company
◦ The company’s website
◦ A description of the company itself
◦ May well be in an RDF document
A strategy has to be devised in assigning URIs to entities
◦ No deterministic approaches
210. Assigning URIs to Entities: Challenges
Dealing with ungrounded data
Lack of reconciliation options
Lack of identifier scheme documentation
Proprietary identifier schemes
Multiple identifiers for the same concepts/entities
Inability to resolve identifiers
Fragile identifiers
211. Assigning URIs to Entities: Benefits
Semantic annotation
Data is discoverable and citable
The value of the data increases as the usage of its identifiers
increases
212. URI Design Patterns (1)
Conventions for how URIs will be assigned to resources
Also widely used in modern web frameworks
In general applicable to web applications
Can be combined
Can evolve and be extended over time
Their use is not restrictive
◦ Each dataset has its own characteristics
Some upfront thought about identifiers is always beneficial
213. URI Design Patterns (2)
Hierarchical URIs
◦ URIs assigned to a group of resources that form a natural hierarchy
◦ E.g. :collection/:item/:sub-collection/:item
Natural keys
◦ URIs created from data that already has unique identifiers
◦ E.g. identify books using their ISBN
214. URI Design Patterns (3)
Literal keys
◦ URIs created from existing, non-global identifiers
◦ E.g. the dc:identifier property of the described resource
Patterned URIs
◦ More predictable, human-readable URIs
◦ E.g. /books/12345
◦ /books is the base part of the URI indicating “the collection of books”
◦ 12345 is an identifier for an individual book
215. URI Design Patterns (4)
Proxy URIs
◦ Used in order to deal with the lack of standard identifiers for third-party
resources
◦ If identifiers do exist for these resources, then they should be reused
◦ If not, use locally minted Proxy URIs
Rebased URIs
◦ URIs constructed based on other URIs
◦ E.g. URIs rewritten using regular expressions
◦ From http://graph1.example.org/document/1 to
http://graph2.example.org/document/1
216. URI Design Patterns (5)
Shared keys
◦ URIs specifically designed to simplify the linking task between datasets
◦ Achieved by creating Patterned URIs while applying the Natural Keys pattern
◦ Public, standard identifiers are preferable to internal, system-specific ones
URL slugs
◦ URIs created from arbitrary text or keywords, following a certain algorithm
◦ E.g. lowercasing the text, removing special characters and replacing spaces
with a dash
◦ A URI for the name “Brad Pitt” could be http://www.example.org/brad-pitt
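The slug algorithm described above can be sketched as follows; the exact normalization rules vary between implementations:

```python
import re

def slugify(text):
    """URL-slug sketch: lowercase, drop special characters,
    collapse whitespace runs into single dashes."""
    text = text.lower()
    text = re.sub(r"[^a-z0-9\s-]", "", text)    # remove special characters
    return re.sub(r"\s+", "-", text.strip())    # spaces -> dash

print("http://www.example.org/" + slugify("Brad Pitt"))
# http://www.example.org/brad-pitt
```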
217. Assigning URIs to Entities
Desired functionality
◦ Semantic Web applications retrieve the RDF
description of things
◦ Web browsers are directed to the (HTML)
documents describing the same resource
Two categories of technical approaches
for providing URIs for dataset entities
◦ Hash URIs
◦ 303 URIs
[Diagram: a resource identifier (URI) leads Semantic Web applications to the RDF document URI and Web browsers to the HTML document URI]
218. Hash URIs
URIs contain a fragment separated from the rest of the URI
using ‘#’
◦ E.g. URIs for the descriptions of two companies
◦ http://www.example.org/info#alpha
◦ http://www.example.org/info#beta
◦ The RDF document containing descriptions about both companies
◦ http://www.example.org/info
◦ The original URIs will be used in this RDF document to uniquely
identify the resources
◦ Companies Alpha, Beta and anything else
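The fragment truncation that makes hash URIs work can be observed with Python's standard library: both company URIs resolve to the same document URI before any request is sent.

```python
from urllib.parse import urldefrag

# Clients truncate the fragment before issuing the HTTP request,
# so both hash URIs fetch the same RDF document.
doc_alpha, frag_alpha = urldefrag("http://www.example.org/info#alpha")
doc_beta, frag_beta = urldefrag("http://www.example.org/info#beta")
assert doc_alpha == doc_beta == "http://www.example.org/info"
print(frag_alpha, frag_beta)  # alpha beta
```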
219. Hash URIs with Content Negotiation
Redirect either to the RDF or the
HTML representation
◦ Decision based on client preferences
and server configuration
Technically
◦ The Content-Location header should be
set to indicate where the hash URI
refers to
◦ Part of the RDF document (info.rdf)
◦ Part of the HTML document (info.html)
[Diagram: the fragment of http://www.example.org/info#alpha is automatically truncated; content negotiation on http://www.example.org/info sets Content-Location to http://www.example.org/info.rdf when application/rdf+xml wins, or to http://www.example.org/info.html when text/html wins]
220. Hash URIs without Content Negotiation
Can be implemented by simply uploading
static RDF files to a Web server
◦ No special server configuration is needed
◦ Not as technically challenging as the previous one
Popular for quick-and-dirty RDF publication
Major problem: clients will be obliged to load
(download) the whole RDF file
◦ Even if they are interested in only one of the
resources
[Diagram: the fragment of http://www.example.org/info#alpha is automatically truncated and http://www.example.org/info is retrieved directly]
221. 303 URIs (1)
Approach relies on the “303 See Other” HTTP status code
◦ Indication that the requested resource is not a regular Web
document
Regular HTTP response (200) cannot be returned
◦ Requested resource does not have a suitable representation
However, we can still retrieve a description of this resource
◦ Distinguishing between the real-world resource and its description
(representation) on the Web
222. 303 URIs (2)
HTTP 303 is a redirect status code
◦ Server provides the location of a document that represents the
resource
E.g. companies Alpha and Beta can be described using the
following URIs
◦ http://www.example.org/id/alpha
◦ http://www.example.org/id/beta
Server can be configured to answer requests to these URIs
with a 303 (redirect) HTTP status code
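A minimal sketch of such a server configuration, using Python's standard http.server. The /id/ and /doc/ paths follow the slides' example, and the substring check on the Accept header is a simplification of real content negotiation:

```python
from http.server import BaseHTTPRequestHandler

def redirect_target(path, accept):
    """Pick the 303 redirect target for a resource URI such as /id/alpha."""
    name = path.rsplit("/", 1)[-1]
    if "application/rdf+xml" in accept:
        return f"/doc/{name}.rdf"
    return f"/doc/{name}.html"

class LinkedDataHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path.startswith("/id/"):
            # Resource URIs never answer 200: the thing itself has no
            # suitable representation, so send 303 See Other instead.
            self.send_response(303)
            self.send_header(
                "Location",
                redirect_target(self.path, self.headers.get("Accept", "")))
            self.end_headers()
        else:
            self.send_error(404)
```

Serving the handler with `HTTPServer(("", 8080), LinkedDataHandler)` would answer a request for /id/alpha with a 303 pointing at /doc/alpha.rdf or /doc/alpha.html, depending on the client's Accept header.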
223. 303 URIs (3)
Location can contain an HTML, an RDF, or any alternative form, e.g.
◦ http://www.example.org/doc/alpha
◦ http://www.example.org/doc/beta
This setup makes it possible to maintain bookmarkable, de-referenceable URIs
◦ For both the RDF and HTML views of the same resource
A very flexible approach
◦ Redirection target can be configured separately per resource
◦ There could be a document for each resource, or one (large) document with
descriptions of all the resources
224. 303 URIs (4)
303 URI solution based on a
generic document URI
[Diagram: http://www.example.org/id/alpha (the thing) is 303-redirected to the generic document URI http://www.example.org/doc/alpha; content negotiation then sets Content-Location to http://www.example.org/doc/alpha.rdf when application/rdf+xml wins, or to http://www.example.org/doc/alpha.html when text/html wins]
225. 303 URIs (5)
303 URI solution without the generic
document URI
[Diagram: http://www.example.org/id/alpha (the thing) is 303-redirected with content negotiation directly to http://www.example.org/data/alpha when application/rdf+xml wins, or to http://www.example.org/company/alpha when text/html wins]
226. 303 URIs (6)
Problems of the 303 approach
◦ Latency caused by client redirects
◦ A client looking up a set of terms may use many HTTP requests
◦ While everything that could be loaded in the first request is there and is
ready to be downloaded
◦ Clients of large datasets may be tempted to download the full data
via HTTP, using many requests
◦ In these cases, SPARQL endpoints or comparable services should be
provided in order to answer complex queries directly on the server
227. 303 URIs (7)
303 and Hash approaches are not mutually exclusive
◦ On the contrary, combining them can be ideal
◦ Allow large datasets to be separated into multiple parts and have identifiers
for non-document resources
228. Outline
Introduction
Modeling Data
Software for Working with Linked Data
Software Tools for Storing and Processing Linked Data
Tools for Linking and Aligning Linked Data
Software Libraries for working with RDF
229. Software for Working with Linked Data (1)
Working on small datasets can be tackled by
manually authoring an ontology, however
Publishing LOD means the data has to be
manipulated programmatically
Many tools exist that facilitate the effort
230. Software for Working with Linked Data (2)
The most prominent tools are listed next
◦ Ontology authoring environments
◦ Cleaning data
◦ Software tools and libraries for working with Linked Data
◦ No clear lines can be drawn among software categories
◦ E.g. graphical tools offering programmatic access, or software libraries
offering a GUI
231. Ontology Authoring (1)
Not a linear process
◦ It is not possible to line up the steps needed in order to complete
its authoring
An iterative procedure
◦ The core ontology structure can be enriched with more
specialized, peripheral concepts
◦ Further complicating concept relations
◦ The more the ontology authoring effort advances, the more complicated the
ontology becomes
232. Ontology Authoring (2)
Various approaches
◦ Start from the more general and continue with the more specific
concepts
◦ Reversely, write down the more specific concepts and group them
Can uncover existing problems
◦ E.g. concept clarification
◦ Understanding of the domain in order to create its model
◦ Possible reuse of, and connection to, other ontologies
233. Ontology Editors (1)
Offer a graphical interface
◦ Through which the user can interact
◦ Textual representation of ontologies can be prohibitively obscure
Assure syntactic validity of the ontology
Consistency checks
234. Ontology Editors (2)
Freedom to define concepts and their relations
◦ Constrained to assure semantic consistency
Allow revisions
Several ontology editors have been built
◦ Only a few are practically used
◦ Among them, the ones presented next
235. Protégé (1)
Open-source
Maintained by the Stanford Center for Biomedical
Informatics Research
Among the most long-lived, complete and capable solutions
for ontology authoring and managing
A rich set of plugins and capabilities
236. Protégé (2)
Customizable user interface
Multiple ontologies can be developed in a single frame workspace
Several Protégé frames roughly correspond to OWL components
◦ Classes
◦ Properties
◦ Object Properties, Data Properties, and Annotation Properties
◦ Individuals
A set of tools for visualization, querying, and refactoring
237. Protégé (3)
Reasoning support
◦ Connection to DL reasoners like HermiT (included) or Pellet
OWL 2 support
Allows SPARQL queries
WebProtégé
◦ A much younger offspring of the Desktop version
◦ Allows collaborative viewing and editing
◦ Less feature-rich, buggier user interface
238. TopBraid Composer
RDF and OWL authoring and editing environment
Based on the Eclipse development platform
A series of adapters for the conversion of data to RDF
◦ E.g. from XML, spreadsheets and relational databases
Supports persistence of RDF graphs in external triple stores
Ability to define SPIN rules and constraints and associate them with OWL classes
Available in Maestro, Commercial, and Free editions
◦ Free edition offers merely a graphical interface for the definition of RDF graphs and OWL
ontologies and the execution of SPARQL queries on them
239. The NeOn Toolkit
Open-source ontology authoring environment
◦ Also based on the Eclipse platform
Mainly implemented in the course of the EC-funded NeOn project
◦ Main goal: the support for all tasks in the ontology engineering life-cycle
Contains a number of plugins
◦ Multi-user collaborative ontology development
◦ Ontology evolution through time
◦ Ontology annotation
◦ Querying and reasoning
◦ Mappings between relational databases and ontologies
◦ ODEMapster plugin
240. Platforms and Environments
Data published as Linked Data is not always produced primarily
in this form
◦ Files in hard drives, relational databases, legacy systems etc.
Many options regarding how the information is to be transformed
into RDF
Many software tools and libraries available in the Linked Data
ecosystem
◦ E.g. for converting, cleaning up, storing, visualizing, linking etc.
◦ Creating Linked Data from relational databases is a special case, discussed in
detail in the next Chapter
241. Cleaning-Up Data: OpenRefine (1)
Data quality may be lower than expected
◦ In terms of homogeneity, completeness, validity, consistency, etc.
Prior processing has to take place before publishing
It is not enough to provide data as Linked Data
◦ Published data must meet certain quality standards
242. Cleaning-Up Data: OpenRefine (2)
Initially developed as “Freebase Gridworks”, renamed “Google
Refine” in 2010, “OpenRefine” after its transition to a community-
supported project in 2012
Created specifically to help working with messy data
Used to improve data consistency and quality
Used in cases where the primary data source is files
◦ In tabular form (e.g. TSV, CSV, Excel spreadsheets) or
◦ Structured as XML, JSON, or even RDF.
243. Cleaning-Up Data: OpenRefine (3)
Allows importing data into the tool and connecting it to other
sources
It is a web application, intended to run locally, in order to allow
processing sensitive data
Cleaning data
◦ Removing duplicate records
◦ Separating multiple values that may reside in the same field
◦ Splitting multi-valued fields
◦ Identifying errors (isolated or systematic)
◦ Applying ad-hoc transformations using regular expressions
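A few of these clean-up steps, deduplication and splitting a multi-valued field, can be sketched in plain Python; the film data below is made up for illustration:

```python
import csv, io

raw = """name,genres
Alien,"sci-fi; horror"
Alien,"sci-fi; horror"
Amelie,comedy
"""

seen, cleaned = set(), []
for row in csv.DictReader(io.StringIO(raw)):
    key = (row["name"], row["genres"])
    if key in seen:                       # drop duplicate records
        continue
    seen.add(key)
    # split the multi-valued field into separate values
    row["genres"] = [g.strip() for g in row["genres"].split(";")]
    cleaned.append(row)

print(cleaned)  # two rows, genres split into lists
```

OpenRefine performs the same kinds of operations interactively, recording each step so the transformation can be reviewed and replayed.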
244. OpenRefine: The RDF Refine Extension (1)
Allows conversion from other sources to RDF
◦ RDF export
◦ RDF reconciliation
RDF export part
◦ Describe the shape of the generated RDF graph through a template
◦ Template uses values from the input spreadsheet
◦ User can specify the structure of an RDF graph
◦ The relationships that hold among resources
◦ The form of the URI scheme that will be followed, etc.
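What the extension defines graphically can be approximated by a small, hypothetical template function that turns each spreadsheet row into N-Triples lines; the book/ URI scheme and the Dublin Core properties are illustrative choices, not part of the tool:

```python
def row_to_triples(row, base="http://www.example.org/"):
    """Sketch of an RDF export template: each row becomes a resource
    with a title and a creator, serialized as N-Triples."""
    s = f"<{base}book/{row['id']}>"
    return [
        f'{s} <http://purl.org/dc/terms/title> "{row["title"]}" .',
        f'{s} <http://purl.org/dc/terms/creator> "{row["creator"]}" .',
    ]

lines = row_to_triples({"id": "12345", "title": "Dune",
                        "creator": "Frank Herbert"})
print("\n".join(lines))
```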
245. OpenRefine: The RDF Refine Extension (2)
RDF reconciliation part
◦ Offers a series of alternatives for discovering related Linked Data
entities
◦ A reconciliation service
◦ Allows reconciliation of resources
◦ Against an arbitrary SPARQL endpoint
◦ With or without full-text search functionality
◦ A predefined SPARQL query that contains the request label (i.e. the label of the resource to be
reconciled) is sent to a specific SPARQL endpoint
◦ Via the Sindice API
◦ A call to the Sindice API is directly made using the request label as input to the service
246. Outline
Introduction
Modeling Data
Software Tools for Storing and Processing Linked Data
Tools for Linking and Aligning Linked Data
Software Libraries for working with RDF
247. Tools for Storing and Processing Linked Data
Storing and processing solutions
◦ Usage not restricted to these capabilities
A mature ecosystem of technologies and solutions
◦ Cover practical problems such as programmatic access, storage,
visualization, querying via SPARQL endpoints, etc.
248. Sesame
An open-source Java framework for processing RDF data, fully
extensible and configurable with respect to storage mechanisms
Transaction support
RDF 1.1 support
Storing and querying APIs
A RESTful HTTP interface supporting SPARQL
The Storage And Inference Layer (Sail) API
◦ A low-level system API for RDF storage and inferencing, allowing various
types of storage and inference to be used
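A client can talk to such a RESTful interface with plain HTTP. The sketch below only builds the query URL following the repositories/{id}?query=... layout of the Sesame REST protocol; the server URL and repository name are placeholders:

```python
from urllib.parse import urlencode

def sparql_query_url(server, repository, query):
    """Build a GET query URL for a Sesame repository's SPARQL endpoint.
    'server' and 'repository' are assumed deployment-specific values."""
    return f"{server}/repositories/{repository}?" + urlencode({"query": query})

url = sparql_query_url("http://localhost:8080/openrdf-sesame", "myrepo",
                       "SELECT ?s WHERE { ?s ?p ?o } LIMIT 10")
print(url)
```

Fetching the resulting URL (e.g. with urllib.request) would return the SPARQL results in the format selected via the Accept header.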