The document provides an overview of validation of RDF data using the SHACL (Shapes Constraint Language) recommendation. It begins with background on RDF and then discusses why validation of RDF data is important. It introduces key SHACL concepts like shapes, constraints, targets, and property shapes. Examples are provided to illustrate node shapes, value type constraints, cardinality constraints, logical constraints, and property pair constraints. The document serves as an introduction to validating RDF data using the SHACL language.
This document provides an introduction and examples for SHACL (Shapes Constraint Language), a W3C recommendation for validating RDF graphs. It defines key SHACL concepts like shapes, targets, and constraint components. An example shape validates nodes with a schema:name and schema:email property. Constraints like minCount, maxCount, datatype, nodeKind, and logical operators like and/or are demonstrated. The document is an informative tutorial for learning SHACL through examples.
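The kind of check such a shape performs can be sketched in plain Python. This is a toy illustration with made-up data, not a real SHACL processor; in practice a library such as pySHACL would validate an actual RDF graph.

```python
# Toy sketch of what a SHACL engine checks for the shape described above:
# every person node needs at least one schema:name (a string) and at
# least one schema:email. Triples are modelled as (subject, predicate,
# object) tuples; all names and data here are illustrative.

DATA = [
    ("ex:alice", "schema:name", "Alice"),
    ("ex:alice", "schema:email", "alice@example.org"),
    ("ex:bob", "schema:email", "bob@example.org"),  # missing schema:name
]

def objects(graph, subject, predicate):
    """All object values for a given subject/predicate pair."""
    return [o for s, p, o in graph if s == subject and p == predicate]

def validate_person(graph, node):
    """Return a list of violation messages; empty when the node conforms."""
    report = []
    names = objects(graph, node, "schema:name")
    if len(names) < 1:                                  # like sh:minCount 1
        report.append(f"{node}: missing schema:name")
    if any(not isinstance(n, str) for n in names):      # like sh:datatype xsd:string
        report.append(f"{node}: schema:name must be a string")
    if len(objects(graph, node, "schema:email")) < 1:   # like sh:minCount 1
        report.append(f"{node}: missing schema:email")
    return report

print(validate_person(DATA, "ex:alice"))  # [] -> conforms
print(validate_person(DATA, "ex:bob"))    # ['ex:bob: missing schema:name']
```

The per-node report mirrors the idea of a SHACL validation report: validation succeeds exactly when every target node yields no violations.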
This document provides an overview of SHACL (Shapes Constraint Language), a W3C recommendation for defining constraints on RDF graphs. It defines key SHACL concepts like shapes, targets, node shapes, property shapes and constraint components. Examples are provided to illustrate shape definitions and how validation of an RDF graph works against the defined shapes. The document summarizes the motivation for SHACL and inputs that influenced its development.
Semantic Web technologies (such as RDF and SPARQL) excel at bringing together diverse data in a world of independent data publishers and consumers. Common ontologies help to arrive at a shared understanding of the intended meaning of data.
However, they don’t address one critically important issue: What does it mean for data to be complete and/or valid? Semantic knowledge graphs without a shared notion of completeness and validity quickly turn into a Big Ball of Data Mud.
The Shapes Constraint Language (SHACL), an upcoming W3C standard, promises to help solve this problem. By keeping semantics separate from validity, SHACL makes it possible to resolve a slew of data quality and data exchange issues.
Presented at the Lotico Berlin Semantic Web Meetup.
SPIN is a vocabulary that represents SPARQL queries and constraints as RDF triples. This allows SPARQL queries to be stored and shared on the semantic web. SPIN can be used to define SPARQL constraints, rules, functions and reusable query templates. Storing SPARQL queries as RDF triples provides benefits like referential integrity, managing namespaces centrally, and facilitating the easy sharing of queries on the semantic web.
This document summarizes an Apache Jena track presentation about SHACL (Shapes Constraint Language) in Apache Jena. The presentation introduces SHACL as a W3C standard for validating RDF graphs, provides examples of using SHACL shapes to validate FOAF profiles, and demonstrates the SHACL validation process. It also covers the SHACL compact syntax, SHACL-SPARQL constraints, SHACL operations, and Apache Jena's support for SHACL including command line tools and Fuseki validation services.
The document discusses the Semantic Web and Resource Description Framework (RDF). It defines the Semantic Web as making web data machine-understandable by describing web resources with metadata. RDF uses triples to describe resources, properties, and relationships. RDF data can be visualized as a graph and serialized in formats like RDF/XML. RDF Schema (RDFS) provides a basic vocabulary for defining classes, properties, and hierarchies to enable reasoning about RDF data.
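The RDFS reasoning mentioned here can be illustrated with a small sketch: an rdfs:subClassOf hierarchy lets a consumer infer types that are never stated explicitly. The triples below are hypothetical, and the graph is modelled as plain Python tuples.

```python
# Minimal sketch of RDFS-style type inference over an rdfs:subClassOf
# hierarchy. Illustrative data only; a real system would use an RDF
# library with an RDFS reasoner.

TRIPLES = [
    ("ex:Dog", "rdfs:subClassOf", "ex:Mammal"),
    ("ex:Mammal", "rdfs:subClassOf", "ex:Animal"),
    ("ex:rex", "rdf:type", "ex:Dog"),
]

def types_of(graph, node):
    """Direct rdf:type assertions plus those inferred via rdfs:subClassOf."""
    types = {o for s, p, o in graph if s == node and p == "rdf:type"}
    changed = True
    while changed:                      # expand until a fixed point is reached
        changed = False
        for s, p, o in graph:
            if p == "rdfs:subClassOf" and s in types and o not in types:
                types.add(o)
                changed = True
    return types

print(sorted(types_of(TRIPLES, "ex:rex")))
# ['ex:Animal', 'ex:Dog', 'ex:Mammal']
```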
JSON-LD is a set of W3C standards track specifications for representing Linked Data in JSON. It is fully compatible with the RDF data model, but allows developers to work with data entirely within JSON.
More information on JSON-LD can be found at http://json-ld.org/
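A minimal example of the idea, using only the standard library: the @context maps ordinary JSON keys onto RDF vocabulary IRIs, so the same document works for both JSON and RDF consumers. The document below is hypothetical (the IRIs are schema.org terms).

```python
# A small JSON-LD document: the @context maps plain JSON keys to RDF
# vocabulary IRIs. A full JSON-LD processor could expand this into RDF
# triples; to a non-RDF consumer it remains ordinary JSON.
import json

doc = {
    "@context": {
        "name": "http://schema.org/name",
        "homepage": {"@id": "http://schema.org/url", "@type": "@id"},
    },
    "@id": "http://example.org/people/alice",
    "name": "Alice",
    "homepage": "http://example.org/alice",
}

# Round-trip through plain JSON to show nothing RDF-specific is required:
parsed = json.loads(json.dumps(doc))
print(parsed["name"])               # Alice
print(parsed["@context"]["name"])   # http://schema.org/name
```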
The document discusses HTTP requests and responses. It explains that a request contains a start line with the method, URL and HTTP version, followed by headers providing additional information, and an optional message body. A response contains a status line with the protocol version and status code, followed by headers including caching information, and an optional message body. Content negotiation headers like Accept and Content-Type are used to select the appropriate representation format.
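The request structure described above can be made concrete with a small parsing sketch: start line, headers, blank line, optional body. This is a simplified illustration over a hand-written request; real servers use a proper HTTP parser.

```python
# Parsing the parts of an HTTP request described above: the start line
# (method, URL, version), the headers, and an optional message body.
# Hand-written example request; plain string handling only.

RAW_REQUEST = (
    "POST /api/items HTTP/1.1\r\n"
    "Host: example.org\r\n"
    "Accept: application/json\r\n"
    "Content-Type: application/json\r\n"
    "\r\n"
    '{"name": "widget"}'
)

def parse_request(raw):
    head, _, body = raw.partition("\r\n\r\n")   # blank line separates the body
    lines = head.split("\r\n")
    method, url, version = lines[0].split(" ")  # start line
    headers = dict(line.split(": ", 1) for line in lines[1:])
    return {"method": method, "url": url, "version": version,
            "headers": headers, "body": body}

req = parse_request(RAW_REQUEST)
print(req["method"], req["url"])        # POST /api/items
print(req["headers"]["Content-Type"])   # application/json
```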
Comparison of features between ShEx (Shape Expressions) and SHACL (Shapes Constraint Language)
Changelog:
11/06/17
- Removed slides about compositionality
31/May/2017
- Added slide 30 about validation report
- Added slide 32 about stems
- Changed slides 7 and 8 adapting compact syntax to new operator .
23/05/2017:
Slide 14: Repaired typos in sh:entailment, rdfs:range
21/05/2017:
- Slide 8. Changed the example to be an IRI and a datatype
- Added "typically" in slide 9
- Slide 10: Removed the phrase: "Target declarations can be problematic when reusing/importing shapes"
and created slide 27 to talk about reusability
- Added slide 11 to talk about the differences in triggering validation
- Created slide 14 to talk about inference
- Renamed slide 15 as "Inference and triggering mechanism"
- Added slides 27 and 28 to talk about reusability
- Added slide 29 to talk about annotations
18/05/2017
- Slide 9 now includes an example using the ShEx RDF vocabulary
- Slide 10 now says that target declarations are optional
- Slide 13 now says that some RDF Schema terms have special treatment in SHACL
- Example in slide 18 now uses sh:or instead of sh:and
- Added slides 22, 23 and 24 which show some features supported by SHACL but not supported by ShEx (property pair constraints, uniqueLang and owl:imports)
ShEx is a language for validating RDF data. It allows defining shapes that specify constraints on nodes and triples. ShEx expressions can be used to validate if RDF graphs conform to the defined shapes. The ShEx language is inspired by languages like RelaxNG and provides different serialization formats like ShExC, ShExJ, and ShExR. There are open-source implementations of ShEx validators in languages like JavaScript, Scala, Ruby, Python, and Java. ShEx provides a concise way to define RDF shapes and validate instance data against those shapes.
A RESTful API is only truly RESTful if it uses hypermedia to tell us about all the actions that can be performed on the current resource, allowing us to traverse the API from a single entry point.
This session looks at REST and HATEOAS (Hypermedia As The Engine Of Application State) to illustrate good service structure. Ben uses the RESTful file-sharing service fdrop.it to walk through examples of how this can be applied.
This session is recommended for architects and senior developers alike and will give a good grounding in writing excellent, self-explanatory RESTful services.
This presentation describes the big picture of REST APIs, explaining the theory alongside practical examples. It aims to cover the overall REST API domain.
What is REST API? REST API Concepts and Examples | Edureka
YouTube Link: https://youtu.be/rtWH70_MMHM
** Node.js Certification Training: https://www.edureka.co/nodejs-certification-training **
This Edureka PPT on 'What is REST API?' will help you understand the concept of RESTful APIs and show you the implementation of REST APIs. The following topics are covered in this REST API tutorial for beginners:
Need for REST API
What is REST API?
Features of REST API
Principles of REST API
Methods of REST API
How to implement REST API?
Follow us to never miss an update in the future.
YouTube: https://www.youtube.com/user/edurekaIN
Instagram: https://www.instagram.com/edureka_learning/
Facebook: https://www.facebook.com/edurekaIN/
Twitter: https://twitter.com/edurekain
LinkedIn: https://www.linkedin.com/company/edureka
Castbox: https://castbox.fm/networks/505?country=in
Slides for Data Syndrome one hour course on PySpark. Introduces basic operations, Spark SQL, Spark MLlib and exploratory data analysis with PySpark. Shows how to use pylab with Spark to create histograms.
- SPARQL is a query language for retrieving and manipulating data stored in RDF format. It is similar to SQL but for RDF data.
- SPARQL queries contain prefix declarations, specify a dataset using FROM, and include a graph pattern in the WHERE clause to match triples.
- The main types of SPARQL queries are SELECT, ASK, DESCRIBE, and CONSTRUCT. SELECT returns variable bindings, ASK returns a boolean, DESCRIBE returns a description of a resource, and CONSTRUCT generates an RDF graph.
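How a SELECT query matches a graph pattern can be sketched with a toy matcher: each triple pattern may contain variables, and matching produces variable bindings. The data, variables, and function names below are illustrative; a real engine evaluates full SPARQL over an RDF store.

```python
# Toy illustration of SPARQL SELECT: triple patterns with variables
# (strings starting with "?") are matched against a graph of tuples,
# and bindings are joined across patterns, like a WHERE clause.

GRAPH = [
    ("ex:alice", "foaf:knows", "ex:bob"),
    ("ex:alice", "foaf:name", "Alice"),
    ("ex:bob", "foaf:name", "Bob"),
]

def match_pattern(graph, pattern, binding=None):
    """Yield extended bindings for a single triple pattern."""
    binding = binding or {}
    for triple in graph:
        b = dict(binding)
        ok = True
        for term, value in zip(pattern, triple):
            if term.startswith("?"):            # variable: bind or check
                if term in b and b[term] != value:
                    ok = False
                    break
                b[term] = value
            elif term != value:                 # constant must match exactly
                ok = False
                break
        if ok:
            yield b

def select(graph, patterns):
    """Join bindings across all patterns of the graph pattern."""
    bindings = [{}]
    for pattern in patterns:
        bindings = [b2 for b in bindings
                    for b2 in match_pattern(graph, pattern, b)]
    return bindings

# Roughly: SELECT ?who WHERE { ex:alice foaf:knows ?x . ?x foaf:name ?who }
results = select(GRAPH, [("ex:alice", "foaf:knows", "?x"),
                         ("?x", "foaf:name", "?who")])
print(results)   # [{'?x': 'ex:bob', '?who': 'Bob'}]
```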
This document provides an introduction to REST APIs. It defines an API as a set of tools and protocols for building software. A REST API is an architectural style for web services that uses HTTP requests to GET, POST, PUT, and DELETE data. REST APIs have features like being simpler than SOAP, having documentation, and providing error messages. The core principles of REST are that it is stateless, uses a uniform interface, is layered, cacheable, and has code-on-demand. Common HTTP methods map to CRUD operations on resources. REST APIs offer advantages like scalability, flexibility, and independence.
Hydra: A Vocabulary for Hypermedia-Driven Web APIs | Markus Lanthaler
Presentation of the paper "Hydra: A Vocabulary for Hypermedia-Driven Web APIs" at the 6th Workshop on Linked Data on the Web (LDOW2013) at the WWW2013 in Rio de Janeiro, Brazil
Presented by Nikola Vasilev on SkopjeTechMeetup 7.
Representational state transfer (REST) can be thought of as the language of the Internet. Now with cloud usage on the rise, REST is a logical choice for building APIs that allow end users to connect and interact with cloud services. This talk will deliver more insight into the challenges on building and maintaining good and clean RESTful APIs.
Flask is a micro web development framework for Python that keeps its core simple but allows for extensibility. It emphasizes building applications with extensions rather than having all functionality contained within the framework. A minimal Flask app requires only a few lines of code and runs a development server. Templates can be rendered to generate dynamic HTML content by passing context through the render_template function. Flask supports common features like request handling, cookies, sessions, and file uploads through extensions.
The document discusses Resource Description Framework (RDF) and its role in representing data on the Semantic Web. It provides examples of how RDF can represent relationships between resources through triples and graphs, and compares this to how the same information would be represented in XML. It also discusses RDF Schema (RDFS) and the Ontology Web Language (OWL) as languages used to build ontologies that can express richer relationships between resources on the Semantic Web.
The document describes React, a JavaScript library for building user interfaces. It introduces some key concepts of React including components, props, state, and the virtual DOM. Components are the building blocks of React apps and can be composed together. Props provide immutable data to components, while state provides mutable data. The virtual DOM allows React to efficiently update the real DOM by only changing what needs to be changed. Data flows unidirectionally in React from parent to child components via props, and state updates within a component are handled via setState().
This document provides an overview of the RDF data model. It discusses the history and development of RDF standards from 1997 to 2014. It explains that an RDF graph is made up of triples consisting of a subject, predicate, and object. It provides examples of RDF triples and their N-triples representation. It also describes RDF syntaxes like Turtle and features of RDF like literals, blank nodes, and language-tagged strings.
GraphQL is a query language for APIs and a runtime for fulfilling those queries. It gives clients the power to ask for exactly what they need, which makes it a great fit for modern web and mobile apps. In this talk, we explain why GraphQL was created, introduce you to the syntax and behavior, and then show how to use it to build powerful APIs for your data. We will also introduce you to AWS AppSync, a GraphQL-powered serverless backend for apps, which you can use to host GraphQL APIs and also add real-time and offline capabilities to your web and mobile apps. You can follow along if you have an AWS account – no GraphQL experience required!
Level: Beginner
Speaker: Rohan Deshpande - Sr. Software Dev Engineer, AWS Mobile Applications
This document provides an overview of Flask, a microframework for Python. It discusses that Flask is easy to code and configure, extensible via extensions, and uses Jinja2 templating and SQLAlchemy ORM. It then provides a step-by-step guide to setting up a Flask application, including creating a virtualenv, basic routing, models, forms, templates, and views. Configuration and running the application are also covered at a high level.
W3C Tutorial on Semantic Web and Linked Data at WWW 2013 | Fabien Gandon
The document provides an introduction to Semantic Web and Linked Data. It discusses key concepts such as RDF, which represents data as subject-predicate-object triples that can be connected to form a graph. RDF has several syntaxes including XML, Turtle, and JSON. Properties in RDF triples can link to other resources or contain literal values. Types are identified with URIs and vocabularies are extensible. The goal of Linked Data is to publish structured data on the web and link it to other data to form a global data web.
The document discusses Representational State Transfer (REST) and RESTful web services. It provides an overview of REST principles including treating everything as a resource with a uniform interface, using standard HTTP methods, supporting multiple representations, communicating statelessly through hypermedia, and linking resources together. It then provides examples of how to design a RESTful API for a bookmark management application, mapping operations to resources, URIs, and HTTP methods.
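The operation-to-resource-to-method mapping for such a bookmark service can be sketched as a plain dispatcher. Routes, storage, and status codes below are illustrative; a real service would sit behind a web framework and return full HTTP responses.

```python
# Sketch of a RESTful mapping for a bookmark service: HTTP methods map
# to CRUD operations on the /bookmarks collection and its members.
# In-memory dict as storage; everything here is illustrative.

bookmarks = {}
next_id = 1

def handle(method, path, body=None):
    """Dispatch (method, path) pairs the way a RESTful router would."""
    global next_id
    if method == "GET" and path == "/bookmarks":
        return 200, list(bookmarks.values())       # read the collection
    if method == "POST" and path == "/bookmarks":
        bookmark = {"id": next_id, **body}         # create a new resource
        bookmarks[next_id] = bookmark
        next_id += 1
        return 201, bookmark                       # 201 Created
    if method == "DELETE" and path.startswith("/bookmarks/"):
        bid = int(path.rsplit("/", 1)[1])          # member resource by id
        return (204, None) if bookmarks.pop(bid, None) else (404, None)
    return 405, None                               # method not allowed

status, created = handle("POST", "/bookmarks", {"url": "http://example.org"})
print(status, created)                   # 201 {'id': 1, 'url': 'http://example.org'}
print(handle("GET", "/bookmarks")[0])    # 200
print(handle("DELETE", "/bookmarks/1"))  # (204, None)
```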
This document provides an introduction to SHACL (Shapes Constraint Language), including examples of using SHACL to validate RDF graphs. It describes key SHACL concepts like shape graphs that define validation rules and target nodes, and data graphs that are validated. It also explains common validation components and how SHACL validation works by checking if data graphs conform to the constraints in corresponding shape graphs.
This document summarizes a draft specification for the SHACL (Shapes Constraint Language) language. It provides an overview of the key components and design principles of SHACL, including examples of how SHACL can be used to define shapes and constraints for validating RDF data. The draft aims to define a simple yet fully extensible language for validating RDF graphs in a consistent and future-proof manner.
We propose a set of optimizations that can be applied to a given SPARQL query, and that guarantee that the optimized query has the same answers under bag semantics as the original query, provided that the queried RDF graph validates certain SHACL constraints. We prove the correctness of these optimizations and show how they can be propagated to larger queries while preserving answers. Further, we prove the confluence of rewritings that employ these optimizations, guaranteeing convergence to the same optimized query regardless of the rewriting order.
Validating and Describing Linked Data Portals using RDF Shape Expressions | Jose Emilio Labra Gayo
Presentation at 1st Linked Data Quality Workshop, Leipzig, 2nd Sept. 2014
Author: Jose Emilio Labra Gayo
Applies Shape Expressions to validate the WebIndex linked data portal
1. Validation of RDF Data
Jean-Paul Calbimonte
University of Applied Sciences and Arts Western Switzerland (HES-SO Valais-Wallis)
Zurich, December 2017
@jpcik
5. 5
RDF
[Diagram: node "University of Zurich" linked by an "is located in" edge to node "City of Zurich"]
The University of Zurich is located in the city of Zurich.
An RDF triple:
http://dbpedia.org/resource/University_of_Zurich http://dbpedia.org/ontology/city http://dbpedia.org/resource/Zürich
An RDF triple in N-Triples format:
<http://dbpedia.org/resource/University_of_Zurich> <http://dbpedia.org/ontology/city> <http://dbpedia.org/resource/Zürich> .
11. 11
Validation Alternatives
• SPARQL queries
• SPIN
• Stardog ICV (based on OWL)
• OSLC Resource shapes
• RDFUnit
• RDF data descriptors
• ShEx expressions
what we will see today:
SHACL: W3C Recommendation (July 2017)
12. 12
Shapes Constraint Language SHACL
https://www.w3.org/TR/shacl/
• Language for validating RDF graphs
• Conditions represented as shapes
• Shapes expressed in RDF
• SPARQL-based extensions
• W3C Recommendation
13. 13
SHACL Basics
Shape
Node Shape Property Shape
shapes about
the focus node
shapes about the values
of a property/path
how to validate a focus node based on:
- values of properties
- other characteristics
Focus Node
An RDF term that is validated
against a shape
Target
Target declarations can be used to produce focus nodes for a shape
Constraint components
Determine how to validate a node
14. 14
SHACL: an example
ex:CityShape
a sh:NodeShape ;
sh:targetClass ex:City ;
sh:property [
sh:path ex:population ;
sh:maxCount 1 ;
sh:datatype xsd:integer ;
] .
it is a node shape
applies to all cities
constrain the values
of ex:population
max 1 population
of type integer
e.g. "all cities have at
most one population
property of type
integer"
ex:London a ex:City ;
ex:population "two million" .
ex:Paris a ex:City ;
ex:population 2304 ;
ex:population 5342 .
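As a toy illustration (not part of the slides, and not a real SHACL processor), the CityShape idea — at most one ex:population, of type integer — can be mimicked in plain Python over (subject, predicate, object) tuples; the function and string names are illustrative assumptions:

```python
# Toy sketch of the CityShape constraints: sh:maxCount 1 and
# sh:datatype xsd:integer on ex:population, checked over a list of triples.

def check_city(triples, focus):
    # collect the ex:population values of the focus node
    values = [o for s, p, o in triples if s == focus and p == "ex:population"]
    if len(values) > 1:
        return (False, "sh:maxCount 1 violated")
    if any(not isinstance(v, int) for v in values):
        return (False, "sh:datatype xsd:integer violated")
    return (True, "conforms")

data = [
    ("ex:London", "rdf:type", "ex:City"),
    ("ex:London", "ex:population", "two million"),  # wrong datatype
    ("ex:Paris", "rdf:type", "ex:City"),
    ("ex:Paris", "ex:population", 2304),            # two population values
    ("ex:Paris", "ex:population", 5342),
]

print(check_city(data, "ex:London"))  # datatype violation
print(check_city(data, "ex:Paris"))   # cardinality violation
```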
15. 15
Targets
Declare the focus nodes for a shape
Node target: directly declare nodes
ex:CityShape
a sh:NodeShape ;
sh:targetNode ex:Zurich .
ex:London a ex:City .
ex:Zurich a ex:City .
Class target:
nodes with a given type
ex:CityShape
a sh:NodeShape ;
sh:targetClass ex:City .
Implicit class target:
same, but implicit
ex:City
a rdfs:Class, sh:NodeShape .
ex:Luzern a ex:City .
ex:Olten a ex:City .
ex:Valais a ex:Canton .
ex:SwissCity
rdfs:subClassOf ex:City .
ex:Basel a ex:SwissCity .
ex:Munich a ex:GermanCity .
ex:Lausanne a ex:City .
ex:Limmat a ex:River .
Subjects-of and objects-of targets (sh:targetSubjectsOf, sh:targetObjectsOf): see the spec.
16. 16
Node Shapes
:University
a sh:NodeShape ;
sh:nodeKind sh:IRI .
:epfl a :University.
<http://example.ch/unifr> a :University .
_:1 a :University .
Constraints about a focus node
Possible values:
sh:BlankNode
sh:IRI
sh:Literal
sh:BlankNodeOrIRI
sh:BlankNodeOrLiteral
sh:IRIOrLiteral
17. 17
Property Shapes
Constraints about a given property and its values for the focus node
- sh:property associates a shape with a property constraint
- sh:path identifies the path
:Student a sh:NodeShape ;
sh:property [
sh:path ex:email;
sh:nodeKind sh:IRI
] .
:anna a :Student ;
ex:email <mailto:anna@uzh.ch> .
:max a :Student ;
ex:email <mailto:max@uzh.ch> .
:greta a :Student ;
ex:email "greta@uzh.ch" .
20. 20
Value Type Constraints: Datatype
sh:datatype: condition to be satisfied for the datatype of each value node.
:University a sh:NodeShape ;
sh:property [
sh:path ex:established;
sh:datatype xsd:date ;
] .
:hes-so ex:established "1997-01-20"^^xsd:date .
:eth ex:established "Unknown"^^xsd:date .
:uzh ex:established 1990 .
21. 21
Value Type Constraints: Class
sh:class: condition that each value node is a SHACL instance of a given class.
:Person
a sh:NodeShape, rdfs:Class ;
sh:property [
sh:path ex:almaMater ;
sh:class :University
] .
:unifr a :University .
:eth a :FederalSchool .
:unibe a :CantonalUniversity .
:FederalSchool rdfs:subClassOf :University .
:anna a :Person;
ex:almaMater :unifr .
:max a :Person ;
ex:almaMater :eth .
:greta a :Person;
ex:almaMater :unibe .
22. 22
Value Type Constraints: Kind
sh:nodeKind: condition to be satisfied by the RDF node kind
:Student
a sh:NodeShape, rdfs:Class ;
sh:property [
sh:path ex:name ;
sh:nodeKind sh:Literal ;
];
sh:property [
sh:path ex:friendOf ;
sh:nodeKind sh:BlankNodeOrIRI
];
sh:nodeKind sh:IRI .
:anna a :Student;
ex:name _:1 ;
ex:friendOf :max .
:max a :Student;
ex:name "Max";
ex:friendOf [ex:name "Lucas"] .
:greta a :Student;
ex:name "Greta" ;
ex:friendOf "Lucas" .
_:1 a :Student.
possible kinds: sh:BlankNode, sh:IRI, sh:Literal,
sh:BlankNodeOrIRI, sh:BlankNodeOrLiteral, sh:IRIOrLiteral
23. 23
Cardinality constraints
sh:minCount: minimum number of value nodes for the given path
sh:maxCount: maximum number of value nodes for the given path
:Student a sh:NodeShape ;
sh:property [
sh:path ex:hasCourse ;
sh:minCount 2 ;
sh:maxCount 3 ;
] .
:anna ex:hasCourse
:math, :physics .
:max ex:hasCourse
:chemistry .
:greta ex:hasCourse
:math, :physics,
:chemistry, :history .
24. 24
Value Range Constraints
Value range conditions for value nodes that are comparable via operators
such as <, <=, > and >=. sh:minInclusive, sh:maxInclusive,
sh:minExclusive, sh:maxExclusive
:Grade a sh:NodeShape ;
sh:property [
sh:path ex:gradeValue ;
sh:minInclusive 1 ;
sh:maxInclusive 5 ;
sh:datatype xsd:integer
] .
:failure ex:gradeValue 1 .
:sufficient ex:gradeValue 3 .
:excellent ex:gradeValue 5 .
:toobad ex:gradeValue 0 .
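As a toy sketch (not from the slides), the Grade shape — sh:minInclusive 1, sh:maxInclusive 5, sh:datatype xsd:integer — reduces to a plain predicate; the function name is an illustrative assumption:

```python
# Toy sketch of a value range constraint: the value must be an integer
# between minInclusive and maxInclusive (both bounds included).

def grade_conforms(value, lo=1, hi=5):
    return isinstance(value, int) and lo <= value <= hi

print(grade_conforms(3))  # :sufficient conforms
print(grade_conforms(0))  # :toobad violates sh:minInclusive
```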
25. 25
String-based Constraints
Specify conditions on the string representation of value nodes.
sh:minLength: minimum string length of each value node.
sh:maxLength: maximum string length of each value node.
sh:pattern: regular expression that each value node matches.
sh:languageIn: allowed language tags for each value node.
sh:uniqueLang: no pair of value nodes may use the same language tag.
26. 26
minLength/maxLength
:Student a sh:NodeShape ;
sh:property [
sh:path ex:name ;
sh:minLength 4 ;
sh:maxLength 10 ;
] .
:anna ex:name "Anna" .
:max ex:name "Max" .
:greta ex:name :Greta .
:strange ex:name _:strange .
sh:minLength: minimum string length of each value node.
sh:maxLength: maximum string length of each value node.
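A toy sketch of the same idea in plain Python (illustrative only; in real SHACL, IRIs are checked via their string form and blank nodes always violate string-length constraints):

```python
# Toy sketch of sh:minLength 4 / sh:maxLength 10: check the length of the
# string form of each value node.

def name_length_ok(value, min_len=4, max_len=10):
    return min_len <= len(str(value)) <= max_len

print(name_length_ok("Anna"))  # True
print(name_length_ok("Max"))   # False: shorter than sh:minLength 4
```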
29. 29
uniqueLang
:Canton a sh:NodeShape ;
sh:property [
sh:path ex:name ;
sh:uniqueLang true
] .
:valais ex:name
"Valais"@fr, "Wallis"@de .
:fribourg ex:name
"Fribourg"@fr,
"Freiburg"@de,
"Friburgo"@es .
:zurich ex:name
"Zurich"@de, "Zuerich"@de.
sh:uniqueLang: no pair of value nodes may use the same language tag.
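The check behind sh:uniqueLang can be sketched in a few lines of Python (a toy illustration; literals are modelled as (text, language-tag) pairs, which is an assumption of this sketch, not SHACL syntax):

```python
# Toy sketch of sh:uniqueLang true: among the ex:name values of one focus
# node, no language tag may appear twice.

def unique_lang(values):
    tags = [lang for _, lang in values]
    return len(tags) == len(set(tags))

valais = [("Valais", "fr"), ("Wallis", "de")]
zurich = [("Zurich", "de"), ("Zuerich", "de")]  # "de" used twice

print(unique_lang(valais))  # True
print(unique_lang(zurich))  # False
```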
30. 30
Property Pair Constraints
Specify conditions on the sets of value nodes in relation to other properties.
sh:equals: the value nodes must be exactly the values of the given property at the focus node
sh:disjoint: the value nodes must not overlap with the values of the given property
sh:lessThan: each value node must be smaller than every value of the given property
sh:lessThanOrEquals: same, but smaller than or equal
31. 31
equals
:Student a sh:NodeShape ;
sh:property [
sh:path ex:givenName ;
sh:equals ex:firstName
] .
:anna ex:givenName "Anna";
ex:lastName "Parker";
ex:firstName "Anna" .
:max ex:givenName "Max";
ex:lastName "Sutter" ;
ex:firstName "Maximilian" .
:greta ex:givenName "Greta";
ex:lastName "Greta" ;
ex:firstName "Greta" .
sh:equals: the value nodes must be exactly the values of the given property at the focus node
32. 32
disjoint
:Student a sh:NodeShape ;
sh:property [
sh:path ex:givenName ;
sh:disjoint ex:lastName
] .
:anna ex:givenName "Anna";
ex:lastName "Parker";
ex:firstName "Anna" .
:max ex:givenName "Max";
ex:lastName "Sutter" ;
ex:firstName "Maximilian" .
:greta ex:givenName "Greta";
ex:lastName "Greta" ;
ex:firstName "Greta" .
sh:disjoint: the value nodes must not overlap with the values of the given property
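The property-pair idea boils down to set comparisons, sketched here in plain Python over (subject, predicate, object) tuples (a toy illustration, not a SHACL engine; all names are assumptions of the sketch):

```python
# Toy sketch of sh:equals and sh:disjoint: compare the value set of the
# shape's path with the value set of another property at the same focus node.

def prop_values(triples, focus, prop):
    return {o for s, p, o in triples if s == focus and p == prop}

def sh_equals(triples, focus, path, other):
    return prop_values(triples, focus, path) == prop_values(triples, focus, other)

def sh_disjoint(triples, focus, path, other):
    return prop_values(triples, focus, path).isdisjoint(prop_values(triples, focus, other))

data = [
    ("ex:greta", "ex:givenName", "Greta"),
    ("ex:greta", "ex:lastName", "Greta"),
    ("ex:greta", "ex:firstName", "Greta"),
]

print(sh_equals(data, "ex:greta", "ex:givenName", "ex:firstName"))   # True
print(sh_disjoint(data, "ex:greta", "ex:givenName", "ex:lastName"))  # False
```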
34. 34
Logical Constraints
Implement the common logical operators and, or, not, xone (kind of xor)
sh:and: Conjunction of a list of shapes
sh:or: Disjunction of a list of shapes
sh:not: Negation of a shape
sh:xone: Exactly one shape must be satisfied (like XOR in the 2-argument case)
35. 35
not
ex:NotShape a sh:NodeShape ;
sh:targetNode :anna ;
sh:not [
a sh:PropertyShape ;
sh:path ex:established ;
sh:minCount 1 ;
] .
:anna ex:established "Some value" .
sh:not: Negation of a shape
36. 36
and
ex:Shape1 a sh:NodeShape ;
sh:property [
sh:path ex:courses ;
sh:minCount 1 ;
] .
ex:Shape2 a sh:NodeShape ;
sh:targetNode :anna, :max ;
sh:and (
ex:Shape1
[ sh:path ex:courses ;
sh:maxCount 1 ; ]
) .
:anna ex:courses "Math" .
:max ex:courses "Math" ;
ex:courses "Chemistry" .
sh:and: Conjunction of a list of shapes
37. 37
or
ex:OrShape a sh:NodeShape ;
sh:targetNode :anna, :max ;
sh:or (
[ sh:path ex:firstName ;
sh:minCount 1 ; ]
[ sh:path ex:givenName ;
sh:minCount 1 ; ]
) .
:anna ex:firstName "Anna" .
:max ex:givenName "Max" .
sh:or: Disjunction of a list of shapes
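The sh:or shape on this slide can be mimicked with a plain boolean disjunction over (subject, predicate, object) tuples (a toy sketch; the function names are illustrative, not SHACL API):

```python
# Toy sketch of the sh:or example: a node conforms if it has at least one
# ex:firstName or at least one ex:givenName.

def has_property(triples, focus, prop):
    return any(s == focus and p == prop for s, p, _ in triples)

def or_shape(triples, focus):
    return has_property(triples, focus, "ex:firstName") or \
           has_property(triples, focus, "ex:givenName")

data = [("ex:anna", "ex:firstName", "Anna"),
        ("ex:max", "ex:givenName", "Max")]

print(or_shape(data, "ex:anna"))   # True
print(or_shape(data, "ex:max"))    # True
print(or_shape(data, "ex:greta"))  # False: neither property present
```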
38. 38
or
ex:AddressShape a sh:NodeShape ;
sh:targetClass ex:Student ;
sh:property [
sh:path ex:address ;
sh:or (
[ sh:datatype xsd:string ; ]
[ sh:class ex:Address ; ]
)
] .
:anna
ex:address "12 Petit Rue, 1220, Geneva" .
:max ex:address :maxAddress .
:maxAddress a ex:Address ;
ex:street "Grand Rue" ;
ex:zip 3960 ;
ex:locality ex:Sierre .
sh:or: Disjunction of a list of shapes
40. 40
Shape-based Constraints
Specify complex conditions by validating the value nodes against certain shapes.
sh:node: each value node conforms to the given node shape.
sh:property: specify that each value node has a given property shape.
sh:qualifiedValueShape: a bounded number of value nodes conforms to a given shape;
the bounds are given by sh:qualifiedMinCount and/or sh:qualifiedMaxCount (one value each).
41. 41
node
ex:AddressShape a sh:NodeShape ;
sh:property [
sh:path ex:postalCode ;
sh:datatype xsd:string ;
sh:maxCount 1 ;
] .
ex:PersonShape a sh:NodeShape ;
sh:targetClass ex:Person ;
sh:property [
sh:path ex:address ;
sh:minCount 1 ;
sh:node ex:AddressShape ;
] .
ex:Bob a ex:Person ;
ex:address ex:BobsAddress .
ex:BobsAddress ex:postalCode "1234" .
ex:Reto a ex:Person ;
ex:address ex:RetosAddress .
ex:RetosAddress ex:postalCode 5678 .
sh:node: each value node conforms to the given node shape.
43. 43
Constraints on values: hasValue
sh:hasValue: at least one value node is equal to the given RDF term.
ex:ETHGraduate a sh:NodeShape ;
sh:targetNode :anna ;
sh:property [
sh:path ex:alumniOf ;
sh:hasValue ex:ETH ;
] .
:anna ex:alumniOf ex:EPFL ;
ex:alumniOf ex:ETH .
44. 44
Constraints on values: in
sh:in: each value node is a member of a provided SHACL list.
ex:InShape a sh:NodeShape ;
sh:targetClass ex:SkiSlope ;
sh:property [
sh:path ex:difficulty ;
sh:in ( ex:Black ex:Blue ex:Red ) ;
] .
ex:slope1 a ex:SkiSlope;
ex:difficulty ex:Pink .
ex:slope2 a ex:SkiSlope;
ex:difficulty ex:Red .
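sh:in is simply list membership, as this toy Python sketch shows (illustrative only; the set and function names are assumptions of the sketch):

```python
# Toy sketch of sh:in: every ex:difficulty value must be a member of the
# allowed list from the slide.

ALLOWED_DIFFICULTIES = {"ex:Black", "ex:Blue", "ex:Red"}

def difficulty_ok(value):
    return value in ALLOWED_DIFFICULTIES

print(difficulty_ok("ex:Red"))   # ex:slope2 conforms
print(difficulty_ok("ex:Pink"))  # ex:slope1 violates sh:in
```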
45. 45
Closed shapes
sh:closed: set to true to close the shape.
sh:ignoredProperties: optional list of properties that are also permitted,
in addition to those explicitly enumerated via sh:property.
ex:ClosedShape a sh:NodeShape ;
sh:targetNode ex:Alice, ex:Bob ;
sh:closed true ;
sh:ignoredProperties (rdf:type) ;
sh:property [ sh:path ex:firstName ; ] ;
sh:property [ sh:path ex:lastName ; ] .
ex:Alice ex:firstName "Alice" .
ex:Bob ex:firstName "Bob" ;
ex:middleInitial "J" .
46. 46
Non-validating constraints
sh:name: provide human-readable labels for the property.
sh:description: provide descriptions of the property in the given context.
sh:order: indicate the relative order of the property shape for purposes
such as form building.
sh:group: indicate that the shape belongs to a group of related property
shapes.
Property shapes may have a single sh:defaultValue. The default value
has no fixed validation semantics.
47. 47
Non-validating constraints
ex:PersonFormShape a sh:NodeShape ;
sh:property [
sh:path ex:firstName ;
sh:name "first name" ;
sh:description "The given name(s)" ;
sh:order 0 ;
sh:group ex:NameGroup ; ] ;
sh:property [
sh:path ex:lastName ;
sh:name "last name" ;
sh:description "The last name" ;
sh:order 1 ;
sh:group ex:NameGroup ; ] ;
sh:property [
sh:path ex:streetAddress ;
sh:name "street address" ;
sh:description "The street address" ;
sh:order 11 ;
sh:group ex:AddressGroup ; ] ;
sh:property [
sh:path ex:locality ;
sh:name "locality" ;
sh:description "The town or city " ;
sh:order 12 ;
sh:group ex:AddressGroup ; ] ;
sh:property [
sh:path ex:postalCode ;
sh:name "postal code" ;
sh:name "zip code"@en-US ;
sh:description "The postal code" ;
sh:order 13 ;
sh:group ex:AddressGroup ; ] .
ex:NameGroup a sh:PropertyGroup ;
sh:order 0 ;
rdfs:label "Name" .
ex:AddressGroup a sh:PropertyGroup ;
sh:order 1 ;
rdfs:label "Address" .