This document provides an overview of a Linked Data tutorial presented on March 6, 2009. The tutorial covered topics such as the motivation for Linked Open Data, relevant technologies like URIs, RDF, and SPARQL, and principles for publishing and interlinking data on the web in a way that is accessible to both humans and machines. The goal of Linked Data is to open up data silos and make public data available on the web in a standardized format.
Semantic Web technologies (such as RDF and SPARQL) excel at bringing together diverse data in a world of independent data publishers and consumers. Common ontologies help to arrive at a shared understanding of the intended meaning of data.
However, they don’t address one critically important issue: What does it mean for data to be complete and/or valid? Semantic knowledge graphs without a shared notion of completeness and validity quickly turn into a Big Ball of Data Mud.
The Shapes Constraint Language (SHACL), an upcoming W3C standard, promises to help solve this problem. By keeping semantics separate from validity, SHACL makes it possible to resolve a slew of data quality and data exchange issues.
Presented at the Lotico Berlin Semantic Web Meetup.
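For readers new to SHACL, here is a minimal sketch of what a shape looks like; the `ex:` names are invented for illustration. A shape pairs a target (which nodes to check) with constraints (what must hold), keeping validity separate from semantics:

```turtle
@prefix sh:  <http://www.w3.org/ns/shacl#> .
@prefix ex:  <http://example.org/> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .

# Every ex:Person must have exactly one ex:name, and it must be a string.
ex:PersonShape
    a sh:NodeShape ;
    sh:targetClass ex:Person ;
    sh:property [
        sh:path ex:name ;
        sh:datatype xsd:string ;
        sh:minCount 1 ;
        sh:maxCount 1 ;
    ] .
```

A SHACL engine reports which nodes violate which constraints, rather than inferring new facts the way an ontology reasoner would.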
If you’re already a SQL user then working with Hadoop may be a little easier than you think, thanks to Apache Hive. It provides a mechanism to project structure onto the data in Hadoop and to query that data using a SQL-like language called HiveQL (HQL).
This cheat sheet covers:
-- Query
-- Metadata
-- SQL Compatibility
-- Command Line
-- Hive Shell
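As a flavour of the HiveQL the cheat sheet covers, a minimal sketch (table and column names are made up): structure is projected onto files already in Hadoop, which can then be queried with SQL-like syntax.

```sql
-- Project a table structure onto tab-separated files in HDFS...
CREATE EXTERNAL TABLE page_views (
  view_time  TIMESTAMP,
  user_id    BIGINT,
  page_url   STRING
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
LOCATION '/data/page_views';

-- ...then query them like any SQL table.
SELECT page_url, COUNT(*) AS views
FROM page_views
GROUP BY page_url;
```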
Introduction to the Data Web, DBpedia and the Life-cycle of Linked Data (Sören Auer)
Over the past 4 years, the Semantic Web activity has gained momentum with the widespread publishing of structured data as RDF. The Linked Data paradigm has therefore evolved from a practical research idea into a very promising candidate for addressing one of the biggest challenges of computer science: the exploitation of the Web as a platform for data and information integration. To translate this initial success into a world-scale reality, a number of research challenges need to be addressed: the performance gap between relational and RDF data management has to be closed, coherence and quality of data published on the Web have to be improved, provenance and trust on the Linked Data Web must be established, and, generally, the entrance barrier for data publishers and users has to be lowered. This tutorial will discuss approaches for tackling these challenges. As an example of a successful Linked Data project we will present DBpedia, which leverages Wikipedia by extracting structured information and making this information freely accessible on the Web. The tutorial will also outline some recent advances in DBpedia, such as the Mappings Wiki, DBpedia Live, and the recently launched DBpedia benchmark.
This Hadoop Hive tutorial provides a complete introduction to Hive: Hive architecture, Hive commands, Hive fundamentals and HiveQL. Fundamental concepts of BIG Data & Hadoop are also covered extensively.
By the end, you'll have a solid grasp of Hadoop Hive basics.
PPT Agenda
✓ Introduction to BIG Data & Hadoop
✓ What is Hive?
✓ Hive Data Flows
✓ Hive Programming
----------
What is Apache Hive?
Apache Hive is a data warehousing infrastructure built on top of Hadoop and targeted at SQL programmers. Hive lets SQL programmers enter the Hadoop ecosystem directly, without prerequisites in Java or other programming languages. HiveQL is similar to SQL; it is used to manage and query data for Hadoop and MapReduce operations.
----------
Hive has the following 5 Components:
1. Driver
2. Compiler
3. Shell
4. Metastore
5. Execution Engine
----------
Applications of Hive
1. Data Mining
2. Document Indexing
3. Business Intelligence
4. Predictive Modelling
5. Hypothesis Testing
----------
Skillspeed is a live e-learning company focusing on high-technology courses. We provide live instructor-led training in BIG Data & Hadoop featuring real-time projects, 24/7 lifetime support and 100% placement assistance.
Email: sales@skillspeed.com
Website: https://www.skillspeed.com
This is an introductory deck on LOD (Linked Open Data). LOD is another way to provide, share and reuse public data; as an approach to Open Data, it is at once a method, a technology, and data itself for sharing and reusing data on the Web.
This is an introduction to Spark that starts from big data concepts, covers the emergence of big data analytics platforms (Hadoop), and explains the background behind Spark's appearance.
The concept of RDDs and the Spark SQL library are explained in some detail (with brief notes on the Tungsten engine and the Catalyst optimizer).
It ends with a simple installation and an interactive analysis hands-on exercise.
The original PPT is public, so feel free to adapt it whenever and wherever needed, as long as you credit the source.
Figures and materials taken from other slides or blogs are credited in small print, but some materials found while writing early versions of this deck have unclear sources. If you know a source, please let me know and I will update the deck. (Tips welcome!)
Apache Hive is a rapidly evolving project which continues to enjoy great adoption in the big data ecosystem. As Hive continues to grow its support for analytics, reporting, and interactive query, the community is hard at work improving it along many different dimensions and use cases. This talk will provide an overview of the latest and greatest features and optimizations which have landed in the project over the last year. Materialized views, the extension of ACID semantics to non-ORC data, and workload management are some noteworthy new features.
We will discuss optimizations which provide major performance gains, including significantly improved performance for ACID tables. The talk will also provide a glimpse of what is expected to come in the near future.
In this Introduction to Apache Sqoop the following topics are covered:
1. Why Sqoop
2. What is Sqoop
3. How Sqoop Works
4. Importing and Exporting Data using Sqoop
5. Data Import in Hive and HBase with Sqoop
6. Sqoop and NoSQL data stores, e.g. MongoDB
7. Resources
Agenda:
1. Data Flow Challenges in an Enterprise
2. Introduction to Apache NiFi
3. Core Features
4. Architecture
5. Demo: Simple Lambda Architecture
6. Use Cases
7. Q & A
In this presentation, Raghavendra BM of Valuebound discusses the basics of MongoDB, an open-source document database and leading NoSQL database.
----------------------------------------------------------
Get Socialistic
Our website: http://valuebound.com/
LinkedIn: http://bit.ly/2eKgdux
Facebook: https://www.facebook.com/valuebound/
Twitter: http://bit.ly/2gFPTi8
Apache Hive is a data warehousing system for large volumes of data stored in Hadoop. However, the data is useless unless you can use it to add value to your company. Hive provides a SQL-based query language that dramatically simplifies the process of querying your large data sets. That is especially important while your data scientists are developing and refining their queries to improve their understanding of the data. In many companies, such as Facebook, Hive accounts for a large percentage of the total MapReduce queries that are run on the system. Although Hive makes writing large data queries easier for the user, there are many performance traps for the unwary. Many of them are artifacts of the way Hive has evolved over the years and of the requirement that the default behavior must be safe for all users. This talk will present examples of how Hive users have made mistakes that made their queries run much, much longer than necessary. It will also present guidelines for how to get better performance for your queries and how to look at the query plan to understand what Hive is doing.
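One classic example of such a trap (illustrative; the table and partition column are made up, assuming a table partitioned by a string date column `dt`): wrapping the partition column in a function can defeat partition pruning, forcing Hive to scan every partition.

```sql
-- Slow: the function call on the partition column defeats pruning,
-- so every partition of page_views is read.
SELECT COUNT(*) FROM page_views
WHERE to_date(dt) = '2011-01-01';

-- Fast: comparing the partition column directly lets Hive read
-- only the single matching partition.
SELECT COUNT(*) FROM page_views
WHERE dt = '2011-01-01';
```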
Linked Open Data Principles, Technologies and Examples (Open Data Support)
A theoretical and practical introduction to Linked Data, focusing on the value proposition, the theory and foundations, and practical examples. The material is tailored to the context of the EU institutions.
These are slides I used for a talk about a Semantic Web use case. Not everyone knows what exactly the Semantic Web is about, so I created a set of slides showing this in a simple and correct way. The use-case slides are removed from this publicly available version. An animated version is here: goo.gl/qKoF6k . Contact me for the sources!
Ontology is the study of, or concern about, what kinds of things exist: what entities there are in the universe. The word derives from the Greek onto (being) and logia (written or spoken discourse). Ontology is a branch of metaphysics, the study of first principles or the root of things.
This presentation is about:
- Introduction to OWL
- OWL Basics
- Class Expression Axioms
- Property Axioms
- Assertions
- Class Expressions: Propositional Connectives and Enumeration of Individuals
- Class Expressions: Property Restrictions
- Class Expressions: Cardinality Restrictions
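As a taste of the property restrictions listed above, here is a minimal sketch in Turtle; the `ex:` names are invented for illustration.

```turtle
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix ex:   <http://example.org/> .

# An existential property restriction: a Parent is exactly
# a Person that has at least one child who is a Person.
ex:Parent a owl:Class ;
    rdfs:subClassOf ex:Person ;
    owl:equivalentClass [
        a owl:Restriction ;
        owl:onProperty ex:hasChild ;
        owl:someValuesFrom ex:Person
    ] .
```

Given this axiom, a reasoner can classify any `ex:Person` with an asserted `ex:hasChild` link to a person as an `ex:Parent`, without that fact being stated explicitly.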
This tutorial explains the Data Web vision, some preliminary standards and technologies, as well as some tools and technological building blocks developed by the AKSW research group at Universität Leipzig.
A Semantic Data Model for Web Applications (Armin Haller)
This presentation gives a short overview of the Semantic Web, RDFa and Linked Data. The second part briefly discusses ActiveRaUL, our model and system for developing form-based Web applications using Semantic Web technologies.
This presentation covers the whole spectrum of Linked Data production and exposure. After a grounding in the Linked Data principles and best practices, with special emphasis on the VoID vocabulary, we cover R2RML, operating on relational databases, Open Refine, operating on spreadsheets, and GATECloud, operating on natural language. Finally we describe the means to increase interlinkage between datasets, especially the use of tools like Silk.
O'Reilly Where 2.0 2011
As a result of cheap storage and computing power, society is measuring and storing increasing amounts of information.
It is now possible to efficiently crunch Petabytes of data with tools like Hadoop.
In this O'Reilly Where 2.0 tutorial, Pete Skomoroch, Sr. Data Scientist at LinkedIn, gives an overview of spatial analytics and how you can use tools like Hadoop, Python, and Mechanical Turk to process location data and derive insights about cities and people.
Topics:
* Data Science & Geo Analytics
* Useful Geo tools and Datasets
* Hadoop, Pig, and Big Data
* Cleaning Location Data with Mechanical Turk
* Spatial Tweet Analytics with Hadoop & Python
* Using Social Data to Understand Cities
The Linked Data and Services presentation was presented by Andreas Harth (KIT) and Barry Norton (KIT) at the PlanetData project Kick-off Meeting on October 11, 2010 in Palma de Mallorca, Spain.
DBpedia Archive using Memento, Triple Pattern Fragments, and HDT (Herbert Van de Sompel)
DBpedia is the Linked Data version of Wikipedia. Starting in 2007, several DBpedia dumps have been made available for download. In 2010, the Research Library at the Los Alamos National Laboratory used these dumps to deploy a Memento-compliant DBpedia Archive, in order to demonstrate the applicability and appeal of accessing temporal versions of Linked Data sets using the Memento “Time Travel for the Web” protocol. The archive supported datetime negotiation to access various temporal versions of RDF descriptions of DBpedia subject URIs.
In a recent collaboration with the iMinds Group of Ghent University, the DBpedia Archive received a major overhaul. The initial MongoDB storage approach, which was unable to handle increasingly large DBpedia dumps, was replaced by HDT, the Binary RDF Representation for Publication and Exchange. And, in addition to the existing subject URI access point, Triple Pattern Fragments access, as proposed by the Linked Data Fragments project, was added. This allows datetime negotiation for URIs that identify RDF triples that match subject/predicate/object patterns. To add this powerful capability, native Memento support was added to the Linked Data Fragments Server of Ghent University.
In this talk, we will include a brief refresher of Memento, and will cover Linked Data Fragments, Triple Pattern Fragments, and HDT in more detail. We will share lessons learned from this effort and demo the new DBpedia Archive, which, at this point, holds over 5 billion RDF triples.
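The datetime negotiation described above can be sketched with Python's standard library. The TimeGate URL pattern below is an assumption for illustration, and the request is only constructed, never sent.

```python
# Sketch of Memento datetime negotiation (RFC 7089) against a TimeGate.
from datetime import datetime, timezone
from email.utils import format_datetime
from urllib.request import Request

# The temporal version of the resource we want, as an RFC 1123 datetime.
when = datetime(2010, 6, 1, tzinfo=timezone.utc)

# Hypothetical TimeGate URL for the DBpedia description of Berlin.
req = Request(
    "http://dbpedia.mementodepot.org/timegate/"
    "http://dbpedia.org/resource/Berlin"
)
req.add_header("Accept-Datetime", format_datetime(when, usegmt=True))
req.add_header("Accept", "application/rdf+xml")

# urllib stores header keys with only the first letter capitalized.
print(req.get_header("Accept-datetime"))
```

The TimeGate answers such a request by redirecting to the Memento whose archival datetime is closest to the value of `Accept-Datetime`.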
Practical Semantic Web and Why You Should Care - DrupalCon DC 2009 (Boris Mann)
Presented at Drupalcon DC 2009 - http://dc2009.drupalcon.org/session/practical-semantic-web-and-why-you-should-care
An overview of Semantic Web concepts and RDF. Exploration of RDFa. How open data fits. Examples of modules and functionality in Drupal today, and a plan for Drupal 7.
QALD-7 @ ESWC 2017 Portoroz, Slovenia
This work was supported by grants from the EU H2020 Framework Programme provided for the project HOBBIT (GA no. 688227).
Presentation of QALD 7 challenge at ESWC2017: Question Answering over Linked Data.
#3 INTEROPERABLE covers:
-- an overview of the 3 INTEROPERABLE principles, which use vocabularies for knowledge representation and standardisation and reference other metadata
-- resources to support institutional awareness and uptake of the Interoperable principles
Speakers:
1) Keith Russell (ANDS) provides an overview of the key components of Interoperability
2) Simon Cox and Jonathan Yu (CSIRO) present how they have made the research data in the OzNome project Interoperable, not only for humans but also for machines
Full YouTube recording: https://youtu.be/MeFl9WrtG20
Slides prepared with Clement Levallois for the tutorial held at the Meertens Institute. The presentation covers the need for using Linked Data to make data machine-readable. The hands-on part focuses on annotating a profile page with RDFa.
Slides about "Information and Data Extraction on the Web" for "Information management on the Web" course at DIA (Computer Science Department) of Roma Tre University
Maphub and Pelagios: Applying Linked Data in the Digital Humanities (Bernhard Haslhofer)
In recent years, scientists at the Austrian Institute of Technology have been involved in numerous projects in the digital humanities area. In this talk, Dr. Bernhard Haslhofer will present two of them, both having a strong focus on applying the Linked Open Data method to datasets produced throughout the project. The first is Maphub (http://maphub.github.io/), an open source Web application which allows users to create annotations on historical maps, link these annotations with other Web sources (e.g., Wikipedia), and share annotations as Linked Open Data following the Open Annotation model. The second is Pelagios (http://pelagios-project.blogspot.co.at/), a community initiative that aims to facilitate better linking between online resources documenting the past, based on the places they refer to. To date, Pelagios interconnects 900,000+ heterogeneous digital objects - literature, archaeology, epigraphy, cartography - from 40+ international partners. The current focus of the project is to annotate Early Geospatial Documents - documents that use written or visual representation to describe geographic space prior to the European discovery of the Americas in 1492 - and make the annotations available as (Linked) Open Data.
GraphRAG is All You Need? LLM & Knowledge Graph (Guy Korland)
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
Securing your Kubernetes cluster: a step-by-step guide to success! (KatiaHIMEUR1)
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
DevOps and Testing slides at DASA Connect (Kari Kakkonen)
Slides by me and Rik Marselis from the DASA Connect conference on 30.5.2024. We discuss what testing is, then what agile testing is, and finally what testing in DevOps looks like. We closed with a lovely workshop in which participants tried to find different ways to think about quality and testing in different parts of the DevOps infinity loop.
Generating a custom Ruby SDK for your web service or Rails API using Smithy (g2nightmarescribd)
Have you ever wanted a Ruby client API to communicate with your web service? Smithy is a protocol-agnostic language for defining services and SDKs. Smithy Ruby is an implementation of Smithy that generates a Ruby SDK using a Smithy model. In this talk, we will explore Smithy and Smithy Ruby to learn how to generate custom feature-rich SDKs that can communicate with any web service, such as a Rails JSON API.
Epistemic Interaction - tuning interfaces to provide information for AI support (Alan Dix)
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
Essentials of Automations: Optimizing FME Workflows with Parameters (Safe Software)
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo... (James Anderson)
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
UiPath Test Automation using UiPath Test Suite series, part 3 (DianaGray10)
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
UI automation introduction
UI automation sample
Desktop automation flow
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -... (DanBrown980551)
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
Linked Data Tutorial
1. Linked Data Tutorial
@ Vlaams Theater Instituut
Brussels, 03/2009
bernhard.haslhofer@univie.ac.at
Bernhard Haslhofer, Linked Data Tutorial
Friday, March 6, 2009
2. Contents
• Motivation for Linked Open Data (LOD)
• Base Technologies
• The Linked Data Principles
• Publishing Solutions and Tools
• Vocabularies & Interoperability Issues
• Demos + Discussion
3. How do we look up information about things?
4. ... about this book?
5. ... about this play?
6. ... about these concepts?
7. ... about things?
...that book? ...this play? ...these concepts?
we look up their names (Metamorphosis, Effi Briest, Love / Hate)
and see what we can find out
14. If information is not available on the Web, people tend to ignore it.
15. Current Situation
(Diagram: Database → JDBC → Web Server → HTML → Web)
16. The current Web is a Web of Documents intended for human interpretation.
17. The data are still locked in closed silos.
(Diagram: many isolated databases)
18. Other applications cannot access and process these data unless...
19. Existing Approaches
CD-ROM, FTP, REST, OAI-PMH, Web Services (SOAP, WSDL, UDDI), SRU/SRW, OAI-ORE, RSS, Atom, JSON, RPC, RMI, CORBA, Z39.50, DCOM
20. The linked data vision is to...
21. (Diagram: Documents, Data, Database)
22. • open the data silos and get rid of the repository-centric mindset
• publish data of public interest on the Web
• in a way that other applications can access and interpret the data
• using common Web technologies
23. make it possible for applications to look up (meta)data ...
24. ... about things
...that book? ...this play? ...these concepts?
by looking up their names on the Web:
http://dbpedia.org/resource/The_Metamorphosis
http://dbpedia.org/resource/Effi_Briest
http://dbpedia.org/resource/Love
http://dbpedia.org/resource/Hatred
25. To do so, we need some technology
26. Uniform Resource Identifier (URI)
• Names (Identifiers) for resources in an open environment
• dereferenceable URI = HTTP URI (URL)
http://dbpedia.org/resource/Hallstatt_culture
27. Resource Description Framework (RDF)
• a model for representing metadata on the Web
• in the form of statements (triples)
(Diagram: http://dbpedia.org/resource/Hallstatt_culture
  dbpprop:abstract “The Hallstatt culture was ...” ;
  rdfs:label “Hallstatt culture” ;
  skos:subject http://dbpedia.org/resource/Category:Iron_Age_Europe)
28. RDF/XML, N3, Turtle, etc.
• for exchanging RDF data
• serialization & de-serialization
<rdf:Description rdf:about="http://dbpedia.org/resource/Hallstatt_culture">
  <dbpprop:abstract>The Hallstatt culture was...</dbpprop:abstract>
  <skos:subject rdf:resource="http://dbpedia.org/resource/Category:Iron_Age_Europe"/>
  <rdfs:label>Hallstatt culture</rdfs:label>
</rdf:Description>
29. RDFS & OWL
• languages for describing vocabularies
(Diagram: dbpprop:abstract, skos:subject and rdfs:label each have rdf:type rdf:Property)
30. Simple Knowledge Organization System (SKOS)
• a language for describing controlled vocabularies
(Diagram: dbpedia:Category:Iron_Age_Europe (skos:prefLabel “Iron Age Europe”) has skos:narrower concepts xyz:HallstattCulture (skos:prefLabel “Hallstatt Period”, skos:altLabel “Hallstatt Culture”) and xyz:LateneCulture (skos:prefLabel “La Tène Period”))
31. SPARQL
• a query language & protocol for accessing RDF data
via the Web
SELECT ?uri
WHERE {
?uri skos:subject <http://dbpedia.org/resource/Category:Iron_Age_Europe>.
}
32. The vision is becoming reality...
33. http://en.wikipedia.org/wiki/Hallstatt_culture
<http://dbpedia.org/resource/Hallstatt_culture>
  p:abstract “Die Hallstattzeit bezeichnet...”@de ;
  p:abstract “The Hallstatt culture was...”@en ;
  rdfs:label “Hallstatt culture”@en ;
  rdfs:label “Hallstattzeit”@de ;
  rdfs:label “Civilisation de Hallstatt”@fr ;
  rdfs:label “ ”@ja ;
  skos:subject dbpedia:Category:Iron_Age_Europe ;
  ...
34. <http://dbpedia.org/resource/Hallstatt_culture>
  ...
  p:hasPhotoCollection <http://www4.wiwiss.fu-berlin.de/flickrwrappr/photos/Hallstatt_culture> ;
  ...
35. Library of Congress Subject Headings
37. The LOD idea in brief
• expose data on the Web, not just documents
• interlink these data with those of other data sources
38. LOD benefits
• other humans and applications can
• easily access your data using Web technologies
• follow the links in order to obtain further contextual information
• links to your data and search engine indices can increase the visibility of your data
39. “Stop hugging your data”
Sir Tim Berners-Lee, 2009
(c) Paul Miller, http://www.slideshare.net/cloudofdata/toward-the-data-cloud
40. Contents
• Motivation for Linked Open Data (LOD)
• Base Technologies
• The Linked Data Principles
• Publishing Solutions and Tools
• Vocabularies & Interoperability Issues
• Demos + Discussion
41. The Semantic Web Layer Cake
(Diagram, bottom to top: URI / Unicode; XML; Data Interchange: RDF; RDF-S; Query: SPARQL; Vocabulary: SKOS, Ontology: OWL, Rules: RIF; Unifying Logic; Proof; Trust; User Interface & Applications; with Crypto alongside the stack)
42. From a Database point of view
Database Domain                        | Semantic Web
Relational Model (Tables)              | URI, RDF
SQL DDL (CREATE TABLE table-name ...)  | RDFS, OWL
SQL Query Language (SELECT * FROM ...) | SPARQL
43. Uniform Resource Identifiers (URI)
• Unambiguous name for “something”
• for a digital resource
• for a concept within a vocabulary
• etc.
• “A URI is a compact sequence of characters that identifies an abstract or physical resource” [RFC 3986]
Examples: http://dbpedia.org/resource/Hallstatt_culture, http://dbpedia.org/resource/Hallstatt
44. Uniform Resource Identifiers (URI)
• “Uniform”:
• different types of resource identifiers in the same context
• common syntactic conventions
• “Resource”:
• whatever might be identified by a URI
• “Identifier”:
• distinguish one resource from other ones
45. Uniform Resource Identifiers (URI)
• A URI can be classified as a locator (URL), a name (URN), or both
• Uniform Resource Locator (URL)
• a means for locating the resource by describing its
primary access mechanism (e.g., http://example.com)
• Uniform Resource Name (URN)
• a means for naming the resource (e.g.,
urn:example.com:animal)
46. Uniform Resource Identifiers (URI)
• Generic syntax: a hierarchical sequence of components
URI = scheme “:” hier-part [ “?” query ] [ “#” fragment ]
foo://example.com:8042/over/there?name=ferret#nose
(scheme: foo | authority: example.com:8042 | path: /over/there | query: name=ferret | fragment: nose)
urn:example:animal:ferret:nose
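As an aside (my illustration, not part of the original slides), the components above can be pulled apart with Python's standard library; the URI is the deck's own example:

```python
from urllib.parse import urlparse

# Split the example URI into the RFC 3986 components.
uri = "foo://example.com:8042/over/there?name=ferret#nose"
parts = urlparse(uri)

print(parts.scheme)    # foo
print(parts.netloc)    # example.com:8042  (the authority)
print(parts.path)      # /over/there
print(parts.query)     # name=ferret
print(parts.fragment)  # nose
```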
47. Uniform Resource Identifiers (URI)
• Example URIs:
ftp://ftp.is.co.za/rfc/rfc1808.txt
http://www.ietf.org/rfc/rfc2396.txt
ldap://[2001:db8::7]/c=GB?objectClass?one
mailto:John.Doe@example.com
news:comp.infosystems.www.servers.unix
tel:+1-816-555-1212
urn:oasis:names:specification:docbook:dtd:xml:4.1.2
48. Internationalized Resource Identifiers (IRI)
• can we use non-ASCII characters in URIs (e.g., Umlaut, Chinese, Arabic, Hebrew, etc.)?
• extend the syntax of URI so that a much wider repertoire of characters can be used
• uses the Universal Character Set (ISO 10646)
• mapping from IRIs to URIs
• IRIs can be used wherever URIs are allowed
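A minimal sketch of this IRI-to-URI mapping (my illustration with a hypothetical IRI, not from the slides): UTF-8-encode the non-ASCII characters and percent-encode them, leaving reserved characters alone.

```python
from urllib.parse import quote

def iri_to_uri(iri: str) -> str:
    # Simplified sketch of the RFC 3987 mapping: percent-encode
    # non-ASCII characters as UTF-8, keep reserved/unreserved ones.
    return quote(iri, safe=":/?#[]@!$&'()*+,;=-._~%")

# Hypothetical IRI containing a German umlaut:
print(iri_to_uri("http://example.org/Hallstättersee"))
# http://example.org/Hallst%C3%A4ttersee
```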
49. The Semantic Web Layer Cake
50. Resource Description Framework (RDF)
• The basic structural element of RDF is the statement
(Diagram: http://dbpedia.org/resource/Hallstatt_culture
  rdfs:label “Hallstatt culture” ;
  skos:subject http://dbpedia.org/resource/Category:Iron_Age_Europe)
Subject | Predicate | Object
Subject | Property  | Value
51. Resource Description Framework (RDF)
• The subject of a statement is always a (URI) resource
• The predicate of a statement is always a (URI) resource
• The object of a statement can be a resource (URI) or a typed literal
• An RDF statement forms a triple
• Triples can be merged into a set of triples, forming a directed labeled graph
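To illustrate the last two bullets (a sketch of mine, not from the deck): a statement can be modelled as a (subject, predicate, object) tuple, and merging triple sets is plain set union.

```python
DBP = "http://dbpedia.org/resource/"

# Each RDF statement is a (subject, predicate, object) triple.
triples = {
    (DBP + "Hallstatt_culture", "rdfs:label", "Hallstatt culture"),
    (DBP + "Hallstatt_culture", "skos:subject", DBP + "Category:Iron_Age_Europe"),
}

# Triples from another source merge into the same directed labeled graph.
triples |= {(DBP + "Hallstatt_culture", "rdf:type", "skos:Concept")}

# All statements about one subject:
about = [t for t in triples if t[0] == DBP + "Hallstatt_culture"]
print(len(about))  # 3
```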
52. Resource Description Framework (RDF)
• Literals can be plain or typed using arbitrary datatypes
• It is recommended to use XML Schema datatypes [XMLS2]
(Diagram: http://dbpedia.org/resource/Hallstatt
  rdfs:label “Hallstatt” ;
  geo:long “13.646667”^^<http://www.w3.org/2001/XMLSchema#float>)
53. Resource Description Framework (RDF)
• Support for multilingual labels
• Plain literals may have a language tag, as defined by RFC 3066
(Diagram: http://dbpedia.org/resource/Hallstatt
  rdfs:label “Hallstatt”@de ;
  rdfs:label “Гальштат”@ru ;
  rdfs:label “ ”@ja)
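As a sketch of how a consumer might use these language tags (a hypothetical helper of mine, not part of the slides):

```python
def label_for(labels, lang):
    # labels: list of (literal, language-tag) pairs; return the literal
    # tagged with the requested language, or None if there is none.
    for text, tag in labels:
        if tag == lang:
            return text
    return None

labels = [("Hallstatt", "de"), ("Гальштат", "ru")]
print(label_for(labels, "ru"))  # Гальштат
print(label_for(labels, "fr"))  # None
```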
54. Resource Description Framework (RDF)
• Blank nodes can be used to model structured information that needs no URI by itself
• they must still be distinguishable within an RDF graph. This is done using blank node identifiers
(Diagram: http://www.ex.com/staff#85740 has an #address blank node with #street "1501 Grant Avenue", #city "Bedford", #state "Massachusetts" and #zipcode "01730")
55. Resource Description Framework (RDF)
• Containers allow describing groups of things
• Bag (unordered set), Seq (ordered set), Alt (choice)
(Diagram: http://www.ex.com/courses#6.001 #students points to a blank node of rdf:type rdf:Bag, whose members rdf:_1 ... rdf:_4 are #Amy, #Mohamed, #Johann and #Maria)
56. Resource Description Framework (RDF)
• Collections allow the definition of closed containers
• define the exact set of items in a collection
(Diagram: http://www.ex.com/courses#6.001 #students points to an rdf:List built from rdf:first/rdf:rest pairs holding #Amy, #Mohamed and #Johann, terminated by rdf:nil)
57. Resource Description Framework (RDF)
• RDF can be serialized using various syntax formats, e.g. RDF/XML:
<?xml version="1.0"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#"
         xmlns:dbpprop="http://dbpedia.org/property/" .... >
  <rdf:Description rdf:about="http://dbpedia.org/resource/Hallstatt_culture">
    <dbpprop:abstract>The Hallstatt culture was...</dbpprop:abstract>
    <skos:subject rdf:resource="http://dbpedia.org/resource/Category:Iron_Age_Europe"/>
    <rdfs:label>Hallstatt culture</rdfs:label>
  </rdf:Description>
</rdf:RDF>
58. Resource Description Framework (RDF)
• RDF can be serialized using various syntax formats, e.g. Turtle / N3:
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix dbpprop: <http://dbpedia.org/property/> .
<http://dbpedia.org/resource/Hallstatt_culture>
  dbpprop:abstract “The Hallstatt culture was...” ;
  skos:subject <http://dbpedia.org/resource/Category:Iron_Age_Europe> ;
  rdfs:label “Hallstatt culture” .
59. Resource Description Framework (RDF)
• Reification
• allows making statements about other statements
• rarely used in practice
60. The Semantic Web Layer Cake
61. RDF Schema (RDFS)
• extends RDF with the possibility to define classes and associated properties
• allows RDF applications to agree on a common data description vocabulary
• is implemented on top of RDF: each valid RDFS document is a valid RDF document
62. RDF Schema (RDFS)
• classes and subclass relationships
(Diagram: yago:City and yago:CitiesAndTownsInUpperAustria are of rdf:type rdfs:Class; yago:CitiesAndTownsInUpperAustria rdfs:subClassOf yago:City; http://dbpedia.org/resource/Hallstatt rdf:type yago:CitiesAndTownsInUpperAustria)
63. RDF Schema (RDFS)
• properties and subPropertyOf relationships
(Diagram: dcterms:description and dbpprop:abstract are of rdf:type rdf:Property; dbpprop:abstract rdfs:subPropertyOf dcterms:description)
Note: in reality, dbpprop:abstract is not a subproperty of dcterms:description
64. RDF Schema (RDFS)
• domain and range relationships
(Diagram: xyz:Article rdf:type rdfs:Class; dbpprop:abstract rdfs:domain xyz:Article ; rdfs:range xsd:string)
65. RDF Schema (RDFS)
• comments: rdfs:comment
• human-readable names for resources: rdfs:label
• machine-readable information: rdfs:seeAlso, rdfs:isDefinedBy
66. RDF Schema (RDFS)
• Shortcomings:
• no cardinality constraints on properties
• no inverse / transitive / symmetric properties
• no union / disjoint / enumeration classes
67. The Semantic Web Layer Cake
68. Web Ontology Language (OWL)
• An extension of RDFS
• logic based; provides automated reasoning support
• Three “sub-languages”
• OWL Full: maximum expressiveness and syntactic freedom
• OWL DL: restricted to first order logic; decidable
• OWL Lite: easy to implement; provides mechanisms for creating classification hierarchies with simple constraints
69. Web Ontology Language (OWL)
• owl:Class defines a group of individuals that belong together because of shared properties
• owl:Thing is the class of all individuals, and a superclass of all OWL classes
(Diagram: owl:Class rdfs:subClassOf rdfs:Class; dbpedia-owl:Place rdf:type owl:Class and rdfs:subClassOf owl:Thing; http://dbpedia.org/resource/Hallstatt rdf:type dbpedia-owl:Place)
70. Web Ontology Language (OWL)
• DatatypeProperty: properties whose value is a literal
• ObjectProperty: properties whose value is an individual
(Diagram: dbpedia-owl:id rdf:type owl:DatatypeProperty ; rdfs:domain dbpedia-owl:WorldHeritageSite ; rdfs:range xsd:string. dbpedia-owl:region rdf:type owl:ObjectProperty ; rdfs:domain dbpedia-owl:WorldHeritageSite ; rdfs:range dbpedia-owl:PopulatedPlace. Both owl:DatatypeProperty and owl:ObjectProperty are rdfs:subClassOf rdf:Property)
71. Web Ontology Language (OWL)
• owl:equivalentClass states that two classes are equivalent, i.e., they have the same instances
(Diagram: dbpedia:Car owl:equivalentClass dbpedia:Automobile)
Note: DBpedia actually uses dbpedia:redirect
72. Web Ontology Language (OWL)
• owl:equivalentProperty states that two properties are equivalent; they relate one individual to the same set of other individuals
(Diagram: myOnt:abstract owl:equivalentProperty dbpprop:abstract)
73. Web Ontology Language (OWL)
• owl:sameAs states that two individuals are the same
(Diagram: http://dbpedia.org/resource/Hallstatt_culture owl:sameAs fbase:Hallstatt culture)
74. Web Ontology Language (OWL)
• owl:inverseOf (e.g., hasChild inverseOf hasParent)
• owl:transitiveProperty (e.g., hasAncestor)
• owl:symmetricProperty (e.g., knows)
• owl:functionalProperty (e.g., hasPrimaryEmployer)
• owl:inverseFunctionalProperty (e.g., hasSocialSecurityNumber)
75. Web Ontology Language (OWL)
• owl:allValuesFrom restricts properties to values of classes
• owl:minCardinality restricts a property’s minimum cardinality with respect to a class
• owl:maxCardinality restricts a property’s maximum cardinality with respect to a class
• owl:cardinality = shortcut for min/maxCardinality
76. Web Ontology Language (OWL)
• owl:intersectionOf (e.g., EmployedPerson = Person ∩ CorporateResources)
• owl:unionOf (e.g., NorthAmericanCitizen = USCitizen ∪ CanadaCitizen)
• owl:complementOf (e.g., JuniorResearcher = Researcher - SeniorResearcher)
• owl:disjointWith (e.g., Woman ∩ Man = ∅)
77. The Semantic Web Layer Cake
78. Simple Knowledge Organization System (SKOS)
• a family of formal languages for the definition and
representation of controlled vocabularies on the Web
• thesauri
• classification schemes
• taxonomies
• subject-heading systems
• an application of RDF/S & OWL
79. Simple Knowledge Organization System (SKOS)
• vocabulary terms are represented as concepts identified by URIs
(Diagram: dbpedia:Category:Iron_Age_Europe and xyz:HallstattCulture are of rdf:type skos:Concept)
80. Simple Knowledge Organization System (SKOS)
• concepts have preferred, alternate & hidden labels
• a label is either preferred, alternate or hidden; one preferred label per language
(Diagram: xyz:HallstattCulture
  skos:prefLabel “Hallstatt Period”@en ;
  skos:prefLabel “Hallstatt Kultur”@de ;
  skos:altLabel “Hallstatt Culture”@en)
81. Simple Knowledge Organization System (SKOS)
• concepts can be organized in hierarchies by (the inverse) skos:broader and skos:narrower relationships
(Diagram: dbpedia:Category:Iron_Age_Europe
  skos:broader dbpedia:Category:Iron_Age ;
  skos:broader dbpedia:Category:Prehistoric_Europe ;
  skos:narrower xyz:HallstattCulture)
Note: in DBpedia this narrower relationship does not exist
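Since skos:broader links chain upwards, a consumer can collect all broader concepts with a simple graph traversal (a sketch of mine; the dict mirrors the slide's example concepts):

```python
def broader_closure(concept, broader):
    # broader: dict mapping a concept to its direct skos:broader concepts.
    # Returns every concept reachable via repeated broader links.
    seen, stack = set(), [concept]
    while stack:
        for b in broader.get(stack.pop(), []):
            if b not in seen:
                seen.add(b)
                stack.append(b)
    return seen

broader = {
    "xyz:HallstattCulture": ["dbpedia:Category:Iron_Age_Europe"],
    "dbpedia:Category:Iron_Age_Europe": ["dbpedia:Category:Iron_Age",
                                         "dbpedia:Category:Prehistoric_Europe"],
}
print(sorted(broader_closure("xyz:HallstattCulture", broader)))
```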
82. Simple Knowledge Organization System (SKOS)
• concepts can be associated by non-hierarchical (non-transitive) skos:related properties
(Diagram: xyz:HallstattCulture skos:related xyz:HallstattCivilization)
83. Simple Knowledge Organization System (SKOS)
• concepts can have notes for general documentation purposes
• scopeNote: an indication of how the use of a concept is limited in indexing practice
• definition: an explanation of the intended meaning
• example: supplies an example of the use of a concept
• historyNote: describes significant changes to the meaning of a concept
84. Simple Knowledge Organization System (SKOS)
• concepts can be defined as part of well-defined concept schemes (thesauri, classification schemes)
(Diagram: ex:archeologyThesaurus
  dc:title “Archeology Thesaurus” ;
  dc:creator “John Doe” ;
  with xyz:HallstattCulture and xyz:HallstattCivilization each skos:inScheme ex:archeologyThesaurus)
85. Simple Knowledge Organization System (SKOS)
• Other features
• SKOS allows the mapping between different concept schemes
• concepts can be organized in collections
• relationships between concept labels
• definition of broaderTransitive and narrowerTransitive relationships
• ...
86. The Semantic Web Layer Cake
87. SPARQL
• is a query language for accessing RDF data
• currently read-only; no update or delete
• based on matching graph patterns
• is a protocol that defines how queries and results can be transported over a network (over the Web)
88. SPARQL
• a simple query example
SELECT ?uri
WHERE {
?uri skos:subject <http://dbpedia.org/resource/Category:Iron_Age_Europe>.
}
uri
...
dbpedia: Hallstatt_culture
...
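Conceptually, the engine matches the graph pattern against every triple and binds ?uri wherever predicate and object fit. A toy sketch of mine (abbreviated data, not a real SPARQL engine):

```python
CAT = "http://dbpedia.org/resource/Category:Iron_Age_Europe"

# A toy triple store (abbreviated; real DBpedia has millions of triples).
triples = [
    ("dbpedia:Hallstatt_culture", "skos:subject", CAT),
    ("dbpedia:Hallstatt", "rdf:type", "yago:City"),
    ("dbpedia:Basarabi_culture", "skos:subject", CAT),
]

# Match the pattern  ?uri skos:subject <...Iron_Age_Europe>  and bind ?uri.
bindings = [{"uri": s} for (s, p, o) in triples
            if p == "skos:subject" and o == CAT]
print([b["uri"] for b in bindings])
# ['dbpedia:Hallstatt_culture', 'dbpedia:Basarabi_culture']
```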
89. SPARQL
• querying multiple variables
SELECT ?uri ?abstract
WHERE {
?uri skos:subject <http://dbpedia.org/resource/Category:Iron_Age_Europe>.
?uri dbpprop:abstract ?abstract .
}
uri abstract
...
dbpedia: Hallstatt_culture The Hallstatt culture was the...
dbpedia: Hallstatt_culture Гальшта́тская культу́ра ...
90. SPARQL
• querying and filtering literal values
SELECT ?uri ?label
WHERE {
?uri rdfs:label ?label .
?uri skos:subject <http://dbpedia.org/resource/Category:Iron_Age_Europe> .
FILTER regex(?label, “culture”, “i”)
}
uri label
dbpedia: Basarabi_culture Basarabi Culture
... ...
dbpedia: Hallstatt_culture Hallstatt Culture
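The FILTER step behaves like a case-insensitive regular-expression test over the bound labels; in Python terms (a sketch with made-up result rows):

```python
import re

rows = [
    {"uri": "dbpedia:Basarabi_culture", "label": "Basarabi Culture"},
    {"uri": "dbpedia:Hallstatt", "label": "Hallstatt"},
    {"uri": "dbpedia:Hallstatt_culture", "label": "Hallstatt Culture"},
]

# FILTER regex(?label, "culture", "i"): the "i" flag means case-insensitive.
matches = [r for r in rows if re.search("culture", r["label"], re.IGNORECASE)]
print([r["uri"] for r in matches])
# ['dbpedia:Basarabi_culture', 'dbpedia:Hallstatt_culture']
```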
91. SPARQL
• optional graph patterns
SELECT ?uri ?label ?image
WHERE {
?uri rdfs:label ?label .
?uri skos:subject <http://dbpedia.org/resource/Category:Iron_Age_Europe> .
OPTIONAL { ?uri foaf:img ?image }.
}
92. SPARQL
• optional graph patterns with constraints
SELECT ?uri ?label ?image
WHERE {
?uri rdfs:label ?label .
?uri skos:subject <http://dbpedia.org/resource/Category:Iron_Age_Europe> .
OPTIONAL {
?uri foaf:img ?image .
FILTER regex(?image, “jpg”, “i”) .
}.
}
93. SPARQL
• matching alternatives
SELECT ?uri ?classification
WHERE {
?uri skos:subject <http://dbpedia.org/resource/Category:Iron_Age_Europe> .
{ ?uri rdf:type ?classification . }
UNION
{ ?uri skos:subject ?classification . }
}
94. SPARQL
• solution modifiers
SELECT ?uri ?name
WHERE {
?uri skos:subject <http://dbpedia.org/resource/Category:Iron_Age_Europe> .
?uri rdfs:label ?name
}
ORDER BY ?name
LIMIT 10
OFFSET 5
95. SPARQL
• Other SPARQL features:
• rich set of FILTER expressions (see XQuery functions)
• CONSTRUCT, DESCRIBE, ASK queries
• Named Graphs
96. The Semantic Web Layer Cake
97. References
• General
• T. Berners-Lee, J. Hendler, O. Lassila: The Semantic Web. Scientific American, May 2001.
• URI
• T. Berners-Lee, R. Fielding, L. Masinter: Uniform Resource Identifier (URI): Generic Syntax (RFC 3986), January 2005.
• M. Duerst, M. Suignard: Internationalized Resource Identifiers (IRIs) (RFC 3987), January 2005.
• RDF/S
• F. Manola, E. Miller: RDF Primer. W3C Recommendation, February 2004.
• D. Brickley, R.V. Guha: RDF Vocabulary Description Language 1.0: RDF Schema. W3C Recommendation, February 2004.
98. References
• OWL
• Deborah L. McGuinness, Frank van Harmelen (eds.): OWL Web Ontology Language Overview. W3C Recommendation, 10 February 2004. Available at http://www.w3.org/TR/owl-features/
• Natalya F. Noy, Deborah L. McGuinness: Ontology Development 101: A Guide to Creating Your First Ontology. Available at http://www-ksl.stanford.edu/people/dlm/papers/ontology-tutorial-noy-mcguinness.pdf
• SKOS
• SKOS Primer: http://www.w3.org/TR/2008/WD-skos-primer-20080829/
• SPARQL
• Eric Prud’hommeaux, Andy Seaborne (eds.): SPARQL Query Language for RDF. W3C Candidate Recommendation, 14 June 2007. Available at http://www.w3.org/TR/rdf-sparql-query
99. Contents
• Motivation for Linked Open Data (LOD)
• Base Technologies
• The Linked Data Principles
• Publishing Solutions and Tools
• Vocabularies & Interoperability Issues
• Demos + Discussion
100. The 4 principles
• Use URIs as names for things
• Use HTTP URIs so that people can look up those names
• When someone looks up a URI, provide useful information
• Include links to other URIs, so that they can discover more things
101. Use HTTP URIs....
http://dbpedia.org/resource/Hallstatt
http://dbpedia.org/resource/Hallstatt_culture
http://dbpedia.org/resource/Category:World_Heritage_Sites_in_Austria
102. Provide useful information ... for humans
103. Provide useful information ... for machines
<http://dbpedia.org/resource/Hallstatt_culture>
  p:abstract “Die Hallstattzeit bezeichnet...”@de ;
  p:abstract “The Hallstatt culture was...”@en ;
  rdfs:label “Hallstatt culture”@en ;
  rdfs:label “Hallstattzeit”@de ;
  rdfs:label “Civilisation de Hallstatt”@fr ;
  rdfs:label “ ”@ja ;
  skos:subject dbpedia:Category:Iron_Age_Europe ;
  ...
104. Provide useful information for humans and machines
• return different HTTP responses depending on the HTTP Accept header
105. Provide useful information for humans and machines
• best practice: also assign names (URIs) to the various representations
(Diagram: http://dbpedia.org/resource/Hallstatt_culture
  Accept: application/rdf+xml → http://dbpedia.org/data/Hallstatt_culture.rdf
  Accept: text/html → http://dbpedia.org/page/Hallstatt_culture)
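The dispatch such a server performs can be sketched as follows (using the DBpedia-style URIs from the slide; a simplification that only does a substring test instead of real Accept-header parsing):

```python
def representation_for(accept_header: str) -> str:
    # Redirect target for http://dbpedia.org/resource/Hallstatt_culture,
    # chosen by the client's Accept header (simplified substring test).
    if "application/rdf+xml" in accept_header:
        return "http://dbpedia.org/data/Hallstatt_culture.rdf"
    return "http://dbpedia.org/page/Hallstatt_culture"

print(representation_for("application/rdf+xml"))  # the RDF document
print(representation_for("text/html"))            # the HTML page
```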
106. Include links
(Diagram: http://dbpedia.org/resource/Hallstatt_culture
  dbpprop:hasPhotoCollection http://www4.wiwiss.fu-berlin.de/flickrwrappr/photos/Hallstatt_culture ;
  owl:sameAs http://rdf.freebase.com/ns/guid.9202a8c04000641f8000000000239c44)
107. Include links
• other common properties
• rdfs: seeAlso
• foaf: knows
• foaf: based_near
• foaf: topic_interest
108. Include links
• manual link generation
• only for small data sets
• automatic link generation
• pattern-based algorithms (e.g., same ISBN)
• more complex property-based algorithms
• see record-linkage problem in database domain
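A pattern-based linker of the kind mentioned above can be sketched in a few lines (my illustration; the record layout with 'uri' and 'isbn' keys and the example URIs are hypothetical):

```python
def link_by_isbn(dataset_a, dataset_b):
    # Records sharing an ISBN are assumed to describe the same book,
    # so we emit an owl:sameAs triple for each match.
    by_isbn = {r["isbn"]: r["uri"] for r in dataset_b}
    return [(r["uri"], "owl:sameAs", by_isbn[r["isbn"]])
            for r in dataset_a if r["isbn"] in by_isbn]

a = [{"uri": "ex:book1", "isbn": "978-0143105244"}]
b = [{"uri": "other:metamorphosis", "isbn": "978-0143105244"}]
print(link_by_isbn(a, b))
# [('ex:book1', 'owl:sameAs', 'other:metamorphosis')]
```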
109. Benefit
• Clients can easily look up names, retrieve information and follow the links
110. References
• Berners-Lee 2006: Linked Data. Available at http://www.w3.org/DesignIssues/LinkedData.html
• Bizer et al.: How to Publish Linked Data on the Web. Available at http://www4.wiwiss.fu-berlin.de/bizer/pub/LinkedDataTutorial/
111. Contents
• Motivation for Linked Open Data (LOD)
• Base Technologies
• The Linked Data Principles
• Publishing Solutions and Tools
• Vocabularies & Interoperability Issues
• Demos + Discussion
112. Relational Databases
                   | Triplify                    | D2R Server     | Virtuoso RDF Views
Technology         | Scripting languages (PHP)   | Java           | Middleware Solution
SPARQL Endpoint    | -                           | Yes            | Yes
Mapping Language   | SQL                         | RDF based      | RDF based
Mapping Generation | Manual                      | Semi-automatic | Manual
Scalability        | Medium-high (but no SPARQL) | Medium         | High
(c) Sören Auer, http://www.slideshare.net/soeren1611/linked-data-tutorial-presentation-955375
113. Triplify
• Goal: expose the semantics available in an RDBMS as simply as possible
• Available for most popular Web app languages
• PHP (ready), Ruby/Python (under dev.)
• Works with most popular Web app databases
• MySQL, PHP-PDO DBs (SQLite, Oracle, DB2, MS SQL, PostgreSQL)
114. Triplify
• Configuration
• a number of SQL queries selecting the information that should be made publicly available
SELECT id, name AS 'foaf:name' FROM users
115. Triplify
• Associate the URL path fragment ‘posts’ with a number of SQL patterns:
SELECT id, post_author AS 'sioc:has_creator->user',
       post_title AS 'dc:title',
       post_content AS 'sioc:content',
       post_date AS 'dcterms:modified^^xsd:dateTime',
       post_modified AS 'dcterms:created^^xsd:dateTime'
FROM posts
WHERE post_status='publish' (AND id=xxx)

SELECT post_id id, tag_label AS 'tag:taggedWithTag'
FROM post2tag INNER JOIN tag ON(post2tag.tag_id=tag.tag_id)
(WHERE id=xxx)

SELECT post_id id, category_id AS 'belongsToCategory->category'
FROM post2cat
(WHERE id=xxx)
116. D2R Server
• a tool for publishing data from relational databases
on the Web
117. D2R Server
• D2RQ mapping excerpt
map:Conference a d2rq:ClassMap;
    d2rq:dataStorage map:Database1;
    d2rq:class :Conference;
    d2rq:uriPattern "http://conferences.org/comp/confno@@Conferences.ConfID@@";
    .
map:eventTitle a d2rq:PropertyBridge;
    d2rq:belongsToClassMap map:Conference;
    d2rq:property :eventTitle;
    d2rq:column "Conferences.Name";
    d2rq:datatype xsd:string;
    .
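To make the mapping concrete: assuming a Conferences row with ConfID 23 and Name "ISWC 2009" (made-up values; prefix declarations for `:` and `xsd:` are omitted here, as in the excerpt), D2R would serve triples along these lines:

```turtle
<http://conferences.org/comp/confno23>
    a :Conference ;
    :eventTitle "ISWC 2009"^^xsd:string .
```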
118. D2R Server
• supports auto-generation of mapping files
• extracted from database structure
• good starting point
• requires adaptation to domain-specific vocabularies
• supports dumping databases to RDF files
119. Virtuoso RDF Views
• transforms the result of SQL SELECT statements into
RDF
• mapping steps
• define RDFS class IRIs for each table
• define construction of subject IRIs from primary key
column values
• define construction of predicate IRIs from each non-key
column
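As an illustration of these three steps (table name, column names, and IRI patterns are all made up, and Virtuoso's actual mapping is declared in its own quad-map syntax): a row (user_id=7, name="Alice") of a users table would end up as

```turtle
# Step 1: class IRI for the table    -> ex:User
# Step 2: subject IRI from the key   -> http://example.com/user/7
# Step 3: predicate IRI per column   -> ex:name
@prefix ex: <http://example.com/schema/> .

<http://example.com/user/7> a ex:User ;
    ex:name "Alice" .
```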
120. OAI2LOD Server
• publishes metadata from arbitrary OAI-PMH
endpoints as linked data on the Web
• provides a simple linking framework
121. OAI2LOD Server
• Sample harvesting configuration:
<> a oai2lod:Server;
    rdfs:label "Example OAI2LOD Server";
    oai2lod:port 2020;
    oai2lod:baseURI <http://localhost:2020/>;
    oai2lod:publishes <oai1>;
    oai2lod:linkedWith <link1>;
    .
<oai1> a oai2lod:OAIServer;
    oai2lod:serverURL <http://oai-bdb.onb.ac.at/Script/oai2.aspx>;
    oai2lod:metadataPrefix "oai_dc";
    oai2lod:styleSheet "xsl/oai_dc2rdf_xml.xsl";
    oai2lod:maxRecords 50;
    .
122. OAI2LOD Server
• Sample linking configuration:
<link1> a oai2lod:LinkedSPARQLEndpoint;
oai2lod:sparqlService <http://DBpedia.org/sparql>;
oai2lod:maxResults 5000;
oai2lod:linkingRule <lrule1>;
.
<lrule1> a oai2lod:LinkingRule;
oai2lod:sourceType <http://www.mediaspaces.info/vocab/oai-pmh.rdf#Item>;
oai2lod:sourceProperty <http://purl.org/dc/elements/1.1/subject>;
oai2lod:targetType <http://dbpedia.org/class/yago/Capital108518505>;
oai2lod:targetProperty <http://www.w3.org/2000/01/rdf-schema#label>;
oai2lod:linkingProperty <http://www.w3.org/2000/01/rdf-schema#seeAlso>;
oai2lod:similarityMetrics "uk.ac.shef.wit.simmetrics.similaritymetrics.Levenshtein";
oai2lod:minSimilarity 1.0;
.
123. SILK - Link Discovery Framework
• Supports data publishers in linking their data sets with
other data sets
• Provides a declarative Link Specification Language
• specifies which links should be discovered
• specifies the conditions data items must fulfill in order
to be linked
• implemented in Python
125. References
• A Survey of current approaches for mapping of relational databases to RDF:
http://esw.w3.org/topic/Rdb2RdfXG/StateOfTheArt
• Triplify: http://triplify.org/Overview
• D2R Server: http://www4.wiwiss.fu-berlin.de/bizer/d2r-server/
• OpenLink Virtuoso: http://virtuoso.openlinksw.com/Whitepapers/html/rdf_views/virtuoso_rdf_views_example.html
• OAI2LOD Server: http://www.mediaspaces.info/tools/oai2lod
• SILK: http://www4.wiwiss.fu-berlin.de/bizer/silk/
126. Contents
• Motivation for Linked Open Data (LOD)
• Base Technologies
• The Linked Data Principles
• Publishing Solutions and Tools
• Vocabularies & Interoperability Issues
• Demos + Discussion
127. Vocabularies & Interoperability Issues
• LOD is about publishing data on the Web
• vocabularies define the terms that describe the
semantics of these data (e.g., creator, title, abstract,
etc.)
• we can apply the LOD principles and expose
vocabularies on the Web
128. Vocabularies & Interoperability Issues
• Use URIs as names for vocabularies and terms
• Use HTTP URIs so that people (and machines) can
look up those names
• When someone looks up a URI, provide useful
information
• Include links (mappings) to other vocabularies
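Applied to a hypothetical vocabulary term, these principles could look as follows (the `ex:` namespace and the mapping target are illustrative, not from the tutorial):

```turtle
@prefix rdf:     <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix rdfs:    <http://www.w3.org/2000/01/rdf-schema#> .
@prefix owl:     <http://www.w3.org/2002/07/owl#> .
@prefix dcterms: <http://purl.org/dc/terms/> .
@prefix ex:      <http://example.org/vocab#> .

# An HTTP URI names the term; looking it up should return this description
ex:author a rdf:Property ;
    rdfs:label "author" ;
    rdfs:comment "The person who wrote the resource." ;
    # link (mapping) to a widely used vocabulary
    owl:equivalentProperty dcterms:creator .
```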
129. Vocabularies & Interoperability Issues
• Example: Dublin Core terms
130. Vocabularies & Interoperability Issues
• Example: Dublin Core terms
131. Vocabularies & Interoperability Issues
• Example: Friend of a Friend (FOAF)
132. Vocabularies & Interoperability Issues
• Example: Friend of a Friend (FOAF)
133. Vocabularies & Interoperability Issues
• Recipe for publishing vocabularies on the Web
134. Vocabularies & Interoperability Issues
• Recipe for publishing vocabularies on the Web
• Step 1: create a complete RDF/XML serialization of the
vocabulary (e.g., example.rdf)
• Step 2: copy the serialization into a Web server directory
• Step 3: add a .htaccess directive to the Web server
directory
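A sketch of the Step 3 directive, in the style of the W3C "Best Practice Recipes for Publishing RDF Vocabularies" (the /vocab directory and the example/example.rdf names are assumptions):

```apache
# Serve the RDF/XML file with the correct MIME type
AddType application/rdf+xml .rdf

# 303-redirect clients that ask for RDF to the serialization
RewriteEngine On
RewriteBase /vocab
RewriteCond %{HTTP_ACCEPT} application/rdf\+xml
RewriteRule ^example$ example.rdf [R=303]
```

Browsers asking for HTML keep seeing the human-readable page, while RDF-aware clients are redirected to example.rdf.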
135. Vocabularies & Interoperability Issues
• Recipe for publishing vocabularies on the Web
136. Vocabularies & Interoperability Issues
• LOD and the Semantic Web provide the technologies
to expose data on the Web
• exposing data in RDF and describing vocabularies in
RDF/S or OWL provides data interoperability on a
technical / structural level
• semantic interoperability is not given by default
137. Vocabularies & Interoperability Issues
Rules of thumb for establishing semantic interoperability:
1. Whenever possible, use terms from widely used vocabularies that are
published on the Web in a structured, machine-readable format (RDFS
or OWL).
2. If no such term reflects the required semantics, re-use a
semantically broader term by establishing, e.g., an rdfs:subPropertyOf
relationship, refine its semantics for the purpose of your application
within a new namespace, and publish it on the Web.
3. If (1) and (2) are not feasible, create your own vocabulary and publish
it on the Web in order to make it accessible for other users and
applications; consider defining mappings to other vocabularies.
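Rule (2) expressed in Turtle, with a made-up namespace and term: a more specific "illustrator" property that refines the broader dcterms:creator:

```turtle
@prefix rdfs:    <http://www.w3.org/2000/01/rdf-schema#> .
@prefix dcterms: <http://purl.org/dc/terms/> .
@prefix my:      <http://example.org/myvocab#> .

my:illustrator
    rdfs:subPropertyOf dcterms:creator ;   # broader, widely used term
    rdfs:label "illustrator" ;
    rdfs:comment "The person who created the illustrations of a work." .
```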
138. Vocabularies & Interoperability Issues
Name (Namespace)                                                | Purpose                  | Base | Concepts/Properties
Dublin Core (http://purl.org/dc/terms)                          | General / Documents      | RDFS | 22 / 55
Friend of a Friend (FOAF) (http://xmlns.com/foaf/0.1/)          | Contacts / Communication | OWL  | 12 / 54
VCard Ontology (http://www.w3.org/2006/vcard/ns#)               | Contacts / Communication | OWL  | 5 / 54
Description of a Project (DOAP) (http://usefulinc.com/ns/doap#) | Projects                 | RDFS | 7 / 30
139. Vocabularies & Interoperability Issues
Name (Namespace)                                                                                           | Purpose                  | Base | Concepts/Properties
Semantically Interlinked Online Communities (SIOC) (http://rdfs.org/sioc/ns#)                              | Contacts / Communication | OWL  | 11 / 53
Nepomuk Message Ontology (NMO) (http://www.semanticdesktop.org/ontologies/2007/03/22/nmo#)                 | Contacts / Communication | RDFS | 7 / 23
GeoNames Ontology (http://www.geonames.org/ontology#)                                                      | Locations                | OWL  | 7 / 18
Music Ontology (http://purl.org/ontology/mo/)                                                              | Multimedia               | OWL  | 53 / 131
141. References
• Miles et al.: Best Practices Recipes for Publishing RDF Vocabularies, Available
at: http://www.w3.org/TR/swbp-vocab-pub/
• Heery and Patel 2004: Application profiles: mixing and matching metadata
schemes. Available at: http://www.ariadne.ac.uk/issue25/app-profiles/
• Haslhofer and Klas 2009: A Survey of approaches for achieving metadata
interoperability. To be published in ACM Computing Surveys Q4/2009.
Available on request.
• Haslhofer and Schandl 2009: Interweaving OAI-PMH metadata with the
Linked Data Cloud. To be published in International Journal of Metadata,
Semantics, and Ontologies. Available on request.
142. Contents
• Motivation for Linked Open Data (LOD)
• Base Technologies
• The Linked Data Principles
• Publishing Solutions and Tools
• Vocabularies & Interoperability Issues
• Demos + Discussion
143. Demo 1 - browsing and querying DBpedia
• http://dbpedia.org/resource/Hallstatt_culture
• http://dbpedia.org/snorql
• http://lookup.dbpedia.org
144. Demo 2 - cURLing the LIBRIS catalogue
• LIBRIS = The Swedish Union Catalog
• Starting point: http://libris.kb.se/bib/10432900
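The demo presumably contrasts the HTML and RDF responses for the same URI via content negotiation; the commands would look roughly like this (standard curl flags; whether LIBRIS answers with a 303 redirect or directly is not stated on the slide):

```shell
# Ask for HTML (what a browser would get)
curl -I -H "Accept: text/html" http://libris.kb.se/bib/10432900

# Ask for RDF, following any redirect to the data view
curl -L -H "Accept: application/rdf+xml" http://libris.kb.se/bib/10432900
```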
145. Demo 3 - exposing MySQL DB using D2R
• sample database: http://dev.mysql.com/doc/world-setup/en/world-setup.html
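The demo likely follows the usual D2R workflow: auto-generate a mapping from the database schema, then start the server with it. A sketch, assuming the script names of the D2R Server distribution and placeholder MySQL credentials:

```shell
# Generate a default D2RQ mapping from the 'world' sample database
./generate-mapping -o world.n3 -u root -p secret jdbc:mysql://localhost/world

# Start D2R Server with the (possibly hand-adapted) mapping file
./d2r-server world.n3
# then browse http://localhost:2020/
```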
146. Demo 4 - sample OAI2LOD instances
• http://www.mediaspaces.info/tools/oai2lod/
147. Demo 5 - RDF Viewers
• Zitgist: http://dataviewer.zitgist.com/
• Disco: http://www4.wiwiss.fu-berlin.de/rdf_browser/
• OpenLink Data Explorer: http://demo.openlinksw.com/rdfbrowser2
148. Known / Open Issues
• Automatic linking is non-trivial and domain-specific
• how to deal with false positives?
• precision / recall of existing approaches?
• Nobody can guarantee the (long-term) availability of
LOD resources
• what to do when a resource disappears?
• annoying for humans / problematic for applications
• Licensing
149. Summary
• Linked Data is about publishing and interlinking public
interest data on the Web
• Other applications can access these data with common
Web technologies
• It is still research, with many areas under construction, but
within its two-year (!!!) history it has attracted quite a lot
of interest
• It is an exciting research field to work on...