The document provides an introduction to the Semantic Web using a simplified example of combining bookstore data from different sources. It demonstrates how [1] exporting data as relations, [2] merging the relations based on identical resources, and [3] adding extra knowledge allow for more powerful queries across datasets. The key components that enable this are using URIs to identify resources, representing data as RDF triples, and classifying resources and properties with common terminologies such as ontologies. This approach aims to realize a Web of Data where decentralized and heterogeneous data sources can be programmatically integrated and queried.
Introduction to Semantic Web
1. 1
Introduction to the Semantic Web
(tutorial)
2009 Semantic Technology Conference
San Jose, California, USA
June 15, 2009
Ivan Herman, W3C
ivan@w3.org
22. 22
What happened here?
• You had to consult a large number of sites, all
different in style, purpose, possibly language…
• You had to mentally integrate all that information
to achieve your goals
• We all know that, sometimes, this is a long and
tedious process!
23. 23
• All those pages are only tips of respective icebergs:
• the real data is hidden somewhere in databases, XML files,
Excel sheets, …
• you have only access to what the Web page designers
allow you to see
24. 24
• Specialized sites (Expedia, TripAdvisor) do a bit
more:
• they gather and combine data from other sources (usually
with the approval of the data owners)
• but they still control how you see those sources
• But sometimes you want to personalize: access the
original data and combine it yourself!
31. 31
• I had to type in and connect with friends again and
again for each site independently
• This is even worse than before: I feed the icebergs,
but I still do not have easy access to the data…
32. 32
What would we like to have?
• Use the data on the Web the same way as we do
with documents:
• be able to link to data (independently of their presentation)
• use that data the way I want (present it, mine it, etc)
• agents, programs, scripts, etc, should be able to interpret
part of that data
33. 33
Put it another way…
• We would like to extend the current Web to a “Web
of data”:
• allow for applications to exploit the data directly
36. 36
• In some ways, yes, and that shows the huge power
of what such Web of data provides
• But mashup sites are forced to do very ad-hoc jobs
• various data sources expose their data via Web Services
• each with a different API, a different logic, different structure
• these sites are forced to reinvent the wheel many times
because there is no standard way of doing things
37. 37
Put it another way (again)…
• We would like to extend the current Web to a
standard way for a “Web of data”
38. 38
But what does this mean?
• What makes the current (document) Web work?
• people create different documents
• they give an address to it (ie, a URI) and make it accessible
to others on the Web
40. 40
Then some magic happens…
• Others discover the site and they link to it
• The more they link to it, the more important and
well known the page becomes
• remember, this is what, eg, Google exploits!
• This is the “Network effect”: some pages become
important, and others begin to rely on it even if the
author did not expect it…
42. 42
…but this one, from the other side of the Globe, was not…
43. 43
What would that mean for a Web of Data?
• Lessons learned: we should be able to:
• “publish” the data to make it known on the Web
• standard ways should be used instead of ad-hoc approaches
• the analogous approach to documents: give URI-s to the data
• make it possible to “link” to that URI from other sources of
data (not only Web pages)
• ie, applications should not be forced to make targeted
developments to access the data
• generic, standard approaches should suffice
• and let the network effect work its way…
44. 44
But it is a little bit more complicated
• On the traditional Web, humans are implicitly taken
into account
• A Web link has a “context” that a person may use
47. 47
• A human understands that this is my institution’s
home page
• He/she knows what it means (realizes that it is a
research institute in Amsterdam)
• On a Web of Data, something is missing; machines
can’t make sense of the link alone
48. 48
• New lesson learned:
• extra information (“label”) must be added to a link: “this links
to my institution, which is a research institute”
• this information should be machine readable
• this is a characterization (or “classification”) of both the link
and its target
• in some cases, the classification should allow for some
limited “reasoning”
49. 49
Let us put it together
• What we need for a Web of Data:
• use URI-s to publish data, not only full documents
• allow the data to link to other data
• characterize/classify the data and the links (the “terms”) to
convey some extra meaning
• and use standards for all these!
51. 51
It is a collection of standard technologies
to realize a Web of Data
52. 52
• It is that simple…
• Of course, the devil is in the details
• a common model has to be provided for machines to
describe, query, etc, the data and their connections
• the “classification” of the terms can become very complex
for specific knowledge areas: this is where ontologies,
thesauri, etc, enter the game…
53. 53
In what follows…
• We will use a simplistic example to introduce the
main technical concepts
• The details will be for later during the course
54. 54
The rough structure of data integration
1. Map the various data onto an abstract data
representation
• make the data independent of its internal representation…
2. Merge the resulting representations
3. Start making queries on the whole!
• queries that could not have been done on the individual data
sets
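These three steps can be sketched in plain Python, with each relation held as a (subject, property, object) tuple; every identifier and property name below is made up for illustration:

```python
# A toy version of the three steps; all names are illustrative.

# 1. Map each source onto an abstract representation:
#    (subject, property, object) tuples.
def export_row(prefix, key, row):
    subject = row[key]
    return {(subject, f"{prefix}:{col}", val)
            for col, val in row.items() if col != key}

dataset_a = export_row("a", "ID",
    {"ID": "isbn:409X", "Title": "The Glass Palace", "Year": "2000"})
dataset_b = export_row("b", "ID",
    {"ID": "isbn:409X", "Author": "Ghosh, Amitav"})

# 2. Merge the representations: a plain union — identical
#    identifiers (the ISBN here) line up automatically.
merged = dataset_a | dataset_b

# 3. Query the whole: everything known about one resource,
#    which neither source could answer alone.
about = sorted(p for (s, p, o) in merged if s == "isbn:409X")
print(about)
```

The only thing the merge relies on is that both sources use the same identifier for the same resource — exactly the role URIs play later.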
55. 55
A simplified bookstore data (dataset “A”)
ID                | Author | Title            | Publisher | Year
ISBN0-00-651409-X | id_xyz | The Glass Palace | id_qpr    | 2000

ID     | Name          | Home Page
id_xyz | Ghosh, Amitav | http://www.amitavghosh.com

ID     | Publ. Name     | City
id_qpr | Harper Collins | London
57. 57
Some notes on exporting the data
• Relations form a graph
• the nodes refer to the “real” data or contain some literal
• how the graph is represented in machine is immaterial for
now
• Data export does not necessarily mean physical
conversion of the data
• relations can be generated on-the-fly at query time
• via SQL “bridges”
• scraping HTML pages
• extracting data from Excel sheets
• etc.
• One can export part of the data
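The "no physical conversion" point can be sketched with a generator that produces the relations lazily at query time — a stand-in for an SQL bridge or an HTML scraper (the row data and property names are hypothetical):

```python
# Export without physical conversion: relations are generated on the fly
# from the underlying store (a list of dicts standing in for SQL rows;
# all names here are hypothetical).
rows = [
    {"ID": "isbn:000651409X", "Title": "The Glass Palace", "Year": 2000},
]

def triples():
    # a lazy "bridge": nothing is materialized until someone iterates;
    # exporting only part of the data just means yielding fewer properties
    for row in rows:
        yield (row["ID"], "a:title", row["Title"])
        yield (row["ID"], "a:year", row["Year"])

# A query walks the generated relations exactly as it would a stored graph.
print([o for (s, p, o) in triples() if p == "a:title"])
```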
58. 58
Another bookstore data (dataset “F”)
(an Excel sheet; visible columns A, B, D, E)

row 1:  ID              | Titre                 | Traducteur | Original
row 2:  ISBN 2020386682 | Le Palais des miroirs | A13        | ISBN-0-00-651409-X

row 6:  ID                 | Auteur
row 7:  ISBN-0-00-651409-X | A12

row 11: Nom
row 12: Ghosh, Amitav
row 13: Besse, Christianne
63. 63
Start making queries…
• User of data “F” can now ask queries like:
• “give me the title of the original”
• well, … « donnes-moi le titre de l’original »
• This information is not in the dataset “F”…
• …but can be retrieved by merging with dataset “A”!
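With toy versions of the two datasets, the cross-dataset query can be sketched directly — dataset “F” alone has no title for the original, but after the merge the shared ISBN URI bridges the gap (identifiers simplified for the sketch):

```python
# Toy triples exported from the two bookstores (simplified identifiers).
data_a = {("isbn:000651409X", "a:title", "The Glass Palace")}
data_f = {("isbn:2020386682", "f:original", "isbn:000651409X"),
          ("isbn:2020386682", "f:titre", "Le Palais des miroirs")}

# Merging is a union: the shared ISBN URI is the only "glue" needed.
merged = data_a | data_f

def value(graph, s, p):
    # first object of a matching triple
    return next(o for (s2, p2, o) in graph if s2 == s and p2 == p)

# « donnes-moi le titre de l'original » — follow f:original into
# dataset A's territory, then read a:title there.
original = value(merged, "isbn:2020386682", "f:original")
print(value(merged, original, "a:title"))
```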
64. 64
However, more can be achieved…
• We “feel” that a:author and f:auteur should be
the same
• But an automatic merge does not know that!
• Let us add some extra information to the merged
data:
• a:author same as f:auteur
• both identify a “Person”
• a term that a community may have already defined:
• a “Person” is uniquely identified by his/her name and, say,
homepage
• it can be used as a “category” for certain type of resources
66. 66
Start making richer queries!
• User of dataset “F” can now query:
• “donnes-moi la page d’accueil de l’auteur de l’originale”
• well… “give me the home page of the original’s ‘auteur’”
• The information is not in datasets “F” or “A”…
• …but was made available by:
• merging datasets “A” and datasets “F”
• adding three simple extra statements as an extra “glue”
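The effect of the extra “glue” can be sketched by treating the property equivalence as a lookup applied before matching — a crude stand-in for real reasoning, with all identifiers illustrative:

```python
# Toy merged data: dataset F links the translation to the original;
# dataset A knows the author and the home page (identifiers illustrative).
merged = {
    ("isbn:2020386682", "f:original", "isbn:000651409X"),
    ("isbn:000651409X", "a:author", "person:ghosh"),
    ("person:ghosh", "a:homepage", "http://www.amitavghosh.com"),
}

# The extra glue statement: f:auteur and a:author are the same property.
SAME = {"f:auteur": "a:author"}

def value(graph, s, p):
    p = SAME.get(p, p)          # normalize the queried property via the glue
    return next(o for (s2, p2, o) in graph if s2 == s and p2 == p)

# "donnes-moi la page d'accueil de l'auteur de l'originale"
original = value(merged, "isbn:2020386682", "f:original")
auteur = value(merged, original, "f:auteur")   # answered through a:author
print(value(merged, auteur, "a:homepage"))
```

The query is phrased entirely in dataset F's vocabulary, yet it is answered from dataset A's facts — which is exactly what the three glue statements buy us.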
67. 67
Combine with different datasets
• Using, e.g., the “Person”, the dataset can be
combined with other sources
• For example, data in Wikipedia can be extracted
using dedicated tools
• e.g., the “dbpedia” project can extract the “infobox”
information from Wikipedia already…
71. 71
Is that surprising?
• It may look like it but, in fact, it should not be…
• What happened via automatic means is done every
day by Web users!
• The difference: a bit of extra rigour so that
machines could do this, too
72. 72
What did we do?
• We combined different datasets that
• are somewhere on the web
• are of different formats (mysql, excel sheet, XHTML, etc)
• have different names for relations
• We could combine the data because some URI-s
were identical (the ISBN-s in this case)
• We could add some simple additional information
(the “glue”), possibly using common terminologies
that a community has produced
• As a result, new relations could be found and
retrieved
73. 73
It could become even more powerful
• We could add extra knowledge to the merged
datasets
• e.g., a full classification of various types of library data
• geographical information
• etc.
• This is where ontologies, extra rules, etc, come in
• ontologies/rule sets can be relatively simple and small, or
huge, or anything in between…
• Even more powerful queries can be asked as a
result
76. 76
RDF triples
• Let us begin to formalize what we did!
• we “connected” the data…
• but a simple connection is not enough… data should be
named somehow
• hence the RDF Triples: a labelled connection between two
resources
77. 77
RDF triples (cont.)
• An RDF Triple (s,p,o) is such that:
• “s”, “p” are URI-s, ie, resources on the Web; “o” is a URI or
a literal
• “s”, “p”, and “o” stand for “subject”, “property”, and “object”
• here is the complete triple:
(<http://…isbn…6682>, <http://…/original>, <http://…isbn…409X>)
• RDF is a general model for such triples (with
machine readable formats like RDF/XML, Turtle,
N3, RXR, …)
78. 78
RDF triples (cont.)
• Resources can use any URI, e.g.:
• http://www.example.org/file.xml#element(home)
• http://www.example.org/file.html#home
• http://www.example.org/file2.xml#xpath1(//q[@a=b])
• URI-s can also denote non Web entities:
• http://www.ivan-herman.net/me is me
• not my home page, not my publication list, but me
• RDF triples form a directed, labelled graph
79. 79
A simple RDF example (in RDF/XML)
<rdf:Description rdf:about="http://…/isbn/2020386682">
  <f:titre xml:lang="fr">Le palais des mirroirs</f:titre>
  <f:original rdf:resource="http://…/isbn/000651409X"/>
</rdf:Description>
(Note: namespaces are used to simplify the URI-s)
80. 80
A simple RDF example (in Turtle)
<http://…/isbn/2020386682>
  f:titre "Le palais des mirroirs"@fr ;
  f:original <http://…/isbn/000651409X> .
81. 81
“Internal” nodes
• Consider the following statement:
• “the publisher is a «thing» that has a name and an address”
• Until now, nodes were identified with a URI. But…
• …what is the URI of «thing»?
82. 82
Internal identifier (“blank nodes”)
<rdf:Description rdf:about="http://…/isbn/000651409X">
  <a:publisher rdf:nodeID="A234"/>
</rdf:Description>
<rdf:Description rdf:nodeID="A234">
  <a:p_name>HarpersCollins</a:p_name>
  <a:city>London</a:city>
</rdf:Description>
<http://…/isbn/000651409X> a:publisher _:A234.
_:A234 a:p_name "HarpersCollins".
• Syntax is serialization dependent
• A234 is invisible from outside (it is not a “real”
URI!); it is an internal identifier for a resource
83. 83
Blank nodes: the system can also do it
• Let the system create a “nodeID” internally (you do
not really care about the name…)
<rdf:Description rdf:about="http://…/isbn/000651409X">
  <a:publisher>
    <rdf:Description>
      <a:p_name>HarpersCollins</a:p_name>
      …
    </rdf:Description>
  </a:publisher>
</rdf:Description>
84. 84
Same in Turtle
<http://…/isbn/000651409X> a:publisher [
  a:p_name "HarpersCollins";
  …
].
85. 85
Blank nodes: some more remarks
• Blank nodes require attention when merging
• blank nodes with identical nodeID-s in different graphs are
different
• implementations must be careful…
• Many applications prefer not to use blank nodes
and define new URI-s “on-the-fly”
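The care implementations must take can be sketched directly: every blank node in an incoming graph is renamed apart before the union, so two `_:A234` nodes from different files stay distinct. A pure-Python illustration with invented publisher data (the `f:editeur`/`f:nom` names and "Seuil" are made up), not a real RDF store:

```python
import itertools

counter = itertools.count()

def standardize_apart(graph):
    """Give every blank node (here: ids starting with '_:') a fresh name."""
    mapping = {}
    def rename(term):
        if term.startswith("_:"):
            if term not in mapping:
                mapping[term] = "_:b%d" % next(counter)
            return mapping[term]
        return term
    return {(rename(s), p, rename(o)) for (s, p, o) in graph}

g1 = {("isbn:000651409X", "a:publisher", "_:A234"),
      ("_:A234", "a:p_name", "HarpersCollins")}
g2 = {("isbn:2020386682", "f:editeur", "_:A234"),
      ("_:A234", "f:nom", "Seuil")}

merged = standardize_apart(g1) | standardize_apart(g2)
# The two _:A234 nodes stay distinct: four triples, two different blanks.
print(len(merged))  # 4
```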
86. 86
RDF in programming practice
• For example, using Java+Jena (HP’s Bristol Lab):
• a “Model” object is created
• the RDF file is parsed and results stored in the Model
• the Model offers methods to retrieve:
• triples
• (property, object) pairs for a specific subject
• (subject, property) pairs for a specific object
• etc.
• the rest is conventional programming…
• Similar tools exist in Python, PHP, etc.
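As a rough illustration of that programming pattern — not the API of Jena or of any real Python RDF library — a model object with a wildcard-style `listStatements` can be mocked up in a few lines:

```python
class Model:
    """Toy analogue of a triple store: None acts as a wildcard,
    like the null arguments in Jena's listStatements."""

    def __init__(self):
        self.triples = set()

    def add(self, s, p, o):
        self.triples.add((s, p, o))

    def list_statements(self, subj=None, pred=None, obj=None):
        for (s, p, o) in sorted(self.triples):
            if ((subj is None or s == subj) and
                    (pred is None or p == pred) and
                    (obj is None or o == obj)):
                yield (s, p, o)

model = Model()
model.add("isbn:2020386682", "f:titre", "Le palais des mirroirs")
model.add("isbn:2020386682", "f:original", "isbn:000651409X")

# Equivalent of listStatements(subject, null, null):
for s, p, o in model.list_statements(subj="isbn:2020386682"):
    print(p, o)
```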
87. 87
Jena example
// create a model
Model model = new ModelMem();
Resource subject = model.createResource("URI_of_Subject");
// 'in' refers to the input file
model.read(new InputStreamReader(in));
StmtIterator iter = model.listStatements(subject, null, null);
while (iter.hasNext()) {
    st = iter.next();
    p = st.getProperty();
    o = st.getObject();
    do_something(p, o);
}
88. 88
Merge in practice
• Environments merge graphs automatically
• e.g., in Jena, the Model can load several files
• the load merges the new statements automatically
89. 89
Example: integrate experimental data
• Goal: reuse of older experimental data
• Keep data in databases or XML, just export key “facts” as RDF
• Use a faceted browser to visualize and interact with the result
Courtesy of Nigel Wilkinson, Lee Harland, Pfizer Ltd, Melliyal Annamalai, Oracle (SWEO Case Study)
91. 91
Need for RDF schemas
• First step towards the “extra knowledge”:
• define the terms we can use
• what restrictions apply
• what extra relationships are there?
• Officially: “RDF Vocabulary Description Language”
• the term “Schema” is retained for historical reasons…
92. 92
Classes, resources, …
• Think of well known traditional ontologies or
taxonomies:
• use the term “novel”
• “every novel is a fiction”
• “«The Glass Palace» is a novel”
• etc.
• RDFS defines resources and classes:
• everything in RDF is a “resource”
• “classes” are also resources, but…
• …they are also a collection of possible resources (i.e.,
“individuals”)
• “fiction”, “novel”, …
93. 93
Classes, resources, … (cont.)
• Relationships are defined among classes and
resources:
• “typing”: an individual belongs to a specific class
• “«The Glass Palace» is a novel”
• to be more precise: “«http://.../000651409X» is a novel”
• “subclassing”: all instances of one are also the instances of
the other (“every novel is a fiction”)
• RDFS formalizes these notions in RDF
94. 94
Classes, resources in RDF(S)
• RDFS defines the meaning of these terms
• (these are all special URI-s, we just use the namespace
abbreviation)
95. 95
Schema example in RDF/XML
• The schema part:
<rdf:Description rdf:ID="Novel">
  <rdf:type
    rdf:resource="http://www.w3.org/2000/01/rdf-schema#Class"/>
</rdf:Description>
• The RDF data on a specific novel:
<rdf:Description rdf:about="http://…/isbn/000651409X">
  <rdf:type rdf:resource="http://…/bookSchema.rdf#Novel"/>
</rdf:Description>
96. 96
Further remarks on types
• A resource may belong to several classes
• rdf:type is just a property…
• “«The Glass Palace» is a novel, but «The Glass Palace» is
also an «inventory item»…”
• i.e., it is not like a datatype!
• The type information may be very important for
applications
• e.g., it may be used for a categorization of possible nodes
• probably the most frequently used RDF property…
• (remember the “Person” in our example?)
97. 97
Inferred properties
(<http://…/isbn/000651409X> rdf:type #Fiction)
• is not in the original RDF data…
• …but can be inferred from the RDFS rules
• RDFS environments return that triple, too
98. 98
Inference: let us be formal…
• The RDF Semantics document has a list of (33)
entailment rules:
• “if such and such triples are in the graph, add this and this”
• do that recursively until the graph does not change
• The relevant rule for our example:
If:
  uuu rdfs:subClassOf xxx .
  vvv rdf:type uuu .
Then add:
  vvv rdf:type xxx .
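The rule-plus-recursion idea can be sketched as a fixpoint loop over tuple triples. This toy code implements only the one rule quoted above; the :Novel → :Fiction → :Literature subclass chain is an invented example to show that the loop really does recurse:

```python
def rdfs9_closure(graph):
    """Apply the subClassOf/type rule until the graph stops growing."""
    graph = set(graph)
    while True:
        inferred = {
            (v, "rdf:type", xxx)
            for (uuu, p1, xxx) in graph if p1 == "rdfs:subClassOf"
            for (v, p2, u2) in graph if p2 == "rdf:type" and u2 == uuu
        }
        if inferred <= graph:        # nothing new: fixpoint reached
            return graph
        graph |= inferred

g = {
    (":Novel", "rdfs:subClassOf", ":Fiction"),
    (":Fiction", "rdfs:subClassOf", ":Literature"),
    ("isbn:000651409X", "rdf:type", ":Novel"),
}
closure = rdfs9_closure(g)
print(("isbn:000651409X", "rdf:type", ":Fiction") in closure)     # True
print(("isbn:000651409X", "rdf:type", ":Literature") in closure)  # True
```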
99. 99
Properties
• Property is a special class (rdf:Property)
• properties are also resources identified by URI-s
• There is also a possibility for a “sub-property”
• all resources bound by the “sub” are also bound by the other
• Range and domain of properties can be specified
• i.e., what type of resources serve as object and subject
101. 101
What does this mean?
• Again, new relations can be deduced. Indeed, if
:title rdf:type rdf:Property ;
       rdfs:domain :Fiction ;
       rdfs:range rdfs:Literal .
<http://…/isbn/000651409X> :title "The Glass Palace" .
• then the system can infer that:
<http://…/isbn/000651409X> rdf:type :Fiction .
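The inference at work here is another of the entailment rules (the one for rdfs:domain): if a property has domain C and is used with some subject, that subject gets type C. A one-rule toy sketch over tuples:

```python
def apply_domain_rule(graph):
    """rdfs:domain rule: p rdfs:domain C and (s p o) entail (s rdf:type C)."""
    graph = set(graph)
    graph |= {
        (s, "rdf:type", cls)
        for (p, r, cls) in graph if r == "rdfs:domain"
        for (s, p2, o) in graph if p2 == p
    }
    return graph

g = {
    (":title", "rdfs:domain", ":Fiction"),
    ("isbn:000651409X", ":title", "The Glass Palace"),
}
out = apply_domain_rule(g)
print(("isbn:000651409X", "rdf:type", ":Fiction") in out)  # True
```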
102. 102
Literals
• Literals may have a data type
• floats, integers, booleans, etc, defined in XML Schemas
• full XML fragments
• (Natural) language can also be specified
104. 104
A bit of RDFS can take you far…
• Remember the power of merge?
• We could have used, in our example:
• f:auteur is a subproperty of a:author and vice versa
(although we will see other ways to do that…)
• Of course, in some cases, more complex
knowledge is necessary (see later…)
105. 105
Example: find the right experts at NASA
• Expertise locater for nearly 70,000 NASA civil
servants, using RDF integration techniques over 6
or 7 geographically distributed databases, data
sources, and web services…
Michael Grove, Clark & Parsia, LLC, and Andrew Schain, NASA, (SWEO Case Study)
107. 107
Simple approach
• Write RDF/XML or Turtle “manually”
• In some cases that is necessary, but it really does
not scale…
108. 108
RDF with XHTML
• Obviously, a huge source of information
• By adding some “meta” information, the same
source can be reused for, eg, data integration,
better mashups, etc
• typical example: your personal information, like address,
should be readable for humans and processable by
machines
• Two solutions have emerged:
• extract the structure from the page and convert the content
into RDF
• add RDF statements directly into XHTML via RDFa
109. 109
Extract RDF
• Use intelligent “scrapers” or “wrappers” to extract a
structure (hence RDF) from Web pages or XML
files…
• … and then generate RDF automatically (e.g., via
an XSLT script)
111. 111
GRDDL
• The transformation itself has to be provided for
each set of conventions
• A more general syntax is defined for XML formats
in general (e.g., via the namespace document)
• a method to get data in other formats to RDF (e.g., XBRL)
112. 112
Example for “structure”: microformats
• Not a Semantic Web specification, originally
• there is a separate microformat community
• Approach: re-use (X)HTML attributes and elements
to add “meta” information
• typically @abbr, @class, @title, …
• different community agreements for different applications
113. 113
RDFa
• RDFa extends (X)HTML a bit by:
• defining general attributes to add metadata to any elements
• providing an almost complete “serialization” of RDF in
XHTML
• It is a bit like the microformats/GRDDL approach
but fully generic
114. 114
RDFa example
• For example:
<div about="http://uri.to.newsitem">
  <span property="dc:date">March 23, 2004</span>
  <span property="dc:title">Rollers hit casino for £1.3m</span>
  By <span property="dc:creator">Steve Bird</span>. See
  <a href="http://www.a.b.c/d.avi" rel="dcmtype:MovingImage">
  also video footage</a>…
</div>
• yields, through an RDFa processor:
<http://uri.to.newsitem>
  dc:date "March 23, 2004" ;
  dc:title "Rollers hit casino for £1.3m" ;
  dc:creator "Steve Bird" ;
  dcmtype:MovingImage <http://www.a.b.c/d.avi> .
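To get a feel for what an RDFa processor does with markup like this, here is a deliberately naive extractor built only on Python's html.parser. It handles just @about, @property, and @rel/@href; real RDFa processing also deals with prefix declarations, nesting, datatypes, and much more:

```python
from html.parser import HTMLParser

class TinyRDFa(HTMLParser):
    """Toy RDFa-style extraction: emit (subject, property, value) triples."""

    def __init__(self):
        super().__init__()
        self.subject = None
        self.pending = None   # a @property waiting for its text content
        self.triples = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if "about" in a:
            self.subject = a["about"]          # new subject in scope
        if "property" in a:
            self.pending = a["property"]       # value is the element text
        if "rel" in a and "href" in a:
            self.triples.append((self.subject, a["rel"], a["href"]))

    def handle_data(self, data):
        if self.pending:
            self.triples.append((self.subject, self.pending, data))
            self.pending = None

page = '''<div about="http://uri.to.newsitem">
<span property="dc:date">March 23, 2004</span>
<span property="dc:title">Rollers hit casino for £1.3m</span>
By <span property="dc:creator">Steve Bird</span>. See
<a href="http://www.a.b.c/d.avi" rel="dcmtype:MovingImage">video</a>
</div>'''

p = TinyRDFa()
p.feed(page)
for t in p.triples:
    print(t)
```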
115. 115
Example: Yahoo’s SearchMonkey
• Search based results may be customized via small
applications
• Metadata in pages (in RDFa, microformats etc) are
reused
Courtesy of Peter Mika, Yahoo! Research, (SWEO Case Study)
118. 118
Bridge to relational databases
• Data on the Web are mostly stored in databases
• “Bridges” are being defined:
• a layer between RDF and the relational data
• RDB tables are “mapped” to RDF graphs, possibly on the fly
• different mapping approaches are being used
• a number of RDB systems offer this facility already (eg,
Oracle, OpenLink, …)
• A survey on mapping techniques has been
published at W3C
• W3C plans to engage in standardization work in
this area
120. 120
Linking Open Data Project
• Goal: “expose” open datasets in RDF
• Set RDF links among the data items from different
datasets
• Set up query endpoints
• Altogether billions of triples, millions of links…
121. 121
Example data source: DBpedia
• DBpedia is a community effort to
• extract structured (“infobox”) information from Wikipedia
• provide a query endpoint to the dataset
• interlink the DBpedia dataset with other datasets on the
Web
130. 130
RDF data access
• How do I query the RDF data?
• e.g., how do I get to the DBpedia data?
131. 131
Querying RDF graphs
• Remember the Jena idiom:
StmtIterator iter=model.listStatements(subject,null,null);
while(iter.hasNext()) {
    st = iter.next();
    p = st.getProperty(); o = st.getObject();
    do_something(p,o);
}
• In practice, more complex queries into the RDF
data are necessary
• something like: “give me the (a,b) pair of resources, for
which there is an x such that (x parent a) and (b brother x)
holds” (ie, return the uncles)
• these rules may become quite complex
• The goal of SPARQL (Query Language for RDF)
132. 132
Analyse the Jena example
StmtIterator iter=model.listStatements(subject,null,null);
while(iter.hasNext()) {
    st = iter.next();
    p = st.getProperty(); o = st.getObject();
    do_something(p,o);
}
• The (subject,?p,?o) is a pattern for what we
are looking for (with ?p and ?o as “unknowns”)
133. 133
General: graph patterns
• The fundamental idea: use graph patterns
• the pattern contains unbound symbols
• by binding the symbols, subgraphs of the RDF graph are
selected
• if there is such a selection, the query returns bound
resources
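The pattern idea is simple enough to prototype: a naive backtracking matcher over tuple triples, with strings starting with “?” playing the role of unbound symbols. The price-and-currency triples are invented in the spirit of the book examples:

```python
def is_var(term):
    return isinstance(term, str) and term.startswith("?")

def match(pattern, graph, binding=None):
    """Yield every variable binding that turns all pattern triples
    into triples of the graph (backtracking over the graph)."""
    binding = binding or {}
    if not pattern:
        yield binding
        return
    head, rest = pattern[0], pattern[1:]
    for triple in graph:
        b = dict(binding)
        ok = True
        for pt, gt in zip(head, triple):
            if is_var(pt):
                if b.get(pt, gt) != gt:   # clashes with earlier binding
                    ok = False
                    break
                b[pt] = gt
            elif pt != gt:
                ok = False
                break
        if ok:
            yield from match(rest, graph, b)

g = {
    ("isbn:..49X", "a:price", "_:p1"),
    ("_:p1", "rdf:value", 33),
    ("_:p1", "p:currency", "£"),
}
pattern = [("?isbn", "a:price", "?x"),
           ("?x", "rdf:value", "?price"),
           ("?x", "p:currency", "?currency")]
for b in match(pattern, g):
    print(b["?isbn"], b["?price"], b["?currency"])   # isbn:..49X 33 £
```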
134. 134
Our Jena example in SPARQL
SELECT ?p ?o
WHERE {subject ?p ?o}
• The triples in WHERE define the graph pattern,
with ?p and ?o “unbound” symbols
• The query returns all p,o pairs
135. 135
Simple SPARQL example
SELECT ?isbn ?price ?currency # note: not ?x!
WHERE {?isbn a:price ?x. ?x rdf:value ?price. ?x p:currency ?currency.}
136. 136
Simple SPARQL example
SELECT ?isbn ?price ?currency # note: not ?x!
WHERE {?isbn a:price ?x. ?x rdf:value ?price. ?x p:currency ?currency.}
• Returns:
[[<..49X>,33,£], [<..49X>,50,€], [<..6682>,60,€],
[<..6682>,78,$]]
138. 138
Other SPARQL features
• Limit the number of returned results; remove
duplicates, sort them, …
• Optional branches in the query
• Specify several data sources (via URI-s) within the
query (essentially, a merge!)
• Construct a graph combining a separate pattern
and the query results
• Use datatypes and/or language tags when
matching a pattern
139. 139
SPARQL usage in practice
• SPARQL is usually used over the network
• separate documents define the protocol and the result
format
• SPARQL Protocol for RDF with HTTP and SOAP bindings
• SPARQL results in XML or JSON formats
• Big datasets usually offer “SPARQL endpoints”
using this protocol
• typical example: SPARQL endpoint to DBpedia
141. 141
Example: integrate Chinese medical data
• Integration of a large number of TCM databases
• around 80 databases, around 200,000 records each
• A visual tool to map databases to the semantic
layer using a specialized ontology
• Form based query interface for end users
Courtesy of Huajun Chen, Zhejiang University, (SWEO Case Study)
143. 143
Ontologies
• RDFS is useful, but does not solve all possible
requirements
• Complex applications may want more possibilities:
• characterization of properties
• identification of objects with different URI-s
• disjointness or equivalence of classes
• construct classes, not only name them
• can a program reason about some terms? E.g.:
• “if «Person» resources «A» and «B» have the same
«foaf:email» property, then «A» and «B» are identical”
• etc.
144. 144
Ontologies (cont.)
• The term ontologies is used in this respect:
“defines the concepts and relationships used to describe
and represent an area of knowledge”
• RDFS can be considered as a simple ontology
language
• Languages should be a compromise between
• rich semantics for meaningful applications
• feasibility, implementability
145. 145
Web Ontology Language = OWL
• OWL is an extra layer, a bit like RDF Schemas
• own namespace, own terms
• it relies on RDF Schemas
• It is a separate recommendation
• actually… there is a 2004 version of OWL (“OWL 1”)
• and there is an update (“OWL 2”) that should be finalized in
2009
• you will surely hear about it at the conference…
146. 146
OWL is complex…
• OWL is a large set of additional terms
• We will not cover the whole thing here…
147. 147
Term equivalences
• For classes:
• owl:equivalentClass: two classes have the same
individuals
• owl:disjointWith: no individuals in common
• For properties:
• owl:equivalentProperty
• remember the a:author vs. f:auteur
• owl:propertyDisjointWith
• For individuals:
• owl:sameAs: two URIs refer to the same concept
(“individual”)
• owl:differentFrom: negation of owl:sameAs
149. 149
Typical usage of owl:sameAs
• Linking our example of Amsterdam from one data
set (DBpedia) to the other (Geonames):
<http://dbpedia.org/resource/Amsterdam>
  owl:sameAs <http://sws.geonames.org/2759793> .
• This is the main mechanism of “Linking” in the
Linking Open Data project
150. 150
Property characterization
• In OWL, one can characterize the behaviour of
properties (symmetric, transitive, functional, inverse
functional…)
• One property may be the inverse of another
• OWL also separates data and object properties
• “datatype property” means that its range consists of typed literals
151. 151
What this means is…
• If the following holds in our triples:
:email rdf:type owl:InverseFunctionalProperty.
<A> :email "mailto:a@b.c".
<B> :email "mailto:a@b.c".
then, processed through OWL, the following holds, too:
<A> owl:sameAs <B>.
• I.e., new relationships were discovered again
(beyond what RDFS could do)
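Operationally, the inference above is a simple pairwise check: find the properties declared inverse functional, then declare sameAs for any two subjects sharing a value. A toy version over tuples (real OWL reasoners do far more, e.g. propagating sameAs symmetrically and transitively through the whole graph):

```python
def ifp_same_as(graph):
    """Infer owl:sameAs triples from inverse functional properties."""
    ifps = {s for (s, p, o) in graph
            if p == "rdf:type" and o == "owl:InverseFunctionalProperty"}
    inferred = set()
    for (s1, p1, o1) in graph:
        for (s2, p2, o2) in graph:
            if p1 in ifps and p1 == p2 and o1 == o2 and s1 != s2:
                inferred.add((s1, "owl:sameAs", s2))
    return inferred

g = {
    (":email", "rdf:type", "owl:InverseFunctionalProperty"),
    ("<A>", ":email", "mailto:a@b.c"),
    ("<B>", ":email", "mailto:a@b.c"),
}
print(sorted(ifp_same_as(g)))
# [('<A>', 'owl:sameAs', '<B>'), ('<B>', 'owl:sameAs', '<A>')]
```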
152. 152
Classes in OWL
• In RDFS, you can subclass existing classes…
that’s all
• In OWL, you can construct classes from existing
ones:
• enumerate its content
• through intersection, union, complement
• Etc
153. 153
Classes in OWL (cont)
• OWL makes a stronger conceptual distinction
between classes and individuals
• there is a separate term for owl:Class, to make the
difference (a specialization of the RDFS class)
• individuals are separated into a special class called
owl:Thing
• Eg, a precise classification would be:
ex:Person rdf:type owl:Class.
<uri-for-Amitav-Ghosh>
  rdf:type owl:Thing;
  rdf:type ex:Person .
154. 154
Classes contents can be enumerated
:£ rdf:type owl:Thing.
:€ rdf:type owl:Thing.
:$ rdf:type owl:Thing.
:Currency
  rdf:type owl:Class;
  owl:oneOf (:€ :£ :$).
• I.e., the class consists of exactly those individuals
155. 155
Union of classes can be defined
:Novel rdf:type owl:Class.
:Short_Story rdf:type owl:Class.
:Poetry rdf:type owl:Class.
:Literature rdf:type owl:Class;
  owl:unionOf (:Novel :Short_Story :Poetry).
• Other possibilities: complementOf,
intersectionOf, …
157. 157
It can be a bit more complicated…
If:
:Novel rdf:type owl:Class.
:Short_Story rdf:type owl:Class.
:Poetry rdf:type owl:Class.
:Literature rdf:type owl:Class;
  owl:unionOf (:Novel :Short_Story :Poetry).
fr:Roman owl:equivalentClass :Novel .
<myWork> rdf:type fr:Roman .
then, through the combination of different terms,
the following still holds:
<myWork> rdf:type :Literature .
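That chain can be traced mechanically with two toy rules over tuples: equivalentClass transfers types in both directions, and unionOf lifts membership in any branch to the union class. Representing the unionOf list as a plain Python tuple is a simplification of the real RDF list structure:

```python
def owl_step(graph):
    """One pass of two simplified OWL rules (equivalentClass, unionOf)."""
    new = set(graph)
    # equivalentClass: a member of either class is a member of both
    for (c1, p, c2) in graph:
        if p == "owl:equivalentClass":
            for (s, p2, c) in graph:
                if p2 == "rdf:type" and c in (c1, c2):
                    new.add((s, "rdf:type", c1))
                    new.add((s, "rdf:type", c2))
    # unionOf: a member of any branch is a member of the union class
    for (cls, p, branches) in graph:
        if p == "owl:unionOf":
            for (s, p2, c) in graph:
                if p2 == "rdf:type" and c in branches:
                    new.add((s, "rdf:type", cls))
    return new

g = {
    (":Literature", "owl:unionOf", (":Novel", ":Short_Story", ":Poetry")),
    ("fr:Roman", "owl:equivalentClass", ":Novel"),
    ("<myWork>", "rdf:type", "fr:Roman"),
}
out = owl_step(owl_step(g))   # two passes: fr:Roman -> :Novel -> :Literature
print(("<myWork>", "rdf:type", ":Literature") in out)  # True
```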
158. 158
What we have so far…
• The OWL features listed so far are already fairly
powerful
• E.g., various databases can be linked via
owl:sameAs, functional or inverse functional
properties, etc.
• Many inferred relationships can be found using a
traditional rule engine
159. 159
However… that may not be enough
• Very large vocabularies might require even more
complex features
• typical usage example: definition of all concepts in a health
care environment
• a major issue: the way classes (i.e., “concepts”) are defined
• OWL includes those extra features but… the
inference engines become (much) more complex
160. 160
Property value restrictions
• Classes are created by restricting the property
values of their individuals
• For example: how would I characterize a “listed
price”?
• it is a price (which may be a general term), but one that is
given in one of the “allowed” currencies (say, €, £, or $)
• more formally:
• the value of “p:currency”, when applied to a resource on
listed price, must be of one of those values…
• …thereby defining the class of “listed price”
161. 161
Restrictions formally
• Defines a class of type owl:Restriction with a
• reference to the property that is constrained
• definition of the constraint itself
• One can, e.g., subclass from this node when
defining a particular class
:Listed_Price rdfs:subClassOf [
  rdf:type owl:Restriction;
  owl:onProperty p:currency;
  owl:allValuesFrom :Currency
].
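What the restriction means as a membership check can be sketched directly: a resource qualifies only if every value of its p:currency property is typed as a :Currency. The `:seashells` price is an invented counterexample:

```python
def all_values_from(graph, resource, prop, cls):
    """True iff every value of prop on resource has rdf:type cls."""
    return all((o, "rdf:type", cls) in graph
               for (s, p, o) in graph
               if s == resource and p == prop)

g = {
    (":€", "rdf:type", ":Currency"),
    ("_:price1", "p:currency", ":€"),
    ("_:price2", "p:currency", ":seashells"),
}
print(all_values_from(g, "_:price1", "p:currency", ":Currency"))  # True
print(all_values_from(g, "_:price2", "p:currency", ":Currency"))  # False
```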
163. 163
Other restrictions
• allValuesFrom could be replaced by:
• someValuesFrom
• e.g., I could have said: there should be a price given in at
least one of those currencies
• hasValue, when restricted to one specific value
• Cardinality restrictions: instead of looking at the
values of properties, their number is considered
• eg, a specific property should occur exactly once
164. 164
But: OWL is hard!
• The combination of class constructions with various
restrictions is extremely powerful
• What we have so far follows the same logic as
before
• extend the basic RDF and RDFS possibilities with new
features
• define their semantics, ie, what they “mean” in terms of
relationships
• expect to infer new relationships based on those
• However… a full inference procedure is hard
• not implementable with simple rule engines, for example
165. 165
OWL “species”
• This is where OWL “species” come in:
• restricting which terms can be used and under what
circumstances (restrictions)
• if one abides by those restrictions, then simpler inference
engines can be used
• They reflect compromises: expressibility vs.
implementability
166. 166
OWL Full
• No constraints on any of the constructs
• owl:Class is just syntactic sugar for rdfs:Class
• owl:Thing is equivalent to rdfs:Resource
• this means that:
• Class can also be an individual, a URI can denote a property
as well as a Class
• e.g., it is possible to talk about class of classes, apply properties
on them
• etc.
• Extension of RDFS in all respects
• But: no system may exist that infers everything one
might expect
167. 167
OWL Full usage
• Nevertheless OWL Full is essential
• it gives a generic framework to express many things
• some applications just need to express and interchange
terms (with possible scruffiness)
• Applications may control what terms are used and
how
• in fact, they may define their own sub-language via, eg, a
vocabulary
• thereby ensuring a manageable inference procedure
168. 168
OWL DL
• A number of restrictions are defined
• classes, individuals, object and datatype properties, etc, are
fairly strictly separated
• object properties must be used with individuals
• i.e., properties are really used to create relationships between
individuals
• no characterization of datatype properties
• …
• But: well known inference algorithms exist!
169. 169
Examples for restrictions
• The following is not “legal” OWL DL:
<q> rdf:type <A>.       # A is a class, q is an individual
<r> rdf:type <q>.       # error: q cannot be used for a class, too
<A> ex:something <B>.   # error: properties are for individuals only
<q> ex:something <s>.   # error: same property cannot be used as
<p> ex:something "54".  #        object and datatype property
170. 170
OWL DL usage
• Abiding by the restrictions means that very large
ontologies can be developed for which precise inference
procedures exist
• eg, in the medical domain, biological research, energy
industry, financial services (eg, XBRL), etc
• the number of classes and properties described this way
can go up to the many thousands
• OWL DL has become a language of choice to
define and manage formal ontologies in general
• even if their usage is not necessarily on the Web
171. 171
OWL 2 defines further species
a.k.a. “profiles”
• Further restrictions on how terms can be used and
what inferences can be expected
• Classification and instance queries in polynomial time:
OWL-EL
• Implementable on top of conventional relational database
engines: OWL-QL
• Implementable on top of traditional rule engines: OWL-RL
172. 172
Ontology development
• The hard work is to create the ontologies
• requires a good knowledge of the area to be described
• some communities have good expertise already (e.g.,
librarians)
• OWL is just a tool to formalize ontologies
• large scale ontologies are often developed in a community
process
• Ontologies should be shared and reused
• can be via the simple namespace mechanisms…
• …or via explicit import
173. 173
Must I use large ontologies?
• NO!!!
• Many applications are possible with RDFS and
just a little bit of OWL
• a few terms, whose meaning is defined in OWL, and that
application can handle directly
• OWL RL is a step to create such a generic OWL level
• Big ontologies can be expensive (both in time and
money); use them only when really necessary!
174. 174
Ontologies examples
• eClassOwl: eBusiness ontology for products and
services, 75,000 classes and 5,500 properties
• National Cancer Institute’s ontology: about 58,000
classes
• Open Biomedical Ontologies Foundry: a collection
of ontologies, including the Gene Ontology (gene
and gene product attributes in any organism) and
UniProt (protein sequence and annotation
terminology and data)
• BioPAX: for biological pathway data
175. 175
Example: improved search via ontology
• Search results are re-ranked using ontologies
• Related terms are highlighted, usable for further
search
176. 176
Example: improved search via ontology
• Same dataset, different ontology
• (ontology is on non-animal experimentation)
177. 177
Example: Eli Lilly’s target assessment tool
• Prioritization of drug targets, integrating data from different sources and formats
• Integration, search via ontologies (proprietary and public)
Courtesy of Susie Stephens, Eli Lilly (SWEO Case Study)
179. 179
Other SW technologies
• There are other technologies that we do not have
time for here
• find RDF data associated with general URI-s: POWDER
• bridge to thesauri, glossaries, etc: SKOS
• use Rule engines on RDF data
182. 182
Example: personalized tourist itinerary
• Integration of relevant data in Zaragoza (using RDF and ontologies)
• Use rules on the RDF data to provide a proper itinerary
Courtesy of Jesús Fernández, Mun. of Zaragoza, and Antonio Campos, CTIC (SWEO Use Case)
184. 184
Available specifications: Primers, Guides
• The “RDF Primer” and the “OWL Guide” give a
formal introduction to RDF(S) and OWL
• GRDDL and RDFa Primers have also been
published
• The W3C Semantic Web Activity Homepage has
links to all the specifications:
• http://www.w3.org/2001/sw/
185. 185
“Core” vocabularies
• There are also a number of widely used “core
vocabularies”
• Dublin Core: about information resources, digital libraries,
with extensions for rights, permissions, digital right
management
• FOAF: about people and their organizations
• DOAP: on the descriptions of software projects
• SIOC: Semantically-Interlinked Online Communities
• vCard in RDF
• …
• One should never forget: ontologies/vocabularies
must be shared and reused!
186. 186
Some books
• G. Antoniou and F. van Harmelen: Semantic Web
Primer, 2nd edition, 2008
• D. Allemang and J. Hendler: Semantic Web for the
Working Ontologist, 2008
• Jeffrey Pollock: Semantic Web for Dummies, 2009
• …
See the separate Wiki page collecting book references:
http://esw.w3.org/topic/SwBooks
187. 187
Further information
• Planet RDF aggregates a number of SW blogs:
• http://planetrdf.com/
• Semantic Web Interest Group
• a forum for developers, with an archived (and public) mailing
list and a constant IRC presence on freenode.net#swig
• anybody can sign up on the list:
• http://www.w3.org/2001/sw/interest/
189. 189
Conclusions
• The Semantic Web is about creating a Web of
Data
• There is a great and very active user and
developer community, with new applications
• witness the size and diversity of this event