Linked Data Basics Slot in WWW2012 Tutorial: Practical Cross-Dataset Queries on the Web of Data
http://latc-project.eu/events/www2012-tutorial-cross-dataset-queries
Open data is a crucial prerequisite for inventing and disseminating the innovative practices needed for agricultural development. To be usable, data must not just be open in principle—i.e., covered by licenses that allow re-use. Data must also be published in a technical form that allows it to be integrated into a wide range of applications. The webinar will be of interest to any institution seeking ways to publish and curate data in the Linked Data cloud.
This webinar describes the technical solutions adopted by a widely diverse global network of agricultural research institutes for publishing research results. The talk focuses on AGRIS, a central and widely-used resource linking agricultural datasets for easy consumption, and AgriDrupal, an adaptation of the popular, open-source content management system Drupal optimized for producing and consuming linked datasets.
Agricultural research institutes in developing countries share many of the constraints faced by libraries and other documentation centers worldwide: institutions are expected to expose their information on the Web in a re-usable form on shoestring budgets, with technical staff who work in local languages and are continually lured by higher-paying work in the private sector. Technical solutions must be easy to adopt and freely available.
Libraries around the world have a long tradition of maintaining authority files to assure the consistent presentation and indexing of names. As library authority files have become available online, the authority data has become accessible -- and many have been published as Linked Open Data (LOD) -- but names in one library authority file typically had no link to corresponding records for persons and organizations in other library authority files. After a successful experiment in matching the Library of Congress/NACO authority file with the German National Library's authority file, an online system called the Virtual International Authority File was developed to facilitate sharing by ingesting, matching, and displaying the relations between records in multiple authority files.
The Virtual International Authority File (VIAF) has grown from three source files in 2007 to more than two dozen files today. The system harvests authority records, enhances them with bibliographic information, and brings them together into clusters when it is confident the records describe the same identity. Although the most visible part of VIAF is an HTML interface, the API beneath it supports a linked data view of VIAF, with URIs representing the identities themselves, not just URIs for the clusters. It supports names for persons, corporations, geographic entities, works, and expressions. With English, French, German, and Spanish interfaces (and a Japanese interface in progress), the system is used around the world, handling over a million queries per day.
Speaker
Thomas Hickey is Chief Scientist at OCLC, where he helped found OCLC Research. Current interests include metadata creation and editing systems, authority control, parallel systems for bibliographic processing, and information retrieval and display. In addition to implementing VIAF, his group explores Web access to metadata, identification of FRBR works and expressions in WorldCat, the algorithmic creation of authorities, and the characterization of collections. He has an undergraduate degree in Physics and a Ph.D. in Library and Information Science.
Usage of Linked Data: Introduction and Application Scenarios (EUCLID project)
This presentation introduces the main principles of Linked Data, the underlying technologies and background standards. It provides basic knowledge for how data can be published over the Web, how it can be queried, and what are the possible use cases and benefits. As an example, we use the development of a music portal (based on the MusicBrainz dataset), which facilitates access to a wide range of information and multimedia resources relating to music.
Build Narratives, Connect Artifacts: Linked Open Data for Cultural Heritage (Ontotext)
Scholars, book researchers, and museum directors who try to find the underlying connections between resources face many obstacles. Scholars in particular continually emphasize the role of digital humanities and the value of linked data in cultural heritage information systems.
This presentation addresses the main issues of Linked Data and scalability. In particular, it gives details on approaches and technologies for clustering, distributing, sharing, and caching data. Furthermore, it addresses the means for publishing data through cloud deployment and the relationship between Big Data and Linked Data, exploring how some Big Data solutions can be transferred to the context of Linked Data.
An overview of how data on the Web of Data (first and foremost Linked Data) can be consumed, and the implications for the development of usage mining approaches.
References:
Elbedweihy, K., Mazumdar, S., Cano, A. E., Wrigley, S. N., & Ciravegna, F. (2011). Identifying Information Needs by Modelling Collective Query Patterns. COLD, 782.
Elbedweihy, K., Wrigley, S. N., & Ciravegna, F. (2012). Improving Semantic Search Using Query Log Analysis. Interacting with Linked Data (ILD 2012), 61.
Raghuveer, A. (2012). Characterizing machine agent behavior through SPARQL query mining. In Proceedings of the International Workshop on Usage Analysis and the Web of Data, Lyon, France.
Arias, M., Fernández, J. D., Martínez-Prieto, M. A., & de la Fuente, P. (2011). An empirical study of real-world SPARQL queries. arXiv preprint arXiv:1103.5043.
Hartig, O., Bizer, C., & Freytag, J. C. (2009). Executing SPARQL queries over the web of linked data (pp. 293-309). Springer Berlin Heidelberg.
Verborgh, R., Hartig, O., De Meester, B., Haesendonck, G., De Vocht, L., Vander Sande, M., ... & Van de Walle, R. (2014). Querying datasets on the web with high availability. In The Semantic Web–ISWC 2014 (pp. 180-196). Springer International Publishing.
Verborgh, R., Vander Sande, M., Colpaert, P., Coppens, S., Mannens, E., & Van de Walle, R. (2014, April). Web-Scale Querying through Linked Data Fragments. In LDOW.
Luczak-Rösch, M., & Bischoff, M. (2011). Statistical analysis of web of data usage. In Joint Workshop on Knowledge Evolution and Ontology Dynamics (EvoDyn2011), CEUR WS.
Luczak-Rösch, M. (2014). Usage-dependent maintenance of structured Web data sets (Doctoral dissertation, Freie Universität Berlin, Germany), http://edocs.fu-berlin.de/diss/receive/FUDISS_thesis_000000096138.
As described in the April NISO/DCMI webinar by Dan Brickley, schema.org is a search-engine initiative aimed at helping webmasters use structured data markup to improve the discovery and display of search results. Drupal 7 makes it easy to mark up HTML pages with schema.org terms, allowing users to quickly build websites with structured data that can be understood by Google and displayed as Rich Snippets.
Improved search results are only part of the story, however. Data-bearing documents become machine-processable once you find them. The subject matter, important facts, calendar events, authorship, licensing, and whatever else you might like to share are there for the taking. Sales reports, RSS feeds, industry analysis, maps, diagrams, and process artifacts can now connect back to other data sets to provide linkage to context and related content. The key to this is the adoption of standards for both the data model (RDF) and the means of weaving it into documents (RDFa). Drupal 7 has become the leading content platform to adopt these standards.
This webinar will describe how RDFa and Drupal 7 can improve how organizations publish information and data on the Web for both internal and external consumption. It will discuss what is required to use these features and how they impact publication workflow. The talk will focus on high-level and accessible demonstrations of what is possible. Technical people should learn how to proceed while non-technical people will learn what is possible.
https://doi.org/10.6084/m9.figshare.11854626.v1
Presented at Dutch National Librarian/Information Professional Association annual conference 2011 - NVB2011
November 17, 2011
This presentation was given by Tim Thompson of Princeton University during the NISO Virtual Conference, BIBFRAME & Real World Applications for Linked Bibliographic Data, held on June 15, 2016.
These slides go with the paper "Reminiscing About 15 Years of Interoperability Efforts" which is available at http://dx.doi.org/10.1045/november2015-vandesompel
Slides were used for a presentation at the Fall 2015 Membership Meeting of the Coalition for Networked Information.
Linked Data for the Masses: The approach and the Software (IMC Technologies)
Title: Linked Data for the Masses: The approach and the Software
@ EELLAK (GFOSS) Conference 2010
Athens, Greece
15/05/2010
Creator: George Anadiotis (R&D Director)
An introduction deck on the Web of Data for my team, covering semantic web basics and a Linked Open Data primer, and then DBpedia, the Linked Data Integration Framework (LDIF), the Common Crawl database, and Web Data Commons.
Morning session talk at the second Keystone Training School, "Keyword search in Big Linked Data", held in Santiago de Compostela.
https://eventos.citius.usc.es/keystone.school/
Slides from my workshop at Open Repositories 2016 about DSpace's Linked Data support. The slides include a short introduction to the Semantic Web and Linked Data, the main ideas behind the Linked Data support of DSpace, information on how to configure this feature, and some examples of how to query DSpace installations for Linked Data.
This is part 2 of the ISWC 2009 tutorial on the GoodRelations ontology and RDFa for e-commerce on the Web of Linked Data.
See also
http://www.ebusiness-unibw.org/wiki/Web_of_Data_for_E-Commerce_Tutorial_ISWC2009
Talk given at Open Knowledge Foundation 'Opening Up Metadata: Challenges, Standards and Tools' Workshop, Queen Mary University of London, 13th June 2012.
Info on the event at http://openglam.org/2012/05/31/last-places-left-for-opening-up-metadata-challenges-standards-and-tools/
1. Linked Data Basics
Anja Jentzsch, Freie Universität Berlin
17 April 2012
Tutorial: Practical Cross-Dataset Queries on the Web of Data
WWW2012, Lyon, France
2. Architecture of the classic Web
Single global document space, accessed through Web browsers and search engines.
Small set of simple standards:
1. HTML as document format
2. HTTP URLs as globally unique IDs and retrieval mechanism
3. Hyperlinks to connect everything
[Diagram: HTML documents A, B, C connected by hyperlinks]
3. Web 2.0 APIs and Mashups
No single global data space.
Shortcomings:
1. APIs have proprietary interfaces
2. Mashups are based on a fixed set of data sources
3. No hyperlinks between data items within different APIs
[Diagram: a mashup built on separate Web APIs A, B, C, D]
4. Web APIs slice the Web into Walled Gardens
Image: Bob Jagensdorf, http://flickr.com/photos/darwinbell/, CC-BY
5. Linked Data
Extend the Web with a single global data space:
1. by using RDF to publish structured data on the Web
2. by setting links between data items within different data sources
[Diagram: RDF data sources A-E connected by links]
6. Linked Data Principles
Set of best practices for publishing structured data on the Web in accordance with the general architecture of the Web.
1. Use URIs as names for things.
2. Use HTTP URIs so that people can look up those names.
3. When someone looks up a URI, provide useful RDF information.
4. Include RDF statements that link to other URIs so that they can discover related things.
Tim Berners-Lee, http://www.w3.org/DesignIssues/LinkedData.html, 2006
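Principles 2 and 3 rest on ordinary HTTP: a client looks up a name by requesting the URI and asks for an RDF representation via content negotiation. A minimal Python sketch using only the standard library; the Accept media types shown are common RDF types, and the request is constructed here but not actually sent:

```python
from urllib.request import Request

def rdf_request(uri: str) -> Request:
    """Build an HTTP request for an RDF representation of a resource.

    Content negotiation: the Accept header tells the server we prefer
    Turtle, falling back to RDF/XML. Sending the request (urlopen) is
    left out so this sketch stays offline.
    """
    return Request(uri, headers={"Accept": "text/turtle, application/rdf+xml"})

req = rdf_request("http://dbpedia.org/resource/Berlin")
print(req.full_url)
print(req.get_header("Accept"))
```

With `urllib.request.urlopen(req)` a Linked Data server would answer with the RDF serialization it negotiated, rather than the HTML page a browser would get.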
7. The RDF Data Model
[Graph diagram, rendered here as triples:]
pd:chris rdf:type foaf:Person .
pd:chris foaf:name "Chris Bizer" .
pd:chris foaf:based_near dbpedia:Berlin .
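The graph on this slide can be sketched as plain subject-predicate-object tuples. A minimal Python illustration (prefixed names are kept as strings for readability; this is a toy, not a real RDF library):

```python
# The slide's example graph as a set of (subject, predicate, object) triples.
graph = {
    ("pd:chris", "rdf:type", "foaf:Person"),
    ("pd:chris", "foaf:name", "Chris Bizer"),
    ("pd:chris", "foaf:based_near", "dbpedia:Berlin"),
}

def objects(g, s, p):
    """All objects of triples with the given subject and predicate."""
    return {o for (s2, p2, o) in g if s2 == s and p2 == p}

print(objects(graph, "pd:chris", "foaf:name"))
```

Every statement has the same three-part shape, which is what makes merging graphs from different sources a simple set union.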
8. Data Items are identified with HTTP URIs
[The same graph; the prefixed names abbreviate HTTP URIs:]
pd:chris = http://www.bizer.de#chris
dbpedia:Berlin = http://dbpedia.org/resource/Berlin
9. Resolving URIs over the Web
[Resolving dbpedia:Berlin extends the graph with data from DBpedia:]
dbpedia:Berlin dp:population "3,450,889" .
dbpedia:Berlin skos:subject dp:Cities_in_Germany .
10. Dereferencing URIs over the Web
[Dereferencing dp:Cities_in_Germany reveals further cities in the same category:]
dbpedia:Berlin skos:subject dp:Cities_in_Germany .
dbpedia:Hamburg skos:subject dp:Cities_in_Germany .
dbpedia:Muenchen skos:subject dp:Cities_in_Germany .
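The discovery pattern above (follow a link, dereference the target, find more data) can be sketched with an in-memory stand-in for the Web of Data. A toy Python illustration of the slide's example; the `web` dictionary plays the role of real HTTP dereferencing:

```python
# A toy "Web of Data": dereferencing a URI yields the triples served there.
web = {
    "dbpedia:Berlin": {
        ("dbpedia:Berlin", "skos:subject", "dp:Cities_in_Germany"),
    },
    "dp:Cities_in_Germany": {
        ("dbpedia:Berlin", "skos:subject", "dp:Cities_in_Germany"),
        ("dbpedia:Hamburg", "skos:subject", "dp:Cities_in_Germany"),
        ("dbpedia:Muenchen", "skos:subject", "dp:Cities_in_Germany"),
    },
}

def dereference(uri):
    """Stand-in for an HTTP lookup of a URI."""
    return web.get(uri, set())

# Follow the skos:subject link from Berlin, then dereference the category
# to discover other cities sharing it.
category = next(o for (_, p, o) in dereference("dbpedia:Berlin")
                if p == "skos:subject")
cities = {s for (s, p, o) in dereference(category) if o == category}
print(sorted(cities))
```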
11. RDF
• RDF is just a data model; it requires a serialization format
  • for transmission over the network
  • for storage as files
• Multiple serialization formats have been defined
  • RDF/XML
  • Turtle
  • N-Triples
  • RDFa
  • ...
• It's all triples!
  • Syntax doesn't matter much and can be chosen case by case for pragmatic reasons
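Since it's all triples, a serializer is little more than string formatting. A deliberately simplified Python sketch of the N-Triples case (the full grammar also covers blank nodes, datatypes, language tags, and string escaping):

```python
def to_ntriples(s: str, p: str, o: str) -> str:
    """Serialize one triple as an N-Triples line.

    Simplification: terms that look like absolute HTTP(S) URIs get angle
    brackets; any other object becomes a plain literal.
    """
    def term(t: str, literal_ok: bool) -> str:
        if t.startswith("http://") or t.startswith("https://"):
            return f"<{t}>"
        return f'"{t}"' if literal_ok else f"<{t}>"
    return f"{term(s, False)} {term(p, False)} {term(o, True)} ."

line = to_ntriples("http://www.bizer.de#chris",
                   "http://xmlns.com/foaf/0.1/name",
                   "Chris Bizer")
print(line)
```

The same triple could equally be written in Turtle or embedded in HTML as RDFa; the data model stays identical.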
12. Properties of the Web of Linked Data
• Global, distributed data space built on a simple set of standards
  • RDF, URIs, HTTP
• Entities are connected by links
  • creating a global data graph that spans data sources and
  • enables the discovery of new data sources
• Provides for data coexistence
  • Everyone can publish data to the Web of Linked Data
  • Everyone can express their personal view on things
  • Everybody can use the vocabularies/schemas they like
13. W3C Linking Open Data Project
• Grassroots community effort to
  • publish existing open-license datasets as Linked Data on the Web
  • interlink things between different data sources
14. LOD Data Sets on the Web: May 2007
• 12 data sets
• Over 500 million RDF triples
• Around 120,000 RDF links between data sources
15. LOD Data Sets on the Web: November 2007
• 28 data sets
16. LOD Data Sets on the Web: September 2008
• 45 data sets
• Over 2 billion RDF triples
17. LOD Data Sets on the Web: July 2009
• 95 data sets
• Over 6.5 billion RDF triples
18. LOD Data Sets on the Web: September 2010
• 203 data sets
• Over 24.7 billion RDF triples
• Over 436 million RDF links between data sources
19. LOD Data Sets on the Web: September 2011
• 295 data sets
• Over 31 billion RDF triples
• Over 504 million RDF links between data sources
20. LOD Data Set statistics as of 09/2011
LOD Cloud Data Catalog on CKAN
• http://www.ckan.net/group/lodcloud
More statistics
• http://lod-cloud.net/state/
21. Uptake in the Government Domain
• The EU is pushing Linked Data (LOD2, LATC, Eurostat)
• W3C Government Linked Data (GLD) Working Group
22. Uptake in the Libraries Community
• Institutions publishing Linked Data
• Library of Congress (subject headings)
• German National Library (PND dataset and subject headings)
• Swedish National Library (Libris - catalog)
• Hungarian National Library (OPAC and Digital Library)
• British National Library
• Europeana project
23. Uptake in the Libraries Community
• W3C Library Linked Data Incubator Group (2010)
• OKFN Working Group on Bibliographic Data (2010)
• Goals:
  • Integrate library catalogs on a global scale
  • Interconnect resources between repositories (by topic, by location, by historical period, by ...)
24. Uptake in the Media Industry
• Publish data as RDF or embed it as RDFa
• Goal: drive traffic to websites via search engines
25. schema.org
• Vocabularies jointly proposed by the major search engines for embedding data into HTML pages (Microdata)
• Available since June 2011
26. Linked Data Applications
Generic applications consume the Web of Data: Linked Data browsers, Linked Data mashups, and search engines.
[Diagram: things in data sources A-E connected by typed links]
30. Lower Data Integration Costs
The overall data integration effort is split between the data publisher, the data consumer, and third parties.
• Data Publisher
  • publishes data as RDF
  • sets identity links
  • reuses terms or publishes mappings
• Third Parties
  • set identity links pointing at your data
  • publish mappings to the Web
• Data Consumer
  • has to do the rest
  • using record linkage and schema matching techniques
31. Is your data 5 star?
★ Make your stuff available on the Web (whatever format) under an open license.
★★ Make it available as structured data (e.g., Excel instead of an image scan of a table) so that it can be reused.
★★★ Use non-proprietary, open formats (e.g., CSV instead of Excel).
★★★★ Use URIs to identify things, so that people can point at your stuff, and serve RDF from it.
★★★★★ Link your data to other data to provide context.
Tim Berners-Lee, http://www.w3.org/DesignIssues/LinkedData.html, 2010
32. How to publish Linked Data
Tasks:
1. Make data available as RDF via HTTP
2. Set RDF links pointing at other data sources
3. Make your data self-descriptive
4. Reuse common vocabularies
Tom Heath, Christian Bizer: Linked Data: Evolving the Web into a Global Data Space, http://linkeddatabook.com/
33. Make Data available as RDF via HTTP
• Ready-to-use tools (examples)
  • D2R Server
    • provides for mapping relational databases into RDF and for serving them as Linked Data
  • Pubby
    • Linked Data frontend for SPARQL endpoints
• More tools
  • http://esw.w3.org/TaskForces/CommunityProjects/LinkingOpenData/PublishingTools
34. Set RDF links to other data sources
• Examples of RDF links
<http://dbpedia.org/resource/Berlin> owl:sameAs <http://sws.geonames.org/2950159> .
<http://richard.cyganiak.de/foaf.rdf#cygri> foaf:topic_interest <http://dbpedia.org/resource/Semantic_Web> .
<http://example-bookshop.com/book006251587X> owl:sameAs <http://www4.wiwiss.fu-berlin.de/bookmashup/books/006251587X> .
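A consumer of such owl:sameAs links can merge descriptions from both sources by rewriting each alias to one representative URI. A minimal Python sketch; the prefixed names and the two tiny "sources" below are invented stand-ins for the URIs on this slide:

```python
# Assumed example data: one sameAs link and one triple from each source.
same_as = [("dbpedia:Berlin", "geonames:2950159")]
source_a = {("dbpedia:Berlin", "dp:population", "3450889")}
source_b = {("geonames:2950159", "gn:countryCode", "DE")}

# Map every alias to a canonical representative URI.
canon = {alias: rep for rep, alias in same_as}

def canonical(triple):
    """Rewrite subject and object to their canonical URIs."""
    s, p, o = triple
    return (canon.get(s, s), p, canon.get(o, o))

# Merging is then just a set union over canonicalized triples.
merged = {canonical(t) for t in source_a | source_b}
print(sorted(merged))
```

Real datasets need a transitive closure over sameAs chains (e.g. union-find), but the principle is the same: identity links turn two descriptions into one.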
35. How to generate RDF links?
• Pattern-based approaches
  • Exploit naming conventions within URIs (for instance ISBNs, ISINs, ...)
• Similarity-based approaches
  • Compare items within different data sources using various similarity metrics
• Ready-to-use tools (examples)
  • Silk Link Discovery Framework
    • provides a declarative language for specifying link conditions which may combine different similarity metrics
• More tools
  • http://esw.w3.org/TaskForces/CommunityProjects/LinkingOpenData/EquivalenceMining
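The similarity-based approach can be illustrated with a label comparison using Python's standard-library difflib. The datasets, URIs, and the 0.8 threshold below are invented for illustration; this is a drastic simplification of what a link discovery tool like Silk does:

```python
import difflib

# Assumed example data: URI -> label, for two datasets to be interlinked.
dataset_a = {"a:Berlin": "Berlin", "a:Munich": "Munich"}
dataset_b = {"b:berlin-city": "berlin", "b:hamburg": "Hamburg"}

def generate_links(a, b, threshold=0.8):
    """Emit candidate owl:sameAs links for label pairs above a
    similarity threshold (case-insensitive string similarity)."""
    out = []
    for ua, la in a.items():
        for ub, lb in b.items():
            score = difflib.SequenceMatcher(None, la.lower(), lb.lower()).ratio()
            if score >= threshold:
                out.append((ua, "owl:sameAs", ub))
    return out

print(generate_links(dataset_a, dataset_b))
```

A real link condition would combine several metrics (labels, coordinates, dates) and usually requires human review of borderline matches.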
36. Make your Data Self-Descriptive
• Increase the usefulness of your data and ease data integration
• Aspects of self-descriptiveness
  • Enable clients to retrieve the schema
  • Reuse terms from common vocabularies
  • Publish schema mappings for proprietary terms
  • Provide provenance metadata
  • Provide licensing metadata
  • Provide dataset-level metadata using voiD
  • Refer to additional access methods using voiD
37. Enable Clients to retrieve the Schema
Clients can resolve the URIs that identify vocabulary terms in order to get their RDFS or OWL definitions.
Some data on the Web:
<http://richard.cyganiak.de/foaf.rdf#cygri>
    foaf:name "Richard Cyganiak" ;
    rdf:type <http://xmlns.com/foaf/0.1/Person> .
Resolving the unknown term http://xmlns.com/foaf/0.1/Person yields its RDFS or OWL definition:
<http://xmlns.com/foaf/0.1/Person>
    rdf:type owl:Class ;
    rdfs:label "Person" ;
    rdfs:subClassOf <http://xmlns.com/foaf/0.1/Agent> ;
    rdfs:subClassOf <http://xmlns.com/wordnet/1.6/Agent> .
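Once the definition is retrieved, a client can act on it, for example by walking up rdfs:subClassOf. A Python sketch with a toy lookup table mirroring the slide's FOAF example (prefixed names instead of full URIs, and retrieval replaced by a dictionary):

```python
# Assumed stand-in for resolved vocabulary definitions.
definitions = {
    "foaf:Person": {
        "rdf:type": "owl:Class",
        "rdfs:label": "Person",
        "rdfs:subClassOf": ["foaf:Agent", "wordnet:Agent"],
    },
    "foaf:Agent": {"rdf:type": "owl:Class", "rdfs:label": "Agent"},
}

def superclasses(term):
    """All superclasses reachable via rdfs:subClassOf (transitively)."""
    seen, todo = set(), [term]
    while todo:
        t = todo.pop()
        for sup in definitions.get(t, {}).get("rdfs:subClassOf", []):
            if sup not in seen:
                seen.add(sup)
                todo.append(sup)
    return seen

print(sorted(superclasses("foaf:Person")))
```

Knowing that every foaf:Person is also a foaf:Agent lets a client treat data published with either term uniformly.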
38. Reuse Terms from Common Vocabularies
• Common vocabularies
  • Friend-of-a-Friend (FOAF) for describing people and their social networks
  • SIOC for describing forums and blogs
  • SKOS for representing topic taxonomies
  • Organization Ontology for describing the structure of organizations
  • GoodRelations for describing products and business entities
  • Music Ontology for describing artists, albums, and performances
  • Review Vocabulary for representing reviews
• Common sources of identifiers (URIs) for real-world objects
  • LinkedGeoData and Geonames locations
  • GeneID and UniProt life science identifiers
40. Conclusion
• Linked Data provides a standardized data access interface
• Linked Data allows for the development of a variety of tools to integrate, enhance, and view the data
• The Web of Data is growing rapidly
• There are active deployment communities in different domains
• Web search is evolving into query answering
• Search engines will increasingly rely on structured data from the Web
41. Thanks
Questions?
Email: anja@anjeve.de
Twitter: @anjeve
References
• Tom Heath, Christian Bizer: Linked Data: Evolving the Web into a Global Data Space, http://linkeddatabook.com/
• Christian Bizer, Tom Heath, Tim Berners-Lee: Linked Data – The Story So Far, http://tomheath.com/papers/bizer-heath-berners-lee-ijswis-linked-data.pdf
• Linking Open Data Project Wiki, http://esw.w3.org/topic/SweoIG/TaskForces/CommunityProjects/LinkingOpenData