This presentation introduces the semantic web concepts that enable the publication of linked open data. It also introduces LodLive, a linked open data visualization tool, and discover-me-semantically, an RDF authoring tool.
S. Dixon, C. Mesnage, B. Norton. LinkedBrainz LiveMusicNet
Simon Dixon, Cedric Mesnage and Barry Norton (Centre for Digital Music, Queen Mary University of London).
Music Linked Data Workshop, 12 May 2011, JISC, London.
Combining Social Music and Semantic Web for music-related recommender systems (Alexandre Passant)
This document discusses combining social music data and the semantic web for music recommendation systems. It outlines how social music data from services like Last.fm can be modeled and interconnected using ontologies like FOAF, SIOC and MOAT. This unified semantic social music data can then be used for music recommendations by exploring relationships between artists, genres, social connections and tagged content. Examples of recommendation approaches are provided that leverage different aspects of the semantic social music graph.
Intro to Linked Open Data in Libraries, Archives & Museums (Jon Voss)
This document discusses a presentation on Linked Open Data in libraries, archives, and museums. The presentation introduces Linked Open Data and how it is being used in cultural heritage institutions. It discusses representing data as graphs using triples and RDF, important vocabularies and ontologies, and following Tim Berners-Lee's principles of Linked Data. The presentation also covers legal and licensing considerations for publishing open cultural data on the web.
ALIAOnline Practical Linked (Open) Data for Libraries, Archives & Museums (Jon Voss)
This document discusses practical applications of Linked Open Data (LOD) for libraries, archives, and museums. It describes how LOD allows these institutions to publish structured data on the web in ways that are interoperable and can be connected to other open datasets. Examples are given of how LOD is being used by various institutions to share metadata, images, and other cultural heritage assets on the web in open, machine-readable formats. The presenter argues that LOD represents a new paradigm that these cultural organizations should embrace to make their collections more accessible and useful on the web.
The most exciting development in PR (and marketing) since the Cluetrain.
The presentation introduces and explains the Semantic Web (aka Web 3.0) and identifies why this is of critical importance, now, to the influence disciplines.
It concludes by outlining two Semantic Web ontologies required of the PR industry in its contribution to the growth and usefulness of Linked Data and calls for collaborative support in their development.
Presented to members of the CIPR Social Media panel and other geeky types, London, 21st April 2010.
Generous Interfaces - rich websites for digital collections (Mitchell Whitelaw)
The document discusses principles for designing generous interfaces for digital cultural collections that go beyond traditional search-based interfaces. Generous interfaces provide context about the collection through visual samples before any search is conducted. They allow exploration of the collection through linked facets, timelines, and relationships within the collection. Examples discussed include interfaces using histograms, timelines, linked artists, and views that combine macro and micro levels of the collection.
FOAF (Friend of a Friend) is the most used ontology in the history of the universe. The document discusses the origins and rise of FOAF, which started as the RDFWebRing in 2000 to describe personal profiles and connections between individuals on the semantic web. It became widely used through applications like LiveJournal and Tribe in the early 2000s. The simple concept of describing people and their relationships enabled FOAF to spread organically and become very active despite starting as a side project.
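The core idea that let FOAF spread, describing a person and whom they know in a handful of triples, can be sketched in Turtle (the names and address below are hypothetical):

```turtle
@prefix foaf: <http://xmlns.com/foaf/0.1/> .

# A hypothetical personal profile using core FOAF terms
<#me> a foaf:Person ;
    foaf:name "Alice Example" ;
    foaf:mbox <mailto:alice@example.org> ;
    foaf:knows [ a foaf:Person ; foaf:name "Bob Example" ] .
```

Because `foaf:knows` links one profile to another, crawling these documents yields a social graph without any central service.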
Unlocking doors: recent initiatives in open and linked data at National Libra... (Gill Hamilton)
Presentation given on 21 Sept 2012 at Cataloguing and Index Group (Scotland) seminar on "Opening Library Linked Data to National Heritage: Perspectives on International Practice" http://www.slainte.org.uk/events/EvntShow.cfm?uEventID=2999
The presentation discusses linked data and its potential impact on libraries. Linked data uses URIs and HTTP to identify things on the web and link related resources. It enables libraries to publish structured data on the web and connect their resources to other data sources. While challenging, linked data offers a way for libraries to share data on the web of data and transition from cataloging to "catalinking", connecting their resources on the semantic web.
The document discusses using linked open data and linked data principles for libraries. It covers key concepts like URIs, RDF triples, ontologies and vocabularies. It then outlines options for libraries to both consume and publish linked data, such as enriching existing catalog data by linking to external sources, creating new information aggregates, and publishing library holdings and metadata as linked open data. Challenges include a lack of common identifiers, FRBRization of existing data, and the need for content curation and new technical systems to fully realize the benefits of linked open data for libraries.
This document discusses linking open data with Drupal. It begins with an introduction to open data and the semantic web. It explains how to transform open data into linked data using ontologies and semantic metadata. Several Drupal modules are presented for importing, publishing, and querying linked data. The document concludes by proposing a hackathon where participants could consume, publish, and build applications with linked open government data and the Drupal framework.
From the Semantic Web to the Web of Data: ten years of linking up (Davide Palmisano)
This document discusses the concepts and technologies behind the Semantic Web. It describes how RDF, RDF Schema, and OWL allow structured data and relationships to be represented and shared across the web. It also discusses tools for working with semantic data in Java, such as Jena, Sesame, and Any23 for extracting and working with RDF. The document provides examples of representing data and relationships in RDF and querying semantic data with SPARQL.
How to Build Linked Data Sites with Drupal 7 and RDFa (scorlosquet)
Slides of the tutorial Stéphane Corlosquet, Lin Clark and Alexandre Passant presented at SemTech 2010 in San Francisco: http://semtech2010.semanticuniverse.com/sessionPop.cfm?confid=42&proposalid=2889
This chapter discusses metadata and ontologies for digitally documenting cultural heritage. It introduces XML, RDF, Dublin Core, and the Semantic Web as standards for representing metadata. It also discusses OWL for defining ontologies and CIDOC-CRM as an ontology for cultural heritage documentation. The chapter aims to explain how these standards help achieve interoperability when sharing digital cultural heritage information on the internet.
This document discusses the Digital Public Library of America (DPLA) and linked library data. It begins by asking questions about what the DPLA is, where its materials and metadata are coming from, and what problems it may encounter. It then discusses that libraries have metadata in many forms beyond catalogs and that standards need to account for computers' abilities while allowing flexibility. Unique identifiers, controlled vocabularies, and machine-readable data are important. The document proposes several ways to connect library data, such as metadata standards or metasearch, and discusses issues with each. It introduces linked data using URIs, RDF triples, and vocabularies as a way to integrate data while allowing different implementations.
WTF is the Semantic Web and Linked Data (Juan Sequeda)
This document provides an overview of the Semantic Web and Linked Data. It begins by explaining some of the limitations of the current web, which treats all content as unstructured documents rather than structured data. It then introduces the Semantic Web and its data model, RDF, which allows publishing structured data on the web in a standardized way using graph-based representations. This enables linking different data sources on the web, addressing the problem of data silos. The document provides examples of representing bibliographic data about books in RDF and linking it to other datasets, demonstrating how the Semantic Web enables integrating and finding related information on the web.
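The kind of bibliographic example described above can be sketched in Turtle (the book and person URIs are hypothetical; the DBpedia link is purely illustrative):

```turtle
@prefix dc:  <http://purl.org/dc/terms/> .
@prefix owl: <http://www.w3.org/2002/07/owl#> .

# A local catalogue record, linked out to an external dataset
<http://example.org/book/moby-dick>
    dc:title   "Moby-Dick" ;
    dc:creator <http://example.org/person/melville> ;
    owl:sameAs <http://dbpedia.org/resource/Moby-Dick> .
```

The `owl:sameAs` link is what breaks the data silo: a consumer that dereferences the DBpedia URI can pull in facts the local record never stated.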
My Linked Data tutorial, presented at SemTech 2012.
http://semtechbizsf2012.semanticweb.com/sessionPop.cfm?confid=65&proposalid=4724
From the Feb 19, 2014 NISO Virtual Conference: The Semantic Web Coming of Age: Technologies and Implementations
Kevin Ford, Semantic Web Applications in Libraries: The Road to BIBFRAME
This document introduces linked data and discusses how publishing data as linked RDF triples on the web allows for a global linked database. It explains that linked data uses HTTP URIs to identify things and links data from different sources to be queried using SPARQL. Publishing linked data provides benefits like being able to integrate and discover related data on the web. Tools are available to convert existing data or publish new data as linked open data.
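A SPARQL query over linked data of this kind might look as follows (the endpoint data and the person URI are assumed; `dc:` is Dublin Core terms):

```sparql
PREFIX dc: <http://purl.org/dc/terms/>

# Find every book by a given (hypothetical) author, with its title
SELECT ?book ?title
WHERE {
  ?book dc:creator <http://example.org/person/melville> ;
        dc:title   ?title .
}
```

Because the pattern matches graph structure rather than a fixed schema, the same query works unchanged over any dataset that uses these vocabulary terms.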
This document provides an overview of a Linked Data tutorial presented on March 6, 2009. The tutorial covered topics such as the motivation for Linked Open Data, relevant technologies like URIs, RDF, and SPARQL, and principles for publishing and interlinking data on the web in a way that is accessible to both humans and machines. The goal of Linked Data is to open up data silos and make public data available on the web in a standardized format.
This document discusses getting organizations and websites on the Linked Data web by following Linked Data principles. It provides an overview of Linked Data and its growth over time. The key Linked Data principles are to publish semantic data using RDF, enable linking between data through URIs, and use real URIs for identifying things. Adopting these principles allows data integration and querying across diverse datasets through standards like SPARQL. The document also discusses challenges in applying Linked Data to existing web content and standards like RDFa that embed semantic metadata directly in web pages.
Talk about Exploring the Semantic Web, and particularly Linked Data, and the Rhizomer approach. Presented August 14th 2012 at the SRI AIC Seminar Series, Menlo Park, CA
Presentation for a workshop about persistent identifiers organized by the Royal Library of The Netherlands and DANS. Highlights the non-trivial commitments required of all parties involved in persistent identifier systems to actually keep links based on persistent identifiers ... err ... persistent.
This document discusses how archives can use semantic web technologies like linked data to improve access to archival descriptions and resources. It provides background on the semantic web and linked data, and examples of how libraries are already using these approaches. While archival description standards like EAD currently focus on human-readable documents rather than linked data, the presenter argues the standards should evolve to represent information in a more computer-friendly and interoperable way, such as the emerging EAC standard. Overall, the presentation promotes the idea that archives can benefit from adopting semantic web best practices to better connect and expose archival information online.
Semantic Web technologies such as RDF and OWL have become World Wide Web Consortium (W3C) standards for knowledge representation and reasoning. RDF triples about triples, or meta triples, form the basis for a contextualized knowledge graph. They represent the contextual information about individual triples such as the source, the occurring time or place, or the certainty.
However, an efficient RDF representation for such meta-knowledge of triples remains a major limitation of the RDF data model. The existing reification approach allows such meta-knowledge of RDF triples to be expressed in RDF by using four triples per reified triple. While reification is simple and intuitive, this approach does not have a formal foundation and is not commonly used in practice as described in the RDF Primer.
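Standard RDF reification, as described above, spends four triples just to identify one statement; a sketch in Turtle (the `ex:` names are hypothetical):

```turtle
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix ex:  <http://example.org/> .

# Four triples to identify the statement "ex:bob ex:worksFor ex:acme":
ex:stmt1 rdf:type      rdf:Statement ;
         rdf:subject   ex:bob ;
         rdf:predicate ex:worksFor ;
         rdf:object    ex:acme .

# Only then can metadata be attached to the statement identifier:
ex:stmt1 ex:source ex:hrDatabase .
```

Note that reification only describes the statement; the triple `ex:bob ex:worksFor ex:acme` is not itself asserted unless it is added separately.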
This dissertation presents the foundations for representing, querying, reasoning over, and traversing contextualized knowledge graphs (CKGs) using Semantic Web technologies.
A triple-based compact representation for CKGs. We propose a principled approach to constructing RDF triples about triples by extending the current RDF data model with a new concept, the singleton property (SP), which serves as a triple identifier. The SP representation adds only two triples per asserted statement to the RDF dataset and can be queried with SPARQL.
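Under the singleton property approach, the same statement needs only the asserted triple plus one triple linking the singleton property to the generic property; context then attaches to the singleton property directly. A sketch in Turtle (the `ex:` names are hypothetical):

```turtle
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix ex:  <http://example.org/> .

# The statement is asserted directly, through a unique singleton property:
ex:bob <http://example.org/worksFor#1> ex:acme .

# One triple ties the singleton property back to the generic property:
<http://example.org/worksFor#1> rdf:singletonPropertyOf ex:worksFor .

# Context now attaches directly to the singleton property:
<http://example.org/worksFor#1> ex:source ex:hrDatabase .
```

Unlike reification, the contextualized triple is actually asserted in the graph, and each singleton property carries its own provenance.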
A formal model-theoretic semantics for CKGs. We formalize the semantics of the singleton property and its relationships with the triple it represents. We extend the current RDF model-theoretic semantics to capture the semantics of the singleton properties and provide the interpretation at three levels: simple, RDF, and RDFS. It provides a single interpretation of the singleton property semantics across applications and systems.
A sound and complete inference mechanism for CKGs. Based on the semantics we propose, we develop a set of inference rules for validating and inferring new triples based on the SP syntax. We also develop different sets of context-based inference rules for provenance, time, and uncertainty.
A graph-based formalism for CKGs. We propose a formal contextualized graph model for the SP representation. We formalize the RDF triples as a mathematical graph by combining the model theory and the graph theory into a hybrid RDF formal semantics. The unified semantics allows the RDF formal semantics to be leveraged in the graph-based algorithms.
The document discusses the history of libraries capturing data from handwritten catalog cards in the 19th century to the development of machine-readable cataloging formats like MARC. It then describes how libraries are now publishing data as linked open data on the web using standards like Schema.org and RDFa, with over 270 million resources available. The talk encourages libraries to stop just copying data and instead start linking to other data sources to fully leverage the potential of the semantic web.
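Publishing catalogue data with Schema.org via RDFa, as described above, can look like this (a hypothetical record; the markup follows RDFa Lite conventions):

```html
<!-- A hypothetical catalogue record marked up with Schema.org via RDFa -->
<div vocab="http://schema.org/" typeof="Book">
  <span property="name">Moby-Dick</span> by
  <span property="author" typeof="Person">
    <span property="name">Herman Melville</span>
  </span>
</div>
```

The same page serves humans and machines: browsers render the text, while RDFa-aware crawlers extract `schema:Book` and `schema:Person` triples from the attributes.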
The document discusses the Semantic Web and linked data. It defines the current web as consisting of documents linked by hyperlinks that are readable by humans but difficult for computers to understand. The Semantic Web aims to publish structured data on the web using common standards like RDF so that data can be linked, queried, and integrated across sources. Key points include:
- The Semantic Web uses RDF to represent data as a graph so that data from different sources can be linked together.
- Linked data follows principles like using URIs to identify things and including links to other related data.
- Query languages like SPARQL allow searching and integrating linked data from multiple sources.
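The graph model behind these points can be sketched with plain Python tuples (a toy illustration of the idea, not a real triple store; the `ex:` identifiers are hypothetical):

```python
# Toy illustration: RDF-style triples as (subject, predicate, object) tuples.
# Merging two sources is just set union, because shared URIs align the graphs.
source_a = {
    ("ex:moby_dick", "dc:title", "Moby-Dick"),
    ("ex:moby_dick", "dc:creator", "ex:melville"),
}
source_b = {
    ("ex:melville", "foaf:name", "Herman Melville"),
}
graph = source_a | source_b  # data integration by union of triples

def query(graph, s=None, p=None, o=None):
    """Basic triple-pattern match; None acts as a SPARQL-style wildcard."""
    return {
        (ts, tp, to) for (ts, tp, to) in graph
        if s in (None, ts) and p in (None, tp) and o in (None, to)
    }

# Who created Moby-Dick, and what is the creator's name?
for _, _, creator in query(graph, s="ex:moby_dick", p="dc:creator"):
    names = query(graph, s=creator, p="foaf:name")
```

The follow-up lookup on `creator` is the crucial step: a fact from one source answers a question raised by another, which is exactly the integration linked data aims for.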
The document discusses a webinar presented by NISO and DCMI on Schema.org and Linked Data. The webinar provides an overview of Schema.org and Linked Data, examines the advantages and challenges of using RDF and Linked Data, looks at Schema.org in more detail, and discusses how Schema.org and Linked Data can be combined. The goals of the webinar are to illustrate the different design choices for identifying entities and describing structured data, integrating vocabularies, and incentives for publishing accurate data, as well as to help guide adoption of Schema.org and Linked Data approaches.
The document discusses the LOCAH Project which aims to expose data from the Archives Hub and Copac as linked open data. It describes creating URIs and an RDF data model for archival descriptions. It also discusses enhancing the data by linking to external vocabularies and creating a prototype visualization using tools like Timemap and Simile. Key challenges mentioned include the complexity of archival data and ensuring sustainability and scalability of the linked data.
This document discusses linking data on the semantic web. It questions when data providers will value links enough to consistently create and maintain them between resources. It also questions how to link data in the absence of persistent identifiers. Specifically, it raises challenges around making persistent links without identifiers that remain constant over time.
Linked data presentation for libraries (COMO), by Robin Fay
The document provides an overview of linked data and libraries. It discusses basic principles of linked data such as reusing and linking data to make it reusable, easy to correct, and potentially useful to others. The document also discusses how linked data fits into the semantic web vision by allowing machines to better understand and utilize data. Finally, it discusses getting started with linked data through terminology, advantages, and modeling library data in linked data formats like RDF.
This presentation gives a brief overview on achievements and challenges of the Data Web and describes different aspects of using the Semantic Data Wiki OntoWiki for Linked Data management.
The document discusses the opportunities and challenges of using Linked Data to connect libraries and their resources on the web. It describes what Linked Data is, how libraries can make their data available on the semantic web by following Linked Data principles, and the benefits this could provide including sending users to library resources and providing a richer experience. However, it also notes challenges in getting libraries to make this change and fully participate in the web of data.
Usage of Linked Data: Introduction and Application Scenarios, by the EUCLID project
This presentation introduces the main principles of Linked Data, the underlying technologies and background standards. It provides basic knowledge for how data can be published over the Web, how it can be queried, and what are the possible use cases and benefits. As an example, we use the development of a music portal (based on the MusicBrainz dataset), which facilitates access to a wide range of information and multimedia resources relating to music.
The document provides an overview of how the LOCAH project is applying Linked Data concepts to expose archival and bibliographic data from the Archives Hub and Copac as Linked Open Data. It describes the process of (1) modeling the data as RDF triples, (2) transforming existing XML data to RDF, (3) enhancing the data by linking to external vocabularies and datasets, (4) loading the RDF into a triplestore, and (5) creating Linked Data views to expose the data on the web. The goal is to publish structured data that can be interconnected across domains to enable new uses by both humans and machines.
Should We Expect a Bang or a Whimper? Will Linked Data Revolutionize Scholar Authoring and Workflow Tools?
Jeff Baer, Senior Director of Product Management, Research Development Services, Proquest
RDF and Open Linked Data, a first approach, by horvadam
This document discusses the potential benefits of libraries publishing their data as linked open data using semantic web technologies. It describes how linked data allows for standardized access to data across the web as a single API. Libraries can make their data more discoverable on the web and searchable by services like Google by publishing it as linked open data. Semantic web technologies like RDF and SPARQL allow for more powerful search capabilities. Several large libraries are already publishing portions of their data as linked open data, including authority files and entire catalogs. The document outlines some semantic web applications libraries could use to enhance discovery and provides examples of vocabularies for describing different types of metadata.
The document discusses how libraries can connect their resources and metadata through linked data and BIBFRAME to make their collections discoverable on the web. It notes that libraries currently have over 300 million resources available through linked data, but more participation is needed to fully realize the potential of linked data and reassert libraries' role as a discoverable source for all materials. The presentation was given by Richard Wallis of OCLC on guiding users to library resources through metadata and linked data standards.
From Open Linked Data towards an Ecosystem of Interlinked Knowledge, by Sören Auer
This document discusses the development of linked open data and its potential to create an ecosystem of interlinked knowledge. It outlines achievements in extending the web with structured data and the growth of an open research community. However, it also identifies challenges regarding coherence, quality, performance and usability that must be addressed for linked data to reach its full potential as a global platform for knowledge integration. The document proposes that addressing these issues could ultimately lead to an ecosystem of interlinked knowledge on the semantic web.
The document discusses several ontologies for the social web including FOAF, SIOC, and SKOS. FOAF describes personal information and social networks. SIOC provides methods for interconnecting online communities like blogs and forums. It aims to address interoperability issues on the social web. SIOC has been adopted in over 400 sites and has the potential to become a foundational vocabulary for the semantic web.
This is an informal overview of Linked Data and the usage made of it for the project http://res.space (presented on August 11th 2016 during a team meeting)
The document discusses making web content machine readable through linked open data and APIs in order to increase discoverability. It provides examples of how metadata from documents and databases can be extracted and linked together in semantic graphs to allow for complex queries across multiple sources. By making content and metadata accessible via APIs, cultural institutions like libraries, archives and museums are able to publish their collections as linked open data and have their resources incorporated and linked to by other semantic web applications and databases. This improves discovery of materials while also providing opportunities for new types of applications to be built by developers using the data.
Linked data demystified: Practical efforts to transform CONTENTdm metadata int..., by Cory Lampert
This document outlines a presentation about transforming metadata from a CONTENTdm digital collection into linked data. It discusses the concepts of linked data, including defining linked data, linked data principles, technologies and standards. It then explains how these concepts can be applied to digital collection records, including anticipated challenges working with CONTENTdm. The document describes a linked data project at UNLV Libraries to transform collection records into linked data and publish it on the linked data cloud. It provides tips for creating metadata that is more suitable for linked data.
Talk delivered at YOW! Developer Conferences in Melbourne, Brisbane and Sydney Australia on 1-9 December 2016.
Abstract: Governments collect a lot of data. Data on air quality, toxic chemicals, laws and regulations, public health, and the census are intended to be widely distributed. Some data is not for public consumption. This talk focuses on open government data — the information that is meant to be made available for benefit of policy makers, researchers, scientists, industry, community organisers, journalists and members of civil society.
We’ll cover the evolution of Linked Data, which is now being used by Google, Apple, IBM Watson, federal governments worldwide, non-profits including CSIRO and OpenPHACTS, and thousands of others worldwide.
Next we’ll delve into the evolution of the U.S. Environmental Protection Agency’s Open Data service that we implemented using Linked Data and an Open Source Data Platform. Highlights include how we connected to hundreds of billions of open data facts in the world’s largest, open chemical molecules database PubChem and DBpedia.
WHO SHOULD ATTEND
Data scientists, software engineers, data analysts, DBAs, technical leaders and anyone interested in utilising linked data and open government data.
Information Extraction and Linked Data Cloud, by Dhaval Thakker
The document discusses Press Association's semantic technology project which aims to generate a knowledge base using information extraction and the Linked Data Cloud. It outlines Press Association's operations and workflow, and how semantic technologies can be used to develop taxonomies, annotate images, and extract entities from captions into an ontology-based knowledge base. The knowledge base can then be populated and interlinked with external datasets from the Linked Data Cloud like DBpedia to provide a comprehensive, semantically-structured source of information.
About the Webinar
The library and cultural institution communities have generally accepted the vision of moving to a Linked Data environment that will align and integrate their resources with those of the greater Semantic Web. But moving from vision to implementation is not easy or well-understood. A number of institutions have begun the needed infrastructure and tools development with pilot projects to provide structured data in support of discovery and navigation services for their collections and resources.
Join NISO for this webinar where speakers will highlight actual Linked Data projects within their institutions—from envisioning the model to implementation and lessons learned—and present their thoughts on how linked data benefits research, scholarly communications, and publishing.
Speakers:
Jon Voss - Strategic Partnerships Director, We Are What We Do
LODLAM + Historypin: A Collaborative Global Community
Matt Miller - Front End Developer, NYPL Labs at the New York Public Library
The Linked Jazz Project: Revealing the Relationships of the Jazz Community
Cory Lampert - Head, Digital Collections , UNLV University Libraries
Silvia Southwick - Digital Collections Metadata Librarian, UNLV University Libraries
Linked Data Demystified: The UNLV Linked Data Project
FAIR Data Prototype - Interoperability and FAIRness through a novel combinati..., by Mark Wilkinson
This slide deck accompanies the manuscript "Interoperability and FAIRness through a novel combination of Web technologies", submitted to PeerJ Computer Science: https://doi.org/10.7287/peerj.preprints.2522v1
It describes the output of the "Skunkworks" FAIR implementation group, who were tasked with building a prototype infrastructure that would fulfill the FAIR Principles for scholarly data publishing. We show how a novel combination of the Linked Data Platform, RDF Mapping Language (RML) and Triple Pattern Fragments (TPF) can be combined to create a scholarly publishing infrastructure that is markedly interoperable, at both the metadata and the data level.
This slide deck (or something close) will be presented at the Dutch Techcenter for Life Sciences Partners Workshop, November 4, 2016.
1. Chasing Serendipity with Linked Open Data
Rob Stewart, Jamie Forth & Diana Bental
Members of the SerenA development team
27th June, 2012
Chasing Serendipity with Linked Open Data Rob Stewart, Jamie Forth & Diana Bental 1 / 32
2. SerenA
Project vision
“The vision of the SerenA project is to transform
research processes by proactively creating surprising
connection opportunities. We will deliver novel
technologies, methods and evaluation techniques for
supporting serendipitous interactions in the research
arena.”
3. Introductions
Design team
Human factors team
Dev team
4. Aims of the workshop
Introduce the basic concepts of the Semantic Web.
Creating a semantic user profile.
Introduce and explore Linked Open Data.
6. SerenA
A web of connectivity
Where might SerenA find information about
Places
Universities
Researchers
Research domains
. . . “Things”
To connect people with pertinent and interesting ideas
. . . and to each other through pathways of related areas
The web!
The world's largest distributed information resource
Evolving to become more meaningfully connected
7. Evolution of the WWW
8. The World Wide Web
Legacy web
<html>
<body>
<h1>About me</h1>
<p>This is a page describing me.</p>
<h2>My Interests</h2>
<ul>
<li>Football</li>
<li>Computer science</li>
<li>Jazz funk</li>
</ul>
</body>
</html>
A distributed network of connected documents
<html> markup is jargon aimed at web-browser rendering
× Not machine readable
× . . . Lacks deeper meaning
9. SerenA
WWW forecast
10. The Semantic Web
12. The Semantic Web
Principles
Building blocks are very simple!
Describing resources with Resource Description Framework
Subject the resource being described
Property denotes traits or aspects of a resource
Object the associated resource
e.g.
<Joe Bloggs> age "40"
<Joe Bloggs> friend <Harry Hill>
<Harry Hill> location <Edinburgh>
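A minimal sketch of the triples above as plain (subject, property, object) tuples in Python. This is an illustration only, not part of SerenA; the helper `objects` is hypothetical:

```python
# The slide's triples as (subject, property, object) tuples.
triples = [
    ("Joe Bloggs", "age", "40"),
    ("Joe Bloggs", "friend", "Harry Hill"),
    ("Harry Hill", "location", "Edinburgh"),
]

def objects(subject, prop):
    """Return every object linked to `subject` via `prop`."""
    return [o for s, p, o in triples if s == subject and p == prop]

# Follow the graph: Joe's friends, then where each friend is located.
friends = objects("Joe Bloggs", "friend")
locations = [objects(f, "location") for f in friends]
print(friends, locations)  # ['Harry Hill'] [['Edinburgh']]
```

Chaining lookups like this is exactly what makes triples a graph: the object of one statement becomes the subject of the next.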
13. The Semantic Web
Terminology
14. The Semantic Web
Principles
Uniform Resource Identifiers (URIs) represent things
http://www.serena.ac.uk/users/robstewart
The same thing can have many URIs.
The same thing can have different names in different data
sources.
http://www.macs.hw.ac.uk/~rs46/rob
https://www.twitter.com/!#/robstewartUK
http://www.serena.ac.uk/users/robstewart
15. The Semantic Web
Terminology
16. The Semantic Web
Semantic interlinking
RDF on Rob’s home page
http://www.macs.hw.ac.uk/~rs46/rob
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
<http://www.macs.hw.ac.uk/~rs46/rob> a foaf:Person;
foaf:depiction <http://www.macs.hw.ac.uk/~rs46/images/headshot.jpg>;
foaf:name "Rob Stewart";
foaf:topic_interest <http://dbpedia.org/resource/Semantic_web> .
RDF available from Twitter
https://www.twitter.com/!#/robstewartUK
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
<https://www.twitter.com/!#/robstewartUK>
rdfs:comment "Tweets about GNU and FOSS, but mostly Real Ale";
foaf:based_near <http://dbpedia.org/resource/Edinburgh> .
17. The Semantic Web
Semantic inference
We can link these together using owl:sameAs.
http://www.macs.hw.ac.uk/~rs46/rob
owl:sameAs
https://www.twitter.com/!#/robstewartUK
owl:sameAs
http://www.serena.ac.uk/users/robstewart
Some basic reasoning
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
<http://www.serena.ac.uk/users/robstewart> a foaf:Person;
rdfs:comment "Tweets about GNU and FOSS, but mostly Real Ale";
foaf:based_near <http://dbpedia.org/resource/Edinburgh>;
foaf:depiction <http://www.macs.hw.ac.uk/~rs46/images/headshot.jpg>;
foaf:name "Rob Stewart";
foaf:topic_interest <http://dbpedia.org/resource/Semantic_web> .
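The inference above can be sketched as a toy "smushing" step: pick one canonical URI per owl:sameAs set and rewrite every triple onto it. A real reasoner does far more; this Python fragment (an assumption-laden illustration, using only the three URIs from the slide) shows just the merge:

```python
# Toy owl:sameAs merge: map each equivalent URI to one canonical identifier.
SAME_AS = {
    "http://www.macs.hw.ac.uk/~rs46/rob":
        "http://www.serena.ac.uk/users/robstewart",
    "https://www.twitter.com/!#/robstewartUK":
        "http://www.serena.ac.uk/users/robstewart",
}

def canon(uri):
    """Rewrite a URI to its canonical form, if one is known."""
    return SAME_AS.get(uri, uri)

# Triples gathered from the two sources on the previous slide.
triples = [
    ("http://www.macs.hw.ac.uk/~rs46/rob", "foaf:name", "Rob Stewart"),
    ("https://www.twitter.com/!#/robstewartUK", "foaf:based_near",
     "http://dbpedia.org/resource/Edinburgh"),
]

# After smushing, both facts attach to the canonical SerenA URI.
merged = {(canon(s), p, canon(o)) for s, p, o in triples}
```

Every fact now hangs off http://www.serena.ac.uk/users/robstewart, which is how the combined description on this slide arises.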
19. RDF Authoring
Discover me semantically
A web tool that enables RDF authorship
Encourages the use of URIs with auto-completion
Provides the ability to
Download your RDF to file
Visualize your RDF connected to linked open data
20. RDF Authoring
Discover me semantically
21. RDF Authoring
Discover me semantically
22. Discover me Semantically
Go play
http://serena.macs.hw.ac.uk/serena/discover-me-semantically/
23. Linked Open Data
What is it?
Linked Open Data is real data
published according to Semantic Web
principles.
24. Linked Open Data
What is it?
Europeana: example LOD service provider.
An EU-funded project to produce a multilingual online
collection of millions of digitised items from European
museums, libraries, archives and multi-media collections.
. . . video by Europeana http://vimeo.com/36752317
25. Linked Open Data
What is it?
[The Linking Open Data cloud diagram: hundreds of interlinked datasets with DBpedia at the centre, including MusicBrainz, GeoNames, Freebase, Europeana, data.gov.uk and UniProt, grouped into domains such as media, geographic data, government, publications, life sciences, cross-domain data and user-generated content.]
As of September 2011
Chasing Serendipity with Linked Open Data Rob Stewart, Jamie Forth & Diana Bental 23 / 32
26. DBpedia
What is it?
Extracts structured information from Wikipedia
Links Wikipedia to other semantic datasets on the web
Allows Wikipedia information to be used in new and interesting ways
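To make the DBpedia idea concrete, here is a hedged Python sketch that builds a request to DBpedia's public SPARQL endpoint and parses a result. The endpoint URL is the well-known one, but the JSON response shown is canned so the snippet runs offline; treat it as a sketch, not a definitive client.

```python
import json
import urllib.parse

# DBpedia exposes a public SPARQL endpoint (http://dbpedia.org/sparql).
# This sketch only builds the GET URL and parses a canned JSON response,
# so it runs without network access.
ENDPOINT = "http://dbpedia.org/sparql"

def build_query_url(sparql):
    """Encode a SPARQL query as a GET URL asking for JSON results."""
    params = urllib.parse.urlencode({
        "query": sparql,
        "format": "application/sparql-results+json",
    })
    return ENDPOINT + "?" + params

query = """SELECT ?abstract WHERE {
  <http://dbpedia.org/resource/Dundee>
    <http://dbpedia.org/ontology/abstract> ?abstract .
  FILTER (lang(?abstract) = "en")
}"""

url = build_query_url(query)

# Shape of a SPARQL JSON result, illustrated with a canned response.
canned = json.loads(
    '{"results": {"bindings": '
    '[{"abstract": {"value": "Dundee is a city in Scotland."}}]}}'
)
abstracts = [b["abstract"]["value"] for b in canned["results"]["bindings"]]
```

Fetching `url` with an HTTP client would return live bindings in the same shape as the canned example.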
27. Linked Open Data
How do I find it?
The Data Hub is a community-maintained catalogue of datasets,
managed by the Open Knowledge Foundation.
http://thedatahub.org/
Of these, the 327 datasets that currently make up the LOD cloud are
listed here:
http://thedatahub.org/group/lodcloud
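The Data Hub runs on CKAN, so the same catalogue can also be queried programmatically. The sketch below assumes CKAN's action API path (`/api/3/action/group_package_show`) — check the current API documentation before relying on it — and parses a canned response so it runs offline.

```python
import json
import urllib.parse

# The Data Hub is a CKAN instance; CKAN's action API (assumed here at
# /api/3/action/group_package_show) returns the datasets in a group such
# as "lodcloud". The response below is canned so the sketch runs offline.
def group_packages_url(group):
    params = urllib.parse.urlencode({"id": group})
    return "http://thedatahub.org/api/3/action/group_package_show?" + params

url = group_packages_url("lodcloud")

canned = json.loads(
    '{"success": true, "result": '
    '[{"name": "dbpedia"}, {"name": "musicbrainz"}]}'
)
dataset_names = [pkg["name"] for pkg in canned["result"]]
```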
28. Linked Open Data
5* linked data
* Information available on the web under an open license
** Information as structured data (e.g. spreadsheets)
*** Non-proprietary formats (e.g. CSV instead of Excel)
**** URI identification, allowing other datasets to link to
your data
***** Data linked to other datasets to provide context
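The jump from three stars to four and five can be made concrete: mint an HTTP URI for each record, then link out to another dataset's URIs. A minimal Python sketch — the `example.org` namespaces are hypothetical, chosen only for illustration:

```python
import csv
import io

# Hypothetical namespaces, for illustration only.
BASE = "http://example.org/id/"       # where we mint our own URIs (4 stars)
VOCAB = "http://example.org/vocab/"   # a made-up property namespace

def row_to_ntriples(row):
    """Turn one CSV row into N-Triples: a URI-identified subject
    (4 stars) plus a link to an external dataset (5 stars)."""
    subject = f"<{BASE}{row['id']}>"
    triples = [f'{subject} <{VOCAB}name> "{row["name"]}" .']
    if row.get("dbpedia"):
        # Linking to someone else's URIs is what earns the fifth star.
        triples.append(f"{subject} <{VOCAB}sameAs> <{row['dbpedia']}> .")
    return triples

data = io.StringIO(
    "id,name,dbpedia\n"
    "42,Dundee,http://dbpedia.org/resource/Dundee\n"
)
triples = row_to_ntriples(next(csv.DictReader(data)))
```

The same CSV published as-is would be three-star data; the two emitted triples show what the fourth and fifth stars add.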
29. The Semantic Web
Terminology
31. Linked Open Data in the Wild
Linked Geo Data
OpenStreetMap (OSM): The free worldwide map.
http://www.openstreetmap.org/
Linked Open Data: OSM data as RDF – the canonical
reference for all things spatial.
More than 1 billion resources
100 million "ways" from OSM
20 billion triples
http://browser.linkedgeodata.org/
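LinkedGeoData works by minting a dereferenceable URI for every OSM element. The node/way URI pattern below follows LinkedGeoData's published examples, but treat it as an assumption and verify it against the live service:

```python
# Sketch: LinkedGeoData assigns each OpenStreetMap element a URI. The
# pattern below (http://linkedgeodata.org/triplify/node<id>) follows
# published LinkedGeoData examples; treat it as an assumption.
def osm_node_uri(node_id):
    return f"http://linkedgeodata.org/triplify/node{node_id}"

def osm_way_uri(way_id):
    return f"http://linkedgeodata.org/triplify/way{way_id}"

# Dereferencing such a URI with an RDF Accept header would return the
# element's triples -- that is what turns OSM into Linked Data.
uri = osm_node_uri(264695865)
```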
32. Linked Open Data
LOD makes it possible to draw connections across heterogeneous
data sources.
It adds value to the data.
It makes data more accessible and meaningful.
33. Linked Open Data
Visualizations
34. LOD Visualizations
LodLive
Visual navigation of RDF resources
Users can easily navigate between datasets
Can interlink LOD resources with RDF files on the web
35. LOD Visualizations
LodLive
e.g.
http://dbpedia.org/resource/Dundee
http://goo.gl/ewxa3