The document discusses library linked data and the British Library Data Model. It provides an overview of the classes, properties, and relationships that make up the British Library's conceptual data model for representing bibliographic metadata and events as linked data. The model links bibliographic resources to related concepts like publication events, topics, agents (authors), and places.
Richard Wallis is a technology evangelist who works on semantic technologies and linked data. He gave a presentation in June 2012 about cultural linked data and how libraries, archives, and museums can help provide the backbone of information on the Web of Data, similar to how they have historically served as the backbone of information for centuries in other formats.
Status Quo and (current) Limitations of Library Linked Data (Daniel Vila Suero)
Talk at the Semantic Web in Libraries Conference 2012 (SWIB2012), Cologne, 28/11/2012, during the session "TOWARDS AN INTERNATIONAL LOD LIBRARY ECOLOGY".
(http://swib.org/swib12/programme.php)
This document summarizes Maphub and Annotorious, which are tools for annotating historical maps and images. Maphub is an online app that allows users to explore and annotate digitized historical maps. It has features like geo-referencing, map overlays, textual annotations with semantic tagging, multilingual search, and integration with the W3C Open Annotation API. Annotorious is a JavaScript library that adds annotation capabilities to existing web pages. It allows images to be marked as annotatable and supports features like bounding boxes and polygons. The document encourages users to get involved with these open source projects and discusses how they fit into the broader context of linked data and semantic annotation tools.
This document contains an ontology defined as a set of RDF triples for an intelligent agent that helps a lost traveler. The ontology defines properties like "Captures" and "EstimatesValue" that link sensors and rules to perceptions and state. It also defines concepts like "LocalSearch" and "Actions" that the agent can learn, execute and use to navigate its environment and help the traveler.
The document discusses BioPAX, a standard language for representing biological pathway data. It notes that BioPAX aims to enable integration, exchange, visualization and analysis of pathway data by formalizing terminology as an OWL ontology and instantiating data that validates against the ontology. However, it states that BioPAX data is not yet ready for the semantic web due to issues like duplicity in database terminology and lack of resolvable identifiers in cross-references. It suggests addressing these issues by normalizing cross-references through identifiers.org and maintaining the type of relationship in cross-references through more specific predicate properties.
The document describes how to use SPARQL to query Linked Open Data from the LODAC Museum dataset to retrieve information about art spots in Yokohama. It provides a SPARQL query that selects the URI, title, latitude, longitude, postal code, address and access information for organizations that are within the specified bounding box coordinates. The query utilizes prefixes to define namespaces and joins data from multiple sources using properties like dc:references.
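The bounding-box pattern that summary describes can be sketched as follows. The prefixes and property names below (dc:title, geo:lat, geo:long) are illustrative assumptions rather than the exact LODAC Museum vocabulary, and the pure-Python filter mirrors what the SPARQL FILTER clause expresses:

```python
# Sketch of a bounding-box query like the one described above.
# Property names and coordinates are illustrative, not the real dataset's.

QUERY = """
PREFIX dc:  <http://purl.org/dc/elements/1.1/>
PREFIX geo: <http://www.w3.org/2003/01/geo/wgs84_pos#>
SELECT ?spot ?title ?lat ?long WHERE {
  ?spot dc:title ?title ;
        geo:lat ?lat ;
        geo:long ?long .
  FILTER (?lat > 35.40 && ?lat < 35.50 &&
          ?long > 139.5 && ?long < 139.7)
}
"""

def in_bbox(lat, long, south=35.40, north=35.50, west=139.5, east=139.7):
    """The same test the FILTER clause expresses, in plain Python."""
    return south < lat < north and west < long < east

# Toy data standing in for endpoint results (coordinates approximate).
spots = [("Yokohama Museum of Art", 35.458, 139.630),
         ("Tokyo National Museum", 35.719, 139.776)]
hits = [name for name, lat, long in spots if in_bbox(lat, long)]
print(hits)  # ['Yokohama Museum of Art']
```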
This document discusses how archives can use semantic web technologies like linked data to improve access to archival descriptions and resources. It provides background on the semantic web and linked data, and examples of how libraries are already using these approaches. While archival description standards like EAD currently focus on human-readable documents rather than linked data, the presenter argues the standards should evolve to represent information in a more computer-friendly and interoperable way, such as the emerging EAC standard. Overall, the presentation promotes the idea that archives can benefit from adopting semantic web best practices to better connect and expose archival information online.
2011 4IZ440 Semantic Web – RDF, SPARQL, and software APIs (Josef Petrák)
The document discusses the Semantic Web and RDF data formats. It provides an overview of RDF syntaxes like RDF/XML, N3, N-Triples, RDF/JSON, and RDFa. It also discusses software APIs for working with RDF data in languages like Java, PHP, and Ruby. The document outlines handling RDF data using statement-centric, resource-centric, and ontology-centric models, as well as named graphs. It provides examples of reading RDF data from files and querying RDF data using SPARQL.
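The statement-centric model mentioned above can be illustrated in a few lines of plain Python. The ex:, dc:, and foaf: prefixes are hypothetical shorthand, and real APIs such as Jena add parsing, typed literals, and inference on top of this basic idea:

```python
# Minimal sketch of the statement-centric model: the graph is just a
# set of (subject, predicate, object) statements.

class Graph:
    def __init__(self):
        self.triples = set()

    def add(self, s, p, o):
        self.triples.add((s, p, o))

    def match(self, s=None, p=None, o=None):
        """Yield triples matching a pattern; None is a wildcard."""
        for t in self.triples:
            if all(q is None or q == v for q, v in zip((s, p, o), t)):
                yield t

g = Graph()
g.add("ex:book1", "dc:title", "Weaving the Web")
g.add("ex:book1", "dc:creator", "ex:berners-lee")
g.add("ex:berners-lee", "foaf:name", "Tim Berners-Lee")

# Everything known about ex:book1:
for s, p, o in sorted(g.match(s="ex:book1")):
    print(p, o)
```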
This document discusses BIBFRAME, a new bibliographic framework being developed as a replacement for the MARC cataloging standard. It provides an overview of BIBFRAME, including its goals of utilizing linked data and resolving issues with MARC. The document also examines the BIBFRAME model and vocabulary, experiments being conducted with it, and questions around its future adoption.
LITA 2010: The Linked Library Data Cloud: it's time to stop think and start l... (Ross Singer)
The document discusses the need for libraries to link their data using semantic web technologies in order to overcome the problem of data existing in disconnected "silos". It describes how bibliographic data is currently created and stored separately by different libraries and organizations without being connected. The document advocates adopting the principles of linked data by assigning URIs to entities and objects, and encoding relationships between them using RDF to link bibliographic data across library catalogs and domains. This will allow libraries to leverage external sources of metadata and facilitate novel searches and discoveries across previously disconnected data silos.
The document contains log information from web server requests and access logs. It includes details like IP addresses, dates, request URLs, response codes, user agents, and referer URLs from multiple requests. The logs show information about requests for book information and search redirects.
GDG Meets U event - Big data & Wikidata - no lies codelab (CAMELIA BOBAN)
This document discusses using SPARQL to query RDF data from DBpedia. It provides an overview of key concepts like RDF triples, SPARQL, and the Apache Jena framework. It also includes a sample SPARQL query to retrieve cities in Abruzzo, Italy with a population over 50,000. Resources and prefixes for working with DBpedia, Wikidata, and other linked data sets are listed.
Web spam involves intentional manipulation of web pages to influence search engine rankings. Some common techniques used by spammers include term spamming by repetitively including certain keywords, and link spamming by creating link farms or exchanges to boost page rank. Detecting web spam helps provide more relevant search results for users and avoids distorting the true importance of pages. Continued research explores improved methods for identifying spamming techniques and their structures.
This document provides an introduction to Resource Description Framework (RDF) and RDF XML. It defines key RDF concepts like URI references, qualified names, basic RDF triples, RDF graphs, and RDF Schema. It also explains how to represent RDF models and descriptions in RDF XML format using elements like rdf:RDF, rdf:Description, and properties. Examples are provided to illustrate RDF triples and RDF XML representations.
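The rdf:Description pattern that introduction covers can be seen in miniature below. This is a deliberately naive reading of the simplest RDF/XML form (literal-valued property elements only), using the standard library rather than a real RDF parser, and the example URIs are made up:

```python
# Sketch: extracting triples from the basic rdf:Description pattern.
# Handles only literal-valued property elements; real parsers cover
# far more of the RDF/XML syntax.
import xml.etree.ElementTree as ET

RDF = "http://www.w3.org/1999/02/22-rdf-syntax-ns#"
DOC = """<?xml version="1.0"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:dc="http://purl.org/dc/elements/1.1/">
  <rdf:Description rdf:about="http://example.org/book1">
    <dc:title>RDF Primer</dc:title>
    <dc:creator>W3C</dc:creator>
  </rdf:Description>
</rdf:RDF>"""

triples = []
root = ET.fromstring(DOC)
for desc in root.findall(f"{{{RDF}}}Description"):
    subject = desc.get(f"{{{RDF}}}about")   # rdf:about names the subject
    for prop in desc:
        # prop.tag is '{namespace}local'; that pair names the predicate
        triples.append((subject, prop.tag, prop.text))

print(triples)
```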
BetweenCreation is an art social network and marketplace that aims to address the limited diffusion of artworks and lack of knowledge about art movements in different places. It allows artists to publish their latest works and events, sell their pieces with low commissions, and connect with art lovers worldwide to expand their audiences. The platform also helps art lovers discover new artists and events near them or around the world through an integrated social network and mobile apps.
This document discusses the dissemination of Indonesia's school-based curriculum (Kurikulum Tingkat Satuan Pendidikan, KTSP), covering the definition of KTSP, the legal basis and operational guidelines for drafting it, its components, and its contents.
The Power of Sharing Linked Data: Bibliothekartag 2014 (Richard Wallis)
The document discusses OCLC's efforts to share library data as linked open data on the web. It describes OCLC releasing WorldCat data including 311 million records as linked data, using schemas like Schema.org and linking to other sources like VIAF. It also discusses the release of 197 million linked data work descriptions from WorldCat in April 2014. The goal is to make library data part of the web by giving search engines and users what they want, like structured data at web scale with identifiers and links.
Waves of Innovation: Signposts to a new web of information (Richard Wallis)
The document discusses emerging technologies and trends that will shape the future of information, including cloud computing, linked open data, and a more open and connected web. It notes that these technologies can help break down data silos and enable new types of collaboration and innovation across domains like libraries, academia, and government. The presentation suggests that they may point to a more open sharing of data and knowledge online.
This document describes the British Library Data Model, which defines classes, properties and relationships for representing bibliographic metadata and linked data. It shows how concepts like works, expressions, manifestations, and items are modeled, along with authors, subjects, and publication events. Properties are defined to link these concepts and describe their relationships according to standards like FRBR, RDA, and SKOS.
The document discusses linked data and how it can be used to share information on the web in a structured format. It provides an overview of linked data and the Resource Description Framework (RDF), describes how URIs can be used to name things and link data on the web, and gives examples of publishing and querying linked data using RDF and SPARQL. Recent developments in using linked data by Facebook, Google, and other companies are also mentioned.
Presentation given at the CILIP Cataloguing and Indexing Group Conference 2014 "The Impact of Metadata" #cig14 on Monday 8 September 2014 at the University of Kent, Canterbury.
The document discusses the motivation for developing Semantic Automated Discovery and Integration (SADI) services as a way to represent important information that cannot be expressed directly on the Semantic Web, such as the outputs of analytical algorithms and statistical analyses. It presents SADI as a design pattern for making web services interoperable with the Semantic Web by explicitly labeling the relationships between entities.
The document provides examples of representing data in RDF formats including RDF/XML, Notation 3, Turtle and triples. It shows how to represent basic statements and relationships between resources as well as more complex data structures like bags, sequences and collections. Examples are given for converting between the different RDF syntaxes and representing graphs in RDF/XML.
It's not rocket surgery - Linked In: ALA 2011 (Ross Singer)
This document provides a brief introduction to linked library data and linked data concepts. It explains the core principles of linked data, including using URIs as names for things and including links between URIs so that additional related data can be discovered. It also discusses common vocabularies and schemas used in linked data like Dublin Core, Bibliontology, and RDA Elements. The document uses a sample book record to demonstrate how linked data can be modeled and interconnected using these vocabularies and external data sources like VIAF, LOC, and Geonames.
This document provides an introduction to bio-ontologies and the semantic web. It discusses what ontologies are and how they are used in the bio domain through initiatives like the OBO Foundry. Key ontologies like the Gene Ontology are described. The document then introduces semantic web technologies like RDF, URIs, triples, and ontology languages like RDFS and OWL. It provides examples of representing data and metadata in these formats. Finally, it discusses storing and querying RDF data through SPARQL.
The document discusses the BioSamples Database (BioSD) and its conversion to linked data. BioSD aims to provide information about biological samples used in experiments in a centralized reference system. It was converted to linked data to allow for integration with other datasets, exploitation of ontologies, and improved searching. The conversion included changes to the data model and several improvements to the software. SPARQL queries are demonstrated to retrieve sample data and attributes. Potential new areas discussed include integrating geo-located samples with Google Maps and search by feature similarity.
The document discusses representing data in the Resource Description Framework (RDF). It describes how relational data can be represented as RDF triples with rows becoming subjects, columns becoming properties, and values becoming objects. It also discusses using URIs instead of internal IDs and names to allow data integration. The document then covers serializing RDF data in different formats like RDF/XML, N-Triples, N3, and Turtle and describes syntax for representing literals, language tags, and abbreviating subject and predicate pairs.
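The row-to-triple mapping that document describes (rows become subjects, columns become properties, values become objects) can be sketched directly; the base URIs here are invented for illustration:

```python
# Sketch of mapping relational rows to RDF triples, as described above.
# The example base URI is made up for illustration.

BASE = "http://example.org/"

def rows_to_triples(table, rows, key):
    """Each row becomes a subject URI minted from its key column;
    every other column becomes a property with the cell as object."""
    triples = []
    for row in rows:
        subject = f"{BASE}{table}/{row[key]}"
        for column, value in row.items():
            if column != key:
                triples.append((subject, f"{BASE}prop/{column}", value))
    return triples

books = [{"id": 1, "title": "Linked Data", "year": 2011}]
for t in rows_to_triples("book", books, key="id"):
    print(t)
```

Minting URIs from the primary key (rather than reusing internal IDs as opaque strings) is what makes the resulting triples joinable with data from other sources.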
Presentation of SPARQL Anything at the MEI Linked Data IG Meeting in July 2021. We try SPARQL Anything with MEI XML files and experiment with simple and difficult tasks.
RDFa Introductory Course Session 2/4 How RDFa (Platypus)
RDFa (Resource Description Framework in Attributes) is a method for embedding RDF metadata within HTML documents. It allows metadata like titles, descriptions and URLs to be added to HTML pages in a way that is readable both by humans and machines. The summary describes how RDFa works by defining resources with URIs and properties, and how the extracted data can be distilled and validated using the RDFa tools on the W3C website.
RDFa (Resource Description Framework in Attributes) is a method for embedding RDF metadata within HTML documents. It allows metadata like titles, descriptions and URLs to be added to HTML pages in a way that is readable both by humans and machines. The summary describes how RDFa works by identifying things with URIs and assigning them properties and values as triples. It also mentions the RDFa distiller tool that can extract RDF metadata from HTML pages marked up with RDFa.
This document summarizes SPARQL, the query language for querying and retrieving data stored in RDF format. It discusses key concepts such as RDF terms, syntax, patterns, and constraints. RDF represents information as subject-predicate-object triples that can be queried using SPARQL. SPARQL allows constructing basic and complex graph patterns to match against an RDF graph, and it supports value filters, ordering, pagination and other solution modifiers. The document provides examples of SPARQL queries that retrieve data from RDF graphs under different conditions and constraints.
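The core idea of basic graph pattern matching can be sketched in a few lines of Python. This toy evaluator (with made-up ex: and foaf: terms) binds variables the way a SPARQL engine conceptually does, without any of a real engine's parsing, indexing, or optimization:

```python
# Toy basic-graph-pattern matcher: a pattern is a triple that may
# contain variables ('?x'); a solution binds every variable so each
# pattern matches some triple in the data.

DATA = {
    ("ex:alice", "foaf:knows", "ex:bob"),
    ("ex:bob", "foaf:name", "Bob"),
    ("ex:alice", "foaf:name", "Alice"),
}

def match(pattern, bindings):
    """Yield extended bindings for one triple pattern over DATA."""
    for triple in DATA:
        b = dict(bindings)
        for term, value in zip(pattern, triple):
            if term.startswith("?"):
                if b.setdefault(term, value) != value:
                    break          # variable already bound differently
            elif term != value:
                break              # constant term does not match
        else:
            yield b

def query(patterns):
    """Join the patterns: each one filters and extends the solutions."""
    solutions = [{}]
    for pattern in patterns:
        solutions = [b for s in solutions for b in match(pattern, s)]
    return solutions

# "Who does Alice know, and what is their name?"
result = query([("ex:alice", "foaf:knows", "?p"),
                ("?p", "foaf:name", "?name")])
print(result)  # [{'?p': 'ex:bob', '?name': 'Bob'}]
```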
The document discusses Linked Data and RDF, describing how data from different sources on the web can be connected using URIs, HTTP, and structured data formats like RDF. It provides examples of retrieving and representing data from DBpedia in RDF format using Ruby tools and libraries. It also discusses publishing RDF from a Rails application by adding MIME types and generating RDF representations.
Maphub - Annotations and Semantic Tags on Historical Maps (Bernhard Haslhofer)
The document summarizes a presentation about Maphub, a project that allows users to annotate and semantically tag historical maps. It discusses how Maphub uses the Open Annotation Collaboration framework to enable georeferencing and commentarial annotations on maps from the Library of Congress. It also describes a study that found semantic tagging in Maphub did not significantly affect how many tags users created or their workload, but helped relate tags to defined concepts.
This document discusses RDF and SPARQL. It provides an introduction to RDF, including the basic RDF data model of subject-predicate-object triples. It then discusses SPARQL, the query language for retrieving and manipulating RDF data, including basic SPARQL syntax examples. It also briefly mentions the SPARQL protocol for accessing RDF data via HTTP endpoints.
A document-inspired way for tracking changes of RDF data - The case of the Op... (University of Bologna)
The document describes an approach for tracking changes to RDF data inspired by document engineering. It involves using PROV-O and SPARQL UPDATE queries to record snapshots of entity metadata at different times. This allows restoring entities to previous states. The approach is implemented in the OpenCitations Corpus to track provenance of citation data. Snapshots record entity compositions and curation activities. This facilitates retrieving current and previous states of entities as the data evolves.
SPARQL 1.1 Tutorial, given in UChile by Axel Polleres (DERI) (net2-project)
This document provides an introduction to SPARQL 1.1. It begins by explaining that SPARQL is a query language for the semantic web that allows users to query RDF data stores similarly to how SQL queries relational databases. It then describes SPARQL 1.0, the initial standard version, and the new features being added in SPARQL 1.1, including aggregate functions, subqueries, property paths and federated querying. The document concludes by discussing SPARQL implementations and the status of the 1.1 specification.
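Among the 1.1 additions listed there, aggregates are the easiest to picture. The grouping below mirrors what a COUNT ... GROUP BY query computes, with hypothetical ex: identifiers standing in for real data:

```python
# SPARQL 1.1 adds aggregates such as COUNT with GROUP BY; the same
# computation over plain triples can be sketched with a Counter.
from collections import Counter

triples = [
    ("ex:a", "dc:creator", "ex:smith"),
    ("ex:b", "dc:creator", "ex:smith"),
    ("ex:c", "dc:creator", "ex:jones"),
]

# SELECT ?creator (COUNT(?work) AS ?n)
# WHERE { ?work dc:creator ?creator } GROUP BY ?creator
counts = Counter(o for s, p, o in triples if p == "dc:creator")
print(counts["ex:smith"], counts["ex:jones"])  # 2 1
```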
Presentation at the EMBL-EBI Industry RDF meeting (Johannes Keizer)
The document discusses how AGROVOC, AGRIS, and the CIARD RING leverage RDF vocabularies and technologies to improve data interoperability. It provides examples of how AGRIS retrieves information on its centers through SPARQL queries of the RING, and how data in AGRIS is associated with RING URIs for centers to allow retrieving records by center. The RING is an openly accessible RDF store of datasets described using DCAT, accessible via its SPARQL endpoint.
The National Library Board of Singapore embarked on a journey to create an operational Linked Data Management and Discovery System. Their goals were to enable discovery of entities from different sources in a combined interface, bring together physical and digital resources, and provide a staff interface to manage entities and relationships. They selected a cloud-based system from metaphactory to ingest and link data from their integrated library system, content management system, national archives system, and authority files. Various scripts were used to transform the data and represent it using Schema.org for the public interface and BIBFRAME internally. This new system aimed to provide unified discovery and management of the Library Board's vast resources.
Structured Data: It's All About the Graph! (Richard Wallis)
The document discusses structured data and knowledge graphs. It explains that a knowledge graph is a dataset of entities, their descriptions, attributes, relationships and context that powers rich content and drives contextually relevant answers. It provides examples of marking up entities like places, people and articles with schema.org to add them to a knowledge graph. Entities should be fully described and related to each other to build a graph rather than just a collection of disconnected entities.
Schema.org Structured Data: the What, Why, & How (Richard Wallis)
This document discusses Schema.org structured data, including its origins in the Semantic Web and Linked Open Data movements. Schema.org was created in 2011 to provide a common vocabulary for structured data markup on web pages. It allows search engines and other applications to understand the intended meaning and relationships of information on web pages. The document provides examples of using Schema.org structured data and microdata, and recommends applying it across various page types to help search engines better understand websites.
This document discusses three options for libraries to implement linked data: BIBFRAME 2.0, Schema.org, and Linky MARC. BIBFRAME 2.0 is a library standard for linked data but is not recognized outside the library community. Schema.org is the main standard for structured data on the web and could increase library discoverability, but lacks detail for library cataloging. Linky MARC adds HTTP URIs to existing MARC records to preserve entity identifiers without converting to linked data. The document also proposes a new open project called "bibframe2schema.org" to map BIBFRAME to Schema.org and promote its adoption for libraries.
The document discusses three options for libraries to adopt linked data: BIBFRAME 2.0, Schema.org, and Linky MARC. BIBFRAME 2.0 is a library standard that allows standardized RDF interchange but is not recognized outside libraries. Schema.org is the de facto web standard that improves discovery on the web but lacks detail for library needs. Linky MARC adds URIs to MARC without changing its format. The document evaluates the pros and cons of each and who may want to adopt each standard.
Structured Data: Where Did That Come From & Why Are Google Asking for It (Richard Wallis)
Structured data and Schema.org have become increasingly important for websites and search engines. Schema.org was created in 2011 as a joint effort by Google, Microsoft, Yahoo, and others to create a common set of schemas for structured data markup on web pages. Google and others now use structured data to better understand websites and display richer information in search features like Knowledge Panels. At a recent conference, a Google employee emphasized that implementing structured data using Schema.org can help websites appear in more search features and be better understood during crawling.
Contextual Computing - Knowledge Graphs & Web of Entities (Richard Wallis)
Richard Wallis gave a presentation on contextual computing and knowledge graphs at the SmartData 2017 conference. He discussed how knowledge graphs powered by structured data on the web are providing global context that enables new applications of cognitive and contextual computing. Schema.org plays a key role by defining a common vocabulary and enabling a web of related entities laid out as a global graph. This graph of entities delivers context on a global scale and lays the foundation for the next revolution in computing.
This document summarizes the origins and development of Schema.org. Its timeline begins with Tim Berners-Lee's 1989 proposal for the World Wide Web, followed by the Semantic Web in 2001 and Linked Open Data in 2009. Schema.org itself was introduced in 2011 as a joint effort between Google, Bing, Yahoo, and Yandex to create a common set of schemas for structured data on web pages. It has since grown significantly, with over 12 million websites now using Schema.org markup and over 500 types and 800 properties defined. Various communities, such as libraries, have also influenced Schema.org through extensions and standards like LRMI.
Contextual Computing: Laying a Global Data Foundation (Richard Wallis)
Richard Wallis presented on laying a global data foundation for contextual computing. He discussed how knowledge graphs and structured data on the web are building global context by connecting related entities. This will enable cognitive computing to evolve from local to global contexts, having access to data on flexible models and a de facto vocabulary from millions of websites. Schema.org plays a key role by delivering on the current structured data revolution and laying foundations for cognitive computing through a contextual web of entities.
Telling the World and Our Users What We Have (Richard Wallis)
This document summarizes a presentation by Richard Wallis on discovery and discoverability. It introduces Schema.org as a vocabulary for structured data on the web and its use by major organizations like Google, OCLC, and the Library of Congress. It discusses motivations for sharing bibliographic data on the web using Schema.org, including connecting library data and reaching users. Key initiatives are summarized, such as the Schema Bib Extend community group, BiblioGraph.net extension vocabulary, and the bib.schema.org hosted extension.
This document summarizes Richard Wallis and his work. Richard Wallis is an independent consultant and founder of Data Liberate. He currently works with OCLC and Google to develop schema standards. He chairs several W3C community groups focused on developing schemas for bibliographic data and archives data using Schema.org.
The document discusses Richard Wallis and his work extending Schema.org to better describe bibliographic data. Wallis is an independent consultant who chairs several W3C community groups focused on expanding Schema.org for bibliographic and archives data. He has worked with organizations like OCLC and Google to develop vocabularies that extend Schema.org to describe over 330 million bibliographic resources in linked data.
This document discusses Richard Wallis and his work extending the Schema.org vocabulary. It notes that Wallis is an independent consultant who founded Data Liberate and currently works with OCLC and Google. He chairs several W3C community groups focused on extending Schema.org for bibliographic and archive data. The document outlines how Schema.org was created in 2011 as a general purpose vocabulary for describing things on the web and how it can be extended through groups like the Schema Bib Extend community to cover additional domains beyond its original 640 types.
The document discusses the benefits of linked data and provides instructions for creating linked data. It describes how linked data allows for connecting and sharing information on the web through the use of URIs and RDF triples. The key steps outlined for creating linked data include establishing the entities in your data, giving them URIs, describing each entity, and linking to authoritative hubs. Schema.org is presented as a vocabulary that is widely used and can be extended for specific domains.
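The steps outlined above can be sketched with standard-library Python alone: mint a URI for an entity, describe it with triples, link it to an authoritative hub, and serialize the result as N-Triples. The example.org URI is invented and the VIAF identifier is purely illustrative.

```python
# Step 1: the entity is a person; step 2: mint an HTTP URI for it.
RDFS_LABEL = "http://www.w3.org/2000/01/rdf-schema#label"
OWL_SAMEAS = "http://www.w3.org/2002/07/owl#sameAs"

entity = "http://example.org/id/person/jane-austen"   # invented URI

triples = [
    # Step 3: describe the entity (objects starting with '"' are literals).
    (entity, RDFS_LABEL, '"Jane Austen"'),
    # Step 4: link to an authoritative hub (illustrative VIAF URI).
    (entity, OWL_SAMEAS, "http://viaf.org/viaf/102333412"),
]

def to_ntriples(triples):
    """Serialize (subject, predicate, object) tuples as N-Triples lines."""
    lines = []
    for s, p, o in triples:
        obj = o if o.startswith('"') else f"<{o}>"
        lines.append(f"<{s}> <{p}> {obj} .")
    return "\n".join(lines)

print(to_ntriples(triples))
```

Real deployments would use an RDF library and a vocabulary such as Schema.org, but the mechanics are exactly these: URIs for things, triples describing them, and owl:sameAs links out to shared hubs.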
Richard Wallis, an OCLC Technology Evangelist, discusses how libraries can make their data more visible and connected on the web by publishing it as linked open data using common web vocabularies like Schema.org. Currently, library linked data exists in silos using different local vocabularies, making the data hard to discover and integrate. Adopting Schema.org could help library data reach the billions of web pages and domains that already use this general purpose vocabulary to describe things on the web.
The document discusses the Web of Data and linked data. It notes that while many libraries and institutions have published linked data, it remains isolated in "silos" using different vocabularies. The document promotes the use of Schema.org as a common vocabulary that has become a de facto standard for describing things on the web, and has the potential to help connect library linked data by providing a shared schema.
Richard Wallis from OCLC presented on building a library knowledge graph to improve library workflows like cataloging and discovery. He discussed modeling entities like people, places, concepts and linking them together to form a graph. This knowledge graph could improve data quality, enable point-and-click cataloging, and help libraries better expose their unique content on the web. OCLC's approach involves modeling things of interest and making them available using web-friendly structures.
This document discusses using linked data in libraries. It notes that several national libraries have implemented linked data projects. Linked data allows for entity-based descriptions of things on the web using common vocabularies. This helps users more easily discover resources across institutional silos. The document advocates for libraries to publish their data as linked open data using common schemas, and transform records into interconnected web entities rather than standalone data. This enables new discovery experiences and ways for users to explore library collections on the web.
1. MashCat - Cambridge UK - 5th July 2012
The Cultural Linked Data Backbone
Richard Wallis, Technology Evangelist, OCLC
@rjw
The world’s libraries. Connected.
34. British Library Data Model

@prefix blt:   <http://data.bl.uk/schema/bibliographic#> .
@prefix rdf:   <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix rdfs:  <http://www.w3.org/2000/01/rdf-schema#> .
@prefix owl:   <http://www.w3.org/2002/07/owl#> .
@prefix xsd:   <http://www.w3.org/2001/XMLSchema#> .
@prefix dct:   <http://purl.org/dc/terms/> .
@prefix isbd:  <http://iflastandards.info/ns/isbd/elements/> .
@prefix skos:  <http://www.w3.org/2004/02/skos/core#> .
@prefix bibo:  <http://purl.org/ontology/bibo/> .
@prefix rda:   <http://RDVocab.info/ElementsGr2/> .
@prefix bio:   <http://purl.org/vocab/bio/0.1/> .
@prefix foaf:  <http://xmlns.com/foaf/0.1/> .
@prefix event: <http://purl.org/NET/c4dm/event.owl#> .
@prefix org:   <http://www.w3.org/ns/org#> .
@prefix geo:   <http://www.w3.org/2003/01/geo/wgs84_pos#> .

[Diagram: the model centres on a dct:BibliographicResource (e.g. a bibo:Book), linked via blt:publication to a blt:PublicationEvent (with event:place, event:agent and event:time), via dct:creator / dct:contributor to agents (foaf:Person, foaf:Organization, and families, with bio:Birth and bio:Death events), via dct:subject to skos:Concept topics (blt:TopicLCSH and blt:TopicDDC), and via dct:spatial to places (geo:SpatialThing). Concept and agent views of the same entity are bridged by foaf:focus; external hubs (id.loc.gov, VIAF, GeoNames, Lexvo, Dewey Info) are linked with owl:sameAs. Literals carry titles, names, dates, ISBN/ISSN/BNB identifiers, and ISBD notes such as edition statement, extent and content note.]

V.1.01, 1st August 2011
Tim Hodson - tim.hodson@talis.com
Corine Deliot - Corine.Deliot@bl.uk
Alan Danskin - Alan.Danskin@bl.uk
Heather Rosie - Heather.Rosie@bl.uk
Jan Ashton - Jan.Ashton@bl.uk
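A hedged sketch of how the model's vocabularies combine in practice: a few triples, built in Python, linking a bibliographic resource to its publication event. Only the namespace URIs come from the prefix block; the resource URIs and the place are invented examples.

```python
# Expand compact curie names against the BL model's namespaces and emit
# a few triples in the shape the diagram describes: a bibliographic
# resource linked to its publication event. Resource URIs are invented.

PREFIXES = {
    "rdf":   "http://www.w3.org/1999/02/22-rdf-syntax-ns#",
    "dct":   "http://purl.org/dc/terms/",
    "blt":   "http://data.bl.uk/schema/bibliographic#",
    "event": "http://purl.org/NET/c4dm/event.owl#",
}

def expand(curie):
    """Turn a prefix:localName pair into a full URI."""
    prefix, local = curie.split(":", 1)
    return PREFIXES[prefix] + local

book = "http://example.org/resource/000000001"              # invented
pub_event = "http://example.org/resource/000000001/publication"

triples = [
    (book, expand("rdf:type"), expand("dct:BibliographicResource")),
    (book, expand("blt:publication"), pub_event),
    (pub_event, expand("rdf:type"), expand("blt:PublicationEvent")),
    (pub_event, expand("event:place"), "http://example.org/place/london"),
]

for s, p, o in triples:
    print(f"<{s}> <{p}> <{o}> .")
```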
35. British Library Data Model (diagram repeated from the previous slide)
http://www.bl.uk/bibliographic/pdfs/bldatamodelbook.pdf
36. British Library Data Model (diagram repeated from the previous slides)
70. Libraries, Archives, Museums
... the cultural backbone of the Web of Data?
71. The Cultural Linked Data Backbone (title slide repeated)
72. The Cultural Linked Data Backbone
Richard Wallis, Technology Evangelist, OCLC
@rjw http://slideshare.net/rjw
The world’s libraries. Connected.