Presentation to the ALCTS session "International Developments in Library Linked Data: Think Globally" at the American Library Association Conference in Las Vegas - June 2014
British Library Linked Open Data Presentation for ALA June 2014
The document summarizes the British Library's experience in providing linked open data. It describes why the library offers linked data, what data is offered, and lessons learned. Key points include: the library offers metadata in various formats including RDF/XML and CSV to promote innovation, migration to standards, and collaboration; their linked data program has over 1,000 user organizations and 2 million transactions monthly; and lessons learned include understanding diverse user needs, continually improving data quality, and maintaining funding through measurable impact.
Session 1.2: Improving Access to Digital Content by Semantic Enrichment
This document discusses improving access to digital collections through semantic enrichment. It describes linking names and entities from text to knowledge bases like Wikidata to make the content more discoverable and usable. The process involves named entity recognition, entity linking using disambiguation algorithms, presenting enriched context, and enabling semantic search. User feedback is gathered to improve the linking algorithms through additional training. The goal is to increase trust in the links for research purposes. Overall, the approach aims to enrich text collections by connecting content to external information sources.
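The disambiguation step described above can be sketched as a toy candidate scorer. The candidate list and the term-overlap heuristic below are purely illustrative assumptions, not the project's actual algorithm:

```python
# Toy entity-linking sketch: score knowledge-base candidates for a mention
# by overlap between the mention's surrounding text and each candidate's
# context terms. Candidates and scoring are invented for illustration.

def link_entity(mention_context, candidates):
    """Return the candidate whose context terms best match the mention's context."""
    words = set(mention_context.lower().split())
    best, best_score = None, 0
    for cand in candidates:
        score = len(words & set(t.lower() for t in cand["context_terms"]))
        if score > best_score:
            best, best_score = cand, score
    return best

candidates = [
    {"id": "Q1", "label": "Paris (France)", "context_terms": ["France", "Seine", "capital"]},
    {"id": "Q2", "label": "Paris (Texas)", "context_terms": ["Texas", "USA", "county"]},
]
match = link_entity("the capital of France on the Seine", candidates)
```

In a real pipeline the scoring would come from a trained disambiguation model, and user feedback on the proposed links would feed back into training, as the summary describes.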
Kate Wittenberg discusses Portico's current work and future plans. Portico has signed 320 publisher participants representing over 2,000 societies. Efforts are being made to preserve links between publications and underlying data through projects like RMap. Portico also partners with national libraries like the BL and KB to leverage their infrastructure for legal deposit programs. Looking ahead, Portico aims to broaden preservation efforts to more content types and collaborate more with other organizations.
ResourceSync - Overview and Real-World Use Cases for Discovery, Harvesting, a... (Martin Klein)
This document provides an overview of ResourceSync, which is a framework for synchronizing web resources between systems. Some key points:
- ResourceSync was created to address limitations of existing protocols like OAI-PMH by allowing synchronization of any web resource and enabling both one-time and ongoing synchronization.
- It supports various capabilities for synchronization like resource lists, change lists, and notifications. These can be used for initial synchronization or incremental updates.
- Real-world examples are described where ResourceSync has been implemented for projects involving aggregation of digital collections, like Europeana and CLARIAH. It facilitates synchronization between diverse data sources.
- Presentations were given on how ResourceSync could also be useful
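The resource lists mentioned above are Sitemap documents with ResourceSync extensions. A minimal sketch built with only the Python standard library (the URL and timestamp are invented):

```python
# Sketch of a minimal ResourceSync Resource List: a Sitemap <urlset> carrying
# an <rs:md> element that declares the document's capability. Example data only.
import xml.etree.ElementTree as ET

SM = "http://www.sitemaps.org/schemas/sitemap/0.9"
RS = "http://www.openarchives.org/rs/terms/"

def build_resource_list(resources):
    ET.register_namespace("", SM)
    ET.register_namespace("rs", RS)
    urlset = ET.Element(f"{{{SM}}}urlset")
    md = ET.SubElement(urlset, f"{{{RS}}}md")
    md.set("capability", "resourcelist")  # declares this document's type
    for uri, lastmod in resources:
        url = ET.SubElement(urlset, f"{{{SM}}}url")
        ET.SubElement(url, f"{{{SM}}}loc").text = uri
        ET.SubElement(url, f"{{{SM}}}lastmod").text = lastmod
    return ET.tostring(urlset, encoding="unicode")

xml_doc = build_resource_list([("http://example.com/res1", "2017-01-02T00:00:00Z")])
```

Change lists for incremental synchronization follow the same Sitemap pattern with a different capability value.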
This session will comprise a talk with a panel of speakers looking at KBART seven years later, from the publication of the first set of recommendations up to today. The panel will discuss the changes in the e-resources metadata landscape, the benefits of KBART, and the challenges of its implementation. Today, poor metadata in the electronic resources supply chain is still a problem. The panel will use practical examples to explain how metadata creation, consumption, and usage are marked by the constant requirement of balancing available resources (technical and human) against end-user discoverability needs. The KBART Standing Committee sees the implementation of the KBART recommendations as a community effort involving a range of stakeholders: content providers, knowledge bases, link resolvers, and librarians.
A talk given at 'Taking the Long View: International Perspectives on E-Journal Archiving', a conference hosted by EDINA and ISSN IC at the University of Edinburgh, September 7th 2015.
OA Network: Heading for Joint Standards and Enhancing Cooperation: Value‐Adde... (Stefan Buddenbohm)
OA-Network collaborates with other associated German Open Access projects and pursues the overarching aim of increasing the visibility and ease of use of German research output. To this end, a technical infrastructure is being established to offer value-added services based on a shared information space across all participating repositories. In addition, OA-Network promotes the DINI certificate for Open Access repositories (standardization) and regular communication and exchange within the German repository landscape.
Levels of Service for Digital Libraries (Greg Colati)
Looks at data management from the perspective of data characteristics rather than the applications or systems that create and manage the data. This presentation was given as a discussion starter at the internal UConn Library management group meeting in April 2017.
CILIP Conference - Metadata Evolution: The Final Mile - Richard Wallis
Bibliographic metadata forms have evolved over centuries, the last 50 years of them in machine-readable formats. The library community appears to be evolving from records towards describing real-world entities using an agreed form of linked data. Is that step far enough to satisfy the ever-present need to aid discovery in the environment of the approaching third decade of the twenty-first century? Or do we need to move into the landscape of globally understood structured data and knowledge graphs, the millennial environment of answer engines, mobile/local search, and voice assistants?
#cilipconf19
Wikidata is a free knowledge base launched in 2012 by Wikimedia Deutschland to centralize key data about items and serve as an interlinked database representing the sum of human knowledge. It contains over 14 million items with 30 million statements in multiple languages and data types. Wikidata is currently in Phase 2, in which statements about items are being added; the data can be accessed through its API and explored with tools like Wikidata Query and Reasonator.
Mantas Zimnickas - How Open is Lithuanian Government Data? atviriduomenys.lt (Aidis Stukas)
Mantas Zimnickas presented on open data in Lithuania at OpenCon 2016. He discussed what open means according to opendefinition.org, defined open data, and summarized the status of open data initiatives in Lithuania. Zimnickas' site atviriduomenys.lt aims to provide open data to users by collecting data from providers and using it in projects. The talk covered challenges of working with different data formats and proposed approaches to standardizing and automating data extraction and transformation. Zimnickas concluded by sharing code for several of his open data projects.
Repositories for OA, RDM and Beyond - Rory McNicholl (Repository Fringe)
This document summarizes the history and services of the University of London Computer Centre (ULCC), including its Digital Archives & Research Technologies (DART) service. DART provides open access repositories, research data repositories, and archival storage using platforms like EPrints, OJS, and Arkivum. It works with the research community to meet open access and research data management requirements. The presentation concludes by discussing potential future directions like preservation as a service and moving back through the full research lifecycle.
Introductory talk for ANDS workshop on Institutional Repositories and data. The talk situates the topic within the field of scholarly communication before comparing the relative technical simplicity of running repositories of publications with the complexities that accompany a shift to data. The most-retweeted slide is the one viewing the response of repository managers to data through the lens of Elisabeth Kübler-Ross's stages of grieving.
Session 1.6: Slovak Public Metadata Governance and Management Based on Linke...
This document proposes establishing public linked data governance and management in the Slovak Republic based on methodologies used by EU institutions. It outlines rules for interoperability levels of open public data and the creation of a central ontological model and governance structure to manage data quality and interoperability. It also proposes a linked data management lifecycle to publish, deploy, change, and retire ontologies and URIs according to a change-request process, in order to establish central governance of public metadata in Slovakia.
CILIP Conference - Diffusion of ISNIs into Book Supply Chain Metadata - Andr...
The presentation by Tim Devenport and Andrew MacEwan gives an introduction to the ISNI system and member network and describes how ISNI is linking library authority files with publisher supply-chain metadata across multiple content industries. A case study shows how the use of ISNI in the British Library's metadata opens up new opportunities for collaboration with the book publishing industry.
#cilipconf19
The Information Workbench - Linked Data and Semantic Wikis in the Enterprise (Peter Haase)
The Information Workbench is a platform for Linked Data applications in the enterprise. Targeting the full life-cycle of Linked Data applications, it facilitates the integration and processing of Linked Data following a Data-as-a-Service paradigm.
In this talk we present how we use Semantic Wiki technologies in the Information Workbench for the development of user interfaces for interacting with the Linked Data. The user interface can be easily customized using a large set of widgets for data integration, interactive visualization, exploration and analytics, as well as the collaborative acquisition and authoring of Linked Data. The talk will feature a live demo illustrating an example application, a Conference Explorer integrating data about the SMWCon conference, publications and social media.
We will also present solutions and applications of the Information Workbench in a variety of other domains, including the Life Sciences and Data Center Management.
A Distributed Network of Digital Heritage Information - Semantics Amsterdam (Enno Meijers)
This document discusses strategies for improving discovery of digital heritage information across Dutch cultural institutions. It identifies problems with the current infrastructure based on OAI-PMH including lack of semantic alignment and inefficient data integration. The proposed strategy is to build a distributed network based on Linked Data principles, with a registry of organizations and datasets, a knowledge graph with backlinks to support resource discovery, and virtual data integration using federated querying of Linked Data sources. This will improve usability, visibility, and sustainability of digital heritage information in the Netherlands.
Building Library Networks with Linked Data (Enno Meijers)
Slides of my talk at the Semantics Conference in Vienna in 2018. The topic of the talk was the initiative of the National Library of the Netherlands to publish their bibliographic metadata as Linked Data.
This document discusses open data in Hong Kong. It notes that open data helps people understand the world and make better decisions. However, Hong Kong's open data program still has room for improvement, with data sometimes only available in non-machine readable formats like PDFs and JPGs, and licensing that is not fully open. The document advocates for fully open licensing of data, as well as policies like a Freedom of Information law and an Archives law to strengthen open data practices in Hong Kong.
Towards a Repository for Dutch Development Organizations (IAALD Community)
Presentation by Harry Heemskerk and Ingeborg Nagel to WUR/IAALD Meeting: Making Agricultural Information Available and Accessible, Wageningen, 13 November 2008.
DataGraft is a platform and set of tools that aims to make open and linked data more accessible and usable. It allows users to interactively build, modify, and share repeatable data transformations. Transformations can be reused to clean and transform spreadsheet data. Data and transformations can be hosted and shared in a cloud-based catalog. DataGraft provides APIs, reliable data hosting, and visualization capabilities to help data publishers share datasets and enable application developers to more easily build applications using open data.
Developing Infrastructure to Support Closer Collaboration of Aggregators with... (Nancy Pontika)
The document summarizes the CORE (Connecting Repositories) project, which aims to aggregate open access research from around the world. It describes CORE's mission and statistics, the need for a UK aggregator, CORE's three levels of access, applications like its search portal and mobile apps, the aggregation process, and its dashboard tool for repository managers. The dashboard allows managers to monitor their records, update metadata, see harvesting issues, and collaborate more with CORE to broaden discoverability of open access research.
The CTDA saw significant growth in 2016, with digital assets increasing by over 45% to 412,547 and harvested records growing by over 43% to 49,923. New participants were added and functionality was expanded. Governance committees met regularly to discuss initiatives and projects. Education and training sessions were provided, including a user conference and workshops. The sites and systems performed reliably with over 98% uptime. Feedback from surveys was generally positive and highlighted areas for further improvement and reporting.
Uncovering Research - What's the Standard? - Jisc Digital Festival 2015
The document discusses research data discovery in the UK. It summarizes that a research data discovery service would aggregate metadata records from UK research institutions and data centers to make research data more discoverable and reusable. A pilot of the service harvested metadata from 9 universities and 3 data centers. Based on feedback, phase 2 will focus on developing the service into a sustainable shared infrastructure to support open access of research data.
Richard Wallis from OCLC presented on building a library knowledge graph to improve library workflows like cataloging and discovery. He discussed modeling entities like people, places, concepts and linking them together to form a graph. This knowledge graph could improve data quality, enable point-and-click cataloging, and help libraries better expose their unique content on the web. OCLC's approach involves modeling things of interest and making them available using web-friendly structures.
The document discusses three options for libraries to adopt linked data: BIBFRAME 2.0, Schema.org, and Linky MARC. BIBFRAME 2.0 is a library standard that allows standardized RDF interchange but is not recognized outside libraries. Schema.org is the de facto web standard that improves discovery on the web but lacks detail for library needs. Linky MARC adds URIs to MARC without changing its format. The document evaluates the pros and cons of each and who may want to adopt each standard.
Presentation to SWIB23 in Berlin.
The journey to implement a production Linked Data Management and Discovery System for the National Library Board of Singapore.
The National Library Board of Singapore embarked on a journey to create an operational Linked Data Management and Discovery System. Their goals were to enable discovery of entities from different sources in a combined interface, bring together physical and digital resources, and provide a staff interface to manage entities and relationships. They selected a cloud-based system from metaphactory to ingest and link data from their integrated library system, content management system, national archives system, and authority files. Various scripts were used to transform the data and represent it using Schema.org for the public interface and BIBFRAME internally. This new system aimed to provide unified discovery and management of the Library Board's vast resources.
The document discusses the Getty Vocabularies, which are authoritative controlled vocabularies for art, architecture, and material culture. They include the Art & Architecture Thesaurus (AAT), Getty Thesaurus of Geographic Names (TGN), Union List of Artist Names (ULAN), and the developing Cultural Objects Name Authority (CONA). The vocabularies are compiled and maintained by the Getty Vocabulary Program based on contributions from users and projects. Records are merged if multiple contributors submit the same concept, person, place, or object. The vocabularies are licensed and distributed annually to institutions.
The document discusses the benefits of linked data and provides instructions for creating linked data. It describes how linked data allows for connecting and sharing information on the web through the use of URIs and RDF triples. The key steps outlined for creating linked data include establishing the entities in your data, giving them URIs, describing each entity, and linking to authoritative hubs. Schema.org is presented as a vocabulary that is widely used and can be extended for specific domains.
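The key steps listed above can be sketched in a few lines. The entity URI, the VIAF identifier, and the simple serializer below are illustrative assumptions, not part of the original presentation:

```python
# Sketch of the linked data creation steps: mint a URI for an entity, describe
# it with (subject, predicate, object) triples using Schema.org terms, and link
# it to an authoritative hub. The VIAF identifier shown is made up.
SCHEMA = "http://schema.org/"
RDF_TYPE = "http://www.w3.org/1999/02/22-rdf-syntax-ns#type"
entity = "http://example.org/entity/person/1"  # assumed local URI pattern

triples = [
    (entity, RDF_TYPE, SCHEMA + "Person"),
    (entity, SCHEMA + "name", "Ada Lovelace"),
    (entity, SCHEMA + "sameAs", "http://viaf.org/viaf/123456"),  # illustrative hub link
]

def to_ntriples(triples):
    """Serialize triples in a simple N-Triples-like form (URIs bracketed, literals quoted)."""
    lines = []
    for s, p, o in triples:
        obj = f"<{o}>" if o.startswith("http") else f'"{o}"'
        lines.append(f"<{s}> <{p}> {obj} .")
    return "\n".join(lines)

out = to_ntriples(triples)
```

A production workflow would use an RDF library and a full serialization format, but the shape of the data, entities identified by URIs and connected by typed links, is the same.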
This document summarizes the origins and development of Schema.org, tracing a lineage from Tim Berners-Lee's 1989 proposal for the World Wide Web, through the semantic web (2001) and linked open data (2009), to the introduction of Schema.org in 2011 as a joint effort between Google, Bing, Yahoo, and Yandex to create a common set of schemas for structured data on web pages. It has since grown significantly, with over 12 million websites now using Schema.org markup and over 500 types and 800 properties defined. Various communities, including libraries, have also influenced Schema.org through extensions and standards like LRMI.
The Power of Sharing Linked Data: Bibliothekartag 2014 (Richard Wallis)
The document discusses OCLC's efforts to share library data as linked open data on the web. It describes OCLC releasing WorldCat data including 311 million records as linked data, using schemas like Schema.org and linking to other sources like VIAF. It also discusses the release of 197 million linked data work descriptions from WorldCat in April 2014. The goal is to make library data part of the web by giving search engines and users what they want, like structured data at web scale with identifiers and links.
This document discusses a two-phase approach to introducing linked data from WorldCat records. Phase 1 involves mining existing MARC records to identify entities like persons, organizations, and subjects, and linking those strings to controlled vocabularies. Phase 2 models the data using schemas like Schema.org that are of interest to the web in order to share resources via linked data. The goal is to draw people to library resources by sharing in a web-native linked data format.
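The Phase 1 idea of mining entity strings from MARC fields and matching them to controlled vocabularies can be roughly sketched as follows; the records, vocabulary entries, and URIs are invented for illustration:

```python
# Toy sketch of mining MARC-style records for entity strings and linking them
# to a controlled vocabulary. Field 100 holds a personal name and 650 a topical
# subject; the vocabulary and its URIs are made up for this example.
VOCAB = {
    "Austen, Jane": "http://viaf.org/viaf/example-austen",      # illustrative URI
    "England -- Fiction": "http://id.loc.gov/example-subject",  # illustrative URI
}

def link_fields(record):
    """Return (tag, value, uri_or_None) for each entity-bearing field."""
    out = []
    for tag in ("100", "650"):
        for value in record.get(tag, []):
            out.append((tag, value, VOCAB.get(value)))
    return out

rec = {"100": ["Austen, Jane"], "650": ["England -- Fiction", "Unmatched heading"]}
linked = link_fields(rec)
```

Strings with no vocabulary match (the `None` cases) are exactly the ones that need the disambiguation and review work the document describes; Phase 2 then remodels the linked entities in web-facing schemas.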
The document discusses Richard Wallis and his work extending Schema.org to better describe bibliographic data. Wallis is an independent consultant who chairs several W3C community groups focused on expanding Schema.org for bibliographic and archives data. He has worked with organizations like OCLC and Google to develop vocabularies that extend Schema.org to describe over 330 million bibliographic resources in linked data.
User contributions have the potential to enrich WorldCat in several ways:
1) Over 160,000 user-created lists are already in WorldCat, allowing sharing of citations and recommendations.
2) User feedback like corrections, added content, and ratings can help improve WorldCat's quality and coverage.
3) Linking user contributions to related professional and external data could provide a more comprehensive view of identities, works, and topics in WorldCat.
However, challenges remain in encouraging contributions at scale while maintaining data quality, integrating social metadata with existing systems, and addressing users' reluctance to engage in some activities like ratings.
This document discusses linked data and its relevance to libraries. It begins by explaining the basic concepts of linked data, including using URIs to identify things, describing relationships between resources using RDF triples, and linking data to related information on the web. It then discusses why libraries should care about linked data, particularly how it allows bibliographic data to be separated into individual pieces that can be recombined and linked to other data sources. The document concludes by providing examples of linked open data projects and resources for libraries interested in implementing linked data.
This document discusses Richard Wallis and his work extending the Schema.org vocabulary. It notes that Wallis is an independent consultant who founded Data Liberate and currently works with OCLC and Google. He chairs several W3C community groups focused on extending Schema.org for bibliographic and archive data. The document outlines how Schema.org was created in 2011 as a general purpose vocabulary for describing things on the web and how it can be extended through groups like the Schema Bib Extend community to cover additional domains beyond its original 640 types.
About the Webinar
The library and cultural institution communities have generally accepted the vision of moving to a Linked Data environment that will align and integrate their resources with those of the greater Semantic Web. But moving from vision to implementation is not easy or well-understood. A number of institutions have begun the needed infrastructure and tools development with pilot projects to provide structured data in support of discovery and navigation services for their collections and resources.
Join NISO for this webinar where speakers will highlight actual Linked Data projects within their institutions—from envisioning the model to implementation and lessons learned—and present their thoughts on how linked data benefits research, scholarly communications, and publishing.
Speakers:
Jon Voss - Strategic Partnerships Director, We Are What We Do
LODLAM + Historypin: A Collaborative Global Community
Matt Miller - Front End Developer, NYPL Labs at the New York Public Library
The Linked Jazz Project: Revealing the Relationships of the Jazz Community
Cory Lampert - Head, Digital Collections , UNLV University Libraries
Silvia Southwick - Digital Collections Metadata Librarian, UNLV University Libraries
Linked Data Demystified: The UNLV Linked Data Project
Contextual Computing - Knowledge Graphs & Web of Entities (Richard Wallis)
Richard Wallis gave a presentation on contextual computing and knowledge graphs at the SmartData 2017 conference. He discussed how knowledge graphs powered by structured data on the web are providing global context that enables new applications of cognitive and contextual computing. Schema.org plays a key role by defining a common vocabulary and enabling a web of related entities laid out as a global graph. This graph of entities delivers context on a global scale and lays the foundation for the next revolution in computing.
Contextual Computing: Laying a Global Data Foundation - Richard Wallis
Richard Wallis presented on laying a global data foundation for contextual computing. He discussed how knowledge graphs and structured data on the web are building global context by connecting related entities. This will enable cognitive computing to evolve from local to global contexts, having access to data on flexible models and a de facto vocabulary from millions of websites. Schema.org plays a key role by delivering on the current structured data revolution and laying foundations for cognitive computing through a contextual web of entities.
This document discusses three options for libraries to implement linked data: BIBFRAME 2.0, Schema.org, and Linky MARC. BIBFRAME 2.0 is a library standard for linked data but is not recognized outside the library community. Schema.org is the main standard for structured data on the web and could increase library discoverability, but lacks detail for library cataloging. Linky MARC adds HTTP URIs to existing MARC records to preserve entity identifiers without converting to linked data. The document also proposes a new open project called "bibframe2schema.org" to map BIBFRAME to Schema.org and promote its adoption for libraries.
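The "Linky MARC" option above can be sketched in a few lines: keep the existing record, but attach an HTTP URI alongside a text heading so entity identity survives without converting anything to linked data. The record structure and the example VIAF URI below are illustrative stand-ins, not a real MARC codec or any particular library's practice.

```python
# Illustrative sketch of the "Linky MARC" idea: leave the MARC record as-is,
# but add an HTTP URI subfield next to the text heading so the entity can be
# identified unambiguously. Field/subfield layout here is a simplified stand-in.

def add_uri(record, tag, uri, subfield="1"):
    """Append a URI subfield to every occurrence of `tag` in the record."""
    for field in record.get(tag, []):
        field[subfield] = uri
    return record

record = {
    "100": [{"a": "Austen, Jane,", "d": "1775-1817."}],      # main entry: personal name
    "245": [{"a": "Pride and prejudice /", "c": "Jane Austen."}],
}

# Link the heading to an entity URI (e.g. a VIAF identifier).
add_uri(record, "100", "http://viaf.org/viaf/102333412")

print(record["100"][0]["1"])  # the heading now carries a machine-resolvable identifier
```

Because the text heading is untouched, existing MARC workflows keep working; the URI is purely additive.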
Building a Semantic Knowledge Graph - Richard Wallis
Presentation to the 2024 LD4 Online Conference, 7th October 2024.
A description of the National Library Board Singapore semantic knowledge graph and its developments.
Building a Semantic Knowledge Graph - Richard Wallis
Presentation to European Bibframe Workshop 2024 - Helsinki.
Description of the National Library Board Singapore semantic knowledge graph and its developments.
Structured Data: It's All About the Graph! - Richard Wallis
The document discusses structured data and knowledge graphs. It explains that a knowledge graph is a dataset of entities, their descriptions, attributes, relationships and context that powers rich content and drives contextually relevant answers. It provides examples of marking up entities like places, people and articles with schema.org to add them to a knowledge graph. Entities should be fully described and related to each other to build a graph rather than just a collection of disconnected entities.
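The point above, that entities should be fully described and related to each other rather than left as disconnected records, can be sketched as a small Schema.org JSON-LD example. The URIs and descriptions below are invented for illustration; they are not taken from any of the presentations.

```python
import json

# A minimal sketch of describing related entities with Schema.org types so
# they form a small graph: the Book does not repeat the author's details,
# it references the Person entity by its identifier.

author = {
    "@id": "http://example.org/person/jane-austen",   # hypothetical URI
    "@type": "Person",
    "name": "Jane Austen",
}

book = {
    "@context": "http://schema.org",
    "@id": "http://example.org/work/pride-and-prejudice",
    "@type": "Book",
    "name": "Pride and Prejudice",
    "author": {"@id": author["@id"]},   # relationship by reference, not copied text
}

jsonld = json.dumps(book, indent=2)
print(jsonld)
```

Linking by `@id` rather than embedding a copy is what turns a pile of descriptions into a graph: any consumer that also knows the Person entity can follow the edge.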
Schema.org Structured Data: the What, Why, & How - Richard Wallis
This document discusses Schema.org structured data, including its origins in the Semantic Web and Linked Open Data movements. Schema.org was created in 2011 to provide a common vocabulary for structured data markup on web pages. It allows search engines and other applications to understand the intended meaning and relationships of information on web pages. The document provides examples of using Schema.org structured data and microdata, and recommends applying it across various page types to help search engines better understand websites.
Structured data: Where did that come from & why are Google asking for it? - Richard Wallis
Structured data and Schema.org have become increasingly important for websites and search engines. Schema.org was created in 2011 as a joint effort by Google, Microsoft, Yahoo, and others to create a common set of schemas for structured data markup on web pages. Google and others now use structured data to better understand websites and display richer information in search features like Knowledge Panels. At a recent conference, a Google employee emphasized that implementing structured data using Schema.org can help websites appear in more search features and be better understood during crawling.
Telling the World and Our Users What We Have - Richard Wallis
This document summarizes a presentation by Richard Wallis on discovery and discoverability. It introduces Schema.org as a vocabulary for structured data on the web and its use by major organizations like Google, OCLC, and the Library of Congress. It discusses motivations for sharing bibliographic data on the web using Schema.org, including connecting library data and reaching users. Key initiatives are summarized, such as the Schema Bib Extend community group, BiblioGraph.net extension vocabulary, and the bib.schema.org hosted extension.
This document summarizes Richard Wallis and his work. Richard Wallis is an independent consultant and founder of Data Liberate. He currently works with OCLC and Google to develop schema standards. He chairs several W3C community groups focused on developing schemas for bibliographic data and archives data using Schema.org.
Richard Wallis, an OCLC Technology Evangelist, discusses how libraries can make their data more visible and connected on the web by publishing it as linked open data using common web vocabularies like Schema.org. Currently, library linked data exists in silos using different local vocabularies, making the data hard to discover and integrate. Adopting Schema.org could help library data reach the billions of web pages and domains that already use this general purpose vocabulary to describe things on the web.
The document discusses the Web of Data and linked data. It notes that while many libraries and institutions have published linked data, it remains isolated in "silos" using different vocabularies. The document promotes the use of Schema.org as a common vocabulary that has become a de facto standard for describing things on the web, and has the potential to help connect library linked data by providing a shared schema.
This document discusses using linked data in libraries. It notes that several national libraries have implemented linked data projects. Linked data allows for entity-based descriptions of things on the web using common vocabularies. This helps users more easily discover resources across institutional silos. The document advocates for libraries to publish their data as linked open data using common schemas, and transform records into interconnected web entities rather than standalone data. This enables new discovery experiences and ways for users to explore library collections on the web.
This document discusses how library data can be represented as entities in a knowledge graph rather than individual records. It describes how this approach can improve library workflows like discovery, cataloging and integration with the open web. Representing authors, works, subjects and other concepts as entities allows the data to be queried and displayed more intuitively. The relationships between entities can also be leveraged to update data more consistently across the graph.
Schema.org: What It Means For You and Your Library - Richard Wallis
This document summarizes a presentation about Schema.org given to the LITA Forum in Albuquerque, NM on November 7th, 2014. The presentation discussed what Schema.org is, the SchemaBibEx extension for bibliographic data, and examples of Schema.org being used. It also covered the challenges involved in mapping library metadata to Schema.org and proposals made by SchemaBibEx to address these challenges.
The Power of Sharing Linked Data - ELAG 2014 Workshop - Richard Wallis
Presentation to set the scene and stimulate discussion in the Workshop "The Power of Sharing Linked Data" at ELAG 2014 - Bath University, UK June 10/11 2014
4. Library data: stored as records
[Slide graphic: bibliographic record fields (title, edition, author, location, holding, date of publication, classification, publisher, source, ISBN) alongside the entity types they describe: person, place, object, concept, organization, work]
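The records-versus-entities contrast on the slide above can be made concrete: the same flat record fields, restructured as separate person, organization, and work entities connected by identifiers. The identifiers and data model below are illustrative, not any particular library's schema.

```python
# Hedged sketch: a flat catalogue record restructured as linked entities.
# Identifiers ("person:austen", etc.) and the dict-based store are illustrative.

flat_record = {
    "title": "Pride and Prejudice",
    "author": "Austen, Jane",
    "publisher": "T. Egerton",
}

entities = {
    "person:austen": {"type": "person", "name": flat_record["author"]},
    "org:egerton": {"type": "organization", "name": flat_record["publisher"]},
    "work:pp": {
        "type": "work",
        "title": flat_record["title"],
        "author": "person:austen",      # a link, not a repeated string
        "publisher": "org:egerton",
    },
}

# Following a link instead of re-parsing a text field:
author_name = entities[entities["work:pp"]["author"]]["name"]
print(author_name)  # Austen, Jane
```

In the record view, "Austen, Jane" is just a string repeated in every record; in the entity view it is one description that many works point at.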
8. What’s Happening for Libraries:
SchemaBibExtend
• W3C Activity
• Schema.org Based
http://www.w3.org/community/schemabibex/
9. WorldCat Entities
Works
• 197+ million Work descriptions and URIs
• Schema.org
• RDF Data formats – RDF/XML, Turtle, Triples, JSON-LD
• Links to WorldCat manifestations
• Links to Dewey, LCSH, LCNAF, VIAF, FAST
• Open Data license
• Released April 2014
14. Linked Data
• A technology
• Standards on the Web – RDF, URIs, vocabularies
• Identifying and linking resources on the Web
• An important, powerful enabling technology
• But only a technology… for the systems folks to worry about
• Real benefits flow from: an Entity Based Data Architecture powered by Linked Data
16. … beneficial effects for libraries
http://www.theeuropeanlibrary.org/tel4/newsitem/5350
“… more than 80 per cent of these visitors coming from search engines …”
20. Bibliographic Entities in the Web of Data
• Cataloging
• Integration with the web
• Cascading updates
• More options
• Intuitive searching
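One benefit named on this slide, cascading updates, follows directly from entity-based data: because every work references the author entity by identifier, correcting the entity once is reflected everywhere it is used. The data below is invented for illustration.

```python
# Sketch of the "cascading updates" benefit. Identifiers and names are
# illustrative, not drawn from WorldCat or any real dataset.

entities = {
    "person:1": {"name": "Austen, J."},
    "work:1": {"title": "Emma", "author": "person:1"},
    "work:2": {"title": "Persuasion", "author": "person:1"},
}

def display(work_id):
    """Render a work's title and author by following the entity link."""
    work = entities[work_id]
    return f'{work["title"]} / {entities[work["author"]]["name"]}'

before = display("work:1")
entities["person:1"]["name"] = "Austen, Jane, 1775-1817"   # one correction...
after = [display(w) for w in ("work:1", "work:2")]          # ...visible in every work
print(before)
print(after)
```

With record-based data the same correction would have to be re-applied to every record carrying the string; here it happens in one place.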
21. OCLC Entity Based Data Strategy (2012–2016)
2012 – VIAF, ISNI, FAST publish Linked Data; WorldCat.org Linked Data release using Schema.org
2013 – Internal agreement on data strategy; evangelism; research & design with Data Architecture Group; data mining of WorldCat resources
2014 – WorldCat Works released
2015 – Application integration (WorldCat Discovery, Analytics, Discovery API, Cataloging); more entities released (Person, Organization, Event, Concept)
2016 – New products, new services, continuing evangelism, continuing innovation
22. Yeah – but what can I do?
• Your resources in the Web of Data
- Contribute to WorldCat
• Start linking your resources
- WorldCat Work URIs in your data
• Register institution on the web
- Schema.org – Organization/Library
- WorldCat Registry
- Schema.org across your Website(s)
• Talk to system providers / projects
- Not just a Linked Data feature
- Exposing Entities in global context
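The "register institution on the web" step above amounts to publishing a Schema.org description of the library itself. A minimal sketch, with an invented name, URL, and address, might look like this:

```python
import json

# Hedged sketch of a Schema.org Library description that could be embedded
# in a site's pages as JSON-LD. All values below are invented for illustration.

library = {
    "@context": "http://schema.org",
    "@type": "Library",
    "name": "Example Town Public Library",
    "url": "http://library.example.org",
    "address": {
        "@type": "PostalAddress",
        "addressLocality": "Example Town",
        "addressCountry": "US",
    },
}

# In practice this JSON-LD would sit in a <script type="application/ld+json">
# tag so search engines can pick it up while crawling.
print(json.dumps(library, indent=2))
```

`Library` is a Schema.org subtype of `Organization`, so consumers that only understand the general type can still use the description.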
23. Explore. Share. Magnify.
Richard Wallis
Technology Evangelist
Richard.wallis@oclc.org
@rjw
Linked Data: from Library
Entities to the Web of Data
Editor's Notes
#6: But what if we started to think about the information in a different way?
#7: We, at OCLC, with our major data ingest and processing techniques – Big Data tech
Matching incoming data with what we have
Identifying the entities and associating their role attributes
Works – so far not very visible in libraries – important on the web