Connecting the Dots: Linking Digitized Collections Across Metadata Silos
Jeff Mixter and Titia van der Werf
OCLC Research
July 2, 2014
LIBER 2014
Introduction
• Projects such as Europeana and the Digital Public Library of America have
highlighted the importance of sharing metadata across silos
• While both of these projects have been successful in harvesting collections data,
they have had problems with rationalizing the data and forming a coherent
understanding of the aggregation
• To share data properly across silos and publish it more effectively on the Web, for both human and machine consumption, there needs to be a concerted effort to apply best practices and standards that are universally understood and consumed
Current Situation
• Organizations create digital collections and generate metadata in repository silos. This
metadata is generally:
• Not connecting the digitized items to their analogue sources
• Not connecting names (persons, organizations, places, etc.) to authority records, nor subject descriptions to controlled vocabularies
• Not connecting to related online items accessible elsewhere
• Aggregators harvest this metadata, which generally gets “dumbed down” in the process (see the sketch below this list):
• The University of Illinois OAI-PMH Data Provider Registry notes that 2,964 repositories use simple Dublin Core (dc); the next most common format is MARC21, at 545 repositories
• Even if Dublin Core extensions are used, they are often lost in the OAI-PMH harvesting process
• Aggregators usually ignore idiosyncratic use of metadata schemas and enforce the use of designated metadata fields
• Digital collection items are not very visible to search engines
• A recent JISC project determined “Only about 50% of items appeared on the first page of
Google results using the item name or title”
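To make the “dumbing down” concrete, here is a minimal sketch of how a rich repository description collapses into the flat strings of unqualified Dublin Core when it is exposed for OAI-PMH harvesting: authority links, roles, controlled vocabularies, and the connection to the analogue source all fall away. The record, field names, and URIs are invented for illustration.

```python
# Illustrative sketch only: how a rich repository record is flattened to
# simple (unqualified) Dublin Core for OAI-PMH harvesting.
# The record, field names, and URIs below are invented for this example.

rich_record = {
    "title": "Letter from Vincent van Gogh to Theo van Gogh",
    "creator": {
        "name": "Gogh, Vincent van",
        "authority_uri": "https://example.org/authority/person/123",  # hypothetical
        "role": "author",
    },
    "subject": {
        "label": "Painters -- Correspondence",
        "vocabulary": "LCSH",
    },
    "analogue_source": "https://example.org/archive/letter-42",  # hypothetical
}

def to_simple_dc(record):
    """Collapse the rich record into the flat strings of unqualified DC."""
    return {
        "dc:title": record["title"],
        "dc:creator": record["creator"]["name"],   # authority URI and role are lost
        "dc:subject": record["subject"]["label"],  # controlled vocabulary is lost
        # the link to the analogue source has no obvious element and is dropped
    }

print(to_simple_dc(rich_record))
```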
A case study: “a good example”
Search string: exposition organisée pour le centenaire des
"Fleurs du Mal"
& search on full-text string from document: "Eugène Crépet"
Search in:
1. BnF Catalogue (Library Catalogue)
2. Gallica (Repository)
3. WorldCat (Aggregator via DCG harvester)
4. TEL (Aggregator)
5. Europeana (Aggregator)
6. Google (Search Engine)
Observations
1. A lot of duplication of effort and waste of resources in
developing aggregator services within the same domain
2. A lot of missed opportunities to connect to related data
inside and outside one's own silo (at both the repository and aggregation levels)
3. Visibility/discoverability via SEO is a sign of digital maturity
4. Aggregators generally do not use the full-text indexes available from
repositories to enrich their search functionality
Problem Statements
1. How to share metadata and reduce costs?
2. How to make digital collections more interoperable
across data silos?
3. How to make digital collections more visible to search
engines?
Data sharing
• ‘Data sharing’ is a rather simple term and does not do justice to what it means in
today’s knowledge society
• What we want to do is:
1. Publish data on the Web in a format that can be consumed and indexed by
aggregators/web applications
2. Share data with other organizations with the goal of ‘connecting the dots’
3. This entails connecting points in your data to points in other organizations'
data: People, Places, Events, Organizations, Topics, etc.
4. Connecting data across silos will improve patrons' ability to
browse and navigate related data and items without having to run multiple
searches in multiple portals
A Knowledge Graph
• In essence what we want to build is a massive knowledge graph of data from digital
collections
[Diagram: a graph linking national library repositories: BnF (France), DNB (Germany), KB (Netherlands), BNE (Spain), BL (UK)]
A Knowledge Graph
• Better yet, we actually want to connect individual dots within and across data silos.
This is the essence of Linked Data
• This requires changes in how repository data is published
[Diagram: the same entity, Vincent van Gogh, described separately in each repository and linked across the silos]
Linked Data
• Linked Data is a way of publishing data on the Web in a format that can be easily
consumed and understood by both humans and machines. It relies on linking data
points together to form a complex graph of information
• Linked Data relies on identifiers called URIs
• Things NOT Strings!
• Linked Data can also be used to help connect data across silos and across
domains of practice (a minimal sketch follows below)
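As a minimal sketch of “things, not strings”: the snippet below describes a digitized item whose creator is identified by a URI rather than a bare name string, and links that URI to external authority identifiers. It uses the rdflib Python library as one possible toolkit; every URI shown is a placeholder, not a real record.

```python
# Minimal Linked Data sketch: identify entities with URIs ("things, not
# strings") and link a local description to external authority records.
# Uses rdflib as one possible toolkit; all URIs below are placeholders.

from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import OWL, RDF

SCHEMA = Namespace("http://schema.org/")

g = Graph()
g.bind("schema", SCHEMA)
g.bind("owl", OWL)

item = URIRef("https://repository.example.org/item/42")              # placeholder
painter = URIRef("https://repository.example.org/person/van-gogh")   # placeholder

g.add((item, RDF.type, SCHEMA.CreativeWork))
g.add((item, SCHEMA.name, Literal("Self-portrait (digitized image)")))
g.add((item, SCHEMA.creator, painter))            # a URI, not just the name string

g.add((painter, RDF.type, SCHEMA.Person))
g.add((painter, SCHEMA.name, Literal("Vincent van Gogh")))
# Cross-silo links to authority records (placeholder identifiers):
g.add((painter, OWL.sameAs, URIRef("http://viaf.org/viaf/0000000000")))
g.add((painter, OWL.sameAs, URIRef("https://authority.example.org/person/van-gogh")))

print(g.serialize(format="turtle"))
```

Because the creator is a URI, another repository describing a different van Gogh item can point at the same identifiers, and an aggregator can merge the two descriptions without fragile string matching.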
Schema.org
• Schema.org is a Linked Data vocabulary that is understood and indexed by search
engines
• It is widely used:
• It is used on 15% of web pages harvested by Google
• over 5 million web sites
• over 25 billion referenced entities
• Google Webmaster Tools can tell users how much structured data Google is seeing
and indexing
• WorldCat.org has 4.63 million unique structured data entities across 1.48 million
pages
• So why Schema.org?
• Discoverability on the web (see the markup sketch below)
• Interoperability with data outside of the library domain
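A common way to make an item's description visible to search engines is to embed a Schema.org block in its landing page, for example as JSON-LD. The sketch below is illustrative only: the item, URLs, and identifier are hypothetical, and microdata or RDFa embedding would work just as well.

```python
# Sketch: generate a Schema.org description as JSON-LD for embedding in a
# digitized item's landing page, so search engines can index it.
# The item, URLs, and identifier below are hypothetical.

import json

description = {
    "@context": "http://schema.org",
    "@type": "CreativeWork",
    "name": "Letter from Vincent van Gogh to Theo van Gogh",
    "creator": {
        "@type": "Person",
        "name": "Vincent van Gogh",
        "sameAs": "https://authority.example.org/person/van-gogh",  # placeholder
    },
    "url": "https://repository.example.org/item/42",
}

# Embed in the page inside a <script type="application/ld+json"> element:
snippet = (
    '<script type="application/ld+json">\n'
    + json.dumps(description, indent=2, ensure_ascii=False)
    + "\n</script>"
)
print(snippet)
```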
OCLC Projects
• In 2012, OCLC added Schema.org tags to WorldCat.org records, improving the way in
which library information is represented to search engines.
OCLC Projects
• In 2012, OCLC published VIAF data as Linked Open Data
• In 2013, OCLC developed a VIAF bot for Wikipedia
http://inkdroid.org/journal/2012/05/15/diving-into-viaf/comment-page-1/
OCLC Projects
• In April of this year (2014), OCLC released a beta version of its Works data as Linked Data,
marked up in Schema.org (197 million work descriptions); a retrieval sketch follows below
This points to the ‘manifestation’ in WorldCat.org
http://experiment.worldcat.org/entity/work/data/51196.html
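As a rough sketch of how a client might retrieve one of these Works descriptions: the request below assumes the experimental endpoint honours content negotiation for JSON-LD, which may have changed since the 2014 beta, so treat it as illustrative only.

```python
# Sketch: fetch a WorldCat Works entity description, asking for JSON-LD.
# Assumes the experimental endpoint supports content negotiation for
# application/ld+json; the service was a beta in 2014 and may have changed.

import requests

work_uri = "http://experiment.worldcat.org/entity/work/data/51196"

resp = requests.get(work_uri, headers={"Accept": "application/ld+json"})
resp.raise_for_status()

description = resp.json()
print(list(description.keys()))  # inspect the top-level JSON-LD structure
```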
Other OCLC Projects
• There is an exploratory project underway to take the Digital Collections Gateway
metadata and create more granular Linked Data descriptions (an illustrative sketch follows below)
• A USC collection was used as a test case
• Using the original metadata, rich descriptions of people, places, events, and items
were created
• As OCLC continues to apply the Schema.org vocabulary to items found in libraries,
archives, and museums, we have begun to create extension terms to supplement
shortcomings in Schema.org
• There is also a W3C Community Group, Schema Bib Extend, that proposes
additional terms to Schema.org for review and consideration
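To illustrate what “more granular” output could look like, the sketch below splits a single flat, Dublin Core-style gateway record into two linked entity descriptions and marks one property with an extension term. Everything here is invented for the example: the record, the URIs, and in particular the `ext:` namespace and `ext:depictedPlace` property are hypothetical, not actual OCLC or Schema Bib Extend terms.

```python
# Sketch: derive linked, entity-level descriptions from one flat
# Dublin Core-style record. The record, URIs, and the "ext:" extension
# namespace/property are invented for illustration.

flat_record = {
    "dc:title": "Portrait photograph of a student, 1923",
    "dc:creator": "Jones, A.",
    "dc:coverage": "Los Angeles (Calif.)",
}

person = {
    "@context": "http://schema.org",
    "@id": "https://example.org/entity/person/jones-a",   # minted local URI
    "@type": "Person",
    "name": flat_record["dc:creator"],
}

work = {
    "@context": {
        "@vocab": "http://schema.org/",
        "ext": "https://example.org/ext/",        # hypothetical extension namespace
    },
    "@id": "https://example.org/entity/work/portrait-1923",
    "@type": "CreativeWork",
    "name": flat_record["dc:title"],
    "creator": {"@id": person["@id"]},            # a link to the Person entity
    "ext:depictedPlace": flat_record["dc:coverage"],  # invented extension property
}

print(person)
print(work)
```

As the speaker notes below emphasize, such descriptions would be much easier to build well if the repository itself published Linked Data from its granular source metadata, so that the gateway could simply link to it.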
Speaker Notes
Set the framework for why there needs to be a change in how we share and publish repository data
Explain how we currently do things
The author names are linked to the French authority record.
The full text of this resource is seemingly not indexed in this catalogue.
Gallica does not link the author names to any authority record/related resources.
The full text of this resource is indexed in Gallica. Searching on “Eugène Crépet” gives 3 results extracted from the full text of this specific document, amongst other results.
Searching on the string “Eugène Crépet” yields 12 results, one of which (nr. 8) is from the full text of the document under scrutiny. There are three extracts from the full text containing this string.
The metadata coming into WorldCat via the Digital Collection Gateway is dumbed down and not enriched with links to authority records within or outside of WorldCat.
The full text of this resource is seemingly not indexed in WorldCat.
The authors are not linked to other resources within or outside TEL.
The full text of this resource is seemingly not indexed in TEL.
The author names are linked to other works WITHIN the Europeana aggregation – not to an authority record outside of Europeana (e.g. BnF authority record or VIAF).
The full text of this resource is seemingly not indexed in Europeana.
One would expect Gallica to be top ranked on this results page (with the direct link to the full-text resource), but the DCG-WorldCat record is at the top of the list, which shows that WorldCat is better at SEO. Gallica is still ranked second, which is not bad. TEL and Europeana are not visible.
This is reflective of how data sharing works today. Europeana harvests repository data in bulk uploads and then publishes it. They do some behind-the-scenes clean-up, but because the data is already simple Dublin Core, the effort is very difficult and rather futile.
What we actually want is to link the individual repositories together. Using a rich, granular, standard Web vocabulary, organizations will be able to publish their data without loss. The task of linking it to other repositories will still be difficult, but data experts will at least be able to work with very detailed source metadata.
The linking that goes on between individual data sets will actually be micro-linking. This is linking individual dots (metadata points) to other dots in other data sets.
URIs are what we are actually linking together
Brief overview of Linked Data and discussion of efforts already undertaken in Europe
Brief overview of Schema.org. I kept the discussion of Schema.org purposely brief so as not to give the impression that we are trying to push it on people; I am not sure whether that would come across as offensive to anyone.
Overview of OCLC Linked Data releases and then a more detailed discussion of the exploratory project that I am working on. Emphasis on just getting data published using Schema.org. The conclusion of this slide will be that the goal of developing better Digital Collection Gateway Linked Data records is to enable us to build better descriptions of these items. The job would be much easier if we could simply link to the Linked Data descriptions published by the original creator of the data (i.e. the repository). The repository has the benefit of being able to generate Linked Data from highly granular source metadata, whereas the Digital Collections Gateway has to rely on simple Dublin Core data.