The Semantic Web has been described as a ‘data commons’, or more usually as a Web of Data.
It is a problem for machines to extract meaning from documents: at present, the raw data is not really available to them.
Persistent URIs for names of things – HTTP URIs are names, not addresses. Provide information – properties and classes for a URI. More links.
Things are resources because someone created a URI to identify them, not because they have some particular properties in and of themselves. HTTP URIs provide a simple way to create globally unique names without centralized management; and URIs work not just as names but also as a means of accessing information about a resource over the Web.
In a data graph, there is no concept of roots (or a hierarchy). A graph consists of resources related to other resources, with no single resource having any particular intrinsic importance over another.
This subject – the archive itself – has a page (foaf:page being the property) with name ‘finding aid’. The ‘finding aid’ is the object of this statement, but is also itself a subject. A subject in an RDF document may also be referenced as an object of a property in another RDF statement.
We have four ‘things’ here: unit of description; repository; finding aid; EAD document. We have given the unit of description a number of properties. Other things can also have properties (this is simplified). These properties are indicated in the green boxes. They are also called predicates.
In hypertext web sites it is considered generally rather bad etiquette not to link to related external material. The value of your own information is very much a function of what it links to, as well as the inherent value of the information within the web page. So it is also in the Semantic Web. Remember, this is about machines linking – machines need identifiers; humans generally know when something is a place or when it is a person. BBC + DBPedia + GeoNames + Archives Hub + Copac + VIAF = the Web as an exploratory space.
Once you say that they are the same, the implication is that they share the same classes and properties.
An ontology defines a ‘knowledge domain’.
Encoded Archival Description (EAD) is an XML standard for encoding archival finding aids. The Metadata Object Description Schema (MODS) is an XML-based schema for a bibliographic element set that may be used for a variety of purposes, particularly library applications. ‘Things’ include concepts and abstractions as well as material objects. We wanted location – archives are physical things, so location is important. We also wanted event data, partly steered by the visualisation prototype, and ‘extent’ data – the number of boxes.
303 redirects and content negotiation, from ‘Cool URIs for the Semantic Web’.
Open Data Commons Public Domain Dedication; Creative Commons CC0 licence.
e.g. index terms may not always apply down the hierarchy of the description. We are pulling <repository> down into lower-level descriptions.
Linked Data and Locah, UKSG2011
How to Become a First Class Citizen of the Web<br />Linked Data and the LOCAH project<br />Jane Stevenson & Adrian Stevenson<br />
Remit<br />This session will give a brief overview of the concepts behind Linked Data and will explain how we are applying these ideas to archival and bibliographic data. <br />Archives Hub: merged catalogue of archival descriptions from 200 institutions across the UK<br />Copac: merged catalogue of bibliographic records from libraries across the UK<br />
The goal of Linked Data is to enable people to share structured data on the Web as easily as they can share documents today.<br />[The creation of] a space where people and organizations can post and consume data about anything. <br />Bizer/Cyganiak/Heath Linked Data Tutorial, linkeddata.org<br />
In essence, it marks a shift in thinking from publishing data in human-readable HTML documents to publishing it in machine-readable form. That means that machines can do a little more of the thinking work for us.<br />http://www.linkeddatatools.com/semantic-web-basics<br />
Linked Data encourages open data, open licences and reuse. <br />…but Linked Data does not have to be open.<br />
Core questions<br />Is it achievable?<br />Will it bring substantial benefits? <br />“It is the unexpected re-use of information which is the value added by the web”<br />
What is Linked Data?<br />4 ‘rules’ for the web of data:<br />Use URIs as names for things<br />Use HTTP URIs so that people can look up those names<br />When someone looks up a URI, provide useful information, using the standards (RDF, SPARQL)<br />Include links to other URIs, so that they can discover more things<br />http://www.w3.org/DesignIssues/LinkedData.html<br />
Giving Things identifiers<br />We can make statements about things and establish relationships by assigning identifiers to them. <br />Jane Stevenson = http://archiveshub.ac.uk/janefoaf.rdf<br />Manchester = http://dbpedia.org/resource/manchester<br />English = http://lexvo.org/id/iso639-3/eng<br />
URIs<br />Uniform Resource Identifiers (URIs) are identifiers for entities (people, places, subjects, records, institutions). <br />They identify resources, and ideally allow you to access representations of those resources.<br />Think not of locations, but of identifiers!<br />For Linked Data you use HTTP URIs<br />Jane Stevenson = http://archiveshub.ac.uk/janefoaf.rdf<br />Manchester = http://dbpedia.org/resource/manchester<br />English = http://lexvo.org/id/iso639-3/eng<br />
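The statements these identifiers make possible can be sketched as plain subject–predicate–object triples. The subject and object URIs below are the ones on the slide; the predicate URIs (foaf:based_near, dcterms:language) are illustrative choices, not part of the deck's model:

```python
# A minimal sketch of RDF-style statements as (subject, predicate, object)
# triples. Subject/object URIs come from the slide; the predicates are
# illustrative assumptions.

FOAF = "http://xmlns.com/foaf/0.1/"
DC = "http://purl.org/dc/terms/"

triples = [
    # "Jane Stevenson is based near Manchester"
    ("http://archiveshub.ac.uk/janefoaf.rdf",
     FOAF + "based_near",
     "http://dbpedia.org/resource/manchester"),
    # "Jane Stevenson's profile is in English"
    ("http://archiveshub.ac.uk/janefoaf.rdf",
     DC + "language",
     "http://lexvo.org/id/iso639-3/eng"),
]

def objects_of(subject, predicate):
    """Everything a given subject is related to by a given predicate."""
    return [o for s, p, o in triples if s == subject and p == predicate]

print(objects_of("http://archiveshub.ac.uk/janefoaf.rdf", FOAF + "based_near"))
```

Because every part of each statement is an HTTP URI, any other dataset can make further statements about exactly the same things.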
So...?<br />If something is identified, it can be linked to<br />We can then take items from one dataset and link them to items from other datasets<br />BBC<br />Copac<br />VIAF<br />DBPedia<br />GeoNames<br />Archives Hub<br />
The Linking benefits of Linked Data<br />BBC:Cranford<br />Copac:Cranford<br />VIAF:Dickens<br />DBPedia: Gaskell<br />Hub:Gaskell<br />Geonames:Manchester<br />DBPedia: Dickens<br />Hub:Dickens<br />
The Web of ‘Documents’<br />Global information space (for humans)<br />Document paradigm<br />Hyperlinks<br />Search engines index documents and infer relevance<br />Implicit relationships between documents<br />Lack of semantics<br />
The Web of Linked Data<br />Global data space (for humans and machines)<br />Making connections between entities across domains (people, books, films, music, genes, medicines, health, statistics...)<br />LD is not about searching for specific documents or visiting particular websites, it is about things - identifying and connecting them.<br />Closely aligned to the general architecture of the Web<br />
From one thing…to the same thing<br /><sameAs><br />http://dbpedia.org/resource/manchester<br />http://sws.geonames.org/2643123<br />http://data.archiveshub.ac.uk/id/concept/ncarules/manchester<br /> Are they the same? <br />
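Saying two URIs name the same thing (typically with owl:sameAs) lets a consumer pool everything said about either URI. A minimal sketch of that merging, using the three Manchester URIs from the slide (the grouping code is our illustration, not a LOCAH component):

```python
# Sketch: merging identifiers linked by sameAs statements into equivalence
# classes, so statements about any alias can be pooled. The URI pairs come
# from the slide; the union-find logic is illustrative.

from collections import defaultdict

same_as = [
    ("http://dbpedia.org/resource/manchester",
     "http://sws.geonames.org/2643123"),
    ("http://sws.geonames.org/2643123",
     "http://data.archiveshub.ac.uk/id/concept/ncarules/manchester"),
]

parent = {}                          # simple union-find over URIs

def find(u):
    parent.setdefault(u, u)
    while parent[u] != u:
        parent[u] = parent[parent[u]]   # path halving
        u = parent[u]
    return u

def union(a, b):
    parent[find(a)] = find(b)

for a, b in same_as:
    union(a, b)

# All three URIs end up naming one thing.
groups = defaultdict(set)
for u in list(parent):
    groups[find(u)].add(u)
print([len(g) for g in groups.values()])
```

Whether the DBPedia city, the GeoNames feature and the Hub concept really are interchangeable is exactly the "are they the same?" question the slide raises.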
Vocabularies & Ontologies<br />Vocabulary: set of terms<br />Ontology: organisation of terms – hierarchy, relationships<br />
Shared vocabularies<br />Problems of data integration: information exchange across independently designed systems<br />Two different databases: one for films, one for actors<br />To collaborate using their current databases, the owners of either site would have to agree on a common data format for sharing information, using a common film and actor unique ID scheme of their own invention. <br />
Need ‘film title’; ‘actor name’; ‘actor birthdate’, etc. to mean the same thing to each<br />Use the same vocabulary<br />Query both databases.<br />No need for transformations, mappings, contracts<br />
Vocabularies in Linked Data<br />Common vocabulary to describe the data, e.g. ‘film-title’ means the same thing<br />Adopt the same ontologies for expressing meaning<br />Use semantics to link data<br />Want to avoid transformation, mapping, contracts between data providers<br />
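The point of a shared vocabulary can be sketched in a few lines: if both databases use the same URI for "film title", they can be queried together with no transformation or mapping step. All the vocabulary URIs and data below are hypothetical examples:

```python
# Sketch: two independently produced datasets that use the same (hypothetical)
# vocabulary URIs can be joined directly, with no mapping or transformation.

FILM_TITLE = "http://example.org/vocab/film-title"     # assumed shared term
ACTOR_NAME = "http://example.org/vocab/actor-name"     # assumed shared term
APPEARS_IN = "http://example.org/vocab/appears-in"     # assumed shared term

films_db = [
    {FILM_TITLE: "Cranford"},
]
actors_db = [
    {ACTOR_NAME: "Judi Dench", APPEARS_IN: "Cranford"},
]

def cast_of(title):
    """Join across both databases on the shared notion of a film title."""
    if title not in {f[FILM_TITLE] for f in films_db}:
        return []
    return [a[ACTOR_NAME] for a in actors_db if a[APPEARS_IN] == title]

print(cast_of("Cranford"))
```

The join works because ‘film-title’ means the same thing to both parties; without the shared term, each pair of providers would need its own mapping contract.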
Ontologies<br />Many widely used ontologies<br />Use others as far as possible<br />Use your own where necessary<br />Dublin Core<br />Friend of a Friend (FOAF)<br />Simple Knowledge Organisation System (SKOS)<br />Bibo<br />Open Cyc<br />
Linked Data on the Hub & Copac<br />Linked Open Copac and Archives Hub: Locah<br />JISC funded project<br />August 2010 – July 2011<br />Mimas<br />UKOLN<br />Eduserv<br />
What is LOCAH doing?<br />Part 1: Exposing the Linked Data<br />Part 2: Creating a prototype visualisation<br />Part 3: Reporting on opportunities and barriers<br />
How are we exposing the Data?<br />Model our ‘things’ into RDF<br />Transform the existing data into RDF/XML <br />Enhance the data<br />Load the RDF/XML into a triple store<br />Create Linked Data Views<br />Document the process, opportunities and barriers on LOCAH Blog<br />
1. Modelling ‘things’ into RDF<br />Hub data in ‘Encoded Archival Description’ EAD XML form<br />Copac data in ‘Metadata Object Description Schema’ MODS XML form<br />Take a step back from the data format<br />Think about your ‘things’<br />What is EAD document “saying” about “things in the world”?<br />What questions do we want to answer about those “things”?<br />http://www.loc.gov/ead/ http://www.loc.gov/standards/mods/<br />
1. Modelling ‘things’ into RDF<br />Need to decide on patterns for URIs we generate<br />Following guidance from W3C ‘Cool URIs for the Semantic Web’ and UK Cabinet Office ‘Designing URI Sets for the UK Public Sector’<br />http://data.archiveshub.ac.uk/id/findingaid/gb1086skinner ‘thing’ URI<br /> … is HTTP 303 ‘See Other’ redirected to …<br />http://data.archiveshub.ac.uk/doc/findingaid/gb1086skinner document URI<br /> … which is then content negotiated to …<br />http://data.archiveshub.ac.uk/doc/findingaid/gb1086skinner.html<br />http://data.archiveshub.ac.uk/doc/findingaid/gb1086skinner.rdf<br />http://data.archiveshub.ac.uk/doc/findingaid/gb1086skinner.turtle<br />http://data.archiveshub.ac.uk/doc/findingaid/gb1086skinner.json<br />http://www.w3.org/TR/cooluris/<br />http://www.cabinetoffice.gov.uk/resource-library/designing-uri-sets-uk-public-sector<br />
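The redirect-and-negotiate pattern can be sketched as a pure function mapping a ‘thing’ URI and an Accept header to the steps a client sees. The /id/ → /doc/ substitution and the extension table are our reading of the pattern, not LOCAH's actual server code:

```python
# Sketch of the 'Cool URIs' pattern: a request for a 'thing' URI (/id/...)
# gets a 303 See Other to a document URI (/doc/...), which is then
# content-negotiated to a concrete representation. Illustration only.

EXTENSIONS = {
    "text/html": ".html",
    "application/rdf+xml": ".rdf",
    "text/turtle": ".turtle",
    "application/json": ".json",
}

def resolve(uri, accept="text/html"):
    """Return the (status, location) steps a client would observe."""
    steps = []
    if "/id/" in uri:
        doc = uri.replace("/id/", "/doc/", 1)
        steps.append((303, doc))        # See Other: thing -> document
        uri = doc
    ext = EXTENSIONS.get(accept, ".html")
    steps.append((200, uri + ext))      # negotiated representation
    return steps

steps = resolve("http://data.archiveshub.ac.uk/id/findingaid/gb1086skinner",
                accept="application/rdf+xml")
for status, location in steps:
    print(status, location)
```

Separating the thing URI from the document URI is what lets one identifier name the archive itself while still resolving to HTML for people and RDF for machines.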
1. Modelling ‘things’ into RDF<br />Using existing RDF vocabularies:<br />DC, SKOS, FOAF, BIBO, WGS84 Geo, Lexvo, ORE, LODE, Event and Time Ontologies<br />Define additional RDF terms where required,<br />hub:ArchivalResource<br />copac:BibliographicResource<br />hub:maintenanceAgency<br />copac:Creator<br />It can be hard to know where to look for vocabs and ontologies<br />Decide on licence – CC BY-NC 2.0, CC0, ODC PDD<br />
Feedback Requested!<br />We would like feedback on the model<br />Appreciate this will be easier when the data is available<br />Via blog <br />http://blogs.ukoln.ac.uk/locah/2010/09/28/model-a-first-cut/<br />http://blogs.ukoln.ac.uk/locah/2010/11/08/some-more-things-some-extensions-to-the-hub-model/<br />http://blogs.ukoln.ac.uk/locah/2010/10/07/modelling-copac-data/<br />Via email, twitter, in person<br />
2. Transforming into RDF/XML<br />Transform EAD and MODS to RDF/XML based on our models<br />Hub: created XSLT stylesheet and used the Saxon XSLT processor<br />http://saxon.sourceforge.net/<br />Saxon runs the XSLT against a set of EAD files and creates a set of RDF/XML files<br />Copac: created in-house Java transformation program<br />
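The transform step can be illustrated without XSLT: read an EAD-like fragment and emit RDF/XML statements about the finding aid. LOCAH's actual pipeline is an XSLT stylesheet run under Saxon; this stdlib sketch (with a toy EAD fragment and an assumed URI pattern) only shows the same XML-to-RDF idea:

```python
# Sketch of EAD -> RDF/XML: extract a couple of fields from a toy EAD-like
# fragment and serialise them as RDF/XML. Not the LOCAH stylesheet; the
# finding-aid URI pattern is taken from the earlier slide.

import xml.etree.ElementTree as ET

ead = ET.fromstring("""
<ead>
  <eadheader><eadid>gb1086skinner</eadid></eadheader>
  <archdesc><did><unittitle>Skinner Papers</unittitle></did></archdesc>
</ead>""")

RDF = "http://www.w3.org/1999/02/22-rdf-syntax-ns#"
DC = "http://purl.org/dc/terms/"
ET.register_namespace("rdf", RDF)
ET.register_namespace("dcterms", DC)

eadid = ead.findtext("eadheader/eadid")
title = ead.findtext("archdesc/did/unittitle")

rdf = ET.Element(f"{{{RDF}}}RDF")
desc = ET.SubElement(rdf, f"{{{RDF}}}Description", {
    f"{{{RDF}}}about": f"http://data.archiveshub.ac.uk/id/findingaid/{eadid}"
})
ET.SubElement(desc, f"{{{DC}}}title").text = title

out = ET.tostring(rdf, encoding="unicode")
print(out)
```

The real stylesheet of course handles far more elements; the point is that each EAD field becomes a statement about an identified ‘thing’ rather than markup inside a document.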
3. Enhancing our data<br />Language - lexvo.org<br />Time periods - reference.data.gov.uk<br />Geolocation - UK Postcodes URIs and Ordnance Survey URIs<br />Names - Virtual International Authority File<br />Matches and links widely-used authority files - http://viaf.org/<br />Names (and subjects) - DBPedia<br />Subjects - Library of Congress Subject Headings<br />
4. Load RDF/XML into triple store<br />Using the Talis Platform triple store<br />RDF/XML is HTTP POSTed<br />We’re using Pynappl<br />Python client for the Talis Platform<br />http://code.google.com/p/pynappl/<br />Store provides us with a SPARQL query interface<br />
5. Create Linked Data Views<br />Expose ‘bounded’ descriptions from the triple store over the Web<br />Make available as documents in both human-readable HTML and RDF formats (also JSON, Turtle, CSV)<br />Using Paget ‘Linked Data Publishing Framework’<br />http://code.google.com/p/paget/<br />PHP scripts query Sparql endpoint<br />
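A ‘bounded’ description can be sketched as pulling, from the full triple set, just the statements about one resource. The selection rule below (outbound triples plus inbound links) is one simple reading of "bounded"; the data is a toy example, and this is not Paget itself:

```python
# Sketch of a 'bounded description' view over a (toy) triple set: the
# statements whose subject is the requested resource, plus statements
# pointing at it. Illustrative only; Paget/LOCAH may bound differently.

triples = {
    ("hub:findingaid/gb1086skinner", "dcterms:title", "Skinner Papers"),
    ("hub:findingaid/gb1086skinner", "foaf:page", "hub:ead/gb1086skinner"),
    ("hub:repository/gb1086", "hub:providesAccessTo",
     "hub:findingaid/gb1086skinner"),
    ("hub:repository/gb1086", "dcterms:title", "Some Repository"),
}

def bounded_description(resource):
    """Triples where the resource is the subject, plus inbound links to it."""
    outbound = {t for t in triples if t[0] == resource}
    inbound = {t for t in triples if t[2] == resource}
    return outbound | inbound

view = bounded_description("hub:findingaid/gb1086skinner")
print(len(view))
```

Each such view is then rendered as HTML for people or serialised as RDF, JSON, Turtle or CSV for machines.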
Can I access the Locah Linked Data?<br />Will be releasing the Hub data very soon!<br />Copac data will follow approx 1 month later<br />Release will include Linked Data views, Sparql endpoint details, example queries and supporting documentation<br />
Reporting on opportunities and barriers<br />Locah Blog (tags: ‘opportunities’ ‘barriers’)<br />Feed into #JiscEXPO programme evidence gathering<br />More at:<br />http://blogs.ukoln.ac.uk/locah/2010/09/22/creating-linked-data-more-reflections-from-the-coal-face/<br />http://blogs.ukoln.ac.uk/locah/2010/12/01/assessing-linked-data<br />
Creating the Visualisation Prototype<br />Based on researcher use cases<br />Data queried from Sparql endpoint<br />Use tools such as Simile, Many Eyes, Google Charts<br />For first Hub visualisation using Timemap – <br />Googlemaps and Simile<br />http://code.google.com/p/timemap/<br />
Visualisation Prototype<br />Using Timemap – <br />Googlemaps and Simile<br />http://code.google.com/p/timemap/<br />Early stages with this<br />Will give location and ‘extent’ of archive.<br />Will link through to Archives Hub <br />
Sir Ernest Henry Shackleton<br />http://archiveshub.ac.uk/data/gb15sirernesthenryshackleton<br />Archives related to Shackleton:<br />VIAF URL: http://viaf.org/viaf/12338195/<br />Books related to Shackleton: <br />Biographical History:<br />Ernest Henry Shackleton was born on 15 February 1874 in Kilkea, Ireland, one of six children of Anglo-Irish parents. The family moved from their farm to Dublin, where his father, Henry studied medicine. On qualifying in 1884, Henry took up a practice in south London, and between 1887 and 1890, Ernest was educated at Dulwich College. On leaving school, he entered the merchant service, serving in the square-rigged ship Hoghton Tower until 1894 when he transferred to tramp steamers. In 1896, he qualified as first mate, and two years later, was certified as master, joining the Union Castle line in 1899. [more]<br />
The learning process<br />Model the data, not the description<br />The description is one of the entities<br />Understand the importance of URIs<br />Think about your world before others<br /> …but external links are important<br />Try to get to grips with terminology<br />
Names<br />6947115KNAPPF<br />F Knapp associated with record 6947115<br />/id/agent/6947115KNAPPF<br /><copac:isCreatorOf rdf:resource="http://data.copac.ac.uk/id/mods/6947115"/><br />
Index terms (names, subjects, places)<br />‘AssociatedWith’ as the relationship<br />Benefits of structured index terms<br />Use /person/ and /organisation/ in the URI<br />Distinguish /person/pilkington (the person) from /organisation/pilkington<br />Distinguish place/reading/ and subject/reading/<br />
Problems with source data<br />EAD very permissive: whole range of finding aids<br />Copac more consistent but still wide variety<br />Hub EAD: We limited the tags we worked with<br />Large files (around 5Mb) tend to need splitting up<br />
Duplication of data<br />“So statements which relate things in the two documents must be repeated in each. This clearly is against the first rule of data storage: don't store the same data in two different places: you will have problems keeping it consistent.” (Tim Berners-Lee, www.w3.org/designissues/linkeddata.html)<br />
Archival Inheritance<br />“Do not repeat information at a lower level of description that has already been given at a higher level.” ISAD(G)<br />Many elements do not apply to ‘child’ descriptions<br />A simple rule of inheritance is not always appropriate<br />Linked Data does assert hierarchical relationships, but there is no requirement to follow these links<br />
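The inheritance question can be sketched as a lookup that walks up the levels of description: because ISAD(G) forbids repeating higher-level information, a consumer resolving a property for an item may need to climb to its series or fonds. The hierarchy and property names below are toy examples, not the Hub model:

```python
# Sketch of resolving an inherited property by walking up the (toy) levels
# of description. Illustrative only; the Hub model instead pulls some
# elements, such as repository, down into lower-level descriptions.

hierarchy = {                      # child level -> parent level
    "series1": "fonds",
    "item1": "series1",
}
properties = {                     # explicitly recorded properties
    "fonds": {"repository": "GB 1086"},
    "series1": {"title": "Correspondence"},
    "item1": {"title": "Letter, 1915"},
}

def lookup(unit, prop):
    """Resolve a property, climbing the hierarchy until a value is found."""
    while unit is not None:
        value = properties.get(unit, {}).get(prop)
        if value is not None:
            return value
        unit = hierarchy.get(unit)   # climb to the parent level
    return None

print(lookup("item1", "repository"))
```

Whether to make consumers do this walk, or to copy selected properties down when publishing, is exactly the trade-off the slide describes.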
Copac<br />Larger community: more potential vocabularies/documentation/support/confusion/inconsistencies<br />Merged catalogues: a unique scenario<br />‘Creator’ and ‘Others’ (editor, authors, illustrator)<br />Learning from Hub / Doing what is appropriate<br />Usually not right or wrong answers<br />
Copac model<br />Groundwork done with Archives Hub. Then had to decide what we wanted to say about the data<br />Challenges over what a ‘record’ is – ‘Bleak House’ from each contributor? or one merged record?<br />In many ways simpler than archival data; but also can decide to create a simpler model<br />
Copac specification<br />Hard to start, but proved crucial<br />Very iterative process between the spec and the RDF output<br />Important to establish the structure of the spec (we used tabs for each ‘entity’)<br />
Risks<br />Can you rely on data sources long-term? <br />Persistence of persistent URIs?<br />New technologies<br />Investment of time – unsure of benefits<br />Licensing issues<br />
Provenance<br />Track which data comes from our sources: URIs identify your entities<br />Linked Data tends towards disassembling<br />Copac/Hub as trusted sources…is DBPedia (for example) as reliable? <br />Contributors may want data to be identified<br />Issues around administrative/biographical history<br />Benefits of trust? <br />Users may want to know where data is from<br />
Licensing <br />Nature of Linked Data: each triple as a piece of data<br />‘Ownership’ of data? <br />Data often already freely available (M2M interfaces)<br />
Licensing<br />Public Domain Licences: simple, explicit, and permit widest possible reuse. Waive all rights to the data<br />BL, British National Bibliography uses public domain licence<br />Limit commercial uses? <br />Build in community norms: attribution, share alike - to reinforce desire for acknowledgement<br />Legal situation? <br />
Attribution and CC licence<br />Sections of this presentation adapted from materials created by other members of the LOCAH Project<br />This presentation is available under Creative Commons Non Commercial-Share Alike: http://creativecommons.org/licenses/by-nc/2.0/uk/<br />