ResourceSync: Web-based Resource Synchronization

Simeon Warner (Cornell)

Open Repositories 2012, Edinburgh, 11 July 2012
Core team – Todd Carpenter (NISO), Bernhard Haslhofer (Cornell
University), Martin Klein (Los Alamos National Laboratory), Nettie
Lagace (NISO), Carl Lagoze (Cornell University), Peter Murray
(NISO), Michael L. Nelson (Old Dominion University), Robert
Sanderson (Los Alamos National Laboratory), Herbert Van de
Sompel (Los Alamos National Laboratory), Simeon Warner (Cornell
University)

Team members – Richard Jones (JISC/Cottage Labs), Stuart
Lewis (JISC/Cottage Labs), Graham Klyne (JISC), Shlomo Sanders
(Ex Libris), Kevin Ford (LoC), Ed Summers (LoC), Jeff Young
(OCLC), David Rosenthal (Stanford)

Funding – The Sloan Foundation (core team) and the JISC (UK
participation)

Thanks for slides from – Stuart Lewis, Herbert Van de Sompel
Synchronize what?
•  Web resources – things with a URI that can be
   dereferenced and are cacheable (no dependency on
   underlying OS, technologies, etc.)

•  Small websites/repositories (a few resources) to
   large repositories/datasets/linked data collections
   (many millions of resources)

•  That change slowly (weeks/months) or quickly
   (seconds), and where latency needs may vary

•  Focus on needs of research communication and
   cultural heritage organizations, but aim for generality

Why?
… because lots of projects and services are
doing synchronization but have to roll their
own on a case-by-case basis!


•  Project team involved with projects that need this

•  Experience with OAI-PMH: widely used in repos but
   o    XML metadata only
   o    Web technology has moved on since 1999

•  Data / Metadata / Linked Data – Shared solution?
Use cases – the basics

[diagram]
More use cases

[diagram]
Out-of-scope (for now)
•  Bidirectional synchronization

•  Destination-defined selective synchronization (query)

•  Special understanding of complex objects

•  Bulk URI migration

•  Diffs (hooks?)

•  Intra-application event notification

•  Content tracking
Use case: DBpedia Live duplication
•  20M entries, updated at ~1/s on average though sporadically

•  Want low latency => need a push technology
Use case: arXiv mirroring
•  1M article versions, ~800/day created
   or updated at 8pm US eastern time

•  Metadata and full-text for each article

•  Accuracy important

•  Want low barrier for others to use
•  Looking for a more general solution than the current
   homebrew mirroring (running with minor
   modifications since 1994!) and occasional
   rsync (filesystem-layout specific, auth issues)
Terminology
•  Resource: an object to be synchronized, a web resource

•  Source: system with the original or master resources

•  Destination: system to which resources from the source will be
   copied and kept in synchronization

•  Pull: process to get information from source to destination
   initiated by the destination.

•  Push: process to get information from source to destination
   initiated by the source (and some subscription mechanism)

•  Metadata: information about resources such as URI,
   modification time, checksum, etc. (Not to be confused with
   resources that may themselves be metadata records)
Three basic needs
1.  Baseline synchronization – A destination must be
    able to perform an initial load or catch-up with a
    source
     -     avoid out-of-band setup; provide discovery

2.  Incremental synchronization – A destination must
     have some way to keep up-to-date with changes at a
     source
     -    subject to some latency; minimal: create/update/delete

3.  Audit – It should be possible to determine whether a
    destination is synchronized with a source
     -    subject to some latency; want something more efficient
          than per-resource HTTP HEAD requests
Baseline synchronization
Either

•  Get inventory of resources and then copy them
   one-by-one using HTTP GET
     o    simplest, inventory is list of resources plus perhaps metadata
     o    inventory format?

or

•  Get dump of resources and all necessary metadata
     o    more efficient: reduce number of round trips
     o    dump format?
Audit
Could do a new Baseline synchronization and compare …
but likely very inefficient! Optimize by adding:

•  Get inventory and compare with copy at destination
   o    use timestamp, digest or other metadata in inventory to
        check content (effort vs. accuracy tradeoff)
   o    latency depends on freshness of inventory and time to copy
        and check (easier to cope with if modification times included
        in metadata)
Incremental synchronization
Simplest method is Audit followed by copying all
new/updated resources, plus removal of deleted resources.
Optimize by adding:

•  Change Communication – Exchange ChangeSet
   listing only updates
      -  How to understand sequence, schedule?

•  Resource Transfer – Exchange dumps for
   ChangeSets or even diffs appropriate to resource type

A Change Memory is necessary to record sequence or
intermediate states.
Template to map approaches

[diagram]
Approaches and technologies

[Diagram: existing approaches and technologies mapped along a
push–pull spectrum, including DSNotify, OAI-PMH, rsync, crawling,
OAI-ORE, RDFsync, WebDAV Collection Synchronization, XMPP, Atom,
SWORD, AtomPub, Sitemaps, RSS, SPARQLpush, SDShare and
PubSubHubbub]
A framework based on Sitemaps
•  Modular framework allowing selective deployment
•  Sitemap is the most basic component of the
   framework
•  Reuse Sitemap form for changesets and notifications
   (same <url> element describing resource)
•  Selective synchronization via tagging
•  Discovery of capabilities via <atom:link> (sketch below)
•  Further extension possible
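
As a rough sketch of how discovery might look (the rel value,
capability document, and URIs here are assumptions for illustration,
not the draft's actual vocabulary), a Sitemap could advertise further
capabilities like this:

  <?xml version="1.0" encoding="UTF-8"?>
  <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
          xmlns:atom="http://www.w3.org/2005/Atom">
    <!-- hypothetical rel value and capability document URI -->
    <atom:link rel="resourcesync"
               href="http://example.com/capabilities.xml"/>
    <url>
      <loc>http://example.com/res1</loc>
      <lastmod>2012-07-01T04:00:00Z</lastmod>
    </url>
  </urlset>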



Baseline Sync with Inventory

[diagram]
Level zero è Publish a Sitemap
•  Periodic publication of an up-to-date Sitemap is
   base level implementation

•  Use Sitemap <url> as is with <loc> and
   <lastmod> as core elements for each Resource
   o    Introduce optional extra elements to convey fixity information,
        size, tags for selective synchronization, etc.

•  Extend to:
   o    Convey Source capabilities, discovery information, locations of
        dumps, locations of changesets, change memory, etc.
   o    Provide timestamp and/or additional metadata for the
        Sitemap
Two resources, with lastmod times
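
A reconstruction of the example shown (URIs and timestamps are
illustrative):

  <?xml version="1.0" encoding="UTF-8"?>
  <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
    <url>
      <loc>http://example.com/res1</loc>
      <lastmod>2012-07-01T04:00:00Z</lastmod>
    </url>
    <url>
      <loc>http://example.com/res2</loc>
      <lastmod>2012-07-02T13:00:00Z</lastmod>
    </url>
  </urlset>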
Two resources, with lastmod times, sizes and digests; the second
also with a tag
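
A reconstruction along the same lines; the rs: namespace URI and the
element names rs:size, rs:md5 and rs:tag are placeholders for
whatever the draft actually defines, and the digest values are made
up:

  <?xml version="1.0" encoding="UTF-8"?>
  <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
          xmlns:rs="http://www.openarchives.org/rs/terms/">
    <url>
      <loc>http://example.com/res1</loc>
      <lastmod>2012-07-01T04:00:00Z</lastmod>
      <rs:size>8876</rs:size>
      <rs:md5>1584abdf8ebdc9802ac0c6a7402c03b6</rs:md5>
    </url>
    <url>
      <loc>http://example.com/res2</loc>
      <lastmod>2012-07-02T13:00:00Z</lastmod>
      <rs:size>14599</rs:size>
      <rs:md5>1e0d5cb8ef6ba40c99b14c0237be735e</rs:md5>
      <rs:tag>images</rs:tag>   <!-- tag for selective sync -->
    </url>
  </urlset>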
Sitemap details & issues
•  Sitemap XML format designed to allow extension

•  ResourceSync additions:
   o    Additional core elements in ResourceSync namespace
        (digest, size, update information)
   o    Discovery information using <atom:link> elements

•  Use existing Sitemap Index scheme for large sets of
   resources (handles up to 2.5 billion resources before
   further extension is required; example below)

•  Provide mapping to RDF semantics but keep XML
   simple
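
The 2.5 billion figure follows from the standard Sitemap limits: an
index may list up to 50,000 Sitemaps, each holding up to 50,000 URLs
(50,000 × 50,000 = 2.5 billion). The index format itself is the
standard sitemaps.org one:

  <?xml version="1.0" encoding="UTF-8"?>
  <sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
    <sitemap>
      <loc>http://example.com/sitemap00001.xml</loc>
      <lastmod>2012-07-01T04:00:00Z</lastmod>
    </sitemap>
    <sitemap>
      <loc>http://example.com/sitemap00002.xml</loc>
      <lastmod>2012-07-01T04:00:00Z</lastmod>
    </sitemap>
  </sitemapindex>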


Incremental Sync with ChangeSet

[diagram]
ChangeSet
•  Reuse Sitemap format but include information only for change
   events over a certain period (sketch after this list):
    •  One <url> element per change event
    •  The <url> element uses <loc> and <lastmod> as is and
       is extended with:
        •  an event type to express create/update/delete
        •  an optional event id to provide a unique identifier for the
            event.
        •  can further extend to include fixity, tag info, Memento
            TimeGate link, special-purpose access-point, etc.
    •  Introduce minimal <urlset>-level extensions to support:
        •  Navigation between ChangeSets via <atom:link>
        •  Timestamping the ChangeSet
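
Putting these pieces together, a ChangeSet might look roughly like
this (a sketch only: the change-type element, its values, and the
atom:link rel value are illustrative, not the draft's actual names;
namespace URI assumed as before):

  <?xml version="1.0" encoding="UTF-8"?>
  <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
          xmlns:rs="http://www.openarchives.org/rs/terms/"
          xmlns:atom="http://www.w3.org/2005/Atom">
    <!-- navigation to the previous ChangeSet in the chain -->
    <atom:link rel="prev"
               href="http://example.com/changeset-2012-07-09.xml"/>
    <url>
      <loc>http://example.com/res3</loc>
      <lastmod>2012-07-10T08:00:02Z</lastmod>
      <rs:change>created</rs:change>   <!-- illustrative element name -->
    </url>
    <url>
      <loc>http://example.com/res2</loc>
      <lastmod>2012-07-10T08:00:05Z</lastmod>
      <rs:change>deleted</rs:change>
    </url>
  </urlset>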



Expt: arXiv – Inventory and ChangeSet
 •  Baseline synchronization and Audit (Inventory):
    o    2.3M resources (300GB content)
    o    46 sitemaps and 1 sitemapindex (50k resources/sitemap)
    o    sitemaps ~9.3MB each -> 430MB total uncompressed; 1.7MB
         each -> 78MB total if gzipped (<0.03% of content size)

 •  Incremental synchronization (ChangeSet):
    o    arXiv has updates daily at 8pm, so create a daily ChangeSet
    o    ~1k additions and 700 updates per day
    o    1 sitemap ~300kB or 20kB gzipped, can be generated and
         served statically
    o    keep chain of ChangeSets, link with <atom:link>
Incremental Sync with Push via XMPP

[diagram]
Change Communication: Push via XMPP
  •  Rapid notification of change events via XMPP
     PubSub node; one notification per event
  •  Each change event is conveyed using a Sitemap
     <url> element contained in a dedicated XMPP
     <item> wrapper (sketch after this list)
  •  Use same resource metadata (e.g. <loc>,
     <lastmod>) and same extensions as with
     changesets
  •  Multiple change events can be grouped into a single
     XMPP message (using <items>)
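
A minimal sketch of such a notification, assuming a standard XMPP
PubSub event envelope (XEP-0060); the node name, item id and
addresses are illustrative:

  <message from="pubsub.example.org" to="destination@example.net">
    <event xmlns="http://jabber.org/protocol/pubsub#event">
      <items node="resourcesync">
        <item id="change-0001">
          <!-- same <url> element and extensions as in a ChangeSet -->
          <url xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
            <loc>http://example.com/res2</loc>
            <lastmod>2012-07-11T09:30:00Z</lastmod>
          </url>
        </item>
      </items>
    </event>
  </message>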
Expt: LiveDBpedia with XMPP Push
•  LANL Research Library ran a significant-scale
   experiment in synchronization of the LiveDBpedia
   database from Los Alamos to two remote sites using
   XMPP to push change notifications
   o    Push for change communication only, content then obtained
        with HTTP GET

•  Destination sites were able to keep in close
   synchronization with the source
   o    maximum of <400 queued updates over 6 runs of 100k
        updates each, with bursty updates averaging ~1/s
   o    a small number of errors suggests audit will be useful in
        many real-life situations
Dumps
Optimization over making repeated HTTP GET requests
for multiple resources. Use for baseline and changeset.
Options:

1.  ZIP+Sitemap (sketch below)
  o    simple, and ZIP is very widely used
  o    consistent with the inventory/changeset format
  o    con: “custom”

2.  WARC
  o    designed for exactly this purpose
  o    con: little used outside web archiving community
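
A sketch of how the ZIP+Sitemap option could work (the manifest name,
internal layout, and the rs:path element are inventions for
illustration): the archive carries the resource bitstreams plus a
Sitemap that maps each <loc> to a path inside the ZIP.

  dump.zip
    manifest.xml          <- Sitemap acting as packing list
    resources/res1
    resources/res2

  manifest.xml:
  <?xml version="1.0" encoding="UTF-8"?>
  <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
          xmlns:rs="http://www.openarchives.org/rs/terms/">
    <url>
      <loc>http://example.com/res1</loc>
      <lastmod>2012-07-01T04:00:00Z</lastmod>
      <rs:path>resources/res1</rs:path>   <!-- invented for illustration -->
    </url>
    <url>
      <loc>http://example.com/res2</loc>
      <lastmod>2012-07-02T13:00:00Z</lastmod>
      <rs:path>resources/res2</rs:path>
    </url>
  </urlset>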
Sitemaps + XMPP + Dumps

[diagram]
Timeline and input
•  July 2012 – First draft of sitemap-based spec (SOON)

•  August 2012 – Publicize and solicit feedback (will be a
   NISO email list)

•  September 2012 – Revise, more experiments, more
   feedback

•  December 2012 – Finalize specification (?)



•  NISO webspace

•  Code on github: http://github.com/resync/simulator