
NHM Data Portal: first steps toward the Graph-of-Life


Presented by Vince Smith at the Society for the Preservation of Natural History Collections (SPNHC) Conference in Berlin on 23 June 2016



  1. NHM Data Portal: first steps toward the Graph-of-Life
     Vince Smith, Ben Scott & Ed Baker
     Informatics & Digital Collections Group, NHM London
     SPNHC, Berlin, 23 June 2016
  2. NHM Collection
     Collection area      No. of objects   Type specimens   Physical register   Digital data
     Palaeontology         6,919,207           43,146         2,364,232            340,636
     Mineralogy              423,563              615           425,000            402,727
     Botany                5,863,000          172,750           127,200            645,222
     Entomology           33,753,257          612,796            57,197            255,000
     Zoology              27,501,350          325,000         1,986,000          1,160,216
     Library & archives    5,460,000                –                 –                  –
     TOTAL                79,920,377        1,154,307         4,959,629          2,803,801
     <3% of NHM specimens are digitised, & even fewer are ‘computable’
  3. Digital Science at the NHM
     Citizen science • Big, open, linked data • High-throughput digitisation • Data portal and tools • Text mining • Robotics
  4. Digital Science at the NHM
     Citizen science • Big, open, linked data • High-throughput digitisation • Data portal and tools • Text mining • Robotics
  5. NHM Digital Collections Access, pre-2015
     • Developed with the best of intentions, but…
     • 23 separate interfaces
     • Hard to find, cite, access and integrate
     • No maps, few images, slow, no statistics, no export, few updates, no authors, no citation mechanisms, no GBIF connection
  6. NHM Data Portal
     • Discovery of NHM collections & research data
     • Easy access & reuse to promote collaboration (website, API, R package, RDF & direct download)
     • 3.7m records, >1m images (plus sound, video & 3D)
     • Integrates with our collection management system (weekly) & DAM system (for images)
     • Traffic-light data quality indicators
     • Stable, citable (DataCite) identifiers on datasets & GUIDs on records to measure impact
     • Technically sustainable & scalable
     • Default open licensing (CC0, CC-BY, CC-BY-NC)
  7. CKAN – the technical foundation for the portal
     • Enterprise-grade, open-source data portal platform
     • Developed by the Open Knowledge Foundation
     • Used by 31 national governments, 74 regional authorities, academia & large commercial organisations
     • Key features:
       o Publish & find datasets
       o Store & manage large data
       o Robust API
       o Customise & extend
       o Sustainable
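As a sketch of what the "robust API" looks like in practice, the snippet below builds a request for CKAN's standard `datastore_search` action. The base URL matches the NHM portal, but the resource id is a placeholder, not a real identifier:

```python
from urllib.parse import urlencode

# CKAN instances expose a uniform Action API under /api/3/action/.
# data.nhm.ac.uk runs CKAN; "SPECIMENS-RESOURCE-ID" is a placeholder.
PORTAL = "https://data.nhm.ac.uk"

def datastore_search_url(resource_id, query, limit=10):
    """Build a CKAN datastore_search request URL (Action API v3)."""
    params = urlencode({"resource_id": resource_id, "q": query, "limit": limit})
    return f"{PORTAL}/api/3/action/datastore_search?{params}"

url = datastore_search_url("SPECIMENS-RESOURCE-ID", "Archaeopteryx", limit=5)
# Fetching `url` (e.g. with urllib.request) returns JSON whose
# result["records"] holds the matching rows.
```

The same action API underlies the portal's R integration and bulk download paths, which is what makes the "customise & extend" claim cheap to deliver.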
  8. Primary views of each NHM dataset: point map, grid map, heat map, statistical overview & filterable table
  9. Dataset & data record citation
     • DataCite DOIs on every dataset
     • Stable URI (UUID) on every record
     • Prior identifiers aliased & disambiguated
     • Citation encouraged with clear statements at dataset & record level
     • Allows us to track cited usage
     • Dynamic DOIs on subsets coming soon
     (Figure: example dataset DOI & specimen URI)
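A dataset-level citation statement can be generated directly from the DataCite metadata. Everything in the sketch below (author string, title, DOI) is illustrative, not a real NHM record:

```python
def cite_dataset(authors, year, title, doi):
    """Format a simple dataset citation string around its DataCite DOI."""
    return f"{authors} ({year}). {title}. https://doi.org/{doi}"

# Placeholder metadata and a deliberately fake DOI for illustration.
citation = cite_dataset("Natural History Museum", 2016,
                        "Collection specimens [Dataset]", "10.1234/example")
print(citation)
# Natural History Museum (2016). Collection specimens [Dataset]. https://doi.org/10.1234/example
```

Because the DOI resolves through doi.org, citations formatted this way remain trackable even if the portal's own URLs change.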
 10. Traffic-light data quality indicators (via the GBIF API): major errors / minor errors / no errors
     NB: similar services are offered by CRIA for Brazilian data
 11. Potential errors highlighted & “corrected”
 12. Assembly video (doi: 10.3897/zookeys.481.8788) with step-by-step instructions; the portal also supports deposition of other research datasets
 13. Easy addition of new datasets (rapid & semi-automated)
     1. Name the dataset
     2. Upload / link the data file
     3. Describe the data file
     4. Theme & tag
     5. Add additional resources
     6. Temporal coverage
     7. Geographic coverage
     8. Save & finish
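The first few steps above map directly onto the fields of CKAN's `package_create` action (an authorised POST to `/api/3/action/package_create`). The sketch below only builds such a payload, with placeholder names and URLs:

```python
import json

def new_dataset_payload(name, title, notes, tags, resource_url):
    """Assemble the JSON body for CKAN's package_create action."""
    return {
        "name": name,                         # 1. name the dataset (URL slug)
        "title": title,
        "notes": notes,                       # 3. describe the data
        "tags": [{"name": t} for t in tags],  # 4. theme & tag
        "resources": [{"url": resource_url,   # 2. upload / link the data file
                       "format": "CSV"}],
    }

# Hypothetical dataset for illustration.
payload = new_dataset_payload("louse-host", "Louse-host associations",
                              "Parasite-host records from NHM collections",
                              ["interactions"],
                              "https://example.org/louse-host.csv")
body = json.dumps(payload)  # POST this with an API key in the headers
```

Temporal and geographic coverage (steps 6–7) would ride along as extra metadata fields on the same payload.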
 14. Data access & feedback: extensive API, R integration, DwC-A downloads, RDF (Linked Open Data) & a link to the data curator team
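The DwC-A downloads mentioned above are Darwin Core Archives: zip files bundling a core data table (e.g. `occurrence.txt`) with a `meta.xml` describing its columns. A minimal sketch of reading one, using a tiny in-memory archive with made-up records:

```python
import csv, io, zipfile

# Build a toy Darwin Core Archive in memory (real archives also
# carry a meta.xml column mapping, omitted here for brevity).
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("occurrence.txt",
                "occurrenceID\tscientificName\n"
                "NHMUK:1\tEurythenes gryllus\n")

# Read the core table back out as dictionaries keyed by column name.
with zipfile.ZipFile(buf) as zf:
    with zf.open("occurrence.txt") as fh:
        rows = list(csv.DictReader(io.TextIOWrapper(fh, "utf-8"),
                                   delimiter="\t"))

print(rows[0]["scientificName"])  # Eurythenes gryllus
```

Because DwC-A is just zip + delimited text, consumers need no special tooling beyond a standard library.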
 15. Serving external data aggregators: GBIF, iDigBio, EOL, VertNet & CRIA
 16. Data visualisations driven by the API (live demos)
 17. 500,000,000 records downloaded (since Feb. 2015, excluding major aggregators)
 18. Data access & feedback: extensive API, R integration, DwC-A downloads, RDF (& Linked Open Data), link to the data curator team
 19. What does a 5-star data portal mean? Tim Berners-Lee, inventor of the Web and initiator of Linked Data, suggested a 5-star deployment scheme for Open Data…
 20. LOD gives us the means to connect our data (i.e. graph queries across distributed datasets)
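A toy illustration of such a graph query: two triple sets that share a value (all URIs and records below are invented) can be joined to answer a question neither dataset holds alone:

```python
# Two tiny "datasets" of (subject, predicate, object) triples.
# The shared scientific name is the join point between them.
specimens = [
    ("nhm:specimen/1", "dwc:scientificName", "Eurythenes gryllus"),
    ("nhm:specimen/1", "dwc:country", "Antarctica"),
]
interactions = [
    ("Eurythenes gryllus", "eats", "carrion"),
]

def query(triples, s=None, p=None, o=None):
    """Match triples against a pattern; None acts as a wildcard."""
    return [(ts, tp, to) for ts, tp, to in triples
            if s in (None, ts) and p in (None, tp) and o in (None, to)]

# "What do species we hold specimens of eat?" — a two-dataset join.
for _, _, name in query(specimens, p="dwc:scientificName"):
    for _, _, food in query(interactions, s=name, p="eats"):
        print(name, "eats", food)  # Eurythenes gryllus eats carrion
```

A SPARQL endpoint over published RDF does the same pattern-matching, but across datasets hosted by different institutions.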
 21. Example 1: “what data are we publishing?”
     • What proportion of our collections are accessible / digitised?
     • What biases exist in our digitised collections?
     • How much taxonomic redundancy exists in our collections?
     Useful for policy setting:
     – Planning digitisation strategies (why should we all be digitising the same taxa first?)
     – Identifying institutional collection strengths (outside our community these are often not known)
     – What is ‘unique’ in our collections (taxonomically, geospatially, temporally)
     – Disaster planning (how many institutions hold the same material?)
     (Figure: top 200 collection-holding institutions contributing specimen records to GBIF)
 22. What collections are held globally? Where are these specimens from?
     There are huge gaps and biases in what our collections contain and where those specimens come from.
     (Maps: top 200 collections, scaled by size; specimen country of origin, darker = more specimens)
 23. Our results are very incomplete, constrained by what we’ve digitised
     • Very small proportions of our collections are digitally accessible
     • We don’t publish the overall size of our collections in a machine-readable way
     (Chart: collection size vs proportion digitised for RBGE, RBGK, NHM, MNHN, RMCA & RBINS)
 24. Example 2: exploring ecological interactions
     • Specimen data is one dimension of our collections
     • We also need to know how organisms interact, e.g. predator-prey, pollinator-pollinated, host-parasite
     • Museums hold lots of this data
     NHM interaction data:
     • Louse-host (12,000+)
     • Helminth host-parasite (250,000+)
     • Also large datasets: Coleoptera feeding on dipterocarp seeds, butterfly host plants, British mammal-flea associations, bee flower pollinators, several parasitic wasp datasets, …
     • Increasingly published as RDF via the NHM Data Portal
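Published as RDF, a single host-parasite association boils down to a triple like the one below. The IRIs and the predicate name are placeholders, not the vocabulary the portal actually uses:

```python
def interaction_triple(subject_uri, predicate_uri, object_uri):
    """Serialise one interaction as an N-Triples line."""
    return f"<{subject_uri}> <{predicate_uri}> <{object_uri}> ."

# Hypothetical louse-host record; all IRIs are illustrative.
line = interaction_triple("http://example.org/taxon/louse-1",
                          "http://example.org/parasiteOf",
                          "http://example.org/taxon/host-1")
print(line)
```

One such line per association is all an aggregator needs to merge the NHM's interaction datasets into a larger graph.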
 25. Global Biotic Interactions (GloBI) database
     • By Jorrit Poelen & colleagues
     • Collates interaction datasets; currently >1.9M interactions
     • EOL pulls these into its species pages
     • The NHM Portal creates a combined dataset to feed GloBI
     • Produces Linked Open Data, from which beautiful visualisations can be created
 26. GloBI’s Interaction Browser
     • Predatory interactions for Eurythenes gryllus
     • Visualisations highlight the number, frequency & type of interaction
     https://blog.globalbioticinteractio…/antarctic-interactions-using-globis-interaction-browser/
 27. Create beautiful visualisations with custom R scripts and existing libraries (e.g., igraph, Reol, rgdal)
     …4/06/06/a-food-web-map-of-the-world/
 28. Conclusions
     • Data portals like the NHM Portal allow us to contribute and reflect our data through the lens of specialist aggregators
     • GBIF & GloBI are specialist aggregators serving LOD
     • LOD allows us to combine big datasets to address new questions:
       – Tracking interactions & distribution of disease vectors
       – Predicting crop pests via the distribution and interactions of pests of crop wild relatives
     Next steps
     • Continue Portal development & encourage institutional adoption
     • Consolidate NHM ecological interaction datasets
     • Publish the combined dataset on the NHM Data Portal
     • GloBI to harvest the dataset and publish Linked Open Data
     • Develop visualisations for key NHM datasets
 29. Acknowledgements
     Ben Scott – Portal Engineer & Architect
     Ed Baker – Data Researcher
     Laurence Livermore – Project Management
     Matt Woodburn – Data Architect
     Vince Smith – SRO / Coordinator