
Bridging the Gap Between Small and Large Research Repositories


Presentation for OAI8 workshop, Geneva, 2013


  1. Bridging the Gap Between Small and Large Research Repositories (There is No Dumb Data!)
     Anita de Waard, VP Research Data Collaborations, a.dewaard@elsevier.com
  2. Background:
     • Background:
       – Low-temperature physics (Leiden & Moscow)
       – Joined Elsevier in 1988 as publisher in solid state physics
       – 1991: arXiv => publishers will go out of business very soon!
     • 1997–2013: Disruptive Technologies Director, focus on better representation of scientific knowledge:
       – Identifying key knowledge elements in articles (linguistics thesis)
       – Building claim-evidence networks (collaborations on e.g. CKUs!)
       – Helping build communities to accelerate the rate of change (Force11)
     • Per 1/1/2013, Research Data Collaborations:
       – Data is the evidence that the claims are built on!
       – Doug Engelbart: connected minds augment collective intelligence
       – Can a publisher play a useful role?
  3. Claimed Knowledge Update Usually Refers to Data (or the lack thereof!):
  4. There are many data preservation efforts:
     • There are many different research databases – both generic (Dryad, Dataverse, DataBank, Zenodo, etc.) and specific (NIF, IEDA, PDB)
     • There are many systems for creating/sharing workflows (Taverna, myExperiment, VisTrails, Workflow4Ever)
     • There are many e-lab notebooks (LabGuru, LabArchives, LaBlog, etc.)
     • There are scores of projects, committees, standards, bodies, grants, initiatives, and conferences for discussing and connecting all of this (KEfED, Pegasus, PROV, RDA, Science Gateways, CODATA, BRDI, EarthCube, etc.)
     • You can make a living out of this ;-)! (and many of us do…)
  5. …but this is what scientists do: Using antibodies and squishy bits, grad students experiment and enter details into their lab notebooks. The PI then tries to make sense of this and writes a paper. End of story.
  6. [Diagram: two disconnected Prepare → Observe → Analyze → Ponder → Communicate cycles] As a result of this practice, most of biology, for example, is quite insular.
  7. But also VERY complicated:
     • Interspecies variability: a specimen is not a species
     • Gene expression variability: knowing genes is not knowing how they are expressed
     • Microbiome: an animal is an ecosystem
     • Systems biology: a whole is more than the sum of its parts
     Reductionist science does not work for living systems!
  8. What if the research data was connected? [Diagram: observations shared across research cycles] Across labs and experiments: track reagents and how they are used.
  9. What if the research data was connected? [Diagram, continued] Compare the outcome of interactions with these entities.
  10. What if the research data was connected? [Diagram, continued] Build a 'virtual reagent spectrogram' by comparing how different entities interacted in different experiments.
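The 'virtual reagent spectrogram' idea above amounts to aggregating per-experiment observations by reagent, so one reagent's behaviour across labs can be compared side by side. A minimal sketch, with entirely hypothetical field names and data (the slide does not specify any actual schema or pipeline):

```python
from collections import defaultdict

# Hypothetical per-experiment observations: which reagent was used,
# where, and what outcome was measured (all values are illustrative).
observations = [
    {"lab": "A", "reagent": "anti-GFP", "outcome": 0.82},
    {"lab": "A", "reagent": "anti-GFP", "outcome": 0.79},
    {"lab": "B", "reagent": "anti-GFP", "outcome": 0.31},
    {"lab": "B", "reagent": "anti-RFP", "outcome": 0.88},
]

def reagent_spectrogram(obs):
    """Group outcomes by reagent across labs/experiments, so that
    one reagent's behaviour in different contexts can be compared."""
    profile = defaultdict(list)
    for o in obs:
        profile[o["reagent"]].append((o["lab"], o["outcome"]))
    return dict(profile)

# anti-GFP shows divergent outcomes across labs A and B,
# exactly the kind of signal a connected-data view would surface.
print(reagent_spectrogram(observations))
```

The point of the sketch is only that the comparison becomes trivial once observations from different labs share reagent identifiers, which is what connecting the data would provide.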
  11. Where the data goes now (> 50 M papers; 2 M scientists; 2 M papers/year):
     • The majority of data (90%?) is stored on local hard drives
     • Some data (8%?) is stored in large, generic data repositories – Dryad: 7,631 files; Dataverse: 0.6 M; DataCite: 1.5 M
     • A small portion of data (1–2%?) is stored in small, topic-focused data repositories – MiRB: 25 k; PetDB: 1.5 k; TAIR: 72.1 k; PDB: 88.3 k; SedDB: 0.6 k
  12. Key needs (against the same data landscape as the previous slide):
     1. INCREASE DATA DIGITISATION
     2. IMPROVE DATA CURATION
     3. IMPROVE REPOSITORY INTEROPERABILITY
     4. DEVELOP SUSTAINABLE MODELS
  13. Elsevier Research Data Services: Goals
     1. Increase Data Preservation: help increase the amount and quality of data preserved and shared
     2. Improve Data Use: help increase the value and usability of the data shared by increasing annotation, normalization, and provenance
     3. Enhance Interoperability: help improve interoperability between systems and data
     4. Develop Sustainable Models and Systems: help measure and deliver credit for shared data to the researchers, the institute, and the funding body, enabling more sustainable platforms.
  14. Elsevier RDS: Guiding Principles
     • In principle, all data stays open
     • Work with existing repositories – URLs, front end, etc. stay where they are
     • Collaboration is tailored to each partner's unique needs:
       – Aspects where collaboration is needed are discussed
       – A collaboration plan is drawn up using a Service-Level Agreement: agree on time, conditions, etc.
       – Working with domain-specific and institutional repositories
     • 2013: series of pilots to enable a feasibility study:
       – What are the key needs?
       – Can Elsevier play a role: skillsets, partnerships?
       – Is there a (transparent) business model for this?
  15. 1. Data Digitisation
     • Goal: enable access and reproduction
     • Issue: much of the research data is simply not digitized!
     • Example: the Magellan Observatory's paper records
     • Example: the CMU Electrophysiology Lab: lab notebooks are kept on paper
  16. 1. Data Digitisation
     • Example: marine geophysics suggests: convince instruments, not researchers!
     • Prize: the 2013 International Data Rescue Award in the Geosciences, organised by IEDA and Elsevier Research Data Services: a $5,000 award for the best data rescue attempt
  17. 1. Pilot: CMU Urban Legend App (submitted to Discovery Informatics 2013)
  18. 2. Data Curation
     • To allow reuse, data needs to be enriched: why and how was it created?
     • Issue: Dropbox and Figshare are the most popular tools
     • Example: moon rock data is stored as PDFs with tables from different papers
     • Pilot: lunar samples: curate the geochemistry to allow data use
  19. 2. Data Curation
     • Issue: it is hard to find the right antibodies in papers (NIF)
     • Pilot: Properly Annotated Data Sets (PADS) for biology: a shared 'cloud' of metadata describes the why, what, and how of the experimental procedure: descriptive metadata links the workflow/lab tools, the research data, and the paper
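The PADS idea on this slide – descriptive metadata tying together the workflow, the research data, and the paper – can be sketched as a simple record structure. All field names, URLs, and values below are illustrative assumptions, not the actual PADS schema:

```python
import json

# Illustrative PADS-style record: descriptive metadata capturing the
# why/what/how of an experiment, pointing to the workflow/lab tools,
# the research data, and the resulting paper (all links hypothetical).
pads_record = {
    "why": "Test antibody specificity in mouse cortex",          # purpose
    "what": {"organism": "Mus musculus", "reagent": "anti-X"},   # entities
    "how": {"protocol": "immunostaining", "instrument": "confocal"},
    "links": {
        "workflow": "https://example.org/workflow/123",
        "data": "https://example.org/dataset/456",
        "paper": "doi:10.xxxx/placeholder",
    },
}

# The record round-trips through JSON, so it could live in a shared
# metadata 'cloud' independently of where the data itself is stored.
print(json.dumps(pads_record, indent=2))
```

The design point is that the metadata is the hub: paper, data, and workflow each carry only a link to it, so any one of them can serve as the entry point.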
  20. 2. Data curation as seen by the researcher: the funding agency, the university, the collaborators, and the domain of study each point to a different home – a domain-specific, local, institutional, or generic data repository – AND THEY ALL WANT DIFFERENT METADATA!!!!
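The researcher's complaint on this slide is essentially a schema-crosswalk problem: one record, several target metadata formats. A minimal sketch, with field names and mappings invented purely for illustration (the Dublin Core-style names are one plausible institutional choice, not anything the slide prescribes):

```python
# One internal record, mapped to the differing field names that
# different repositories might require (all mappings are invented).
record = {
    "title": "Olfactory bulb recordings",
    "creator": "J. Doe",
    "date": "2013-06-01",
}

crosswalks = {
    "institutional": {"title": "dc:title", "creator": "dc:creator", "date": "dc:date"},
    "domain":        {"title": "experiment_name", "creator": "pi", "date": "recorded_on"},
}

def export(record, target):
    """Rename the record's fields according to the target repository's crosswalk."""
    mapping = crosswalks[target]
    return {mapping[k]: v for k, v in record.items() if k in mapping}

print(export(record, "domain"))
# → {'experiment_name': 'Olfactory bulb recordings', 'pi': 'J. Doe', 'recorded_on': '2013-06-01'}
```

Maintaining the record once and generating each repository's format from a crosswalk is what keeps the "they all want different metadata" burden off the researcher.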
  21. 3. Repository Interoperability
     • Pilot: find metabolomic compounds from mass spectrometry data: this needs a biological understanding of chemical results
     • Issue: a battle between domain-specific and 'domain-agnostic' repositories: who is the better data curator? (Example: geochemistry)
  22. 3. Repository Interoperability
     • Counterexample: the Merritt system: CDL builds the generic infrastructure; domain-specific content curators do the curation
     • Planned report with the UCL, UCSD, and MIT Libraries and CNI: best practices regarding the role of libraries?
       – How does a researcher decide where to place his/her content?
       – How does a library decide which digital data curation/preservation efforts to invest in?
  23. 4. Sustainable Data Models: Interdependency in a credit economy
     [Diagram: scholars request funding from funding agencies, which fund domain/task-specific groups (EarthCube, DataONE, etc.) and receive reports back; institutes issue policies and evaluations, and exchange credit and usage data with scholars; public–private partnerships??]
     • The university as a digital enterprise? Roles, responsibilities?
     • Voices in the diagram: "I don't have time for this nonsense!" – "Should everything be open? The government too??" – "Will the libraries take over our role? Can they do a decent job at it?"
  24. 4. Sustainable Data Models: (One Fool Can Ask) Many Questions!
     • Cost:
       – Who pays for hosting the data?
       – Who pays for data curation?
       – Who pays for long-term preservation?
       – Who pays for data integration?
     • Infrastructure:
       – Where does the metadata live?
       – What is the entry point to the metadata cloud – the paper, the data?
       – Who is responsible for fulfilling DMP requirements?
       – Who decides on the data storage requirements?
     • Usage:
       – Who wants to know where/what data is stored?
       – Who needs to know how data was accessed/used?
       – Who gets credit for data storage and data use?
       – Who needs/pays for credit-metric reporting?
  25. In Summary:
     1. Data digitisation:
       – Multiplicity of content; how to reach 'small data' creators?
       – Working with equipment could be the key to success?
       – Pilots with CMU, lunar samples, Data Rescue Award.
     2. Data curation:
       – Essential for reuse, but who does the work?
       – Each use case/user has its own metadata requirements
       – Pilots with metabolomics MS, Properly Annotated Data Sets.
     3. Repository integration:
       – Domain-specific vs. domain-agnostic?
       – Various domains have different requirements
       – Study and report on best practices for libraries.
     4. Sustainable models in a credit economy:
       – Cost, infrastructure, usage: who needs what?
       – Who pays for what?
       – Interviews/discussions with a number of institutions.
  26. Collaborations and discussions gratefully acknowledged:
     • CMU: Nathan Urban, Shreejoy Tripathy, Shawn Burton, Ed Hovy
     • UCSD: Phil Bourne, Brian Shoettlander, David Minor, Declan Fleming, Ilya Zaslavsky
     • NIF: Maryann Martone, Anita Bandrowski
     • MSU: Brian Bothner
     • OHSU: Melissa Haendel, Nicole Vasilevsky
     • California Digital Library: Carly Strasser, John Kunze, Stephen Abrams
     • Columbia/IEDA: Kerstin Lehnert, Leslie Hsu
     • CNI: Clifford Lynch
     • Harvard: Michael Kurtz, Chris Erdmann
     • MIT: Micah Altman
     • UVM: Mara Saurle
     Thank you!