Iain Hrynaszkiewicz - Research Integrity: Integrity of the published record

Iain Hrynaszkiewicz, Journal Publisher, BioMed Central
Usage Rights

CC Attribution-NonCommercial-NoDerivs License

  • The Creative Commons license: authors/copyright owners irrevocably grant to anyone the right to use, reproduce or disseminate the research article, in its entirety or in part, in perpetuity, provided that: no substantive errors are introduced; authorship attribution is correct; citation details are provided; bibliographic details are unchanged.
  • Customer – authors, editors, peer reviewers, institutions, funders
  • The electronic version of the article is authoritative. “Additional files”, not “Supplementary material”: additional files can be central to the reported findings of the paper.
  • Efficient online publication processes can facilitate dataset publication. Only a fraction of experimental datasets make it into the literature; many more have the potential to be useful but do not warrant a traditional publication. For certain standard types of data, appropriate databases exist (e.g. nucleotide sequences). But what if such databases do not exist, or if further description of the experimental context is required?
  • Publishers are not best placed to run repositories for long-term preservation of large datasets. Mirrors of publisher content are not able to accept arbitrary amounts of additional data, and long-term preservation presents a challenge with respect to continuity. Redundant international mirrors with independent governance and funding could help to reduce risk. BGI is capable of sequencing ~2,000 genomes per day (6 TB/day = 2 PB/year).
  • Bioinformaticians have been rapid adopters of cloud computing (as they were of the web). Cloud computing can reduce the barriers to reproducibility: publications can include, or refer to, the necessary datasets and the computational tools that can be fired up to carry out or reproduce the analysis. Large datasets can live in the cloud, so the analysis is taken to the data rather than vice versa. Deposited datasets are assigned DOIs, as are data papers.
  • Accession number systems exist in genomics, for example. Researchers sometimes deposit data as part of institutional or funder requirements, or for personal reasons.
  • Dryad is a mechanism for enforcement of the Joint Data Archiving Policy, a community requirement in ecology/evolutionary biology. As part of a publisher’s service provision to these scientific communities, we are implementing integration that enables accepted articles to be associated with datasets in Dryad. Dryad meets the criteria for permanent linking to articles by assigning DOIs to datasets.
  • Data preservation and re-use maximise the value of data, but restrictive licensing, IP constraints and similar barriers hinder effective re-use and sharing.
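The permanent linking described above rests on the fact that any DOI resolves through the standard doi.org proxy. A minimal sketch in Python, using a hypothetical Dryad-style DOI (the identifier and citation format here are illustrative, not taken from the presentation):

```python
def doi_to_url(doi: str) -> str:
    """Turn a DOI into a permanent link via the standard doi.org resolver."""
    return f"https://doi.org/{doi}"


def data_citation(authors: str, year: int, title: str, doi: str) -> str:
    """Build a simple data citation string that embeds the resolvable link.

    The layout is illustrative; real repositories publish their own
    recommended citation formats.
    """
    return f"{authors} ({year}). Data from: {title}. {doi_to_url(doi)}"


# Hypothetical example DOI (Dryad DOIs use the 10.5061/dryad.* prefix):
print(doi_to_url("10.5061/dryad.example"))
print(data_citation("Smith J, Jones K", 2011, "Example study", "10.5061/dryad.example"))
```

Because the DOI, not the repository URL, is what the article cites, the link survives even if the hosting infrastructure moves.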