Scott Edmunds: GigaScience - a journal or a database? Lessons learned from the Genomics Tsunami

Scott Edmunds talk at the HUPO congress in Geneva, September 6th 2011 on GigaScience - a journal or a database? Lessons learned from the Genomics Tsunami.

  • BGI (formerly known as Beijing Genomics Institute) was founded in 1999 and has since become the largest genomic organization in the world, with a focus on research and applications in healthcare, agriculture, conservation, and bio-energy. Our goal is to make leading-edge genomics highly accessible to the global research community by leveraging the industry's best technology, economies of scale, and expert bioinformatics resources. BGI Americas was established as an interface for customers and collaborators in North and South America.
  • Our facilities feature Sanger and next-generation sequencing technologies, providing the highest-throughput sequencing capacity in the world. Powered by 137 Illumina HiSeq 2000 instruments and 27 Applied Biosystems SOLiD™ 4 systems, we provide high-quality sequencing results with industry-leading turnaround time. As of December 2010, our sequencing capacity was 5 Tb of raw data per day, supported by several supercomputing centers with a total peak performance of up to 102 Tflops, 20 TB of memory, and 10 PB of storage. We provide stable and efficient resources to store and analyze the massive amounts of data generated by next-generation sequencing.
  • Helps reproducibility, but there is some debate over how much it can help with scaling.
  • Need to help authors and curators.

Scott Edmunds: GigaScience - a journal or a database? Lessons learned from the Genomics Tsunami (Presentation Transcript)

  • GigaScience: a Journal or a Database?
    (Lessons learned from the Genomics “Tsunami”)
    Scott Edmunds
    HUPO Congress 2011, Geneva
    www.gigasciencejournal.com
  • BGI Introduction
    Formerly known as Beijing Genomics Institute
    Founded in 1999
    Now the largest genomic organization in the world
    Goal
    Use genomics technology to impact society
    Make leading-edge genomics highly accessible to the global research community
  • Largest Sequencing Capacity in the World
    Sequencers
    137 Illumina/HiSeq 2000
    27 LifeTech/SOLiD 4
    16 AB/3730xl + 110 MegaBACEs
    2 Illumina iScan
    Data Production
    5.6 Tb / day
    > 1500X of human genome / day
    Multiple Supercomputing Centers
    157 Tflops
    20 TB Memory
    12.6 PB Storage
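
A quick back-of-envelope check of the throughput figures above (a sketch only; the ~3.2 Gb haploid human genome size is an assumption, not stated on the slide):

```python
# Back-of-envelope check of the "> 1500X of human genome / day" figure.
daily_output_bp = 5.6e12    # 5.6 Tb of raw sequence per day (from the slide)
human_genome_bp = 3.2e9     # approximate haploid human genome size (assumed)

coverage_per_day = daily_output_bp / human_genome_bp
print(f"~{coverage_per_day:.0f}x human genome equivalents per day")  # ~1750x
```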
  • Mass spectrometry at BGI
    QTRAP 5500, AB SCIEX
    Orbitrap velos, Thermo Scientific
    maXis Q-TOF, Bruker
    ultraflex, Bruker
  • Products and Services Offered to Collaborators
    Protein Profiling for any species
    (tying in with 1000 PARGP)
    Techniques:
    Quantitative analysis
    Post-translational modification
    Target Proteomics
    Metabolomics
  • “Trans-Omics”
    Objective to integrate data from:
    • Genomics
    • Transcriptomics
    • Proteomics
    • Metabolomics
  • BGI Proteomics Dept Focus:
    RAW MS data storage and analysis
    Upstream analysis
    “Large-scale” screening/quantitative analysis
    Working on:
    Automatic analysis pipelines/tools
    Industrial usage/standards
  • Lessons Learned:
    What went right?
  • Lessons Learned:
    1. having a cool project helps…
    Bill Clinton:
    “We are here to celebrate the completion of the first survey of the entire human genome. Without a doubt, this is the most important, most wondrous map ever produced by humankind.”
    “Today we are learning the language in which God created life.”
  • Lessons Learned:
    2. Reproducibility is important…
    Helped by stability of:
    Platforms
    Infrastructure
    Standards
    1st Gen
    2nd Gen
  • Lessons Learned:
    3. Sharing is important…
    V
  • Lessons Learned:
    3. Sharing is important…
    V
  • Lessons Learned:
    3. Sharing is important…
    Bermuda Accords 1996/1997/1998:
    Automatic release of sequence assemblies within 24 hours.
    Immediate publication of finished annotated sequences.
    Aim to make the entire sequence freely available in the public domain for both research and development in order to maximise benefits to society.
    Fort Lauderdale Agreement, 2003:
    Sequence traces from whole genome shotgun projects are to be deposited in a trace archive within one week of production.
    Whole genome assemblies are to be deposited in a public nucleotide sequence database as soon as possible after the assembled sequence has met a set of quality evaluation criteria.
    Toronto International data release workshop, 2009:
    The goal was to reaffirm and refine, where needed, the policies related to the early release of genomic data, and to extend, if possible, similar data release policies to other types of large biological datasets – whether from proteomics, biobanking or metabolite research.
  • Benefits of Data-sharing
    Sharing Detailed Research Data Is Associated with Increased Citation Rate.
    Piwowar HA, Day RS, Fridsma DB (2007) PLoS ONE 2(3): e308. doi:10.1371/journal.pone.0000308
    Every 10 datasets collected contribute to at least 4 papers in the following 3 years.
    Piwowar HA, Vision TJ, Whitlock MC (2011) Data archiving is a good investment. Nature 473(7347): 285. doi:10.1038/473285a
  • Rice vs Wheat: consequences of publicly available genome data.
  • The Ecoresponsive Genome of Daphnia pulex. Colbourne et al., Science, 4 February 2011:
    200 Mb genome, 30,907 genes
    Duplicated genes most responsive to ecological challenges
  • Daphnia Genome Consortium
    wFleabase: Mar 2006
    Genome release: July 2007
    Genome Published: Feb 2011
    >58 companion papers
    https://daphnia.cgb.indiana.edu/Publications
  • Problems?
    Flickr cc: opensourceway
  • Lessons Learned:
    4. Need to manage expectations…
    June 2000
    Thomas Michael Dexter (Wellcome trust):
    “Mapping the human genome has been compared with putting a man on the moon, but I believe it is more than that. This is the outstanding achievement not only of our lifetime, but in terms of human history”
  • Lessons Learned:
    4. Need to manage expectations…
    June 2010
  • Lessons Learned: 5. Data, data, data
    Sequencing cost ($ per Mbp)
    Moore’s Law
    ~100,000X
    Sequencing
    Source: E Lander/Broad
  • Lessons Learned: 5. Data, data, data
    Sequencing Output
    Data
    Storage
    Moore’s/Kryder’s Law
  • Lessons Learned: 5. Data, data, data
    Sequencing Output
    Data
    Publication
    Dissemination?
  • Lessons Learned: 5. Data, data, data
    Can we keep up?
    Flickr cc: opensourceway
  • Lessons Learned: 5. Data, data, data
    Do we have models for long term funding?
    Human Gene Mutation Database
    Kyoto Encyclopedia of Genes and Genomes
    ?
    Flickr cc: opensourceway
  • Lessons Learned: 5. Data, data, data
    Growing/widening user base.
    3rd Gen sequencers: “Democratizing sequencing”
    ?
  • Lessons Learned: 5. Data, data, data
    Curation, curation, curation?
    ?
    The long tail of new “big-data” producers?
  • Lessons Learned: 5. Data, data, data
    Are there now too many hurdles?
    ?
  • Lessons Learned: 5. Data, data, data
    Are there now too many hurdles?
    Technical: too large volumes
    too heterogeneous
    no home for many data types
    too time consuming
    Economic: too expensive, no long-term funding
    Cultural: inertia
    no incentives to share
    unaware of how
    ?
  • Potential solutions?
  • Potential solutions: Better handling of data, data, data
    Cloud?
  • Potential solutions: Better handling of data, data, data
    • What to save/what to throw away?
    • Better Compression?
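
As a minimal illustration of the compression point above, the sketch below gzip-compresses a FASTQ file and reports the size reduction. This is generic text compression only; sequence-aware schemes (e.g. reference-based compression) would be expected to do considerably better. The file name is hypothetical.

```python
# Minimal sketch: gzip-compress a FASTQ file and report the size reduction.
import gzip
import os
import shutil

def compress_fastq(path: str) -> str:
    """Write path + '.gz' with gzip and return the compressed file name."""
    gz_path = path + ".gz"
    with open(path, "rb") as src, gzip.open(gz_path, "wb") as dst:
        shutil.copyfileobj(src, dst)
    return gz_path

if __name__ == "__main__":
    fastq = "reads.fastq"                      # hypothetical input file
    gz = compress_fastq(fastq)
    ratio = os.path.getsize(gz) / os.path.getsize(fastq)
    print(f"compressed to {ratio:.0%} of the original size")
```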
  • Potential solutions: Better handling of metadata…
    Cloud solutions?
    Better tools for assessing data quality…
  • Potential Solutions:
    New incentives/credit
    Credit where credit is overdue:
    “One option would be to provide researchers who release data to public repositories with a means of accreditation.”
    “An ability to search the literature for all online papers that used a particular data set would enable appropriate attribution for those who share. “
    Nature Biotechnology 27, 579 (2009)
    Prepublication data sharing
    (Toronto International Data Release Workshop)
    “Data producers benefit from creating a citable reference, as it can later be used to reflect impact of the data sets.”
    Nature 461, 168-170 (2009)
    ?
  • Datacitation: Datacite and DOIs
    Digital Object Identifiers (DOIs) offer a solution
    • Most widely used identifier for scientific articles
    • Researchers, authors, publishers know how to use them
    • Put datasets on the same playing field as articles

    Dataset
    Yancheva et al (2007). Analyses on sediment of Lake Maar. PANGAEA.
    doi:10.1594/PANGAEA.587840
  • Datacitation: Datacite and DOIs
    >1 million DOIs since Dec 2009
    Central metadata repository to link with WoS/ISI
    - finally can track and credit use!
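
For example, a dataset DOI such as the PANGAEA one above can be turned into a formatted citation via DOI content negotiation, which DataCite-registered DOIs support. A minimal sketch, assuming the third-party requests package and network access:

```python
# Sketch: retrieve a formatted citation for a dataset DOI via DOI content
# negotiation (the DOI is the PANGAEA example from the slide).
import requests

def cite_dataset(doi: str, style: str = "apa") -> str:
    resp = requests.get(
        f"https://doi.org/{doi}",
        headers={"Accept": f"text/x-bibliography; style={style}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.text.strip()

print(cite_dataset("10.1594/PANGAEA.587840"))
```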
  • How can we combine these?
    Databases
    ?
    Journals
  • Now taking submissions…
    Large-Scale Data
    Journal/Database
    In conjunction with:
    Editor-in-Chief: Laurie Goodman, PhD
    Editor: Scott Edmunds, PhD
    Assistant Editor: Alexandra Basford, PhD
    www.gigasciencejournal.com
  • Criteria and Focus of Journal/Database
    • Reproducibility/Reuse
    • Utility/Usability
    • Standards/Searchability/Scale/Sharing
    • Data publishing/DOI
    www.gigasciencejournal.com
  • Data publishing/DOI
    • Data hosting will follow standard funding agency and community guidelines.
    • DOI assignment available for submitted data to allow ease of finding and citing datasets, as well as for citation tracking.
    • Datasets tracked by WOS/ISI allowing additional metrics/credit for use.
    www.gigasciencejournal.com
  • Reproducibility/Reuse
    • BGI Cloud Computing resources for handling and analyzing large-scale data.
    • Integrated tools to promote more widespread access, viewing, and analysis of data.
    • Encourage and aid use of workflow systems for methods (e.g. submission of Galaxy XML files).
    www.gigasciencejournal.com
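
As one possible illustration of sharing an analysis workflow alongside a submission, the sketch below uploads a locally exported Galaxy workflow to a Galaxy server using the third-party BioBlend client. BioBlend, the server URL, API key, and file name are assumptions for illustration, not part of the submission process described on the slide.

```python
# Hypothetical sketch: push a locally exported Galaxy workflow to a Galaxy
# server so reviewers and readers can re-run the analysis. Uses the
# third-party BioBlend client; URL, API key, and file name are placeholders.
from bioblend.galaxy import GalaxyInstance

gi = GalaxyInstance(url="https://usegalaxy.org", key="YOUR_API_KEY")

# Import a workflow previously exported from Galaxy (a ".ga" export file).
workflow = gi.workflows.import_workflow_from_local_path("variant_calling.ga")
print(workflow["id"], workflow["name"])
```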
  • Special Series/Hub for cloud-based tools
    • Technical notes: test tools in the BGI-Cloud.
    • Tools + Test Data (BGI or user) in one place.
    • Aids reproducibility.
    • Aids reviewers (free)
    • Aids authors: visibility (PubMed, etc.), hosting (included/free offers)
    –contact us: editorial@gigasciencejournal.com
    Oledoe, flickr cc
    www.gigasciencejournal.com
  • Standards/Searchability/Sharing
    • ISA-Tab compatibility to aid and promote best practice in metadata reporting.
    • All supporting data must be publicly available.
    • Ask for MIBBI compliance and use of reporting checklists.
    • Part of the Biosharing network.
    www.gigasciencejournal.com
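
ISA-Tab study and assay tables are plain tab-delimited files with a single header row, so they are straightforward to read programmatically. A minimal sketch, with a hypothetical file name and illustrative column names (not a GigaScience requirement):

```python
# Minimal sketch: read the sample table of an ISA-Tab study file.
import csv

def read_isatab_table(path: str) -> list[dict]:
    """Return the rows of a tab-delimited ISA-Tab table as dictionaries."""
    with open(path, newline="", encoding="utf-8") as fh:
        return list(csv.DictReader(fh, delimiter="\t"))

for row in read_isatab_table("s_study.txt"):     # hypothetical study file
    print(row.get("Sample Name"), row.get("Characteristics[organism]"))
```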
  • Our first DOI:
    To maximize its utility to the research community and aid those fighting the current epidemic, genomic data is released here into the public domain under a CC0 license. Until the publication of research papers on the assembly and whole-genome analysis of this isolate, we would ask you to cite this dataset as:
    Li, D; Xi, F; Zhao, M; Liang, Y; Chen, W; Cao, S; Xu, R; Wang, G; Wang, J; Zhang, Z; Li, Y; Cui, Y; Chang, C; Cui, C; Luo, Y; Qin, J; Li, S; Li, J; Peng, Y; Pu, F; Sun, Y; Chen, Y; Zong, Y; Ma, X; Yang, X; Cen, Z; Zhao, X; Chen, F; Yin, X; Song, Y; Rohde, H; Li, Y; Wang, J; Wang, J and the Escherichia coli O104:H4 TY-2482 isolate genome sequencing consortium (2011) Genomic data from Escherichia coli O104:H4 isolate TY-2482. BGI Shenzhen. doi:10.5524/100001 http://dx.doi.org/10.5524/100001
    To the extent possible under law, BGI Shenzhen has waived all copyright and related or neighboring rights to Genomic Data from the 2011 E. coli outbreak. This work is published from: China.
  • “The way that the genetic data of the 2011 E. coli strain were disseminated globally suggests a more effective approach for tackling public health problems. Both groups put their sequencing data on the Internet, so scientists the world over could immediately begin their own analysis of the bug's makeup. BGI scientists also are using Twitter to communicate their latest findings.”
    “German scientists and their colleagues at the Beijing Genomics Institute in China have been working on uncovering secrets of the outbreak. BGI scientists revised their draft genetic sequence of the E. coli strain and have been sharing their data with dozens of scientists around the world as a way to "crowdsource" this data. By publishing their data publicly and freely, these other scientists can have a look at the genetic structure, and try to sort it out for themselves.”
  • G10K Genomes Get DOI®s
    doi:10.5524/100004
  • We want your data!
    scott@gigasciencejournal.com
    editorial@gigasciencejournal.com
    @gigascience
    www.gigasciencejournal.com