
High Performance Cyberinfrastructure Enabling Data-Driven Science Supporting Stem Cell Research

Invited Presentation
Sanford Consortium for Regenerative Medicine
Salk Institute, La Jolla
Larry Smarr, Calit2 & Phil Papadopoulos, SDSC/Calit2
May 13, 2011


Speaker notes:
  • CAMERA is a production cluster with its own Force10 E1200 switch. It is connected to Quartzite and is labeled as the “CAMERA Force10 E1200”. We built CAMERA this way because of technology deployed successfully in Quartzite.
  • RAM + flash

    1. High Performance Cyberinfrastructure Enabling Data-Driven Science Supporting Stem Cell Research
       Invited Presentation, Sanford Consortium for Regenerative Medicine, Salk Institute, La Jolla
       Larry Smarr, Calit2 & Phil Papadopoulos, SDSC/Calit2
       May 13, 2011
    2. Academic Research OptIPlanet Collaboratory: A 10Gbps “End-to-End” Lightpath Cloud
       (Diagram: National LambdaRail, Campus Optical Switch, Data Repositories & Clusters, HPC, HD/4k Video Repositories, End User OptIPortal, 10G Lightpaths, HD/4k Live Video, Local or Remote Instruments)
    3. “Blueprint for the Digital University”: Report of the UCSD Research Cyberinfrastructure Design Team, April 2009
       • A Five-Year Process Begins Pilot Deployment This Year
       • No Data Bottlenecks: Design for Gigabit/s Data Flows
       research.ucsd.edu/documents/rcidt/RCIDTReportFinal2009.pdf
    4. UCSD Campus Investment in Fiber Enables Consolidation of Energy-Efficient Computing & Storage
       (Diagram: OptIPortal Tiled Display Wall, Campus Lab Cluster, Digital Data Collections, N x 10Gb/s, Triton – Petascale Data Analysis, Gordon – HPD System, Cluster Condo, WAN 10Gb: CENIC, NLR, I2, Scientific Instruments, DataOasis (Central) Storage, GreenLight Data Center)
       Source: Philip Papadopoulos, SDSC, UCSD
    5. Moving to Shared Enterprise Data Storage & Analysis Resources: SDSC Triton Resource & Calit2 GreenLight
       http://tritonresource.sdsc.edu
       (Diagram: UCSD Research Labs, Campus Research Network, Calit2 GreenLight, N x 10Gb/s)
       • SDSC Large Memory Nodes (x28): 256/512 GB/sys, 8 TB Total, 128 GB/sec, ~9 TF
       • SDSC Shared Resource Cluster (x256): 24 GB/Node, 6 TB Total, 256 GB/sec, ~20 TF
       • SDSC Data Oasis Large-Scale Storage: 2 PB, 50 GB/sec, 3000–6000 disks; Phase 0: 1/3 PB, 8 GB/s
       Source: Philip Papadopoulos, SDSC, UCSD
    6. NCMIR’s Integrated Infrastructure of Shared Resources
       (Diagram: Local SOM Infrastructure, Scientific Instruments, End User Workstations, Shared Infrastructure)
       Source: Steve Peltier, NCMIR
    7. The GreenLight Project: Instrumenting the Energy Cost of Computational Science
       • Focus on 5 Communities with At-Scale Computing Needs: Metagenomics, Ocean Observing, Microscopy, Bioinformatics, Digital Media
       • Measure, Monitor, & Web Publish Real-Time Sensor Outputs
         – Via Service-Oriented Architectures
         – Allow Researchers Anywhere to Study Computing Energy Cost
         – Enable Scientists to Explore Tactics for Maximizing Work/Watt
       • Develop Middleware that Automates Optimal Choice of Compute/RAM Power Strategies for Desired Greenness
       • Data Center for School of Medicine Illumina Next-Gen Sequencer Storage and Processing
       Source: Tom DeFanti, Calit2; GreenLight PI
    8. Next Generation Genome Sequencers Produce Large Data Sets
       Source: Chris Misleh, SOM
    9. The Growing Sequencing Data Load Runs over RCI Connecting GreenLight and Triton
       • Data from the sequencers is stored in the GreenLight SOM Data Center
         – The data center contains a Cisco Catalyst 6509 connected to the campus RCI at 2 x 10Gb.
         – Attached to the Cisco Catalyst are a 48 x 1Gb switch and an Arista 7148 switch with 48 x 10Gb ports.
         – The two Sun disk systems connect directly to the Arista switch for 10Gb connectivity.
       • With our current configuration of two Illumina GAIIx, one GAII, and one HiSeq 2000, we can produce a maximum of 3 TB of data per week.
       • Processing uses a combination of local compute nodes and the Triton resource at SDSC.
         – Triton comes in particularly handy when we need to run 30 seqmap/blat/blast jobs. On a standard desktop computer this analysis could take several weeks; on Triton, we can submit these jobs in parallel and complete the computation in a fraction of the time, typically within a day.
       • In the coming months we will be transitioning another lab to the 10Gb Arista switch. In total we will have 6 Sun disk systems connected at 10Gb speed and mounted via NFS directly on the Triton resource.
       • The new PacBio RS is scheduled to arrive in May and will also utilize the campus RCI in Leichtag and the SOM GreenLight Data Center.
       Source: Chris Misleh, SOM
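As a sanity check on the numbers in this slide, the peak weekly sequencer output (3 TB) is small relative to the data center's 2 x 10Gb RCI uplink. A rough Python estimate, assuming ideal, uncontended links and decimal terabytes:

```python
# Rough estimate: how long does the week's peak sequencer output (3 TB)
# take to move over the GreenLight data center's 2 x 10 Gbit/s uplink?
weekly_output_bytes = 3 * 10**12      # 3 TB/week from the sequencers
uplink_bits_per_sec = 2 * 10e9        # 2 x 10 Gbit/s Catalyst uplinks

seconds = weekly_output_bytes * 8 / uplink_bits_per_sec
print(f"{seconds / 60:.0f} minutes")  # a full week's output moves in ~20 minutes
```

In other words, at these rates the network is nowhere near the bottleneck; a week of sequencer output drains in minutes, which is why the same RCI links can also carry the NFS traffic to Triton.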
    10. Community Cyberinfrastructure for Advanced Microbial Ecology Research and Analysis
        http://camera.calit2.net/
    11. Calit2 Microbial Metagenomics Cluster: Next-Generation Optically Linked Science Data Server
        • 4000 Users from 90 Countries
        • 512 Processors, ~5 Teraflops
        • ~200 TB Sun X4500 Storage (10GbE-attached)
        • 1GbE and 10GbE Switched/Routed Core
        Source: Phil Papadopoulos, SDSC, Calit2
    12. Fully Integrated UCSD CI Manages the End-to-End Lifecycle of Massive Data from Instruments to Analysis to Archival
        UCSD CI Features Kepler Workflow Technologies
    13. NSF Funds a Data-Intensive Track 2 Supercomputer: SDSC’s Gordon, Coming Summer 2011
        • Data-Intensive Supercomputer Based on SSD Flash Memory and Virtual Shared Memory SW
          – Emphasizes MEM and IOPS over FLOPS
          – Supernode has Virtual Shared Memory: 2 TB RAM Aggregate, 8 TB SSD Aggregate
          – Total Machine = 32 Supernodes
          – 4 PB Disk Parallel File System, >100 GB/s I/O
        • System Designed to Accelerate Access to Massive Databases being Generated in Many Fields of Science, Engineering, Medicine, and Social Science
        Source: Mike Norman, Allan Snavely, SDSC
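The per-supernode figures on this slide imply machine-wide aggregates that are easy to tally; a small Python check, using only the numbers stated above:

```python
# Machine-wide totals implied by the Gordon slide:
# 32 supernodes, each aggregating 2 TB of RAM and 8 TB of SSD flash.
supernodes = 32
ram_total_tb = supernodes * 2   # 64 TB of RAM across the machine
ssd_total_tb = supernodes * 8   # 256 TB of flash across the machine
print(ram_total_tb, "TB RAM,", ssd_total_tb, "TB SSD")
```

That 256 TB flash tier sitting between RAM and the 4 PB disk system is what lets Gordon emphasize IOPS over FLOPS.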
    14. Data Mining Applications will Benefit from Gordon
        • De Novo Genome Assembly from Sequencer Reads & Analysis of Galaxies from Cosmological Simulations & Observations
          – Will Benefit from Large Shared Memory
        • Federations of Databases & Interaction Network Analysis for Drug Discovery, Social Science, Biology, Epidemiology, Etc.
          – Will Benefit from Low-Latency I/O from Flash
        Source: Mike Norman, SDSC
    15. IF Your Data is Remote, Your Network Better be “Fat”
        • 1 TB @ 10 Gbit/sec = ~20 Minutes
        • 1 TB @ 10 Mbit/sec = ~10 Days
        (Diagram: Data Oasis (100 GB/sec); OptIPuter Quartzite Research 10GbE Network to OptIPuter Partner Labs, 50 Gbit/s (6 GB/sec), >10 Gbit/s each; Campus Production Research Network to Campus Labs, 20 Gbit/s (2.5 GB/sec), 1 or 10 Gbit/s each)
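The back-of-envelope transfer times on this slide can be reproduced in a few lines of Python, assuming 1 TB = 10^12 bytes and an ideal link; the slide's ~20-minute figure at 10 Gbit/s leaves headroom over the ~13-minute raw number for protocol overhead and real-world throughput:

```python
def transfer_time_seconds(bytes_total, link_bits_per_sec):
    """Ideal transfer time: total bits divided by link rate."""
    return bytes_total * 8 / link_bits_per_sec

TB = 10**12  # 1 terabyte, decimal

# 1 TB over a 10 Gbit/s lightpath: ~13 min raw (slide rounds up to ~20 min)
fast = transfer_time_seconds(TB, 10e9)
print(f"10 Gbit/s: {fast / 60:.1f} minutes")

# 1 TB over a 10 Mbit/s commodity link: ~9.3 days (slide says ~10 days)
slow = transfer_time_seconds(TB, 10e6)
print(f"10 Mbit/s: {slow / 86400:.1f} days")
```

The three-orders-of-magnitude gap between the two links is the whole argument of the slide: with a "fat" lightpath, remote data is effectively local.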
    16. Calit2 Sunlight OptIPuter Exchange Contains Quartzite
        Maxine Brown, EVL, UIC, OptIPuter Project Manager
    17. Rapid Evolution of 10GbE Port Prices Makes Campus-Scale 10Gbps CI Affordable
        (Chart: 2005: $80K/port, Chiaro (60 max); 2007: $5K, Force10 (40 max); 2009: $500, Arista 48 ports, ~$1000 (300+ max); 2010: $400, Arista 48 ports)
        • Port Pricing is Falling
        • Density is Rising – Dramatically
        • Cost of 10GbE Approaching Cluster HPC Interconnects
        Source: Philip Papadopoulos, SDSC/Calit2
    18. 10G Switched Data Analysis Resource: SDSC’s Data Oasis – Scaled Performance
        • Phase 0: >8 GB/s Sustained Today
        • Phase I: >50 GB/sec for Lustre (May 2011)
        • Phase II: >100 GB/s (Feb 2012)
        (Diagram: OptIPuter, Co-Lo, UCSD RCI, CENIC/NLR, Trestles (100 TF), Dash, Gordon, Triton, and existing commodity storage (1/3 PB) connect through an Arista 7508 10G switch (384 10G-capable ports, a radical change) to the Data Oasis procurement (RFP): 2000 TB, >50 GB/s)
        Source: Philip Papadopoulos, SDSC/Calit2
    19. Data Oasis – 3 Different Types of Storage
    20. Campus Now Starting RCI Pilot (http://rci.ucsd.edu)
