Big process for big data

            Ian Foster
         foster@anl.gov
   NASA Goddard, February 27, 2013


The Computation Institute
= UChicago + Argonne
= Cross-disciplinary nexus
= Home of the Research Cloud

Will data kill genomics?




[Chart: ×10 growth in 6 years (processing power, per Moore's law) vs. ×10⁵ growth in 6 years (sequence data)]

Kahn, Science, 331 (6018): 728-729
Moore’s Law for X-Ray Sources




[Chart: X-ray source brightness has grown 18 orders of magnitude in 5 decades, versus 12 orders of magnitude in 6 decades for Moore's law]
1.2 PB of climate data
Delivered to 23,000 users

We have exceptional
infrastructure for the 1%




What about the 99%?



Big science. Small labs.
Need: A new way to deliver
research cyberinfrastructure

      Frictionless
      Affordable
      Sustainable
We asked ourselves:
What if the research workflow
could be managed as easily as…

…our pictures
…our e-mail
…home entertainment
What makes these services great?

    Great User Experience
                  +
       High performance
  (but invisible) infrastructure

We aspire (initially) to create a
  great user experience for
research data management

 What would a “dropbox for
   science” look like?
• Collect   • Annotate
• Move      • Publish
• Sync      • Search
• Share     • Backup
• Analyze   • Archive

BIG DATA
A common workflow…

[Diagram: data flows from a Staging Store to an Ingest Store, then through an Analysis Store to a Community Store, with a Registry tracking holdings and copies pushed to an Archive and a Mirror]
… with common challenges

Data movement, sync, and sharing
• Between facilities, archives, researchers
• Many files, large data volumes
• With security, reliability, performance

[Same staging/ingest/analysis/community/archive/mirror diagram as the previous slide]
• Collect    • Annotate
• Move       • Publish
• Sync       • Search
• Share      • Backup
• Analyze    • Archive

Capabilities delivered using a
Software-as-a-Service (SaaS) model
[Diagram: file transfer with Globus Online]

1. User initiates transfer request
2. Globus Online moves/syncs files from the Data Source to the Data Destination
3. Globus Online notifies user
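To make the three steps above concrete, here is a minimal sketch of driving such a transfer programmatically. It uses the present-day globus-sdk Python package (which postdates this 2013 talk) rather than the Globus Online interfaces shown on the slide, and the endpoint IDs, paths, and token are placeholders.

```python
import globus_sdk

# Placeholder token: in practice this comes from a Globus Auth login flow.
tc = globus_sdk.TransferClient(
    authorizer=globus_sdk.AccessTokenAuthorizer("TRANSFER_ACCESS_TOKEN")
)

SRC = "SOURCE-ENDPOINT-UUID"        # hypothetical data source endpoint
DST = "DESTINATION-ENDPOINT-UUID"   # hypothetical data destination endpoint

# Step 1: user initiates a transfer request.
tdata = globus_sdk.TransferData(tc, SRC, DST,
                                label="example transfer",
                                sync_level="checksum")
tdata.add_item("/project/raw/run042/", "/archive/run042/", recursive=True)
task = tc.submit_transfer(tdata)

# Step 2: the service moves/syncs the files, retrying transient failures.
# Step 3: the user is notified; here we simply poll until completion.
tc.task_wait(task["task_id"], timeout=3600, polling_interval=30)
print(tc.get_task(task["task_id"])["status"])
```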
[Diagram: in-place file sharing with Globus Online]

1. User A selects file(s) to share; selects user/group, sets share permissions
2. Globus Online tracks shared files; no need to move files to cloud storage!
3. User B logs in to Globus Online and accesses shared file
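A matching sketch of the sharing flow, again using the present-day globus-sdk rather than the 2013 interface shown here: User A grants User B read access on a shared endpoint via an access-control rule. The endpoint and identity IDs are placeholders.

```python
import globus_sdk

tc = globus_sdk.TransferClient(
    authorizer=globus_sdk.AccessTokenAuthorizer("TRANSFER_ACCESS_TOKEN")
)

SHARED_ENDPOINT = "SHARED-ENDPOINT-UUID"   # hypothetical shared endpoint
USER_B_IDENTITY = "USER-B-IDENTITY-UUID"   # collaborator's Globus identity

# User A shares a folder in place: an ACL rule grants read access to User B.
# No data is copied to cloud storage; the files stay on the source system.
rule = {
    "DATA_TYPE": "access",
    "principal_type": "identity",
    "principal": USER_B_IDENTITY,
    "path": "/shared/run042/",
    "permissions": "r",
}
result = tc.add_endpoint_acl_rule(SHARED_ENDPOINT, rule)
print(result["message"])
```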
Extreme ease of use

•   InCommon, OAuth, OpenID, X.509, …
•   Credential management
•   Group definition and management
•   Transfer management and optimization
•   Reliability via transfer retries
•   Web interface, REST API, command line
•   One-click “Globus Connect” install
•   5-minute Globus Connect Multi User install
Early adoption is encouraging




Early adoption is encouraging



 8,000 registered users; ~100 daily
      ~10 PB moved; ~1B files
10x (or better) performance vs. scp
         99.9% availability
      Entirely hosted on AWS


Delivering a great user
    experience relies on
high performance network
       infrastructure



Science DMZ
optimizes performance
What is a Science DMZ?
Three key components, all required:
• “Friction free” network path
   –   Highly capable network devices (wire-speed, deep queues)
   –   Virtual circuit connectivity option
   –   Security policy and enforcement specific to science workflows
   –   Located at or near site perimeter if possible
• Dedicated, high-performance Data Transfer Nodes (DTNs)
   – Hardware, operating system, libraries optimized for transfer
   – Optimized data transfer tools: Globus Online, GridFTP
• Performance measurement/test node
   – perfSONAR
Details at http://fasterdata.es.net/science-dmz/
Globus GridFTP architecture

[Diagram: GridFTP layered on Globus XIO, which can drive parallel TCP over long fat networks (LFNs), UDP or RDMA over dedicated links, and plain TCP over shared links]

Internal layered XIO architecture allows alternative network
and filesystem interfaces to be plugged in to the stack
GridFTP performance options

    •   TCP configuration
    •   Concurrency: Multiple flows per node
    •   Parallelism: Multiple nodes
    •   Pipelining of requests to support small files
    •   Multiple cores for integrity, encryption
    •   Alternative protocol selection*
    •   Use of circuits and multiple paths*

    Globus Online can configure these options
    based on what it knows about a transfer
* Experimental
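As a hedged illustration of the non-experimental options above, GridFTP clients such as globus-url-copy expose parallelism, concurrency, and pipelining as command-line flags; the sketch below simply wraps one such invocation from Python. The servers and paths are placeholders, and flag spellings should be checked against your installed client.

```python
import subprocess

# Hypothetical GridFTP URLs; replace with real servers and paths.
src = "gsiftp://dtn1.example.org/data/run042/"
dst = "gsiftp://dtn2.example.org/archive/run042/"

cmd = [
    "globus-url-copy",
    "-p", "4",     # parallelism: 4 TCP streams per file
    "-cc", "2",    # concurrency: 2 files in flight at once
    "-pp",         # pipelining: overlap requests for many small files
    "-vb",         # verbose: report instantaneous/average bandwidth
    "-r",          # recursive directory transfer
    src, dst,
]
subprocess.run(cmd, check=True)
```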
Exploiting multiple paths




   • Take advantage of multiple interfaces in multi-homed data
     transfer nodes
   • Use circuit as well as production IP link
   • Data will flow even while the circuit is being set up
   • Once circuit is set up, use both paths to improve throughput
Raj Kettimuthu, Ezra Kissel, Martin Swany, Jason Zurawski, Dan Gunter
Exploiting multiple paths
[Charts: transfer between NERSC and ANL, and transfer between UMich and Caltech; in both cases the multipath configuration outperforms a single path]

Default, commodity IP routes
+ Dedicated circuits
= Significant performance gains

Raj Kettimuthu, Ezra Kissel, Martin Swany, Jason Zurawski, Dan Gunter
Duration of runs, in seconds, over time.
Red: >10 TB transfer; green: >1 TB transfer.

[Scatter plot: transfer duration on a log scale from 0.1 s to 10⁷ s (roughly one second to more than a week), plotted against date through 2011-2012]
K. Heitmann (Argonne)
moves 22 TB of cosmology
data LANL → ANL at 5 Gb/s
B. Winjum (UCLA) moves
900K-file plasma physics
datasets UCLA → NERSC
Dan Kozak (Caltech)
replicates 1 PB LIGO
astronomy data for resilience

• Collect   • Annotate
• Move      • Publish
• Sync      • Search
• Share     • Backup
• Analyze   • Archive

BIG DATA
Many more capabilities planned …

Globus Online Research Data Management-as-a-Service
  SaaS: Ingest, Cataloging, Integration | Sharing, Collaboration, Annotation | Backup, Archival, Retrieval | …
  PaaS: Globus Integrate (Globus Nexus, Globus Connect)
A platform for integration




Catalog as a service

Approach
• Hosted user-defined catalogs
• Based on tag model: <subject, name, value>
• Optional schema constraints
• Integrated with other Globus services

Three REST APIs
• /query/: retrieve subjects
• /tags/: create, delete, retrieve tags
• /tagdef/: create, delete, retrieve tag definitions

Builds on USC Tagfiler project (C. Kesselman et al.)
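A minimal sketch of how a client might call these catalog APIs, assuming a hosted catalog at the base URL given in the speaker notes (datasets.globus.org/carl-catalog); the exact paths, payloads, and authentication are illustrative guesses, not the documented interface.

```python
import requests

# Hypothetical base URL, following the pattern from the speaker notes
# (datasets.globus.org/carl-catalog/query/propertyA=value1).
CATALOG = "http://datasets.globus.org/carl-catalog"

# /query/: retrieve subjects whose tag "propertyA" has value "value1".
resp = requests.get(f"{CATALOG}/query/propertyA=value1")
resp.raise_for_status()
for subject in resp.json():
    print(subject)

# /tags/: attach a tag to a subject (illustrative request body only;
# the real tag-creation payload may differ).
resp = requests.post(
    f"{CATALOG}/tags/",
    json={"subject": "dataset-001", "name": "propertyA", "value": "value1"},
)
resp.raise_for_status()
```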
Other early successes in
services for science…




Other innovative science
SaaS projects




Our vision for a 21st century
     cyberinfrastructure
To provide more capability for
more people at substantially
lower cost by creatively
aggregating (“cloud”) and
federating (“grid”) resources

“Science as a service”
Thank you to our sponsors!






Editor's Notes

  • #3 The Computation Institute (or CI): a joint initiative between UChicago and Argonne National Lab. A place where researchers from multiple disciplines come together and engage in research that is fundamentally enabled by computation. More recently we've been talking about it as the home of the research cloud, and I'll describe what we mean by that throughout this talk.
  • #4 Here are some of the areas where we have active projects. Focus on areas of particular interest to I2/ESnet, namely HEP, climate change, genomics (up and coming).
  • #5 And the reason is pretty obvious. This chart and others like it are becoming a cliché in next-gen sequencing and big data presentations, but the point is that while Moore's law translates to roughly a 10x increase in processor power, data volumes are growing many orders of magnitude faster. And meanwhile, other necessary resources [money, people] are staying pretty flat. So we have a crisis, and we hear that the magic bullet of "the cloud" is going to solve it. Well, as far as cost goes, clouds are helping but many issues remain.
  • #7 Another example is the Earth System Grid, which provides data and tools to over 20,000 climate scientists around the world. So what's notable about these examples? It's the combination of the amount of data being managed and the number of people that need access to that data. We heard Martin Leach tell us that the Broad Institute hit 10 PB of spinning disk last year, and that it's not a big deal. To a select few, these numbers are routine. And for the projects I just talked about, the IT infrastructure is in place. They have robust production solutions: built by substantial teams at great expense; sustained, multi-year efforts; application-specific solutions, built mostly on common/homogeneous technology platforms.
  • #8 The point is, the 1% of projects are in good shape
  • #9 But what about the 99% set? There are hundreds of thousands of small and medium labs around the world that are faced with similar data management challenges. They don't have the resources to deal with these challenges, so their research suffers, and over time many may become irrelevant. So at the CI we asked ourselves a question (many questions, actually) about how we can help avert this crisis, and one question that kind of sums up a lot of our thinking is…
  • #10 There are hundreds of thousands of small and medium labs around the world that are faced with similar data management challenges. They don't have the resources to deal with these challenges, so their research suffers, and over time many may become irrelevant. So at the CI we asked ourselves a question (many questions, actually) about how we can help avert this crisis, and one question that kind of sums up a lot of our thinking is…
  • #12 Lewis Carroll. End-to-end crisis.
  • #13 Can't just expect to throw more people and $$$ at the problem; already seeing the limits.
  • #16 Many in this room are probably users of Dropbox or similar services for keeping their files synced across multiple machines. Well, the scientific research equivalent is a little different.
  • #17 We figured it needs to allow a group of collaborating researchers to do many or all of these things with their data, and not just the 2 GB of PowerPoints or the 100 GB of family photos and videos, but the petabytes and exabytes of data that will soon be the norm for many.
  • #18 So how would such a dropbox for science be used? Let's look at a very typical scientific data workflow. Data is generated by some instrument (a sequencer at JGI or a light source like APS/ALS); since these instruments are in high demand, users have to get their data off the instrument to make way for the next user. So the data is typically moved from a staging area to some type of ingest store. Etcetera for analysis, sharing of results with collaborators, annotation with metadata for future search, backup/sync/archival, …
  • #20 Started with seemingly simple/mundane task of transferring files …etc.
  • #26 Many in this room are probably users of Dropbox or similar services for keeping their files synced across multiple machines. Well, the scientific research equivalent is a little different.
  • #29 Extensible Session Protocol (XSP): a session provides context for a data transfer (OSI stack layer 5): connections, forwarding, application context, etc. XSP provides mechanisms to configure dynamic network circuits. Ezra Kissel and Martin Swany have developed a Globus XIO driver for XSP.
  • #32 Preliminary GridFTP test results have demonstrated that making use of the default, commodity IP routes in conjunction with dedicated circuits provides a number of significant performance gains. In each case, our reservable circuit capacity was limited to 2 Gb/s because of capacity caps, although we note that due to bandwidth "scavenging" enabled in the circuit service, we frequently see average rates above the defined bandwidth limit.
  • #33 XIO-XSP is a Globus XIO driver. It provides an integrated XSP client for GridFTP and includes path provisioning and instrumentation for transfers over XSP sessions. XSPd (a daemon) implements the protocol frontend: it accepts on-demand reservation requests from clients, signals OSCARS, and monitors circuit status. OSCARS circuits are provisioned to end-hosts, with either bandwidth or circuits on demand.
  • #36 And when we spoke with IT folks at various research communities they insisted that some things were not up for negotiation
  • #37 And when we spoke with IT folks at various research communities they insisted that some things were not up for negotiation
  • #38 And when we spoke with IT folks at various research communities they insisted that some things were not up for negotiation
  • #39 We figured it needs to allow a group of collaborating researchers to do many or all of these things with their data, and not just the 2 GB of PowerPoints or the 100 GB of family photos and videos, but the petabytes and exabytes of data that will soon be the norm for many.
  • #40 We figured it needs to allow a group of collaborating researchers to do many or all of these things with their data, and not just the 2 GB of PowerPoints or the 100 GB of family photos and videos, but the petabytes and exabytes of data that will soon be the norm for many.
  • #43 http://datasets.globus.org/carl-catalog/query/propertyA=value1