“The Pacific Research Platform:
a Science-Driven Big-Data Freeway System.”
Opening Presentation
Pacific Research Platform Workshop
Calit2’s Qualcomm Institute
University of California, San Diego
October 14, 2015
Dr. Larry Smarr
Director, California Institute for Telecommunications and Information
Technology
Harry E. Gruber Professor,
Dept. of Computer Science and Engineering
Jacobs School of Engineering, UCSD
Vision: Creating a West Coast “Big Data Freeway”
Connected by CENIC/Pacific Wave to Internet2 & GLIF
Use Lightpaths to Connect
All Data Generators and Consumers,
Creating a “Big Data” Freeway
Integrated With High Performance Global Networks
“The Bisection Bandwidth of a Cluster Interconnect,
but Deployed on a 20-Campus Scale.”
This Vision Has Been Building for Over a Decade
NSF’s OptIPuter Project: Using Supernetworks
to Meet the Needs of Data-Intensive Researchers
OptIPortal – Termination Device for the OptIPuter Global Backplane
Calit2 (UCSD, UCI), SDSC, and UIC Leads—Larry Smarr PI
Univ. Partners: NCSA, USC, SDSU, NW, TA&M, UvA, SARA, KISTI, AIST
Industry: IBM, Sun, Telcordia, Chiaro, Calient, Glimmerglass, Lucent
2003-2009
$13,500,000
In August 2003, Jason Leigh and his students used RBUDP to blast data from NCSA to SDSC over the TeraGrid DTFnet, achieving 18 Gbps file transfer out of the available 20 Gbps.
LS Slide 2005
Quartzite: The Optical Core of the UCSD Campus-Scale Testbed --
Evaluating Packet Routing versus Lambda Switching
Goals by 2007:
>= 50 endpoints at 10 GigE
>= 32 Packet switched
>= 32 Switched wavelengths
>= 300 Connected endpoints
Approximately 0.5 Tbit/s Aggregate (≈ 50 endpoints × 10 GigE)
Arrive at the “Optical” Center of Campus
Switching will be a Hybrid Combination of: Packet, Lambda, Circuit --
OOO and Packet Switches Already in Place
Source: Phil Papadopoulos, SDSC, Calit2
Funded by NSF MRI Grant
Lucent, Glimmerglass, Chiaro Networks
LS Slide 2005
Integrated “OptIPlatform” Cyberinfrastructure System:
A 10Gbps Lightpath Cloud
National LambdaRail
Campus Optical Switch
Data Repositories & Clusters
HPC
HD/4k Video Images
HD/4k Video Cams
End User
OptIPortal
10G Lightpath
HD/4k Telepresence
Instruments
LS Slide 2009
So Why Don’t We Have a National
Big Data Cyberinfrastructure?
“Research is being stalled by ‘information overload,’ Mr. Bement said, because data from digital instruments are piling up far faster than researchers can study. In particular, he said, campus networks need to be improved. High-speed data lines crossing the nation are the equivalent of six-lane superhighways, he said. But networks at colleges and universities are not so capable. ‘Those massive conduits are reduced to two-lane roads at most college and university campuses,’ he said. Improving cyberinfrastructure, he said, ‘will transform the capabilities of campus-based scientists.’”
-- Arden Bement, Director of the National Science Foundation, May 2005
DOE ESnet’s Science DMZ: A Scalable Network
Design Model for Optimizing Science Data Transfers
• A Science DMZ integrates 4 key concepts into a unified whole:
– A network architecture designed for high-performance applications,
with the science network distinct from the general-purpose network
– The use of dedicated systems for data transfer
– Performance measurement and network testing systems that are
regularly used to characterize and troubleshoot the network
– Security policies and enforcement mechanisms that are tailored for
high performance science environments
http://fasterdata.es.net/science-dmz/
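The measurement component of the Science DMZ is typically realized with tools such as perfSONAR or iperf3. As a minimal sketch of what a routine throughput probe between two DTNs might look like (the hostname and test parameters below are placeholders, not PRP configuration, and an iperf3 server is assumed to be running on the remote end):

```python
#!/usr/bin/env python3
"""Minimal throughput probe between two Science DMZ data transfer nodes.

Illustrative sketch only: assumes iperf3 is installed on both ends and an
iperf3 server is already listening on the remote DTN. The hostname is a
placeholder, not an actual PRP node.
"""
import json
import subprocess

REMOTE_DTN = "dtn.example.edu"  # placeholder hostname


def measure_throughput(host: str, streams: int = 4, seconds: int = 10) -> float:
    """Run a parallel TCP test against `host` and return received Gbps."""
    result = subprocess.run(
        ["iperf3", "-c", host, "-P", str(streams), "-t", str(seconds), "-J"],
        capture_output=True, text=True, check=True,
    )
    report = json.loads(result.stdout)
    bits_per_second = report["end"]["sum_received"]["bits_per_second"]
    return bits_per_second / 1e9


if __name__ == "__main__":
    gbps = measure_throughput(REMOTE_DTN)
    print(f"{REMOTE_DTN}: {gbps:.2f} Gbps received")
```

In production, ESnet's Science DMZ materials point to perfSONAR for scheduled, archived measurements; the point here is only that testing is routine and automated rather than ad hoc.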
Science DMZ
Coined 2010
The DOE ESnet Science DMZ and the NSF “Campus Bridging” Taskforce Report Formed the Basis
for the NSF Campus Cyberinfrastructure Network Infrastructure and Engineering (CC-NIE) Program
Greg Bell, Director of ESnet, on Panel
Based on Community Input and on ESnet’s Science DMZ Concept,
NSF Has Funded Over 100 Campuses to Build Local Big Data Freeways
Map Legend:
Red – 2012 CC-NIE Awardees
Yellow – 2013 CC-NIE Awardees
Green – 2014 CC*IIE Awardees
Blue – 2015 CC*DNI Awardees
Purple – Multiple-Time Awardees
Source: NSF
Creating a “Big Data” Freeway on Campus:
NSF-Funded CC-NIE Grants Prism@UCSD and CHERuB
Prism@UCSD, Phil Papadopoulos, SDSC, Calit2, PI (2013-15)
CHERuB, Mike Norman, SDSC PI
CHERuB
The Pacific Research Platform Creates
a Regional End-to-End Science-Driven “Big Data Freeway System”
NSF CC*DNI $5M 10/2015-10/2020
PI: Larry Smarr, UC San Diego Calit2
Co-PIs:
• Camille Crittenden, UC Berkeley CITRIS
• Tom DeFanti, UC San Diego Calit2
• Philip Papadopoulos, UC San Diego SDSC
• Frank Wuerthwein, UC San Diego Physics and SDSC
Amy Walton, PRP NSF Program Officer, on Panel
CENIC/PW Backplane – Louis Fox, CEO CENIC, on Panel
FIONA – Flash I/O Network Appliance:
Linux PCs Optimized for Big Data
UCOP Rack-Mount Build:
FIONAs Are Science DMZ Data Transfer Nodes & Optical Network Termination Devices
UCSD CC-NIE Prism Award & UCOP
Phil Papadopoulos & Tom DeFanti
Joe Keefe & John Graham
Cost: $8,000 | $20,000
Intel Xeon Haswell Multicore: E5-1650 v3, 6-Core | 2x E5-2697 v3, 14-Core
RAM: 128 GB | 256 GB
SSD: SATA 3.8 TB | SATA 3.8 TB
Network Interface: 10/40GbE Mellanox | 2x 40GbE Chelsio + Mellanox
GPU: NVIDIA Tesla K80
RAID Drives: 0 to 112 TB (add ~$100/TB)
A UCSD Integrated Digital Infrastructure Project for Big Data Requirements
of Rob Knight’s Lab – PRP Does This on a Sub-National Scale
Diagram Components:
• FIONA: 12 Cores/GPU, 128 GB RAM, 3.5 TB SSD, 48 TB Disk, 10 Gbps NIC
• Knight Lab (10 Gbps)
• Gordon
• Prism@UCSD
• Data Oasis: 7.5 PB, 200 GB/s
• Knight 1024 Cluster in SDSC Co-Lo
• CHERuB (100 Gbps)
• Emperor & Other Vis Tools, 64-Mpixel Data Analysis Wall
• Link Rates: 120 Gbps, 40 Gbps
FIONAs as Uniform DTN End Points
(Map Legend: Existing DTNs, FIONA DTNs; As of October 2015)
UC FIONAs Funded by UCOP “Momentum” Grant – Tom Andriola, UCOP CIO, on Panel
Ten Week Sprint to Demonstrate the West Coast
Big Data Freeway System: PRPv0
Presented at CENIC 2015
March 9, 2015
FIONA DTNs Now Deployed to All UC Campuses
And Most PRP Sites
PRP First Application: Distributed IPython/Jupyter Notebooks:
Cross-Platform, Browser-Based Application Interleaves Code, Text, & Images
IJulia
IHaskell
IFSharp
IRuby
IGo
IScala
IMathics
Ialdor
LuaJIT/Torch
Lua Kernel
IRKernel (for the R language)
IErlang
IOCaml
IForth
IPerl
IPerl6
Ioctave
Calico Project – kernels implemented in Mono, including Java, IronPython, Boo, Logo, BASIC, and many others
IScilab
IMatlab
ICSharp
Bash
Clojure Kernel
Hy Kernel
Redis Kernel
jove, a kernel for io.js
IJavascript
Calysto Scheme
Calysto Processing
idl_kernel
Mochi Kernel
Lua (used in Splash)
Spark Kernel
Skulpt Python Kernel
MetaKernel Bash
MetaKernel Python
Brython Kernel
IVisual VPython Kernel
Source: John Graham, QI
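As a hedged illustration of how such a multi-kernel notebook host can be inspected, the standard jupyter_client API lists whatever kernels are installed; which of the kernels above appear depends on the particular FIONA, so the output shown in the comment is only an example:

```python
# List the Jupyter kernels installed on a notebook host.
# Sketch only: the installed set varies from machine to machine.
from jupyter_client.kernelspec import KernelSpecManager

specs = KernelSpecManager().find_kernel_specs()  # {kernel_name: resource_dir}
for name, path in sorted(specs.items()):
    print(f"{name:20s} {path}")
# Typical output might include python3, ir, julia-1.x, bash, ...
```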
PRP Has Deployed Powerful FIONA Servers at UCSD and UC Berkeley
to Create a UC-Jupyter Hub Backplane
FIONAs Have GPUs and
Can Spawn Jobs
to SDSC’s Comet
Using inCommon CILogon
Authenticator Module
for Jupyter.
Deep Learning Libraries
Have Been Installed
Source: John Graham, QI
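The slide names the InCommon CILogon authenticator and job spawning to SDSC's Comet. A minimal jupyterhub_config.py along those lines might look like the sketch below; the class names come from the public oauthenticator and batchspawner packages, but every option value, the partition name, and the callback URL are placeholder assumptions, not the actual PRP deployment:

```python
# jupyterhub_config.py -- illustrative sketch, not the PRP production configuration.
c = get_config()  # noqa: F821  (injected by JupyterHub when the config is loaded)

# Federated login via InCommon/CILogon.
from oauthenticator.cilogon import CILogonOAuthenticator
c.JupyterHub.authenticator_class = CILogonOAuthenticator
c.CILogonOAuthenticator.oauth_callback_url = "https://hub.example.edu/hub/oauth_callback"  # placeholder
c.CILogonOAuthenticator.client_id = "cilogon:/client_id/..."  # placeholder
c.CILogonOAuthenticator.client_secret = "..."                 # placeholder

# Spawn each user's notebook server as a batch job on an HPC resource (e.g., Comet).
c.JupyterHub.spawner_class = "batchspawner.SlurmSpawner"
c.SlurmSpawner.req_partition = "gpu"     # placeholder partition name
c.SlurmSpawner.req_runtime = "02:00:00"  # placeholder wall-clock limit
c.SlurmSpawner.req_memory = "16G"        # placeholder memory request
```

The design point the slide makes is that identity (CILogon), notebooks (JupyterHub on FIONAs), and compute (batch spawning to Comet) are separable layers; the sketch simply shows where each layer plugs into the hub configuration.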
Pacific Research Platform
Multi-Campus Science Driver Teams
• Particle Physics
• Astronomy and Astrophysics
– Telescope Surveys
– Galaxy Evolution
– Gravitational Wave Astronomy
• Biomedical
– Cancer Genomics Hub/Browser
– Microbiome and Integrative ‘Omics
– Integrative Structural Biology
• Earth Sciences
– Data Analysis and Simulation for Earthquakes and Natural Disasters
– Climate Modeling: NCAR/UCAR
– California/Nevada Regional Climate Data Analysis
– CO2 Subsurface Modeling
• Scalable Visualization, Virtual Reality, and Ultra-Resolution Video
Key Task for This Workshop:
Determine the Big Data Needs of These Teams
and Translate into
PRP Cyberinfrastructure Requirements
Science Teams Require High Bandwidth
Across Campus and Between Campuses and National Facilities
• Connecting Scientific Instrument Data Production to
Remote Campus Compute & Storage Clusters
• Providing Access to Remote Data Repositories
• Bringing Supercomputer Data to Local Users
• Enabling Remote Collaborations
• MORE?
Big Data Flows Add to Commodity Internet to Fully Utilize
CENIC’s 100G Campus Connection
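To make the bandwidth requirement concrete, a back-of-the-envelope calculation shows how transfer time scales with link speed; the dataset sizes below are illustrative examples, not figures from the science teams:

```python
# Rough transfer-time arithmetic: how long a dataset takes to move at
# various line rates, assuming the link can actually be filled.
# Dataset sizes and the 90% efficiency factor are illustrative assumptions.

def transfer_hours(terabytes: float, gbps: float, efficiency: float = 0.9) -> float:
    """Hours to move `terabytes` of data over a `gbps` link at the given efficiency."""
    bits = terabytes * 1e12 * 8
    return bits / (gbps * 1e9 * efficiency) / 3600


for tb in (1, 50, 500):            # e.g., an instrument run, a survey chunk, an archive
    for rate in (1, 10, 100):      # congested shared path vs. 10G DTN vs. CENIC 100G
        print(f"{tb:4d} TB at {rate:3d} Gbps ≈ {transfer_hours(tb, rate):7.1f} hours")
```

The gap between days on a congested general-purpose path and hours on a dedicated 100G connection is what motivates the "freeway" framing and the requirement that big data flows ride alongside, not through, the commodity Internet.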
PRP Timeline
• PRPv1
– A Layer 3 System
– Completed In 2 Years
– Tested, Measured, Optimized, With Multi-domain Science Data
– Bring Many Of Our Science Teams Up
– Each Community Thus Will Have Its Own Certificate-Based Access to Its Specific Federated Data Infrastructure
• PRPv2
– Advanced IPv6-Only Version with Robust Security Features
– e.g. Trusted Platform Module Hardware and SDN/SDX Software
– Support Rates up to 100Gb/s in Bursts And Streams
– Develop Means to Operate a Shared Federation of Caches

Editor's Notes

  • #18: Campus Cyberinfrastructure – Network Infrastructure and Engineering (CC-NIE); Campus Cyberinfrastructure – Infrastructure, Innovation, and Engineering (CC-IIE); Campus Cyberinfrastructure – Data, Networking, and Innovation (CC-DNI). NSF 15-534 incorporates Data Infrastructure Building Blocks (CC-DNI-DIBBs) – Multi-Campus / Multi-Institution Model Implementation from Program Solicitation NSF 14-530.