UC-Wide Cyberinfrastructure for Data-Intensive Research

Invited presentation by Calit2 Director Larry Smarr to the UC IT Leadership Council in Oakland, Calif., on May 19, 2014.

Speaker Notes

  • “SciDMZ” and “SciDMZ Infrastructure” links – initially only for traffic with one endpoint in the Science DMZ; eventually (with experience) also for transit traffic between campus production networks and the Internet.
    “SciDMZ Research” links – one endpoint in the Science DMZ.
  • Note on cooling:
    The nominal cooling ‘tons’ are converted to the equivalent MW of heat load that can be air-cooled. This figure is lower than the power delivered to the datacenter, but we promise not to melt the datacenter (power actually has multiple distribution systems and various limits). A conversion sketch follows these notes.
     From Tom & Valerie
    The answers are different based upon who the customer is.
    For UCSD-based CoLo users there is a maximum of 20G of Layer 3 bandwidth out (limited by the RCI routers), and some of the non-CENIC 10G connections are private to SDSC researchers and hence not accessible.
    For SDSC CoLo users and other SDSC researchers, the non-CENIC 10G connections may be accessible (depending on the connection and the researcher), and the connection to CHERuB will be 100G into the SDSC fabric.
    For other UC campuses, the 20G bottleneck of the RCI routers is not in the path; that is a UCSD-only detail.
    We currently have (or will have):
    Network connections (fall 2014)
    100Gbps (layer 2 only): CENIC to PacWave, Internet2 AL2S & ESnet
    20Gbps (each): CENIC HPR (Internet2), CENIC DC (K-20 and ISPs)
    10Gbps (each): XSEDENet, FutureGrid (IU), CENIC HPR-L2, ESnet L3, PacWave L2
     
    There are other private links into SDSC from other campuses that are not included in the list above. They all enter SDSC on the Layer 1 CENIC WDM connection.
    As Valerie mentioned, we also have an additional 10GE CENIC HPR link (to Riverside) and a backup 10G CENIC DC link to Los Angeles. Thus one could have 10G of HPR traffic flowing to UCR and another 20G of traffic over the other HPR connection to other locations at the same time.
     
    In addition …
    If this is for other UC campuses, we should let them know that we are the San Diego CENIC hub site. That means no last-mile charges for them on any private waves they want to bring into SDSC/UCSD.
     
    In addition, we are on-net with the following fiber providers:
    AT&T/SBC
    Verizon/MCI
    Time Warner
    Cox Communications
    MFS
    Freedom Fiber
     
    In addition, we have CENIC fiber out to off-campus meet-me points where we can connect Level 3 and Williams fiber, among others.
  • While CGHub is a data-serving system, it has attracted co-located computing. Annai Systems co-locates the Bio Compute Farm, a commercial service for accessing CGHub; Memorial Sloan-Kettering Cancer Center has a dedicated cluster specifically for CGHub access; and UCSD researchers are looking at access via dedicated systems and/or TSCC.
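
The “tons to MW” conversion mentioned in the cooling note can be sanity-checked against the standard definition of a ton of refrigeration (12,000 BTU/h ≈ 3.517 kW). The Python sketch below is only an illustration of that arithmetic under that assumed conversion factor; it is not SDSC’s actual facilities accounting, which involves additional derating and distribution limits.

```python
# Minimal sketch: convert nominal cooling 'tons' to the equivalent MW of
# air-coolable heat load, assuming the standard 1 ton of refrigeration
# = 12,000 BTU/h ~= 3.517 kW. SDSC's real derating factors are not modeled here.

KW_PER_TON = 3.517  # kilowatts of heat rejected per standard ton of refrigeration

def tons_to_mw(tons: float) -> float:
    """Air-coolable heat load in megawatts for a given cooling tonnage."""
    return tons * KW_PER_TON / 1000.0

def mw_to_tons(mw: float) -> float:
    """Nominal cooling tonnage needed to reject a given load in megawatts."""
    return mw * 1000.0 / KW_PER_TON

if __name__ == "__main__":
    # Example using the 4.25 MW cooling capacity quoted on the CoLo slide.
    print(f"4.25 MW of cooling ~= {mw_to_tons(4.25):,.0f} tons of refrigeration")
    print(f"1,000 tons ~= {tons_to_mw(1000):.2f} MW of air-coolable load")
```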

Transcript

  • 1. “A UC-Wide Cyberinfrastructure for Data-Intensive Research” Invited Presentation UC IT Leadership Council Oakland, CA May 19, 2014 Dr. Larry Smarr Director, California Institute for Telecommunications and Information Technology Harry E. Gruber Professor, Dept. of Computer Science and Engineering Jacobs School of Engineering, UCSD http://lsmarr.calit2.net 1
  • 2. Vision: Creating a UC-Wide “Big Data” Plane Connected to CENIC, I2, & GLIF Use Lightpaths to Connect All UC Data Generators and Consumers, Creating a “Big Data” Plane Integrated With High Performance Global Networks “The Bisection Bandwidth of a Cluster Interconnect, but Deployed on a 10-Campus Scale.” This Vision Has Been Building for Over a Decade
  • 3. Calit2/SDSC Proposal to Create a UC Cyberinfrastructure of OptIPuter “On-Ramps” to TeraGrid Resources UC San Francisco UC San Diego UC Riverside UC Irvine UC Davis UC Berkeley UC Santa Cruz UC Santa Barbara UC Los Angeles UC Merced OptIPuter + CalREN-XD + TeraGrid = “OptiGrid” Source: Fran Berman, SDSC Creating a Critical Mass of End Users on a Secure LambdaGrid LS 2005 Slide
  • 4. CENIC Provides an Optical Backplane For the UC Campuses Upgrading to 100G
  • 5. CENIC is Rapidly Moving to Connect at 100 Gbps Across the State and Nation DOE Internet2
  • 6. Global Innovation Centers are Connected with 10 Gigabits/sec Clear Channel Lightpaths Source: Maxine Brown, UIC and Robert Patterson, NCSA Members of The Global Lambda Integrated Facility Meet Annually at Calit2’s Qualcomm Institute
  • 7. Why Now? The White House Announcement Has Galvanized U.S. Campus CI Innovations
  • 8. Why Now? Federating the Six UC CC-NIE Grants • 2011 ACCI Strategic Recommendation to the NSF #3: – “NSF should create a new program funding high-speed (currently 10 Gbps) connections from campuses to the nearest landing point for a national network backbone. The design of these connections must include support for dynamic network provisioning services and must be engineered to support rapid movement of large scientific data sets.” – pg. 6, NSF Advisory Committee for Cyberinfrastructure Task Force on Campus Bridging, Final Report, March 2011 – www.nsf.gov/od/oci/taskforces/TaskForceReport_CampusBridging.pdf – Led to Office of Cyberinfrastructure RFP, March 1, 2012 • NSF’s Campus Cyberinfrastructure – Network Infrastructure & Engineering (CC-NIE) Program – 85 Grants Awarded So Far (NSF Summit Last Week) – 6 Are in UC. UC Must Move Rapidly or Lose a Ten-Year Advantage!
  • 9. Creating a “Big Data” Plane NSF CC-NIE Funded Prism@UCSD NSF CC-NIE Has Awarded Prism@UCSD Optical Switch Phil Papadopoulos, SDSC, Calit2, PI CHERuB
  • 10. UC-Wide “Big Data Plane” Puts High Performance Data Resources Into Your Lab
  • 11. How to Terminate 10Gbps in Your Lab: FIONA – Inspired by Gordon • FIONA – Flash I/O Node Appliance – Combination of Desktop and Server Building Blocks – US$5K - US$7K – Desktop Flash up to 16TB – RAID Drives up to 48TB – Drives HD 2D & 3D Displays – 10GbE/40GbE Adapter – Tested Speed 30 Gbps – Developed by UCSD’s Phil Papadopoulos, Tom DeFanti, and Joe Keefe. Diagram: FIONA 3+ GB/s Data Appliance – 32GB, 9 x 256GB flash at 510 MB/sec, 8 x 3TB at 125 MB/sec, 2 x 40GbE, 2 TB cache, 24TB disk. (A throughput-test sketch follows the transcript.)
  • 12. 100G CENIC to UCSD — NSF CC-NIE Configurable, High-speed, Extensible Research Bandwidth (CHERuB). Diagram: DWDM 100G transponders at the Equinix/L3/CENIC POP (818 W. 7th, Los Angeles, CA) and the SDSC NAP (10100 Hopkins Drive, La Jolla, CA) over existing CENIC fiber, with up to 3 additional 100G transponders attachable at each end, reaching PacWave, CENIC, Internet2, NLR, ESnet, StarLight, XSEDE, and other R&E networks via the CENIC/PacWave L2 switch; UCSD/SDSC gateway Juniper MX960 “MX0” with new 2x100G/8x10G and 40G line cards and optics; SDSC Juniper MX960 “Medusa” with new 100G card/optics and other SDSC resources; UCSD primary node Cisco 6509 “Node B”; PRISM@UCSD Arista 7504 serving many UCSD big-data users over multiple 40G+ connections; UCSD production users over multiple 10G connections; GORDON compute cluster (2x40G, 4x10G); UCSD/SDSC Cisco 6509 and UCSD DYNES (additional 10G card/optics); existing ESnet San Diego router (10G); Data Oasis/SDSC Cloud dual Arista 7508 “Oasis” with SDSC DYNES (128x10G, 256x10G). Key: green/dashed = new component/equipment in the proposal; pink/black = existing UCSD infrastructure. Source: Mike Norman, SDSC
  • 13. NSF CC-NIE Funded UCI LightPath: A Dedicated Campus Science DMZ Network for Big Data Transfer Source: Dana Roode, UCI
  • 14. NSF CC-NIE Funded UC Berkeley ExCEEDS - Extensible Data Science Networking Source: Jon Kuroda, UCB
  • 15. NSF CC-NIE Funded UC Davis Science DMZ Architecture Source: Matt Bishop, UCD
  • 16. NSF CC-NIE Funded: Adding a Science DMZ to Existing Shared Internet at UC Santa Cruz. Before: border and core routers connect the 10 Gb/s campus distribution core to CENIC DC and the global Internet and to CENIC HPR and global research networks. After: a Science DMZ router and campus high-performance research networks are added, with DYNES/L2, 10 Gb/s SciDMZ and 10 Gb/s SciDMZ Research links, and a 100 Gb/s SciDMZ Infrastructure link. Source: Brad Smith, UCSC
  • 17. Gray Davis Institutes for Science and Innovation: A Faculty-Facing Partner for NSF CC-NIEs & ITLC – California NanoSystems Institute (UCSB, UCLA); California Institute for Bioengineering, Biotechnology, and Quantitative Biomedical Research (UCSF, UCB); California Institute for Telecommunications and Information Technology (UCI, UCSD); Center for Information Technology Research in the Interest of Society (UCSC, UCD, UCM) – www.ucop.edu/california-institutes
  • 18. Coupling to California CC-NIE Winning Proposals From Non-UC Campuses • Caltech – Caltech High-Performance OPtical Integrated Network (CHOPIN) – CHOPIN Deploys Software-Defined Networking (SDN) Capable Switches – Creates 100Gbps Link Between Caltech and CENIC and Connection to: – California OpenFlow Testbed Network (COTN) – Internet2 Advanced Layer 2 Services (AL2S) network – Driven by Big Data High Energy Physics, Astronomy (LIGO, LSST), Seismology, Geodetic Earth Satellite Observations • Stanford University – Develop SDN-Based Private Cloud – Connect to Internet2 100G Innovation Platform – Campus-Wide Sliceable/Virtualized SDN Backbone (10-15 switches) – SDN Control and Management • San Diego State University – Implementing an ESnet Architecture Science DMZ – Balancing Performance and Security Needs – Promote Remote Usage of Computing Resources at SDSU Source: Louis Fox, CENIC CEO Also USC
  • 19. High Performance Computing and Storage Become Plug Ins to the “Big Data” Plane
  • 20. NERSC and ESnet Offer High Performance Computing and Networking Cray XC30 2.4 Petaflops Dedicated Feb. 5, 2014
  • 21. SDSC’s Comet is a ~2 PetaFLOPs System Architected for the “Long Tail of Science” – NSF Track 2 award to SDSC: $12M NSF award to acquire, $3M/yr x 4 yrs to operate; production early 2015
  • 22. UCSD/SDSC Provides CoLo Facilities Over Multi-Gigabit/s Optical Networks
    Capacity / Utilized / Headroom:
    – Racks: 480 (=80%) / 340 / 140
    – Power (MW) (fall 2014): 6.3 (13 to bldg) / 2.5 / 3.8
    – Cooling capacity (MW): 4.25 / 2.5 / 1.75
    – UPS (total) (MW): 3.1 / 1.5 / 1.6
    – UPS/Generator (MW): 1.1 / 0.5 / 0.6
    Network Connectivity (Fall ’14):
    – 100Gbps (CHERuB, layer 2 only): via CENIC to PacWave, Internet2 AL2S & ESnet
    – 20Gbps (each): CENIC HPR (Internet2), CENIC DC (K-20 + ISPs)
    – 10Gbps (each): CENIC HPR-L2, ESnet L3, PacWave L2, XSEDENet, FutureGrid (IU)
    Current Usage Profile (racks): UCSD 248; other UC campuses 52; non-UC nonprofit/industry 26
    Protected-Data Equipment or Services (PHI, HIPAA): UCD, UCI, UCOP, UCR, UCSC, UCSD, UCSF, Rady Children’s Hospital
  • 23. Triton Shared Computing Cluster “Hotel” & “Condo” Models • Participation Model: – Hotel: – Pre-Purchase Computing Time as Needed / Run on Subset of Cluster – For Small/Medium & Short-Term Needs – Condo: – Purchase Nodes with Equipment Funds and Have “Run Of The Cluster” – For Longer Term Needs / Larger Runs – Annual Operations Fee Is Subsidized (~75%) for UCSD • System Capabilities: – Heterogeneous System for Range of User Needs – Intel Xeon, NVIDIA GPU, Mixed Infiniband / Ethernet Interconnect – 180 Total Nodes, ~80-90TF Performance – 40+ Hotel Nodes – 700TB High Performance Data Oasis Parallel File System – Persistent Storage via Recharge • User Profile: – 16 Condo Groups (All UCSD) – ~600 User Accounts – Hotel Partition – Users From 8 UC Campuses – UC Santa Barbara & Merced Most Active After UCSD – ~70 Users from Outside Research Institutes and Industry
  • 24. Many Disciplines Require Dedicated High Bandwidth on Campus • Remote Analysis of Large Data Sets – Particle Physics, Regional Climate Change • Connection to Remote Campus Compute & Storage Clusters – Microscopy and Next Gen Sequencers • Providing Remote Access to Campus Data Repositories – Protein Data Bank, Mass Spectrometry, Genomics • Enabling Remote Collaborations – National and International • Extending Data-Intensive Research to Surrounding Counties – HPWREN Big Data Flows Add to Commodity Internet to Fully Utilize CENIC’s 100G Campus Connection
  • 25. PRISM is Connecting CERN’s CMS Experiment to the UCSD Physics Department at 80 Gbps – All UC LHC Researchers Could Share Data/Compute Across CENIC/ESnet at 10-100 Gbps
  • 26. SIO Campus Climate Researchers Need to Download Results from Remote Supercomputer Simulations to Make Regional Climate Change Forecasts. Planning for climate change in California: substantial shifts on top of already high climate variability. Dan Cayan, USGS Water Resources Discipline and Scripps Institution of Oceanography, UC San Diego, with much support from Mary Tyree, Mike Dettinger, Guido Franco, and other colleagues. Sponsors: California Energy Commission, NOAA RISA program, California DWR, DOE, NSF.
  • 27. Average summer afternoon temperature: GFDL A2 downscaled to 1 km. Source: Hugo Hidalgo, Tapash Das, Mike Dettinger
  • 28. NIH National Center for Microscopy & Imaging Research: Integrated Infrastructure of Shared Resources. Diagram: local SOM infrastructure, scientific instruments, end-user FIONA workstation, and shared infrastructure. Source: Steve Peltier, Mark Ellisman, NCMIR
  • 29. PRISM Links Calit2’s VROOM to NCMIR to Explore Confocal Light Microscope Images of Rat Brains
  • 30. Protein Data Bank (PDB) Needs Bandwidth to Connect Resources and Users • Archive of experimentally determined 3D structures of proteins, nucleic acids, and complex assemblies (images: hemoglobin, a virus) • One of the largest scientific resources in the life sciences. Source: Phil Bourne and Andreas Prlić, PDB
  • 31. PDB Plans to Establish Global Load Balancing • Why Is It Important? – Enables PDB to Better Serve Its Users by Providing Increased Reliability and Quicker Results • Need High Bandwidth Between Rutgers & UCSD Facilities – More than 300,000 Unique Visitors per Month – Up to 300 Concurrent Users – ~10 Structures Downloaded per Second, 7/24/365. Source: Phil Bourne and Andreas Prlić, PDB
  • 32. Cancer Genomics Hub (UCSC) is Housed in SDSC CoLo: Storage CoLo Attracts Compute CoLo • CGHub is a Large-Scale Data Repository/Portal for the National Cancer Institute’s Cancer Genome Research Programs • Current Capacity is 5 Petabytes, Scalable to 20 Petabytes; the Cancer Genome Atlas Alone Could Produce 10 PB in the Next Four Years • (David Haussler, PI) “SDSC [colocation service] has exceeded our expectations of what a data center can offer. We are glad to have the CGHub database located at SDSC.” • Researchers can already install their own computers at SDSC, where the CGHub data is physically housed, so that they can run their own analyses (http://blogs.nature.com/news/2012/05/us-cancer-genome-repository-hopes-to-speed-res) • Berkeley is connecting at 100Gbps to CGHub. Source: Richard Moore et al., SDSC
  • 33. PRISM Will Link Computational Mass Spectrometry and Genome Sequencing Cores to the Big Data Freeway ProteoSAFe: Compute-intensive discovery MS at the click of a button MassIVE: repository and identification platform for all MS data in the world Source: proteomics.ucsd.edu
  • 34. Telepresence Meeting Using Digital Cinema 4k Streams Lays Technical Basis for Global Digital Cinema. Photo: Keio University President Anzai and UCSD Chancellor Fox in the Calit2@UCSD Auditorium; Sony, NTT, SGI. Streaming 4k with JPEG 2000 Compression at ½ Gbit/sec – 100 Times the Resolution of YouTube! 4k = 4000x2000 Pixels = 4xHD
  • 35. Tele-Collaboration for Audio Post-Production Realtime Picture & Sound Editing Synchronized Over IP Skywalker Sound@Marin Calit2@San Diego
  • 36. Collaboration Between EVL’s CAVE2 and Calit2’s VROOM Over 10Gb Wavelength EVL Calit2 Source: NTT Sponsored ON*VECTOR Workshop at Calit2 March 6, 2013
  • 37. High Performance Wireless Research and Education Network http://hpwren.ucsd.edu/ National Science Foundation awards 0087344, 0426879 and 0944131
  • 38. HPWREN Topology Covers San Diego, Imperial, and Part of Riverside Counties. Map notes: scale approximately 50 miles; locations are approximate; CI and PEMEX.
  • 39. SoCal Weather Stations: Note the High Density in San Diego County Source: Jessica Block, Calit2
  • 40. Interactive Virtual Reality of San Diego County Includes Live Feeds From 150 Met Stations TourCAVE at Calit2’s Qualcomm Institute
  • 41. Real-Time Network Cameras on Mountains for Environmental Observations Source: Hans Werner Braun, HPWREN PI
  • 42. A Scalable Data-Driven Monitoring, Dynamic Prediction and Resilience Cyberinfrastructure for Wildfires (WiFire) – NSF Has Just Awarded the WiFire Grant (Ilkay Altintas, SDSC, PI). Development of end-to-end “cyberinfrastructure” for “analysis of large dimensional heterogeneous real-time sensor data”: system integration of real-time sensor networks, satellite imagery, near-real-time data management tools, wildfire simulation tools, and connectivity to emergency command centers before, during, and after a firestorm. Photo by Bill Clayton
  • 43. Using Calit2’s Qualcomm Institute NexCAVE for CAL FIRE Research and Planning Source: Jessica Block, Calit2
  • 44. Integrated Digital Infrastructure: Next Steps • White Paper for UCSD Delivered to Chancellor – Creating a Campus Research Data Library – Deploying Advanced Cloud, Networking, Storage, Compute, and Visualization Services – Organizing a User-Driven IDI Specialists Team – Riding the Learning Curve from Leading-Edge Capabilities to Community Data Services – Extending the High Performance Wireless Research and Education Network (HPWREN) to all UC Campuses • White Paper for UC-Wide IDI Under Development • Calit2 (UCSD, UCI) and CITRIS (UCB, UCSC, UCD) – Begin Work on Integrating CC-NIEs Across Campuses – Calit2 and UCR Investigating HPWREN Deployment • Add in UCLA, UCSB, UCSF, UCR, UCM
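
Slide 11 quotes a tested FIONA transfer speed of about 30 Gbps over its 10GbE/40GbE adapters. As a rough illustration of how such a memory-to-memory figure can be measured between two data-transfer nodes, the sketch below wraps a parallel-stream iperf3 run; the hostname, stream count, and duration are placeholder assumptions, iperf3 must already be running in server mode on the far end, and this is not the benchmark procedure the FIONA developers actually used.

```python
# Minimal sketch: measure memory-to-memory throughput between two hosts with
# iperf3, using parallel TCP streams as is common for 10/40GbE data-transfer
# nodes. Hostname, stream count, and duration below are placeholder assumptions.
import json
import subprocess

def measure_throughput_gbps(server: str, streams: int = 8, seconds: int = 30) -> float:
    """Run an iperf3 client against `server` (which must be running `iperf3 -s`)
    and return the aggregate sender throughput in Gb/s."""
    result = subprocess.run(
        ["iperf3", "-c", server, "-P", str(streams), "-t", str(seconds), "-J"],
        check=True, capture_output=True, text=True,
    )
    report = json.loads(result.stdout)
    bits_per_second = report["end"]["sum_sent"]["bits_per_second"]
    return bits_per_second / 1e9

if __name__ == "__main__":
    # Hypothetical endpoint; replace with the actual data-transfer node's hostname.
    print(f"Aggregate throughput: {measure_throughput_gbps('fiona.example.edu'):.1f} Gb/s")
```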