High Performance Collaboration – The Jump to Light Speed
 

Talk to a Visiting Team from Intel
La Jolla, CA, June 25, 2006



Presentation Transcript

  • “High Performance Collaboration – The Jump to Light Speed.” Talk to a Visiting Team from Intel, [email_address], June 25, 2006. Dr. Larry Smarr, Director, California Institute for Telecommunications and Information Technology; Harry E. Gruber Professor, Dept. of Computer Science and Engineering, Jacobs School of Engineering, UCSD
  • From “Supercomputer-Centric” to “Supernetwork-Centric” Cyberinfrastructure: Research Bandwidth Has Grown Much Faster Than Supercomputer Speed! [Chart: bandwidth of NYSERNet research network backbones, from T1 to 32x10Gb “lambdas” (Megabit/s through Gigabit/s to Terabit/s), plotted against computing speed in GFLOPS, from the 1 GFLOP Cray2 to the 60 TFLOP Altix. Network data source: Timothy Lance, President, NYSERNet]
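    The slide’s claim can be checked with the endpoints it cites. A minimal arithmetic sketch, assuming a T1 line at the standard 1.544 Mb/s (the slide says only “T1”) and 32 x 10 Gb/s wavelengths:

    ```python
    # Growth comparison using the endpoints cited on the slide.
    # Assumes T1 = 1.544 Mb/s (standard T1 rate, not stated on the slide).

    T1_BPS = 1.544e6          # T1 backbone, ~1.5 Mb/s
    LAMBDAS_BPS = 32 * 10e9   # 32 x 10 Gb/s "lambdas"

    CRAY2_GFLOPS = 1          # ~1 GFLOP Cray2
    ALTIX_GFLOPS = 60_000     # 60 TFLOP Altix

    print(f"Bandwidth growth: {LAMBDAS_BPS / T1_BPS:,.0f}x")        # ~207,000x
    print(f"Compute growth:   {ALTIX_GFLOPS / CRAY2_GFLOPS:,.0f}x") # 60,000x
    ```

    By this rough measure, research backbone bandwidth outgrew supercomputer speed by roughly a factor of three over the same span, which is the slide’s point.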
  • National LambdaRail (NLR) and TeraGrid Provide a Cyberinfrastructure Backbone for U.S. Researchers. NLR launched with 4 x 10Gb lambdas and is capable of 40 x 10Gb wavelengths at buildout; it links two dozen state and regional optical networks, and DOE, NSF, and NASA are using it. NSF’s TeraGrid has a 4 x 10Gb lambda backbone. [Map: NLR footprint from Seattle and Portland through San Francisco, Los Angeles, San Diego, Phoenix, Denver, Chicago, Atlanta, and Houston to New York City and Washington, DC, among other cities; UIC/NW-Starlight in Chicago connects international collaborators and the UC-TeraGrid]
  • The OptIPuter Project – Creating High Resolution Portals Over Dedicated Optical Channels to Global Science Data
    • NSF Large Information Technology Research Proposal
      • Calit2 (UCSD, UCI) and UIC Lead Campuses—Larry Smarr PI
      • Partnering Campuses: SDSC, USC, SDSU, NCSA, NW, TA&M, UvA, SARA, NASA Goddard, KISTI, AIST, CRC (Canada), CICESE (Mexico)
    • Industrial Partners
      • IBM, Sun, Telcordia, Chiaro, Calient, Glimmerglass, Lucent
    • $13.5 Million Over Five Years—Now In the Fourth Year
    Driving applications: NIH Biomedical Informatics; NSF EarthScope and ORION Research Network
  • OptIPuter Software Architecture: a Service-Oriented Architecture Integrating Lambdas Into the Grid. Distributed applications and web services (Telescience, Vol-a-Tile, SAGE, JuxtaView) sit atop the Distributed Virtual Computer (DVC) API and runtime library. DVC services span core services (resource identify/acquire, namespace management, security management), job scheduling, and communication; below them sit visualization and data services (LambdaRAM), storage services (RobuStore), high-speed transports (GTP, XCP, UDT, LambdaStream, CEP, RBUDP over Globus XIO), Grid middleware (GRAM, GSI), and IP/lambda discovery and control (PIN/PDC). Source: Andrew Chien, UCSD
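    As one way to read the stack above, here is a purely illustrative Python sketch of how an application might drive a DVC; all class and method names are hypothetical, not the actual OptIPuter API:

    ```python
    # Hypothetical sketch of the DVC layering described on the slide.
    # Names are illustrative only; the real OptIPuter API differs.

    class DistributedVirtualComputer:
        """Aggregates endpoints reachable over dedicated lambdas."""

        def __init__(self, config):
            self.config = config      # DVC configuration (resources, lambdas)
            self.resources = []

        def acquire_resources(self):
            # Core services: resource identify/acquire, namespace,
            # and security management would happen here.
            for spec in self.config["resources"]:
                self.resources.append(spec)

        def schedule(self, job):
            # DVC job scheduling over the acquired resources.
            return {"job": job, "placed_on": self.resources}

        def stream(self, src, dst, protocol="UDT"):
            # High-speed communication via a pluggable transport
            # (GTP, XCP, UDT, LambdaStream, CEP, RBUDP in the slide's stack).
            return f"{protocol} stream {src} -> {dst}"

    dvc = DistributedVirtualComputer(
        {"resources": ["viz-cluster.ucsd", "storage.uic"]}  # hypothetical hosts
    )
    dvc.acquire_resources()
    print(dvc.schedule("JuxtaView render"))
    print(dvc.stream("storage.uic", "viz-cluster.ucsd"))
    ```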
  • OptIPuter Scalable Adaptive Graphics Environment (SAGE) Allows Integration of HD Streams. [Photo: David Lee, NCMIR, UCSD]
  • OptIPortal– Termination Device for the OptIPuter Global Backplane
    • 20 Dual-CPU Nodes, 20 24” Monitors, ~$50,000
    • 1/4 Teraflop, 5 Terabytes of Storage, 45 Megapixels: a Nice PC! (a quick check of the pixel count follows below)
    • Scalable Adaptive Graphics Environment (SAGE), Jason Leigh, EVL-UIC
    Source: Phil Papadopoulos, SDSC, Calit2
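    A back-of-envelope check of the megapixel figure, assuming each 24” monitor runs at 1920x1200 (an assumption; the slide does not state the panel resolution):

    ```python
    # Sanity check of the OptIPortal's "45 Mega Pixels" figure.
    # Assumes 1920x1200 per 24" monitor (not stated on the slide).

    MONITORS = 20
    PIXELS_PER_MONITOR = 1920 * 1200   # assumed panel resolution

    total_megapixels = MONITORS * PIXELS_PER_MONITOR / 1e6
    print(f"{total_megapixels:.1f} megapixels")  # ~46.1, close to the stated ~45
    ```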
  • The New Optical Core of the UCSD Campus-Scale Testbed: Evaluating Packet Routing versus Lambda Switching
    • Goals by 2007:
    • >= 50 endpoints at 10 GigE
    • >= 32 Packet switched
    • >= 32 Switched wavelengths
    • >= 300 Connected endpoints
    Approximately 0.5 Tbit/s arrive at the “optical” center of campus (the sketch below checks this aggregate). Switching will be a hybrid combination of packet, lambda, and circuit; OOO and packet switches are already in place (Lucent, Glimmerglass, Force10 hardware). Funded by an NSF MRI grant.
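    The aggregate figure follows directly from the endpoint goals listed above:

    ```python
    # Sanity check: 50 endpoints at 10 GigE saturate to ~0.5 Tbit/s.

    ENDPOINTS = 50          # ">= 50 endpoints at 10 GigE"
    GIGE_PER_ENDPOINT = 10  # Gb/s each

    aggregate_tbps = ENDPOINTS * GIGE_PER_ENDPOINT / 1000
    print(f"{aggregate_tbps} Tbit/s")  # 0.5 Tbit/s, as the slide states
    ```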
  • Calit2/SDSC Proposal to Create a UC Cyberinfrastructure of OptIPuter “On-Ramps” to TeraGrid Resources: OptIPuter + CalREN-XD + TeraGrid = “OptiGrid,” Creating a Critical Mass of End Users on a Secure LambdaGrid. [Map: all ten UC campuses, from UC Berkeley and UC Davis to UC San Diego and UC Merced] Source: Fran Berman, SDSC; Larry Smarr, Calit2
  • Creating a North American Superhighway for High Performance Collaboration. Next step: adding Mexico to Canada’s CANARIE and the U.S. National LambdaRail.
  • Countries Are Aggressively Creating Gigabit Services: Interactive Access to the CAMERA Data System. [GLIF map, www.glif.is; GLIF created in Reykjavik, Iceland, 2003; visualization courtesy of Bob Patterson, NCSA]
  • First Remote Interactive High-Definition Video Exploration of Deep-Sea Vents, a Canadian-U.S. Collaboration. Source: John Delaney & Deborah Kelley, UWash
  • Marine Genome Sequencing Project: Measuring the Genetic Diversity of Ocean Microbes. CAMERA (PI: Larry Smarr) will include all Sorcerer II metagenomic data.
  • Calit2’s Direct Access Core Architecture Will Create a Next-Generation Metagenomics Server. A web portal and web services front the user environment; behind it, the CAMERA complex couples a flat-file server farm, a database farm, and a dedicated compute farm (1000 CPUs) over a 10 GigE fabric, with direct-access lambda connections to local clusters and the TeraGrid backplane (10000s of CPUs). Source: Phil Papadopoulos, SDSC, Calit2
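    A toy sketch of the two access paths this architecture implies: ordinary users go through the web portal, while users with dedicated lambda connections hit the compute farm directly. Function names and the routing rule are hypothetical illustrations, not CAMERA’s actual interface:

    ```python
    # Illustrative routing between the slide's two access paths.
    # All names here are hypothetical.

    def run_on_compute_farm(query, farm):
        # Direct-access path over the dedicated 10 GigE fabric / lambda.
        return f"{query} scheduled on {farm}"

    def run_via_web_portal(query):
        # Default path through the web portal in front of the CAMERA complex.
        return f"{query} submitted through the web portal"

    def route_request(user, query):
        if user.get("has_lambda_connection"):
            return run_on_compute_farm(query, farm="dedicated-1000cpu")
        return run_via_web_portal(query)

    print(route_request({"has_lambda_connection": True}, "BLAST GOS reads"))
    print(route_request({"has_lambda_connection": False}, "BLAST GOS reads"))
    ```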
  • Analysis Data Sets, Data Services, Tools, and Workflows
    • Assemblies of Metagenomic Data
      • e.g., GOS, JGI CSP
    • Annotations
      • Genomic and Metagenomic Data
    • “All-against-all” Alignments of ORFs (a sketch of this pattern follows below)
      • Updated Periodically
    • Gene Clusters and Associated Data
      • Profiles, Multiple-Sequence Alignments, HMMs, Phylogenies, Peptide Sequences
    • Data Services
      • ‘Raw’ and Specialized Analysis Data
      • Rich Query Facilities
    • Tools and Workflows
      • Navigate and Sift Raw and Analysis Data
      • Publish Workflows and Develop New Ones
      • Prioritize Features via Dialogue with Community
    Source: Saul Kravitz, Director of Software Engineering, J. Craig Venter Institute
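    To make the “all-against-all” item concrete, here is a toy sketch of the pattern: every ORF’s peptide sequence is compared against every other. The scoring (shared k-mer count) and the sequences are placeholders, not CAMERA’s actual pipeline:

    ```python
    # Toy all-against-all comparison over ORF peptide sequences.
    # Sequences and scoring are illustrative placeholders.

    from itertools import combinations

    orfs = {
        "orf_1": "MKTAYIAKQR",
        "orf_2": "MKTAYIGKQR",
        "orf_3": "MSTNPKPQRK",
    }

    def kmer_set(seq, k=3):
        return {seq[i:i + k] for i in range(len(seq) - k + 1)}

    # All-against-all: n*(n-1)/2 pairwise comparisons.
    for a, b in combinations(orfs, 2):
        shared = len(kmer_set(orfs[a]) & kmer_set(orfs[b]))
        print(f"{a} vs {b}: {shared} shared 3-mers")
    ```

    At CAMERA scale the same quadratic pattern is why the slide pairs these alignments with periodic batch updates rather than on-demand computation.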
  • Calit2 and the Venter Institute Will Combine Telepresence with Remote Interactive Analysis: a Live Demonstration of 21st-Century National-Scale Team Science. [Diagram: OptIPuter visualized data and HDTV over lambda; 25 miles; Venter Institute]
  • Calit2 Works with CENIC to Provide the California Optical Core for CineGrid. [Map: Calit2, UCI, USC, SFSU, UCB]
    • In addition, 1Gb and 10Gb Connections to:
      • Seattle, Asia, Australia, New Zealand
      • Chicago, Europe, Russia, China
      • Tijuana, Rosarita Beach, Ensenada
    Calit2’s CineGrid team is working with the cinema industry in LA and SF: extending the SoCal OptIPuter to the USC School of Cinema-Television; prototyping a CineGrid digital archive of films at Calit2 UCSD; partnering with SFSU’s Institute for Next Generation Internet; and in discussions with CITRIS.
  • First Trans-Pacific Super-High-Definition Telepresence Meeting, in the New Calit2 Digital Cinema Auditorium, Lays the Technical Basis for Global Digital Cinema. [Photo: Keio University President Anzai and UCSD Chancellor Fox; partners: Sony, NTT, SGI]