1. Supercomputer End Users:
the OptIPuter Killer Application
Keynote
DREN Networking and Security Conference
San Diego, CA
August 13, 2008
Dr. Larry Smarr
Director, California Institute for Telecommunications and
Information Technology
Harry E. Gruber Professor,
Dept. of Computer Science and Engineering
Jacobs School of Engineering, UCSD
2. Abstract
During the last few years, a radical restructuring of optical networks
supporting e-Science projects has occurred around the world. U.S.
universities are beginning to acquire access to high bandwidth lightwaves
(termed "lambdas") on fiber optics through the National LambdaRail,
Internet2's Circuit Services, and the Global Lambda Integrated Facility. The
NSF-funded OptIPuter project explores how user-controlled 1 or 10 Gbps
lambdas can provide direct access to global data repositories, scientific
instruments, and computational resources from researchers' Linux
clusters in their campus laboratories. These end-user clusters are
reconfigured as "OptIPortals," providing the end user with local scalable
visualization, computing, and storage. Integration of high-definition video with
OptIPortals creates a high-performance collaboration workspace of global
reach. An emerging major new user community is the end users of NSF's
TeraGrid and DoD's HPCMP, who can connect optically to remote tera- or
petascale resources directly from their local laboratories and bring
disciplinary experts from multiple sites into the local data and visualization
analysis process.
3. Interactive Supercomputing Collaboratory Prototype:
Using Analog Communications to Prototype the Fiber Optic Future
SIGGRAPH 1989
Illinois / Boston
"What we really have to do is eliminate distance between
individuals who want to interact with other people and
with other computers."
― Larry Smarr, Director, NCSA
"We're using satellite technology…
to demo what it might be like to have
high-speed fiber-optic links between
advanced computers
in two different geographic locations."
― Al Gore, Senator
Chair, US Senate Subcommittee on Science, Technology and Space
4. Chesapeake Bay Simulation Collaboratory: vBNS Linked
CAVE, ImmersaDesk, Power Wall, and Workstation
Alliance Project: Collaborative Video Production
via Tele-Immersion and Virtual Director
Alliance Application Technologies
Environmental Hydrology Team
Alliance 1997
4 MPixel PowerWall
UIC
Donna Cox, Robert Patterson, Stuart Levy, NCSA Virtual Director Team
Glenn Wheless, Old Dominion Univ.
5. ASCI Brought Scalable Tiled Walls to Support
Visual Analysis of Supercomputing Complexity
1999
LLNL Wall--20 MPixels (3x5 Projectors)
An Early sPPM Simulation Run
Source: LLNL
6. 60 Million Pixels Projected Wall
Driven By Commodity PC Cluster
At 15 Frames/s, The System Can Display 2.7 GB/Sec
2002
Source: Philip D. Heermann,
DOE ASCI Program
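The 2.7 GB/sec figure follows directly from the pixel count and frame rate; a quick sanity check in Python, assuming uncompressed 24-bit RGB (3 bytes per pixel, an assumption not stated on the slide):

```python
# Display bandwidth of a 60-megapixel wall refreshed at 15 frames/s,
# assuming uncompressed 24-bit RGB (3 bytes per pixel).
pixels = 60e6
bytes_per_pixel = 3
frames_per_sec = 15
rate_gb_per_sec = pixels * bytes_per_pixel * frames_per_sec / 1e9
print(rate_gb_per_sec)  # 2.7, matching the 2.7 GB/sec quoted above
```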
7. Challenge—How to Bring This Visualization Capability
to the Supercomputer End User?
2004
35Mpixel EVEREST Display ORNL
8. The OptIPuter Project: Creating High Resolution Portals
Over Dedicated Optical Channels to Global Science Data
Scalable Adaptive Graphics Environment (SAGE)
Now in Sixth and Final Year
Picture Source: Mark Ellisman, David Lee, Jason Leigh
Calit2 (UCSD, UCI), SDSC, and UIC Leads—Larry Smarr PI
Univ. Partners: NCSA, USC, SDSU, NW, TA&M, UvA, SARA, KISTI, AIST
Industry: IBM, Sun, Telcordia, Chiaro, Calient, Glimmerglass, Lucent
9. Challenge: Average Throughput of NASA Data Products
to End User is ~ 50 Mbps
Tested
May 2008
Internet2 Backbone is 10,000 Mbps!
Throughput is < 0.5% to End User
http://ensight.eos.nasa.gov/Missions/aqua/index.shtml
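The "< 0.5%" figure is simply the ratio of measured end-user throughput to backbone capacity; a minimal check:

```python
# End-user throughput (~50 Mbps measured) as a fraction of the
# 10 Gbps (10,000 Mbps) Internet2 backbone quoted on the slide.
end_user_mbps = 50
backbone_mbps = 10_000
print(f"{end_user_mbps / backbone_mbps:.1%}")  # 0.5%
```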
10. Dedicated 10Gbps Lambdas Provide
Cyberinfrastructure Backbone for U.S. Researchers
10 Gbps per User ~ 200x
Shared Internet Throughput
Interconnects Two Dozen
State and Regional Optical Networks
Internet2 Dynamic Circuit Network Under Development
NLR 40 x 10Gb Wavelengths
Expanding with Darkstrand to 80
11. 9Gbps Out of 10 Gbps Disk-to-Disk Performance
Using LambdaStream between EVL and Calit2
CAVEWave (20 senders to 20 receivers, point to point):
San Diego to Chicago: Effective Throughput = 9.01 Gbps
(450.5 Mbps disk-to-disk per stream)
Chicago to San Diego: Effective Throughput = 9.30 Gbps
(465 Mbps disk-to-disk per stream)
TeraWave (TeraGrid; 20 senders to 20 receivers, point to point):
San Diego to Chicago: Effective Throughput = 9.02 Gbps
(451 Mbps disk-to-disk per stream)
Chicago to San Diego: Effective Throughput = 9.22 Gbps
(461 Mbps disk-to-disk per stream)
Dataset: 220 GB Satellite Imagery of Chicago, courtesy USGS.
Each file is a 5000 x 5000 RGB image (75 MB); the dataset comprises ~3000 files.
Source: Venkatram
Vishwanath, UIC EVL
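The per-stream and aggregate numbers above are consistent, since each direction used 20 parallel disk-to-disk streams; a sketch of the arithmetic:

```python
# 20 parallel disk-to-disk streams at ~450.5 Mbps each (San Diego to
# Chicago, CAVEWave) yield the reported 9.01 Gbps effective throughput.
streams = 20
per_stream_mbps = 450.5
print(streams * per_stream_mbps / 1000)  # 9.01 (Gbps)

# The 220 GB dataset of 75 MB images works out to roughly 3000 files.
print(round(220e9 / 75e6))  # 2933, i.e. "~3000 files"
```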
12. NLR/I2 is Connected Internationally via
Global Lambda Integrated Facility
Source: Maxine Brown, UIC and Robert Patterson, NCSA
13. OptIPuter / OptIPortal
Scalable Adaptive Graphics Environment (SAGE) Applications
MagicCarpet: Streaming the Blue Marble dataset
from San Diego to EVL using UDP. 6.7 Gbps
Bitplayer: Streaming animation of tornado simulation
using UDP. 516 Mbps
SVC: Locally streaming HD camera live video
using UDP. 538 Mbps
JuxtaView: Locally streaming the aerial photography
of downtown Chicago using TCP. 850 Mbps
~9 Gbps in Total.
SAGE Can Simultaneously Support These
Applications Without Decreasing Their Performance
Source: Xi Wang, UIC/EVL
14. OptIPuter Software Architecture--a Service-Oriented
Architecture Integrating Lambdas Into the Grid
Distributed Applications / Web Services
Visualization: Telescience, SAGE, JuxtaView
Data Services: LambdaRAM, Vol-a-Tile
Distributed Virtual Computer (DVC) API
DVC Configuration, DVC Runtime Library
DVC Services, DVC Job Scheduling, DVC Communication
DVC Core Services: Resource Identify/Acquire, Namespace Management,
Security Management, High Speed Communication, Storage Services
Globus: PIN/PDC, GRAM, GSI, XIO, RobuStore, Discovery and Control
Transport Protocols: GTP, XCP, UDT, CEP, LambdaStream, RBUDP
IP
Lambdas
15. Two New Calit2 Buildings Provide
New Laboratories for “Living in the Future”
• “Convergence” Laboratory Facilities
– Nanotech, BioMEMS, Chips, Radio, Photonics
– Virtual Reality, Digital Cinema, HDTV, Gaming
• Over 1000 Researchers in Two Buildings
– Linked via Dedicated Optical Networks
UC Irvine
www.calit2.net
Preparing for a World in Which
Distance is Eliminated…
16. The Calit2 1/4 Gigapixel OptIPortals at UCSD and UCI
Are Joined to Form a Gbit/s HD Collaboratory
UCSD Wall to Campus Switch at 10 Gbps
Calit2@ UCI wall
Calit2@ UCSD wall NASA Ames Visit Feb. 29, 2008
UCSD cluster: 15 x Quad core Dell XPS with Dual nVIDIA 5600s
UCI cluster: 25 x Dual Core Apple G5
17. Cisco Telepresence Provides Leading Edge
Commercial Video Teleconferencing
• 191 Cisco TelePresence Systems in Major Cities Globally
– US/Canada: 83 CTS 3000, 46 CTS 1000
– APAC: 17 CTS 3000, 4 CTS 1000
– Japan: 4 CTS 3000, 2 CTS 1000
– Europe: 22 CTS 3000, 10 CTS 1000
– Emerging: 3 CTS 3000
• 85,854 TelePresence Meetings Scheduled to Date
– Weekly Average is 2,263 Meetings
• 108,736 Hours to Date; Average is 1.25 Hours per Meeting
• 13,450 Meetings Avoided Travel to Date (Based on 8 Participants)
– ~$107.60 M Saved
– 16,039,052 Cubic Meters of Emissions Saved (6,775 Cars off the Road)
• Uses QoS Over Shared Internet ~ 15 Mbps
• Overall Average Utilization is 45%
Cisco Bought WebEx
Source: Cisco 3/22/08
18. e-Science Collaboratory Without Walls Enabled by
Uncompressed HD Telepresence Over 10Gbps
iHDTV: 1500 Mbits/sec Calit2 to UW Research Channel Over NLR
May 23, 2007
John Delaney, PI LOOKING, Neptune
Photo: Harry Ammons, SDSC
19. OptIPlanet Collaboratory Persistent Infrastructure
Supporting Microbial Research
Photo Credit: Alan Decker Feb. 29, 2008
Ginger Armbrust's Diatoms:
Micrographs, Chromosomes, Genetic Assembly
iHDTV: 1500 Mbits/sec Calit2 to
UW Research Channel Over NLR
UW’s Research Channel
Michael Wellings
20. OptIPortals
Are Being Adopted Globally
AIST-Japan, Osaka U-Japan, KISTI-Korea, CNIC-China,
UZurich, NCHC-Taiwan, SARA-Netherlands, Brno-Czech Republic,
U. Melbourne-Australia, EVL@UIC, Calit2@UCSD, Calit2@UCI
21. Green Initiative: Can Optical Fiber Replace Airline Travel
for Continuing Collaborations?
Source: Maxine Brown, OptIPuter Project Manager
23. Launch of the 100 Megapixel OzIPortal Over Qvidium
Compressed HD on 1 Gbps CENIC/PW/AARNet Fiber
No Calit2 Person Physically Flew to Australia to Bring This Up!
January 15, 2008
Covise, Phil Weber, Jurgen Schulze, Calit2
CGLX, Kai-Uwe Doerr , Calit2
www.calit2.net/newsroom/release.php?id=1219
24. Victoria Premier and Australian Deputy Prime Minister
Asking Questions
www.calit2.net/newsroom/release.php?id=1219
25. University of Melbourne Vice Chancellor Glyn Davis
in Calit2 Replies to Question from Australia
26. OptIPuterizing Australian Universities in 2008:
CENIC Coupling to AARNet
UMelbourne/Calit2 Telepresence Session May 21, 2008
Two Week Lecture Tour of Australian Research Universities
by Larry Smarr, October 2008
Phil Scanlan, Founder, Australian American Leadership Dialogue
www.aald.org
AARNet's roadmap: by 2011, up to 80 x 40 Gbit channels
27. First Trans-Pacific Super High Definition Telepresence
Meeting Using Digital Cinema 4k Streams
4k = 4000x2000 Pixels = 4x HD
100 Times the Resolution of YouTube!
Streaming 4k with JPEG 2000 Compression ~ ½ gigabit/sec
Lays Technical Basis for Global Digital Cinema
Keio University President Anzai; UCSD Chancellor Fox
Sony, NTT, SGI
Calit2@UCSD Auditorium
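The resolution claims on this slide can be verified from pixel counts; a back-of-envelope check, where full HD at 1920x1080 and a typical 2008-era YouTube stream at 320x240 are assumptions rather than figures from the slide:

```python
# Pixel-count comparison behind the "4x HD" and "100x YouTube" claims.
k4 = 4000 * 2000      # digital cinema 4k as quoted: 8,000,000 pixels
hd = 1920 * 1080      # full HD (assumption): 2,073,600 pixels
youtube = 320 * 240   # typical 2008 YouTube stream (assumption)
print(k4 / hd)        # ~3.9, i.e. roughly "4x HD"
print(k4 / youtube)   # ~104, i.e. "100 times the resolution of YouTube"
```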
28. From Digital Cinema to Scientific Visualization:
JPL Supercomputer Simulation of Monterey Bay
4k Resolution = 4 x High Definition
Source: Donna Cox, Robert Patterson, NCSA
Funded by NSF LOOKING Grant
29. Rendering Supercomputer Data
at Digital Cinema Resolution
Source: Donna Cox, Robert Patterson, Bob Wilhelmson, NCSA
30. EVL’s SAGE Global Visualcasting to Europe
September 2007
Gigabit Streams
Image Source: OptIPuter servers at CALIT2, San Diego
Image Replication: OptIPuter SAGE-Bridge at StarLight, Chicago
Image Viewing: OptIPortals at EVL, Chicago; OptIPortal at SARA, Amsterdam;
OptIPortal at Russian Academy of Sciences, Moscow;
OptIPortal at Masaryk University, Brno
Oct 1
Source: Luc Renambot, EVL
31. Creating a California Cyberinfrastructure
of OptIPuter “On-Ramps” to NLR & TeraGrid Resources
UC Davis, UC Berkeley, UC San Francisco, UC Merced, UC Santa Cruz,
UC Los Angeles, UC Santa Barbara, UC Riverside, UC Irvine, UC San Diego
Creating a Critical Mass of OptIPuter End Users
on a Secure LambdaGrid
CENIC Workshop at Calit2
Sept 15-16, 2008
Source: Fran Berman, SDSC , Larry Smarr, Calit2
32. CENIC’s New “Hybrid Network” - Traditional Routed IP
and the New Switched Ethernet and Optical Services
~ $14M Invested in Upgrade
Now Campuses Need to Upgrade
Source: Jim Dolgonas, CENIC
33. The “Golden Spike” UCSD Experimental Optical Core:
Ready to Couple Users to CENIC L1, L2, L3 Services
Goals by 2008 (Quartzite Communications Core, Year 3):
>= 60 endpoints at 10 GigE
>= 30 Packet switched
>= 30 Switched wavelengths
>= 400 Connected endpoints
Approximately 0.5 Tbps Arrives at the "Optical" Center of the Hybrid Campus Switch
Components: the Quartzite Wavelength Selective Core Switch (Lucent), a
Glimmerglass OOO switch, and a Force10 packet switch, linked by GigE switches
with dual 10GigE uplinks to 10GigE cluster node interfaces, cluster nodes, and
other switches; CENIC L1 and L2 services; a Cisco 6509 to the Campus Research
Cloud; and a Juniper T320 OptIPuter Border Router to the CalREN-HPR Research Cloud.
Link types: GigE, 4 GigE, 10GigE, 4 pair fiber.
Funded by NSF MRI Grant
Source: Phil Papadopoulos, SDSC/Calit2
(Quartzite PI, OptIPuter co-PI)
35. Block Layout of UCSD
Quartzite/OptIPuter Network
Glimmerglass OOO Switch
~50 10 Gbps Lightpaths, 10 More to Come
Quartzite Application-Specific Embedded Switches
36. Calit2 Microbial Metagenomics Cluster-
Next Generation Optically Linked Science Data Server
~5 Teraflops: 512 Processors
~200 Terabytes Sun X4500 Storage
1GbE and 10GbE Switched / Routed Core
Source: Phil Papadopoulos, SDSC, Calit2
37. Calit2 3D Immersive StarCAVE OptIPortal:
Enables Exploration of High Resolution Simulations
Connected at 50 Gb/s to Quartzite
15 Meyer Sound Speakers + Subwoofer
30 HD Projectors!
Passive Polarization--Optimized the Polarization Separation and Minimized Attenuation
Cluster with 30 Nvidia 5600 cards--60 GB Texture Memory
Source: Tom DeFanti, Greg Dawe, Calit2
38. Next Step: Experiment on OptIPuter/OptIPortal
with Remote Supercomputer Power User
1 Billion Light-year Pencil
From a 2048³ Hydro/N-Body Simulation
M. Norman, R. Harkness, P. Paschos
Structure of the Intergalactic Medium
1.3 M SUs, NERSC Seaborg
170 TB output
Working on Putting It in the Calit2 StarCAVE
Source: Michael Norman, SDSC, UCSD
39. The Livermore Lightcone: 8 Large AMR Simulations
Covering 10 Billion Years “Look Back Time”
• 1.5 M SU on LLNL Thunder
• Generated 200 TB Data
• 0.4 M SU Allocated on SDSC DataStar
for Data Analysis Alone
512³ Base Grid, 7 Levels of Adaptive Refinement,
65,000 Spatial Dynamic Range
Livermore Lightcone Tile 8
Source: Michael Norman, SDSC, UCSD
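The quoted spatial dynamic range follows from the AMR parameters on the slide: a 512³ base grid refined through 7 levels, assuming each level doubles resolution. A minimal check:

```python
# Spatial dynamic range of a 512^3 base grid with 7 levels of
# 2x adaptive mesh refinement.
base_grid = 512
refinement_levels = 7
print(base_grid * 2**refinement_levels)  # 65536, quoted as "65,000"
```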
40. An 8192 x 8192 Image Extracted from Tile 8:
How to Display/Explore?
Digital Cinema Image
Working on Putting It on the Calit2 HIPerWall OptIPortal
45. 300 Million Pixels of Viewing Real Estate
For Visually Analyzing Supercomputer Datasets
HDTV
Digital Cameras
Digital Cinema
Goal: Link Norman's Lab OptIPortal
Over Quartzite, CENIC, and NLR/TeraGrid to
the Petascale Track 2 systems Ranger@TACC and Kraken@NICS
by October 2008