"Blowing up the Box--the Emergence of the Planetary Computer" Invited Talk, Oak Ridge National Laboratory, Oak Ridge, TN, October 13, 2005. Dr. Larry Smarr, Director, California Institute for Telecommunications and Information Technology; Harry E. Gruber Professor, Dept. of Computer Science and Engineering, Jacobs School of Engineering, UCSD
"What we really have to do is eliminate distance between individuals who want to interact with other people and with other computers." ― Larry Smarr, Director, NCSA. Long-Term Goal: Dedicated Fiber Optic Infrastructure for Collaborative Interactive Visualization of Remote Data. "We're using satellite technology…to demo what it might be like to have high-speed fiber-optic links between advanced computers in two different geographic locations." ― Al Gore, Senator, Chair, US Senate Subcommittee on Science, Technology and Space. Illinois to Boston, SIGGRAPH 1989
From Metacomputer to TeraGrid and OptIPuter:  15 Years of Development TeraGrid PI OptIPuter PI
I-WAY Prototyped the Grid Supercomputing ‘95 I-WAY Project From I-Soft to Globus
Alliance 1997: Collaborative Video Production via Tele-Immersion and Virtual Director Donna Cox, Bob Patterson, Stuart Levy, Glen Wheless www.ncsa.uiuc.edu/People/cox/ Alliance Project Linking CAVE, Immersadesk,  Power Wall, and Workstation UIC
In Pursuit of Realistic TelePresence  Access Grid International Video Meetings Access Grid Lead-Argonne NSF STARTAP Lead-UIC’s Elec. Vis. Lab Can We Modify This Technology To Create Global Performance Spaces?
We Are Living Through a Fundamental Global Change—How Can We Glimpse the Future? "[The Internet] has created a [global] platform where intellectual work, intellectual capital, could be delivered from anywhere. It could be disaggregated, delivered, distributed, produced, and put back together again… The playing field is being leveled." ― Nandan Nilekani, CEO, Infosys (Bangalore, India)
California’s Institutes for Science and Innovation  A Bold Experiment in Collaborative Research California  NanoSystems Institute  UCSF UCB California Institute for Bioengineering,  Biotechnology,  and Quantitative Biomedical Research California Institute for Telecommunications and Information Technology Center for  Information Technology Research  in the Interest of Society UCSC UCD UCM www.ucop.edu/california-institutes UCI UCSD UCSB UCLA
Calit2 -- Research and Living Laboratories on the Future of the Internet www.calit2.net UC San Diego & UC Irvine Faculty Working in Multidisciplinary Teams With Students, Industry, and the Community
Two New Calit2 Buildings Will Provide a Persistent Collaboration "Living Laboratory". Over 1000 Researchers in Two Buildings Linked via Dedicated Optical Networks. International Conferences and Testbeds. New Laboratory Facilities: Virtual Reality, Digital Cinema, HDTV, Synthesis; Nanotech, BioMEMS, Chips, Radio, Photonics, Grid, Data, Applications; Bioengineering. UC San Diego, UC Irvine. Preparing for a World in Which Distance Has Been Eliminated…
The Calit2@UCSD Building is Designed for Extremely High Bandwidth 1.8 Million Feet of Cat6 Ethernet Cabling 150 Fiber Strands to Building; Experimental Roof Radio Antenna Farm Ubiquitous WiFi Photo: Tim Beach, Calit2 Over 9,000 Individual 10/100/1000 Mbps Drops in the Building
"This is What Happened with the Internet Stock Boom": "It sparked a huge overinvestment in fiber-optic cable companies, which then laid massive amounts of fiber-optic cable on land and under the oceans, which dramatically drove down the cost of making a phone call or transmitting data anywhere in the world." --Thomas Friedman, The World is Flat (2005)
Worldwide Deployment of Fiber  Up 42% in 1999 Gilder Technology Report That’s Laying Fiber  at the Rate of Nearly  10,000 km/hour !! From Smarr Talk (2000)
Each Optical Fiber Can Now Carry Many Parallel Light Paths, or "Lambdas" (WDM). Source: Steve Wallach, Chiaro Networks
Challenge: Average Throughput of NASA Data Products  to End User is Only < 50 Megabits/s  Tested from GSFC-ICESAT January 2005 http://ensight.eos.nasa.gov/Missions/icesat/index.shtml
From “Supercomputer–Centric”  to “Supernetwork-Centric” Cyberinfrastructure Megabit/s Gigabit/s Terabit/s Network Data Source: Timothy Lance, President, NYSERNet 32x10Gb “Lambdas” 1 GFLOP Cray2 60 TFLOP Altix Bandwidth of NYSERNet  Research Network Backbones T1 Optical WAN Research Bandwidth  Has Grown Much Faster Than  Supercomputer Speed! Computing Speed (GFLOPS)
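The slide's claim can be checked with rough arithmetic. The endpoint figures (a T1 line, 32x10 Gb lambdas, a 1 GFLOP Cray-2, a 60 TFLOP Altix) come from the slide itself; the side-by-side growth-factor comparison is my own illustration:

```python
# Rough check that research network bandwidth has grown faster than
# supercomputer speed, using the endpoint figures from the slide.

t1_bps = 1.5e6             # T1 backbone era: ~1.5 Megabit/s
lambdas_bps = 32 * 10e9    # 32 x 10 Gb lambdas = 320 Gigabit/s

cray2_gflops = 1.0         # 1 GFLOP Cray-2
altix_gflops = 60_000.0    # 60 TFLOP Altix

bw_growth = lambdas_bps / t1_bps               # roughly 213,000x
compute_growth = altix_gflops / cray2_gflops   # 60,000x

print(f"bandwidth growth: {bw_growth:,.0f}x")
print(f"compute growth:   {compute_growth:,.0f}x")
assert bw_growth > compute_growth
```

Even on these round numbers, backbone bandwidth outgrew peak compute by a factor of three or more over the same period, which is the case for a "supernetwork-centric" design.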
15 Years Later 10Gb Parallel Lambda Cyber Backplane
The Global Lambda Integrated Facility (GLIF) Creates MetaComputers on the Scale of Planet Earth. Many Countries are Interconnecting Optical Research Networks to Form a Global SuperNetwork. www.glif.is Created in Reykjavik, Iceland 2003
September 26-30, 2005, Calit2 @ University of California, San Diego, California Institute for Telecommunications and Information Technology. The Networking Double Header of the Century Will Be Driven by LambdaGrid Applications. iGrid 2005: The Global Lambda Integrated Facility. Maxine Brown, Tom DeFanti, Co-Organizers. www.startap.net/igrid2005/ http://sc05.supercomp.org
Adding Web and Grid Services to Lambdas to Provide Real-Time Control of Ocean Observatories. Goal: Prototype Cyberinfrastructure for NSF's Ocean Research Interactive Observatory Networks (ORION), Building on OptIPuter. LOOKING NSF ITR with PIs: John Orcutt & Larry Smarr (UCSD), John Delaney & Ed Lazowska (UW), Mark Abbott (OSU). Collaborators at: MBARI, WHOI, NCSA, UIC, CalPoly, UVic, CANARIE, Microsoft, NEPTUNE-Canada. LOOKING (Laboratory for the Ocean Observatory Knowledge Integration Grid): www.neptune.washington.edu http://lookingtosea.ucsd.edu/
First Remote Interactive High Definition Video Exploration of Deep Sea Vents. Source: John Delaney & Deborah Kelley, UWash. Canadian-U.S. Collaboration
The OptIPuter Project -- Creating a LambdaGrid "Web" for Gigabyte Data Objects. NSF Large Information Technology Research Proposal; Calit2 (UCSD, UCI) and UIC Lead Campuses--Larry Smarr PI. Partnering Campuses: USC, SDSU, NW, TA&M, UvA, SARA, NASA. Industrial Partners: IBM, Sun, Telcordia, Chiaro, Calient, Glimmerglass, Lucent. $13.5 Million Over Five Years. Linking Global-Scale Science Projects (NIH Biomedical Informatics, NSF EarthScope and ORION) to Users' Linux Clusters over a Research Network
OptIPuter End Nodes Are Smart Bit Buckets, i.e., Scalable Standards-Based Linux Clusters with Rocks & Globus. Complete SW Install and HW Build in Under 2 Hours: Building RockStar at SC2003. Source: Phil Papadopoulos, SDSC. Rocks Won the 2004 "Most Important Software Innovation" HPCwire Reader's Choice and Editor's Choice Awards. The Rocks Team is Working with Sun to Apply These Techniques to Solaris x86-Based Clusters and Match the Installation Speed of the Linux Version.
Toward an Interactive Gigapixel Display Scalable Adaptive Graphics Environment (SAGE) Controls: 100 Megapixels Display  55-Panel 1/4 TeraFLOP  Driven by 30-Node Cluster of 64-bit Dual Opterons 1/3 Terabit/sec I/O 30 x 10GE interfaces Linked to OptIPuter 1/8 TB RAM 60 TB Disk Source: Jason Leigh, Tom DeFanti, EVL@UIC OptIPuter Co-PIs NSF LambdaVision MRI@UIC Calit2 is Building a LambdaVision Wall in Each of the UCI & UCSD  Buildings
Scalable Adaptive Graphics Environment (SAGE) Required for Working in Display-Rich Environments AccessGrid Live video feeds Information Must Be Able To Flexibly Move Around The Wall Source: Jason Leigh, UIC Remote laptop High-resolution maps 3D surface rendering Volume Rendering Remote sensing
The UCSD OptIPuter Deployment SIO SDSC CRCA Phys. Sci -Keck SOM JSOE  Preuss 6 th   College SDSC Annex Node M Earth Sciences SDSC Medicine Engineering  High School To CENIC Collocation Source: Phil Papadopoulos, SDSC;  Greg Hidley, Calit2 UCSD is Prototyping  a  Campus-Scale  OptIPuter SDSC Annex  Campus Provided Dedicated Fibers  Between Sites Linking  Linux Clusters UCSD Has ~ 50 Labs With Clusters ½ Mile Juniper T320 0.320 Tbps Backplane Bandwidth 20X Chiaro Estara 6.4 Tbps Backplane Bandwidth
Campuses Must Provide Fiber Infrastructure  to End-User Laboratories & Large Rotating Data Stores SIO Ocean Supercomputer IBM Storage Cluster 2 Ten Gbps Campus Lambda Raceway Streaming Microscope Source: Phil Papadopoulos, SDSC, Calit2 UCSD Campus  LambdaStore  Architecture Global  LambdaGrid
The Optical Core of the UCSD Campus-Scale Testbed -- Evaluating Packet Routing versus Lambda Switching Goals by 2007: >= 50 endpoints at 10 GigE >= 32 Packet switched >= 32 Switched wavelengths >= 300 Connected endpoints Approximately 0.5 TBit/s Arrive at the “Optical” Center of Campus Switching will be a Hybrid Combination of:  Packet, Lambda, Circuit -- OOO and Packet Switches Already in Place Source: Phil Papadopoulos,  SDSC, Calit2 Funded by NSF MRI Grant Lucent Glimmerglass Chiaro Networks
Calit2@UCSD Building will House  a Photonics Networking Laboratory Networking “Living Lab” Testbed Core Unconventional Coding High Capacity Networking Bidirectional Architectures Hybrid Signal Processing Interconnected to OptIPuter  Access to Real World Network Flows Allows System Tests of New Concepts UCSD Parametric Processing Laboratory UCSD Photonics
LambdaRAM: Clustered Memory to Provide Low-Latency Access to Large Remote Data Sets. A Giant Pool of Cluster Memory Provides Low-Latency Access to Large Remote Data Sets; Data Is Prefetched Dynamically. LambdaStream Protocol Integrated into JuxtaView Montage Viewer. 3 Gbps Experiments from Chicago to Amsterdam to UIC: LambdaRAM Accessed Data from Amsterdam Faster Than from Local Disk. [Figure: Visualization of the Pre-Fetch Algorithm; Data on Disk in Amsterdam vs. the Displayed Region on the Local Wall] Source: David Lee, Jason Leigh
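The prefetching idea behind LambdaRAM can be sketched as a toy model. All names and the fixed-window policy here are my own illustration, not the actual LambdaRAM API: pooled cluster memory acts as a cache in front of remote storage, and tiles neighboring the displayed region are fetched before they are requested.

```python
# Toy sketch of clustered-memory prefetching in the spirit of LambdaRAM
# (hypothetical names; not the real API). Tiles near the displayed
# region are pulled into a shared memory pool ahead of need, so pans
# across the wall hit memory instead of the WAN.

class TileCache:
    def __init__(self, fetch, window=2):
        self.fetch = fetch     # function: tile index -> data (remote read)
        self.window = window   # tiles ahead/behind to prefetch
        self.pool = {}         # stands in for the cluster memory pool

    def read(self, idx):
        # Fetch a window of neighboring tiles around each request.
        for i in range(idx - self.window, idx + self.window + 1):
            if i >= 0 and i not in self.pool:
                self.pool[i] = self.fetch(i)
        return self.pool[idx]

fetches = []
cache = TileCache(lambda i: fetches.append(i) or f"tile-{i}")
cache.read(5)          # pulls tiles 3..7 across the network
hit = 6 in cache.pool  # neighbor already resident: no remote read needed
```

A real implementation streams tiles over a dedicated lambda with a rate-based protocol (LambdaStream in the slide), which is why remote memory could beat local disk.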
OptIPuter Software Architecture--a Service-Oriented Architecture Integrating Lambdas Into the Grid GTP XCP UDT LambdaStream CEP RBUDP Globus XIO GRAM GSI DVC Configuration Distributed Virtual Computer (DVC) API DVC Runtime Library Distributed Applications/ Web Services Telescience Vol-a-Tile SAGE JuxtaView Visualization  Data Services LambdaRAM DVC Services DVC Core Services DVC Job Scheduling DVC Communication Resource  Identify/Acquire Namespace Management Security Management High Speed Communication Storage Services IP Lambdas Discovery  and Control PIN/PDC RobuStore
Exercising the OptIPuter  Middleware Software “Stack” Optical Network Configuration Novel Transport Protocols Distributed Virtual Computer  (Coordinated Network and Resource Configuration) Visualization Applications (Neuroscience, Geophysics) Source-Andrew Chien, UCSD- OptIPuter Software System Architect 3-Layer Demo 5-Layer Demo 2-Layer Demo
First Two-Layer OptIPuter Terabit Juggling on 10G WANs. [Network diagram: 10 GE and 1-2 GE links connecting PNWGP Seattle, StarLight Chicago (UI at Chicago), CENIC Los Angeles, CENIC San Diego (UCSD/SDSC: CSE, SIO, SDSC, JSOE), UCI, ISI/USC, and SC2004 Pittsburgh, with a Trans-Atlantic Link to NetherLight Amsterdam (U of Amsterdam, NIKHEF)] SC2004: 17.8 Gbps, a TeraBIT in < 1 Minute! SC2005: Juggle Terabytes in a Minute. Source: Andrew Chien, UCSD
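The SC2004 headline number is easy to verify: at the demonstrated 17.8 Gbps aggregate rate, moving a terabit takes under a minute.

```python
# Check the slide's claim "17.8 Gbps, a TeraBIT in < 1 minute".
rate_gbps = 17.8                  # aggregate WAN rate at SC2004
terabit_gb = 1000.0               # 1 Terabit = 1000 Gigabits
seconds = terabit_gb / rate_gbps  # ~56 seconds
print(f"{seconds:.1f} s to move one terabit")
assert seconds < 60
```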
Calit2 Intends to Jump Beyond Traditional Web-Accessible Databases Data  Backend (DB, Files) W E B  PORTAL (pre-filtered,  queries metadata) Response Request + many others Source: Phil Papadopoulos, SDSC, Calit2 BIRN PDB NCBI Genbank
Calit2’s Direct Access  Core Architecture Flat File Server Farm W E B  PORTAL + Web Services Moore  Environment Traditional User Response Request Data- Base Farm 10 GigE  Fabric Source: Phil Papadopoulos, SDSC, Calit2 Dedicated Compute Farm (100s of CPUs) TeraGrid: Cyberinfrastructure Backplane (scheduled activities, e.g. all by all comparison) (10000s of CPUs)  Web (other service) Local  Cluster Local Environment Direct Access  Lambda Cnxns Campus Grid OptIPuter Campus Cloud
Realizing the Dream: High-Resolution Portals to Global Science Data. 650 Mpixel 2-Photon Microscopy Montage of HeLa Cultured Cancer Cells. Green: Actin; Red: Microtubules; Light Blue: DNA. Source: Mark Ellisman, David Lee, Jason Leigh, Tom Deerinck
Scalable Displays Being Developed  for Multi-Scale Biomedical Imaging Green: Purkinje Cells Red: Glial Cells Light Blue: Nuclear DNA Source: Mark Ellisman, David Lee, Jason Leigh Two-Photon Laser Confocal Microscope Montage of 40x36=1440 Images in 3 Channels of a Mid-Sagittal Section of Rat Cerebellum Acquired Over an 8-hour Period 300 MPixel Image!
Scalable Displays Allow Both  Global Content and Fine Detail Source: Mark Ellisman, David Lee, Jason Leigh 30 MPixel SunScreen Display Driven by a 20-node Sun Opteron Visualization Cluster
Allows for Interactive Zooming  from Cerebellum to Individual Neurons Source: Mark Ellisman, David Lee, Jason Leigh
Multi-Gigapixel Images are Available  from Film Scanners Today The Gigapxl Project http://gigapxl.org Balboa Park, San Diego Multi-GigaPixel Image
Large Images with Enormous Detail Require Interactive LambdaVision Systems. One Square Inch Shot from 100 Yards. The OptIPuter Project is Seeking to Obtain Some of These Images for LambdaVision 100M-Pixel Walls. http://gigapxl.org
Calit2 Is Applying OptIPuter Technologies to Post-Hurricane Recovery Working with NASA, USGS, NOAA, NIEHS, EPA, SDSU, SDSC, Duke, …
"Infosys's Global Conferencing Center: Ground Zero for the Indian Outsourcing Industry." "So this is our conference room, probably the largest screen in Asia--this is forty digital screens [put together]. We could be sitting here [in Bangalore] with somebody from New York, London, Boston, San Francisco, all live. …That's globalization." --Nandan Nilekani, CEO, Infosys
Academics use the “Access Grid”  for Global Conferencing Access Grid Talk with 35 Locations  on 5 Continents— SC Global Keynote Supercomputing ‘04
Multiple HD Streams Over Lambdas  Will Radically Transform Global Collaboration U. Washington JGN II Workshop Osaka, Japan Jan 2005 Prof.  Osaka Prof. Aoyama Prof. Smarr Source: U Washington Research Channel Telepresence Using Uncompressed  1.5 Gbps HDTV Streaming Over IP on Fiber Optics-- 75x Home Cable “HDTV” Bandwidth!
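The "75x" figure follows directly from the stream rate: an uncompressed 1.5 Gbps HDTV stream against a compressed home cable HD channel of about 20 Mbps. The 20 Mbps cable figure is implied by the slide's ratio rather than stated, so treat it as an inference:

```python
# Back out the home-cable bandwidth implied by the slide's 75x claim.
hd_stream_bps = 1.5e9            # uncompressed HDTV over IP, per the slide
ratio = 75                       # slide's claimed multiple
cable_bps = hd_stream_bps / ratio
print(f"implied cable HD channel: {cable_bps / 1e6:.0f} Mbps")
assert cable_bps == 20e6
```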
200 Million Pixels of Viewing Real Estate! Calit2@UCI Apple Tiled Display Wall Driven by 25 Dual-Processor G5s 50 Apple 30” Cinema Displays Source: Falko Kuester, Calit2@UCI NSF Infrastructure Grant Data—One Foot Resolution  USGS Images of La Jolla, CA HDTV Digital Cameras Digital Cinema
SAGE in Use  on the UCSD NCMIR OptIPuter Display Wall  LambdaCam Used to Capture the Tiled Display on a Web Browser HD Video from BIRN Trailer Macro View of Montage Data Micro View of Montage Data Live Streaming Video of the RTS-2000 Microscope HD Video from the RTS Microscope Room Source: David Lee,  NCMIR, UCSD
Partnering with NASA to Combine Telepresence with  Remote Interactive Analysis of Data Over National LambdaRail HDTV Over  Lambda OptIPuter  Visualized  Data SIO/UCSD NASA  Goddard www.calit2.net/articles/article.php?id=660 August 8, 2005
First Trans-Pacific Super High Definition Telepresence Meeting in New Calit2 Digital Cinema Auditorium Lays Technical Basis for Global Digital Cinema Sony  NTT  SGI Keio University  President Anzai UCSD  Chancellor Fox
The OptIPuter Enabled Collaboratory: Remote Researchers Jointly Exploring Complex Data OptIPuter will Connect The Calit2@UCI  200M-Pixel Wall  to  The Calit2@UCSD 100M-Pixel Display With Shared Fast Deep Storage “ SunScreen” Run by Sun Opteron Cluster UCI UCSD
Creating CyberPorts on the National LambdaRail–  Prototypes at ACCESS DC and TRECC Chicago www.trecc.org
Calit2/SDSC Proposal to Create a UC Cyberinfrastructure  of OptIPuter “On-Ramps” to TeraGrid Resources UC San Francisco  UC San Diego  UC Riverside  UC Irvine  UC Davis  UC Berkeley UC Santa Cruz UC Santa Barbara  UC Los Angeles  UC Merced OptIPuter + CalREN-XD + TeraGrid = “OptiGrid” Source: Fran Berman, SDSC Creating a Critical Mass of End Users on a Secure LambdaGrid