Talk to Nortel Visiting Team
Title: Calit2
La Jolla, CA

  1. "Calit2" Talk to the Nortel Visiting Team
     [email_address]
     December 12, 2005
     Dr. Larry Smarr
     Director, California Institute for Telecommunications and Information Technology
     Harry E. Gruber Professor, Dept. of Computer Science and Engineering
     Jacobs School of Engineering, UCSD
  2. Two New Calit2 Buildings Will Provide Major New Laboratories to Their Campuses
     - New Laboratory Facilities
       - Nanotech, BioMEMS, Chips, Radio, Photonics, Grid, Data, Applications
       - Virtual Reality, Digital Cinema, HDTV, Synthesis
     - Over 1,000 Researchers in the Two Buildings
       - Linked via Dedicated Optical Networks
       - International Conferences and Testbeds
     UC Irvine and UC San Diego; Richard C. Atkinson Hall Dedication, Oct. 28, 2005
  3. The Calit2@UCSD Building Is Designed for Prototyping Extremely High Bandwidth Applications
     - 1.8 Million Feet of Cat6 Ethernet Cabling
     - 150 Fiber Strands to the Building; Experimental Roof Radio Antenna Farm
     - Ubiquitous WiFi
     - Over 9,000 Individual 1 Gbps Drops in the Building, ~10G per Person
     - UCSD Is the Only UC Campus with a 10G CENIC Connection, Serving ~30,000 Users
     Photo: Tim Beach, Calit2
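The "~10G per Person" figure on this slide follows from simple arithmetic; a minimal sanity check, assuming the drops are shared by the roughly 1,000 researchers quoted for the buildings (the exact occupancy is not stated on this slide):

```python
# Sanity-check the building's edge capacity and per-person bandwidth
# from the counts on the slide. The researcher count is an assumption
# taken from the "Over 1,000 Researchers" slide, not this one.
drops = 9_000             # individual 1 Gbps Ethernet drops
gbps_per_drop = 1
researchers = 1_000       # assumed number of people sharing the drops

aggregate_gbps = drops * gbps_per_drop        # total edge capacity, in Gbps
per_person_gbps = aggregate_gbps / researchers

print(aggregate_gbps)     # 9000, i.e. 9 Tbps of aggregate edge capacity
print(per_person_gbps)    # 9.0, consistent with the slide's "~10G per Person"
```

The point of the check is that the per-person number is an average over drops, not a guarantee per user; any one drop still tops out at 1 Gbps.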
  4. Calit2@UCSD Is Connected to the World at 10 Gbps
     iGrid 2005: The Global Lambda Integrated Facility
     - September 26-30, 2005
     - Calit2 @ University of California, San Diego
     - California Institute for Telecommunications and Information Technology
     Maxine Brown and Tom DeFanti, Co-Chairs
     50 Demonstrations, 20 Countries, 10 Gbps per Demo
  5. Nortel 10 Gb Line-Speed Security Demo at iGrid@Calit2
     - Less than 500 ns of Latency Added
     [Network diagram: source-data Linux clusters and visualization clusters, including a tile display at EVL, linked between San Diego, Chicago (StarLight), Ottawa, and Amsterdam (NetherLight) over 10G WAN OC-192c circuits (Qwest, IRNC, CA*net4 with OC-192 GFP), Force10 switches, and 4/12 GE links via I-WIRE]
  6. First Trans-Pacific Super High Definition Telepresence Meeting, in the New Calit2 Digital Cinema Auditorium
     - Lays the Technical Basis for Global Digital Cinema
     - Partners: Sony, NTT, SGI
     - Keio University President Anzai and UCSD Chancellor Fox
  7. The OptIPuter Project: Creating a LambdaGrid "Web" for Gigabyte Data Objects
     - NSF Large Information Technology Research Proposal
       - Calit2 (UCSD, UCI) and UIC Lead Campuses; Larry Smarr PI
       - Partnering Campuses: USC, SDSU, NW, TA&M, UvA, SARA, NASA
     - Industrial Partners
       - IBM, Sun, Telcordia, Chiaro, Calient, Glimmerglass, Lucent
     - $13.5 Million Over Five Years
     - Linking Global-Scale Science Projects to Users' Linux Clusters
     NIH Biomedical Informatics; NSF EarthScope and ORION Research Network
  8. The Optical Core of the UCSD Campus-Scale Testbed: Evaluating Packet Routing versus Lambda Switching
     - Goals by 2007:
       - >= 50 Endpoints at 10 GigE
       - >= 32 Packet-Switched
       - >= 32 Switched Wavelengths
       - >= 300 Connected Endpoints
     - Approximately 0.5 Tbit/s Arrives at the "Optical" Center of Campus
     - Switching Will Be a Hybrid Combination of Packet, Lambda, and Circuit; OOO and Packet Switches (Lucent, Glimmerglass, Chiaro Networks) Already in Place
     Funded by an NSF MRI Grant
     Source: Phil Papadopoulos, SDSC, Calit2
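The 0.5 Tbit/s figure follows directly from the endpoint goal; a quick check, assuming all 50 ten-gigabit endpoints drive the optical core at line rate:

```python
# Check that 50 endpoints at 10 GigE account for the ~0.5 Tbit/s
# quoted as arriving at the optical center of campus.
endpoints = 50
gige_per_endpoint = 10          # Gbit/s per endpoint

total_gbps = endpoints * gige_per_endpoint
total_tbps = total_gbps / 1_000

print(total_tbps)               # 0.5, matching the slide's "0.5 TBit/s"
```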
  9. Toward an Interactive Gigapixel Display
     - The Scalable Adaptive Graphics Environment (SAGE) Controls:
       - A 100-Megapixel, 55-Panel Display
       - 1/4 TeraFLOP, Driven by a 30-Node Cluster of 64-bit Dual Opterons
       - 1/3 Terabit/sec of I/O, via 30 x 10GE Interfaces Linked to the OptIPuter
       - 1/8 TB of RAM and 60 TB of Disk
     NSF LambdaVision MRI@UIC
     Source: Jason Leigh and Tom DeFanti, EVL@UIC, OptIPuter Co-PIs
     Calit2 Is Building a LambdaVision Wall in Each of the UCI and UCSD Buildings
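The wall's headline numbers are internally consistent; a sketch that cross-checks them, assuming 1600 x 1200 per-panel resolution (a common choice for such tiled walls, not stated on the slide):

```python
# Cross-check the LambdaVision wall's aggregate figures.
panels = 55
panel_w, panel_h = 1600, 1200    # assumed per-panel resolution (not on the slide)
nics = 30                        # one 10 GigE interface per cluster node
gbps_per_nic = 10

megapixels = panels * panel_w * panel_h / 1e6
io_tbps = nics * gbps_per_nic / 1000

print(round(megapixels, 1))      # 105.6, i.e. roughly the "100 Megapixels" quoted
print(io_tbps)                   # 0.3, i.e. the "1/3 Terabit/sec I/O"
```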
  10. OptIPuter Software Architecture: a Service-Oriented Architecture Integrating Lambdas into the Grid
      [Layered architecture diagram:]
      - Distributed Applications / Web Services: Telescience, Vol-a-Tile, SAGE, JuxtaView
      - DVC Services: Visualization; Data Services (LambdaRAM)
      - Distributed Virtual Computer (DVC) API, DVC Runtime Library, DVC Configuration
      - DVC Core Services: Job Scheduling; Communication; Resource Identify/Acquire; Namespace Management; Security Management; High Speed Communication; Storage Services (RobuStore)
      - Grid Middleware: Globus XIO, GRAM, GSI
      - Transport Protocols: GTP, XCP, UDT, LambdaStream, CEP, RBUDP
      - IP and Lambdas, with Discovery and Control (PIN/PDC)
  11. Calit2's Direct Access Core Architecture Will Create a Next-Generation Metagenomics Server
      - Data Sources:
        - Sargasso Sea Data
        - Sorcerer II Expedition (GOS)
        - JGI Community Sequencing Project
        - Moore Marine Microbial Project
        - NASA Goddard Satellite Data
      - Traditional Users Issue Request/Response Queries Through a Web Portal and Web Services
      - Flat-File Server Farm and Database Farm on a 10 GigE Fabric
      - Dedicated Compute Farm (100s of CPUs)
      - TeraGrid Cyberinfrastructure Backplane for Scheduled Activities, e.g. All-by-All Comparison (10,000s of CPUs)
      - Local Clusters in the User's Local Environment Get Direct-Access Lambda Connections
      Source: Phil Papadopoulos, SDSC, Calit2
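The "all-by-all comparison" reserved for the TeraGrid backplane scales quadratically in the number of sequences, which is why it needs 10,000s of CPUs while interactive queries need only 100s. A sketch with an illustrative sequence count (the slide gives no dataset sizes):

```python
# Why all-by-all comparison needs a large scheduled backplane:
# comparing every sequence against every other is quadratic.
# The sequence count below is an illustrative assumption, not a
# figure from the slide.
n_sequences = 1_000_000                        # assumed collection size
pairs = n_sequences * (n_sequences - 1) // 2   # unordered pairwise comparisons

print(pairs)                                   # 499999500000, ~5e11 comparisons
```

At that scale even a microsecond per comparison is days of aggregate CPU time, so the work is batched onto the scheduled backplane rather than served interactively.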
  12. Adding Web and Grid Services to Optical Channels to Provide Real-Time Control of Ocean Observatories
      - Goal:
        - Prototype Cyberinfrastructure for NSF's Ocean Research Interactive Observatory Networks (ORION), Building on the OptIPuter
      - LOOKING NSF ITR, with PIs:
        - John Orcutt and Larry Smarr, UCSD
        - John Delaney and Ed Lazowska, UW
        - Mark Abbott, OSU
      - Collaborators at:
        - MBARI, WHOI, NCSA, UIC, CalPoly, UVic, CANARIE, Microsoft, NEPTUNE-Canada
      LOOKING: Laboratory for the Ocean Observatory Knowledge Integration Grid
      LOOKING Is Driven by NEPTUNE CI Requirements: Making Management of Gigabit Flows Routine
  13. Partnering with NASA to Combine Telepresence with Remote Interactive Analysis of Data over the National LambdaRail
      - HDTV over Lambda; OptIPuter Visualized Data
      - SIO/UCSD and NASA Goddard, August 8, 2005
  14. Calit2/SDSC Proposal to Create a UC Cyberinfrastructure of OptIPuter "On-Ramps" to TeraGrid Resources
      - Campuses: UC Berkeley, UC Davis, UC Irvine, UC Los Angeles, UC Merced, UC Riverside, UC San Diego, UC San Francisco, UC Santa Barbara, UC Santa Cruz
      - OptIPuter + CalREN-XD + TeraGrid = "OptiGrid"
      - Creating a Critical Mass of End Users on a Secure LambdaGrid
      Source: Fran Berman, SDSC
  15. Interdisciplinary Groups in Networks, Circuits, and Information Theory
  16. Circuits and Wireless
  17. Wireless SensorNets Driving an Ultra-High-Bandwidth Fiber-Optic Backbone to Create a Planetary-Scale Computer