  1. OptIPuter Southern California Network Infrastructure
     Philip Papadopoulos
     OptIPuter Co-PI, University of California, San Diego
     Program Director, Grids and Clusters, San Diego Supercomputer Center
     September 2003
  2. UCSD Heavy Lifters
     • Greg Hidley, School of Engineering, Director of Cal-(IT)2 Technology Infrastructure
     • Mason Katz, SDSC, Cluster Development Group Leader
     • David Hutches, School of Engineering
     • Ted O’Connell, School of Engineering
     • Max Okumoto, School of Engineering
  3. Year 1 Mod-0, UCSD
  4. Building an Experimental Apparatus
     • Mod-0 OptIPuter is Ethernet (Packet) Based
       - Focused as an Immediately Usable High-Bandwidth Distributed Platform
       - Multiple Sites on Campus (a Few Fiber Miles)
       - Next-Generation, Highly Scalable Optical Chiaro Router at the Center of the Network
     • Hardware Balancing Act (back-of-envelope sketch after this slide)
       - Experiments Really Require Large Data Generators and Consumers
       - Science Drivers Require Significant Bandwidth to Storage
       - OptIPuter Predicated on Price/Performance Curves of > 1 GigE Networks
     • System Issues
       - How Does One Build and Manage a Reconfigurable Distributed Instrument?
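     A rough way to see the balancing act: one GigE-attached node can move at most about 1 Gbit/s (~125 MB/s), so the pool of data generators and consumers has to grow roughly in step with the aggregate network bandwidth. The back-of-envelope sketch below (Python; the 90% NIC efficiency figure is an assumption for illustration, not a project measurement) makes that arithmetic explicit.

        import math

        def nodes_needed(aggregate_gbit_per_s, nic_gbit_per_s=1.0, efficiency=0.9):
            """Nodes required to keep an aggregate link speed busy, assuming each
            node sustains a fraction `efficiency` of its NIC line rate (assumed)."""
            return math.ceil(aggregate_gbit_per_s / (nic_gbit_per_s * efficiency))

        if __name__ == "__main__":
            # e.g. a site with 4 GigE uplinks needs about 5 busy GigE nodes to fill them
            for target_gbit in (4, 20, 1000):
                print(target_gbit, "Gbit/s ->", nodes_needed(target_gbit), "nodes")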
  5. Raw Hardware
     • Center of the UCSD Network is a Chiaro Internet Router
       - Unique Optical Cross-Connect That Scales to 6.4 Tbit/s Today
         - We Have the 640 Gbit/s "Starter" System
         - It Has "Unlimited" Bandwidth from Our Perspective
         - Programmable Network Processors
       - Supports Multiple Routing Instances (Virtual Cut-Through)
         - "Wild West" OptIPuter-Routed (Campus)
         - High-Performance Research in Metro (CalREN-HPR) and Wide Area
         - Interface to the Campus Production Network with Appropriate Protections
     • Endpoints are Commodity Clusters
       - Commodity CPUs Running Linux, with GigE on Every Node
         - Differentiated as Storage vs. Compute vs. Visualization
       - > $800K of Donated Equipment from Sun and IBM
         - 128-Node (256 Gbit/s) Intel-Based Cluster from Sun (Delivered 2 Weeks Ago)
         - 48-Node (96 Gbit/s), 21 TB (~300 Spindles) Storage Cluster from IBM (In Process)
       - SIO Viz Cluster Purchased by the Project
  6. Raw Campus Fiber Plant: First Find the Conduit
  7. Storewidth Investigations: General Model
     • Parallel Pipes, Large Bisection, Unified Name Space
     • Viz, Compute, or Other Clustered Endpoints Reach a Storage Cluster with Multiple Network and Drive Pipes via Aggregation Switches and the Chiaro
     • Each Storage Node Runs httpd/DAV over PVFS, so the Cluster Presents a Large Virtual Disk (Multiple Network Pipes) as a Symmetric "Storage Service"
     • Baseline Measurements, 1 GB File (access-pattern sketch after this slide):
       - 1.6 Gbit/s (200 MB/s) with 6 Clients and Servers (HTTP)
       - 1.1 Gbit/s (140 MB/s) with 7 Clients and Servers (davFS)
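     The model on this slide is N clients pulling different pieces of the same file from N servers at once, so the transfer rate grows with the number of parallel network pipes. Below is a minimal sketch of that access pattern in Python; the host names, file path, and chunk size are assumptions for illustration (not the project's measurement harness), and it presumes the storage nodes honor HTTP Range requests.

        # Minimal sketch (not project code): read one large file in parallel from
        # several storage-cluster nodes that each export the same file over HTTP,
        # using Range requests so each thread reads a different slice.
        import urllib.request
        from concurrent.futures import ThreadPoolExecutor

        SERVERS = ["storage-0-0", "storage-0-1", "storage-0-2", "storage-0-3"]  # hypothetical
        PATH = "/export/bigfile.dat"                                            # hypothetical
        FILE_SIZE = 1 << 30        # 1 GB test file, as in the baseline measurement
        CHUNK = 64 << 20           # 64 MB per request (assumed tuning value)

        def fetch_range(i):
            """Read one CHUNK-sized slice of the file from one of the servers."""
            start = i * CHUNK
            end = min(start + CHUNK, FILE_SIZE) - 1
            url = "http://%s%s" % (SERVERS[i % len(SERVERS)], PATH)
            req = urllib.request.Request(url, headers={"Range": "bytes=%d-%d" % (start, end)})
            with urllib.request.urlopen(req) as resp:
                return start, resp.read()

        def parallel_read():
            """Fan requests out across servers; aggregate bandwidth scales with the
            number of concurrent pipes until a switch or disk limit is reached."""
            nchunks = (FILE_SIZE + CHUNK - 1) // CHUNK
            buf = bytearray(FILE_SIZE)
            with ThreadPoolExecutor(max_workers=len(SERVERS)) as pool:
                for start, data in pool.map(fetch_range, range(nchunks)):
                    buf[start:start + len(data)] = data
            return bytes(buf)

        if __name__ == "__main__":
            print("read %d bytes" % len(parallel_read()))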
  8. Year 2 Mod-0, UCSD
  9. Southern Cal Metro Extension, Year 2
  10. Aggregates
      • Year 1 (Network Build)
        - Chiaro Router Purchased, Installed, Working (February)
        - 5 Sites on Campus, Each with 4 GigE Uplinks to the Chiaro
        - Private Fiber, UCSD-Only
        - ~40 Individual Nodes, Most Shared with Other Projects
        - Endpoint Resource-Poor, Network-Rich
      • Year 2 (Endpoint Enhancements)
        - Chiaro Router: Additional Line Cards, IPv6, Start of 10GigE Deployment
        - 8 Sites on Campus
        - 3 Metro Sites
        - Multiple Virtual Routers for Connection to Campus, CENIC HPR, and Others
        - > 200 Nodes, Most Donated (Sun and IBM), Most Dedicated to OptIPuter
        - InfiniBand Test Network on 16 Nodes, Plus a Direct IB-to-GigE Switch
        - Enough Resource to Support Data-Intensive Activity; Slightly Network-Poor
      • Year 3+ (Balanced Expansion Driven by Research Requirements)
        - Expand 10GigE Deployments
        - Bring Network, Endpoints, and DWDM (Mod-1) Forward Together
        - Aggregate at Least a Terabit (Both Network and Endpoints) by Year 5
  11. Managing a Few Hundred Endpoints
      • Rocks Toolkit Used on over 130 Registered Clusters, Including Several Top500 Clusters (a rough fan-out illustration follows this slide)
        - Descriptions Easily Express Different System Configurations
        - Supports IA32 and IA64; Opteron in Progress
      • OptIPuter is Extending the Base Software
        - Integrate Experimental Protocols, Kernels, and Middleware into the Stack
        - Build Visualization and Storage Endpoints
        - Add Common Grid (NMI) Services through Collaboration with GEON/BIRN
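      Rocks drives node configuration from descriptions and full reinstallation rather than per-node hand edits. As a rough illustration only (not Rocks code), the sketch below shows the kind of parallel fan-out a few hundred endpoints require even for a single command; passwordless ssh and the compute-0-N node names are assumptions made for the example.

         # Illustrative sketch only -- not part of the Rocks toolkit. Runs one shell
         # command on every node of a cluster in parallel over ssh and reports which
         # nodes failed.
         import subprocess
         from concurrent.futures import ThreadPoolExecutor

         NODES = ["compute-0-%d" % i for i in range(128)]   # hypothetical node names

         def run_on(node, command):
             """ssh to one node, run the command, return (node, exit status)."""
             proc = subprocess.run(
                 ["ssh", "-o", "ConnectTimeout=5", node, command],
                 capture_output=True, text=True)
             return node, proc.returncode

         def fan_out(command, workers=32):
             """Run the command everywhere; a real cluster toolkit layers package
             descriptions and reinstallation on top of this kind of primitive."""
             failures = []
             with ThreadPoolExecutor(max_workers=workers) as pool:
                 for node, rc in pool.map(lambda n: run_on(n, command), NODES):
                     if rc != 0:
                         failures.append(node)
             return failures

         if __name__ == "__main__":
             bad = fan_out("uname -r")
             print("%d of %d nodes failed" % (len(bad), len(NODES)))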
