Supporting Optical Networking for Research and Education in the United States

Slide notes
  • This is the final build, including local networking.
  • Network usage requirements, Tier 2 to Tier 3: requires 1.6 Gbps per transfer (2 TB in 3 hours). Note that transfers will occur on a regular basis.
  • These measurement tools are active at all router locations on the Internet2 Network.

    1. Supporting Optical Networking for Research and Education in the United States. Christian Todorov, September 20, 2007.
    2. Agenda
       • Previous Generation Network
       • New Requirements of R&E
       • The Internet2 Network
       • Looking Forward
    3. Abilene Network
       • Internet2's previous network (Abilene) was IP only and based on unprotected 10G (OC-192) waves on the Qwest infrastructure
       • Utilized IS-IS
       • Natively supported IPv4/v6 and multicast
       • IP is great… for most people… most of the time…
    4. Changing Needs
       • As the capabilities available to the research community changed, the demands on the network also changed
       • e-Science applications and facilities grew and exerted greater performance pressure on the network; TCP and shared environments were no longer acceptable
       • Dedicated infrastructure was becoming a requirement
    5. The New Requirements
       • High-performance applications depend on high-performance networks
       • Networks must be fast, reliable, and scalable; have flexible architectures; be cost-effective; deliver multiple services across multiple network layers; be easy to operate and maintain; and be built with a view toward the future
       • Enable the user: the network as a service
    6. Demands on the Network
       • Entering the age of large scientific facilities
         - Large Hadron Collider at CERN
         - Very Long Baseline Arrays (radio astronomy)
         - Large Synoptic Survey Telescope (2010-13): 30 TB/night
       • An increasingly diverse set of demanding applications is utilizing network resources
         - Telemedicine: BIRN project, proteomics, tele-surgery, remote ICU, radiology; a high-resolution 3D color fMRI brain scan = 4.5 PB
         - Telepresence: master classes, virtual classrooms, tele-psychiatry
         - High-performance video delivery: uncompressed HD, CineGrid
         - Disaster recovery and distributed storage
    7. CERN – Large Science Facility
       • People
         - 3,000 CERN employees
         - 6,500 visiting scientists from 500 universities in 80 countries
       • Physical size
         - 27 km circumference
         - 9,300 magnets
         - 7 TeV nominal proton energy
         - 600 million collisions per second
       • Experimental facilities
         - ALICE (A Large Ion Collider Experiment): study of quark-gluon plasma
         - ATLAS (A Toroidal LHC ApparatuS): search for the Higgs boson
         - CMS (Compact Muon Solenoid): search for the Higgs boson
         - LHCb (LHC-beauty): study of the CP violation phenomenon
         - LHCf (LHC-forward): astroparticle physics
    8. LHC data distribution tiers (diagram)
       • CERN Tier 0 (raw data) → Tier 1 (12 orgs, e.g. FNAL, BNL; shared data storage and reduction) over the LHCOPN
       • Tier 1 → US Tier 2 (15 orgs; CMS, ATLAS) over GEANT-ESnet-Internet2
       • Tier 2 → US Tier 3 (68 orgs) via Internet2 and its connectors
       • Tier 3 → US Tier 4 (1,500 US scientists requesting data) via local infrastructure
    9. Peak Flow Network Requirements (diagram)
       • CERN Tier 0 to Tier 1 (LHCOPN): requires 10-40 Gbps
       • Tier 1 to Tier 2 (GEANT-ESnet-Internet2): requires 10-20 Gbps
       • Tier 1 or 2 to Tier 3 (Internet2/connectors and local infrastructure), estimate: requires 1.6 Gbps per transfer (a quick check of this figure follows)
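       A quick back-of-the-envelope check of the Tier 2 to Tier 3 estimate above; a minimal
       sketch in Python. The 2 TB in 3 hours transfer size comes from the slide notes; treating
       1 TB as 10^12 bytes and allowing a little headroom for protocol overhead are assumptions.

           # Sanity check: sustained rate needed to move 2 TB in 3 hours (Tier 2 -> Tier 3).
           def required_gbps(terabytes: float, hours: float) -> float:
               """Sustained throughput in Gbps needed to move `terabytes` within `hours`."""
               bits = terabytes * 1e12 * 8      # assume 1 TB = 10^12 bytes, 8 bits per byte
               seconds = hours * 3600
               return bits / seconds / 1e9

           print(f"{required_gbps(2, 3):.2f} Gbps")   # ~1.48 Gbps; ~1.6 Gbps with overhead headroom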
    10. Tier 1 to Tier 2 Traffic
    11. The Internet2 Network Overview
       • Layer 1: managed wavelengths from Level(3) Communications
         - Level(3) owns and manages the Infinera optical gear: responsible for software upgrades, equipment maintenance, remote hands, sparing, and NOC services
         - The Internet2 NOC has total provisioning control
       • Layer 2: Internet2-owned and managed Ciena CoreDirectors
         - Using the DRAGON GMPLS control plane
       • Layer 3: Internet2-owned and managed Juniper T640s
       • Expanded Observatory
         - Platform for layer 1/3 network performance data collection, collocation, and experimentation
         - perfSONAR integration for intra- and inter-network performance analysis
       • International connectivity
         - Layer 1 network extended to international exchange points in Seattle, Chicago, and New York City
         - Peering points in Seattle, PAIX, and Equinix Chicago
    12. Network Design
    13. Network Numbers
       • 13,500 long-haul route miles, 64 metro fiber route miles
       • Deployed and configured over 300 Infinera network elements
       • 23 Ciena CoreDirectors
       • 9 Juniper T640s
       • Day-1 capacity of 100 Gbps
       • Built 27 custom collocation suites representing 3,365 sq ft of space, including:
         - 91 racks (Internet2, ESnet, third parties)
         - 60 individual bulk cables with 48- and 96-fiber counts
       • The Internet2 and ESnet NOCs get the same real-time feeds as the Level(3) NOCs in Atlanta and Denver
       • Developed the Virtual Network Operations Center, a provisioning and troubleshooting dashboard
    14. Flexible Infrastructure Supporting e-Science, Network Research & Education
       • Best-effort 10G IP service
         - Enables delivery of advanced content, commodity services, etc.
         - Dual-stack IPv4/IPv6; IPv4 and IPv6 multicast and jumbo frames enabled
       • Point-to-point wavelength services
       • Circuit service for static or on-demand bandwidth
         - Point-to-point Ethernet (VLAN) framed SONET circuit
         - Point-to-point SONET circuit
         - Bandwidth provisioning available in 50 Mbps increments (see the sketch below)
         - Supports GFP, VCAT, and LCAS
         - Various protection options for both waves and sub-rate circuits
       • Physical connection
         - 1 or 10 Gigabit Ethernet
         - OC-192 SONET
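       The 50 Mbps granularity mentioned above maps a requested rate onto a whole number of
       provisioning increments (on SONET, onto a virtually concatenated group). A minimal
       sketch, assuming the requested rate is simply rounded up to the next 50 Mbps increment;
       GFP/VCAT framing overhead is ignored.

           import math

           def increments_needed(requested_mbps: float, increment_mbps: int = 50) -> int:
               """Number of 50 Mbps provisioning increments needed to carry a requested rate."""
               return math.ceil(requested_mbps / increment_mbps)

           # Example: the ~1.6 Gbps (1600 Mbps) Tier 2 -> Tier 3 transfer from the earlier
           # slide would be provisioned as 32 increments (32 x 50 Mbps = 1.6 Gbps).
           print(increments_needed(1600))   # 32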
    15. Infinera DWDM gear: static at the start
       • Grooming capabilities in the ADM to provide sub-channels and HOPI types of activities at the start
       • Simplified and standardized interface to connectors, exchange points, and other global research and education networks: 2 x 10 Gbps interfaces
       • Measurement and control servers will support the node
    16. Wavelength & Circuit Services
       • Connection-oriented services provide:
         - Guaranteed bandwidth and predictable jitter and latency (repeatable, dependable performance between collaborating sites)
         - Traffic segregation (to support specific policy or traffic-engineering requirements)
         - Router bypass: express links created for high-bandwidth, limited-duration, long-haul traffic, reducing the need for mid-path layer 3 interfaces
           · Cost efficiency: L3 router blades cost more than L2 ports, which cost more than L1 or L0 interfaces
           · A capability tradeoff, but one that can improve performance
    17. Lightpath Provisioning
    18. Multi-Service/Domain/Layer/Vendor Provisioning (slide from Tom Lehman, ISI-East)
       • Diagram: an end-to-end circuit crossing regional networks, the Internet2 Network, ESnet, and the GEANT IP network (MPLS, L2VPN), stitched from dynamic Ethernet and TDM segments; domain controllers form control-plane adjacencies over the data plane (Ethernet, routers, SONET switches, control elements, LSPs)
       • Multi-domain provisioning
         - Interdomain E-NNI (Web Service and OIF/GMPLS)
         - Multi-domain, multi-stage path computation process (see the sketch below)
         - AAA
         - Scheduling
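       To make the multi-domain, multi-stage path computation idea concrete, here is a rough
       sketch of stitching per-domain segments into one end-to-end circuit. It is purely
       illustrative: the class and field names are hypothetical and do not reflect the actual
       DRAGON, OIF E-NNI, or web-service interfaces.

           from dataclasses import dataclass
           from typing import List

           @dataclass
           class DomainController:
               name: str
               egress: str          # exchange point this domain uses toward the next domain

           @dataclass
           class Segment:
               domain: str
               ingress: str
               egress: str
               bandwidth_mbps: int

           def stitch_path(domains: List[DomainController], src: str, dst: str,
                           bandwidth_mbps: int) -> List[Segment]:
               """Ask each domain, in order, for its local segment and stitch them end to end."""
               segments: List[Segment] = []
               ingress = src
               for i, dc in enumerate(domains):
                   egress = dst if i == len(domains) - 1 else dc.egress
                   segments.append(Segment(dc.name, ingress, egress, bandwidth_mbps))
                   ingress = egress                 # the next domain starts where this one ends
               return segments

           # Example: regional network -> Internet2 -> GEANT, a 1 Gbps dynamic Ethernet circuit
           for seg in stitch_path(
                   [DomainController("Regional network", "regional-to-Internet2 exchange"),
                    DomainController("Internet2 Network", "New York international exchange"),
                    DomainController("GEANT", "destination edge")],
                   src="campus-host", dst="remote-collaborator", bandwidth_mbps=1000):
               print(seg)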
    19. Wavelength & Circuit Services
       • Automated circuit provisioning enables rapid deployment and efficient utilization of capital investment
         - Establishing end-to-end lightpaths is a non-trivial task: it is resource-intensive and error-prone
         - Automated reservation, allocation, and provisioning enables co-scheduling of network and non-network resources, e.g. radio telescopes (illustrated below)
       • Greater efficiency in the core network means these savings can be passed down to the members as lower-cost wavelength and IP services
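       A tiny illustration of the co-scheduling point above: the circuit reservation and the
       non-network resource (the radio-telescope observation) share one time window, so the
       lightpath exists exactly when the data is produced. The record layout and names are
       hypothetical, not an actual reservation API.

           from dataclasses import dataclass
           from datetime import datetime

           @dataclass
           class Reservation:
               resource: str
               start: datetime
               end: datetime
               bandwidth_mbps: int = 0      # 0 for non-network resources

           window = (datetime(2007, 10, 1, 2, 0), datetime(2007, 10, 1, 8, 0))
           bookings = [
               Reservation("Radio telescope observation", *window),
               Reservation("Dynamic circuit: telescope site -> correlator", *window,
                           bandwidth_mbps=512),
           ]
           for b in bookings:               # both bookings cover the same six-hour window
               print(b)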
    20. Scalability and Operability
       • The Internet2 Network is based on a unique arrangement with Level(3) that represents a hybrid approach to carrier-provided resources
       • Internet2 has full control over provisioning on the network but does not bear the responsibility of supporting and maintaining the physical infrastructure: fiber, amplifiers, transport equipment, etc.
       • Level(3)'s support of the physical network frees Internet2 from having to dedicate higher levels of specialized engineering resources to network support
       • The Internet2 NOC has a full view into the underlying transport equipment and works jointly with a dedicated NOC group within Level(3)
       • The agreement with Level(3) is essentially one for capacity with no upper limit: Internet2 can continue to add capacity to the network even beyond the carrying capacity of the transport chassis and the fiber
       • The Internet2 Network is constructed on a dedicated fiber pair with dedicated transport equipment
       • The Infinera, Ciena, and Juniper equipment used in the network is 40G-capable, and each vendor has 100G on its roadmap
    21. Expanded Network Measurement
       • Dependence on the network is increasing
         - Distributed applications
         - Moving larger data sets
       • The network is growing much more complex
         - Dynamic and static circuits
         - Network security issues
       • Need to better understand the network
         - Users must know what performance levels to expect
         - Network operators must be able to demonstrate that the network meets or exceeds those expectations
         - Application developers must have access to tools that differentiate between network problems and application problems
    22. Internet2 Measurement Tools
       • OWAMP (latency)
         - Regular tests between all routers, and on demand
       • BWCTL (throughput)
         - New version with more 'testers' available in August
         - Regular tests between all routers, and on demand
       • NDT (user diagnostic)
         - Version 3.4.1 available now, with better logging and error handling
       • NPToolKit (Knoppix system image that includes all of the tools)
         - Recent versions of the measurement tools installed and pre-configured
         - http://e2epi.internet2.edu/network-performance-toolkit.html
       (A short scripted example of invoking the latency and throughput tools follows.)
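       A minimal sketch of driving two of these tools from a script, assuming the OWAMP and
       BWCTL client binaries are installed and the target hosts run the corresponding daemons.
       The hostnames are placeholders and exact command-line options vary by version, so treat
       the invocations as illustrative.

           import subprocess

           def one_way_latency(target: str) -> str:
               """Run a basic OWAMP one-way latency test against `target` and return the report."""
               return subprocess.run(["owping", target], capture_output=True, text=True).stdout

           def throughput(target: str) -> str:
               """Run a basic BWCTL throughput test with `target` as the receiving host."""
               return subprocess.run(["bwctl", "-c", target], capture_output=True, text=True).stdout

           if __name__ == "__main__":
               print(one_way_latency("measurement-host.example.edu"))   # placeholder hostname
               print(throughput("measurement-host.example.edu"))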
    23. perfSONAR
       • perfSONAR motivation
         - Most organizations monitor and diagnose their own network
         - Networking is becoming an increasingly cross-domain effort
         - Monitoring and diagnostics must also become a cross-domain effort
       • perfSONAR
         - A set of protocols and schemas for implementing a service-oriented architecture for sharing and controlling network performance tools
         - A global community of users and developers: a joint collaboration between GEANT2, ESnet, Internet2, and RNP (Brazil), as well as numerous connected participants
         - Provides infrastructure for network performance monitoring on cross-domain links; contains a set of services delivering performance measurements in a federated environment (a rough client sketch follows)
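       A very rough sketch of the service-oriented idea: a client posts an XML query to a
       domain's measurement archive web service and receives measurement data in return. The
       endpoint URL and the request body are placeholders, not the actual perfSONAR NM-WG
       message formats; consult the perfSONAR documentation for those.

           import urllib.request

           MA_URL = "http://ma.example.net/perfSONAR/services/MeasurementArchive"  # placeholder endpoint

           def query_measurement_archive(xml_request: str) -> str:
               """POST an XML request to a (hypothetical) measurement archive and return its reply."""
               req = urllib.request.Request(
                   MA_URL,
                   data=xml_request.encode("utf-8"),
                   headers={"Content-Type": "text/xml"},
               )
               with urllib.request.urlopen(req) as resp:
                   return resp.read().decode("utf-8")

           # Usage (the request body below is a stand-in, not real NM-WG schema):
           # print(query_measurement_archive("<query>one-way latency, last 24 hours</query>"))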
    24. Objectives
       • The vision for the Internet2 Network is a seamless, integrated network facility that allows applications and users to transparently use the services and network layers that most appropriately serve their needs, when they need them, in a cost-effective manner
       • This network facility will allow users to focus on their work and not on the network
    25. Questions?
