Supporting Optical Networking for Research and Education in the United States: Presentation Transcript

  • Supporting Optical Networking for Research and Education in the United States (Christian Todorov, September 20, 2007)
  • Agenda
    • Previous Generation Network
    • New Requirements of R&E
    • The Internet2 Network
    • Looking Forward
  • Abilene Network
    • Internet2's previous network (Abilene) was IP-only and based on unprotected 10G (OC-192) waves on the Qwest infrastructure
    • Utilized IS-IS
    • Natively supported IPv4/v6 and multicast
    • IP is great… for most people… most of the time…
  • Changing Needs
    • As the capabilities available to the research community changed, the demands on the network also changed
    • e-science applications and facilities grew and exerted greater performance pressures on the network where TCP and shared environments were no longer acceptable
    • Dedicated infrastructure was becoming a requirement
  • The New Requirements
    • High performance applications are dependent on high performance networks
    • Networks must be fast, reliable, scalable, and cost-effective, with flexible architectures; they must deliver multiple services across multiple network layers, be easy to operate and maintain, and be built with a view toward the future
    • Enable the user – the network as a service
  • Demands on the Network
    • Entering the age of large scientific facilities
      • Large Hadron Collider at CERN
      • Very Long Baseline Arrays (radio astronomy)
      • Large Synoptic Survey Telescope (2010-13) – 30 TB/night (see the throughput sketch after this list)
    • An increasingly diverse set of demanding applications is utilizing network resources
      • Telemedicine: BIRN project, proteomics, tele-surgery, remote ICU, radiology (a high-resolution 3D color fMRI brain scan ≈ 4.5 PB)
      • Telepresence: master classes, virtual classrooms, tele-psychiatry
      • High performance video delivery: Uncompressed HD, Cinegrid
      • Disaster Recovery and distributed storage
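As a rough illustration of what volumes like these imply for the network, the sketch below converts data-set sizes into sustained throughput; the transfer windows are assumptions made for the arithmetic, not figures from the presentation.

    # Back-of-the-envelope: sustained rate needed to move a data set
    # within a given window. Window lengths are assumed, not from the slides.
    def gbps(bytes_moved, hours):
        return bytes_moved * 8 / (hours * 3600) / 1e9

    print(round(gbps(30e12, 24), 1))    # LSST, 30 TB/night over 24 h -> ~2.8 Gbps
    print(round(gbps(30e12, 8), 1))     # same 30 TB in an 8 h window -> ~8.3 Gbps
    print(round(gbps(4.5e15, 24)))      # a 4.5 PB scan in one day    -> ~417 Gbps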
  •  
    • People
      • 3000 CERN employees
      • 6500 visiting scientists from 500 Universities in 80 countries
    • Physical Size
      • 27 km circumference
      • 9,300 magnets
      • 7 TeV nominal proton energy
      • 600 million collisions per second
    • Experimental Facilities
      • ALICE (A Large Ion Collider Experiment) – Study quark-gluon plasma
      • ATLAS (A Toroidal LHC ApparatuS) – Search for Higgs boson
      • CMS (Compact Muon Solenoid) – Search for Higgs boson
      • LHCb (LHC-beauty) – Study the CP violation phenomenon
      • LHCf (LHC-forward) – Study astroparticle physics
    CERN – Large Science Facility
  • [Diagram: LHC data distribution tiers. Scientists request data. CERN Tier 0 (raw data) feeds Tier 1 (12 orgs, e.g. FNAL and BNL; shared data storage and reduction) over the LHCOPN; Tier 1 feeds US Tier 2 (15 orgs; CMS, ATLAS) over GEANT-ESnet-Internet2; Tier 2 feeds US Tier 3 (68 orgs) via Internet2/connectors; Tier 3 feeds US Tier 4 (1,500 US scientists) via local infrastructure.]
  • Peak Flow Network Requirements [diagram]: CERN Tier 0 to Tier 1 (over the LHCOPN): 10-40 Gbps; Tier 1 to Tier 2 (over GEANT-ESnet-Internet2): 10-20 Gbps; Tier 1 or 2 to Tier 3 (over Internet2/connectors and local infrastructure): an estimated 1.6 Gbps per transfer (roughly 2 TB in 3 hours), with transfers occurring on a regular basis (a quick arithmetic check follows)
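A quick check of the Tier 2/3 figure; the 2 TB-in-3-hours transfer size comes from the presentation, and the rest is arithmetic.

    # Sustained rate for one Tier 2/3 transfer of ~2 TB in about 3 hours.
    size_bytes = 2 * 10**12            # 2 TB decimal; 2 TiB gives ~1.63 Gbps
    seconds = 3 * 3600
    print(size_bytes * 8 / seconds / 1e9)   # ~1.48 Gbps raw, ~1.6 Gbps with overhead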
  • Tier 1 to Tier 2 Traffic
  • The Internet2 Network Overview
    • Layer 1: Managed wavelengths from Level(3) Communications
      • Level(3) owns and manages Infinera optical gear: responsible for software upgrades, equipment maintenance, remote hands, sparing, NOC services
      • Internet2 NOC has total provisioning control
    • Layer 2: Internet2-owned and managed Ciena CoreDirectors
      • Using the DRAGON GMPLS control plane
    • Layer 3: Internet2-owned and managed Juniper T640s
    • Expanded Observatory
      • Platform for layer 1/3 network performance data collection, collocation, experimentation
      • perfSONAR integration for intra- & inter-network performance analysis
    • International connectivity
      • Layer 1 network extended to international exchange points in Seattle, Chicago and New York City
      • Peering points in Seattle, PAIX, Equinix Chicago
  • Network Design
  • Network Numbers
    • 13,500 long haul route miles, 64 metro fiber route miles
    • Deployed and configured over 300 Infinera Network Elements
    • 23 Ciena CoreDirectors
    • 9 Juniper T640s
    • Day 1 capacity of 100 Gbps
    • Built 27 custom collocation suites representing 3,365 sq ft of space, including:
      • 91 racks (Internet2, ESnet, third parties)
      • 60 individual bulk cables with 48- and 96-fiber counts
    • Internet2 and ESnet NOCs get the same real-time feeds as the Level(3) NOCs in Atlanta and Denver
    • Developed the Virtual Network Operations Center – a provisioning and troubleshooting dashboard
  •
    • Best-Effort 10G IP Service
      • Enables delivery of advanced content, commodity services, etc.
      • Dual stack IPv4, IPv6; IPv4 & v6 multicast and jumbo frame enabled
    • Point-to-Point Wavelength Services
    • Circuit Service for static or on-demand bandwidth
      • Point-to-point Ethernet (VLAN)-framed SONET circuit
      • Point-to-point SONET circuit
      • Bandwidth provisioning available in 50 Mbps increments (a sizing sketch follows below)
      • Supports GFP, VCAT and LCAS
      • Various protection options for both waves and sub-rate circuits
    • Physical Connection
      • 1 or 10 Gigabit Ethernet
      • OC-192 SONET
    Flexible Infrastructure Supporting e-Science, Network Research & Education
    • Infinera DWDM Gear - Static at the start
    • Grooming capabilities in the ADM to provide sub-channels and support HOPI-style activities at the start
    • Simplified and standardized interface to connectors, exchange points, and other global research and education networks - 2 x 10 Gbps interfaces
    • Measurement and control servers will support the node
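A minimal sketch of sub-rate circuit sizing under the 50 Mbps provisioning increment mentioned above; the increment comes from the slide, while the function and example requests are illustrative assumptions rather than the actual provisioning interface.

    import math

    # Round a requested circuit bandwidth up to the 50 Mbps sub-rate increment.
    INCREMENT_MBPS = 50

    def provisioned(requested_mbps):
        members = math.ceil(requested_mbps / INCREMENT_MBPS)
        return members, members * INCREMENT_MBPS

    print(provisioned(1600))   # 1.6 Gbps request   -> (32 increments, 1600 Mbps)
    print(provisioned(622))    # OC-12-like request -> (13 increments, 650 Mbps)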
  • Wavelength & Circuit Services
    • Connection oriented services provide for:
      • Guaranteed bandwidth and predictable jitter and latency (repeatable, dependable performance between collaborating sites)
      • Traffic segregation (support specific policy or traffic engineering requirements)
      • Router bypass: express links created for high-bandwidth, limited-duration, long-haul traffic, reducing the need for mid-path L3 interfaces
        • Cost efficiency: L3 router blades cost more than L2 ports, which cost more than L1 or L0 interfaces
        • There is a capability tradeoff, but performance could improve
  • Lightpath Provisioning
  • Multi-Service/Domain/Layer/Vendor Provisioning [diagram: regional networks, the Internet2 Network, ESnet and the GEANT IP network (MPLS, L2VPN) interconnected by dynamic Ethernet and TDM segments; Ethernet routers, SONET switches and control elements under per-domain controllers, with control-plane adjacencies and LSPs alongside the data plane]
    • Multi-Domain Provisioning
    • Interdomain ENNI (Web Service and OIF/GMPLS)
    • Multi-domain, multi-stage path computation process
    • AAA
    • Scheduling
    (Slide from Tom Lehman, ISI-East; a toy path-stitching sketch follows)
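The following is a toy sketch of the multi-domain, multi-stage path computation idea: each domain resolves its own segment between border points and the segments are stitched end to end. The domain names, topology, and data structures are hypothetical; real inter-domain provisioning uses the web-service or OIF/GMPLS ENNI signalling noted above.

    # Hypothetical per-domain segments between agreed border points.
    DOMAIN_SEGMENTS = {
        "regional-A": ["campus-host", "regional-A-border"],
        "internet2":  ["regional-A-border", "chicago", "newyork", "internet2-border"],
        "geant":      ["internet2-border", "geneva-host"],
    }

    def stitch(domains):
        """Concatenate per-domain segments, dropping duplicated border hops."""
        path = []
        for name in domains:
            segment = DOMAIN_SEGMENTS[name]
            path.extend(segment if not path else segment[1:])
        return path

    print(stitch(["regional-A", "internet2", "geant"]))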
  • Wavelength & Circuit Services
    • Automated circuit provisioning enables rapid deployment and efficient utilization of capital investment
      • Establishing end-to-end lightpaths is a non-trivial task: it is resource intensive and error prone
      • Automated reservation, allocation, and provisioning enables co-scheduling of network and non-network resources, e.g. radio telescopes (a reservation-check sketch follows this slide)
    • Greater efficiency in the core network means these savings can be passed down to the members as lower cost wavelength and IP services.
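A minimal sketch of the reservation step behind automated provisioning: check whether a requested bandwidth fits on a link alongside existing time-window reservations. The link capacity, the reservations, and the interface here are assumptions for illustration, not the actual scheduling system.

    # Reservations are (start_hour, end_hour, mbps) on a single 10 Gbps link.
    LINK_CAPACITY_MBPS = 10_000
    existing = [(0, 6, 4000), (4, 10, 3000)]

    def fits(start, end, mbps, reservations, capacity=LINK_CAPACITY_MBPS):
        # Check committed bandwidth at every hour the request overlaps.
        for hour in range(start, end):
            committed = sum(r for s, e, r in reservations if s <= hour < e)
            if committed + mbps > capacity:
                return False
        return True

    print(fits(2, 8, 2000, existing))   # True: peak commitment 7000 + 2000 <= 10000
    print(fits(4, 6, 4000, existing))   # False: 7000 + 4000 > 10000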
  • Scalability and Operability
    • The Internet2 Network is based on a unique arrangement with Level(3) that represents a hybrid approach to carrier-provided resources.
    • Internet2 has full control over provisioning on the network but does not bear the responsibility of supporting and maintaining the physical infrastructure: fiber, amps, transport equipment, etc.
    • Level(3)'s support of the physical network frees Internet2 from maintaining a larger pool of specialized engineering resources dedicated to network support.
    • The Internet2 NOC has a full view into the underlying transport equipment and works jointly with a dedicated NOC group within Level(3).
    • The agreement with Level(3) is essentially one for capacity that has no upper limit: Internet2 can continue to add capacity to the network even beyond the carrying capacity of the transport chassis and the fiber.
    • The Internet2 network is constructed on a dedicated fiber pair with dedicated transport equipment
    • The Infinera, Ciena, and Juniper equipment used in the network is 40G-capable, and each vendor has 100G on its roadmap
    • Dependence on the network is increasing
      • Distributed applications
      • Moving larger data sets
    • Network is growing much more complex
      • Dynamic and static circuits
      • Network security issues
    • Need to better understand the network
      • User must know what performance levels to expect
      • Network operators must be able to demonstrate that the network meets or exceeds those expectations.
      • Application developers must have access to tools that differentiate between network problems and application problems.
    Expanded Network Measurement
    • OWAMP (latency)
      • Regular tests between all routers, and on-demand
    • BWCTL (throughput)
      • New version with more ‘testers’ available in August
      • Regular tests between all routers, and on-demand
    • NDT (User Diagnostic)
      • 3.4.1 available now, with better logging and error handling
    • NPToolKit (Knoppix system image that includes all tools)
      • Recent versions of Measurement Tools installed and pre-configured
      • http://e2epi.internet2.edu/network-performance-toolkit.html
    Internet2 Measurement Tools (an illustrative owping invocation follows)
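A small sketch of the kind of regular testing described above, shelling out to owping (the OWAMP client) against a list of measurement hosts. The host names are placeholders, and the sketch assumes owampd is reachable on each target.

    import subprocess

    # Placeholder targets; in practice these would be the measurement hosts
    # co-located with the network's routers.
    HOSTS = ["nms1.example.net", "nms2.example.net"]

    for host in HOSTS:
        # owping sends a stream of test packets and reports one-way delay statistics.
        result = subprocess.run(["owping", host], capture_output=True, text=True)
        print(f"--- {host} ---")
        print(result.stdout or result.stderr)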
    • perfSONAR motivation
      • Most organizations monitor and diagnose their own network
      • Networking is becoming an increasingly cross-domain effort
      • Monitoring and diagnostics must also become a cross-domain effort
    • perfSONAR
      • A set of protocols and schemas for implementing a Service-Oriented Architecture for sharing/controlling network performance tools
      • A global community of users and developers. Joint collaboration between GEANT2, ESnet, Internet2 and RNP (Brazil) as well as numerous connected participants
      • Provides infrastructure for network performance monitoring on cross-domain links; contains a set of services delivering performance measurements in a federated environment
    perfSONAR
  • Objectives
    • The vision for the Internet2 Network is a seamless, integrated network facility that allows applications and users to transparently utilize the services and network layers that most appropriately serve their needs, when they need them, in a cost-effective manner.
    • This network facility will allow users to focus on their work and not on the network.
    • Questions?