Networking for the Future of DOE Science. William E. Johnston, ESnet Department Head and Senior Scientist, wej@es.net, www.es.net. This talk is available at www.es.net. Energy Sciences Network, Lawrence Berkeley National Laboratory. Institute of Computer Science (ICS) of the Foundation for Research and Technology – Hellas (FORTH), Feb. 15, 2008.
DOE's Office of Science: Enabling Large-Scale Science
ESnet Defined
ESnet as a Large-Scale Science Data Transport Network. A few hundred of the 25,000,000 hosts that the Labs connect to every month account for 50% of all ESnet traffic (which is primarily high energy physics, nuclear physics, and climate data at this point). Most of ESnet's traffic (>85%) goes to and comes from outside of ESnet, reflecting the highly collaborative nature of large-scale science, which is one of the main focuses of DOE's Office of Science. (The traffic map marks the R&E source or destination of ESnet's top 100 sites, all R&E; the DOE Lab destination or source of each flow is not shown.)
ESnet Provides Global High-Speed Internet Connectivity for DOE Facilities and Collaborators (12/2007). [Network map: ~45 end user sites, comprising Office of Science sponsored (22), NNSA sponsored (13+), joint sponsored (3), other sponsored (NSF LIGO, NOAA), and laboratory sponsored (6) sites; ESnet core hubs; commercial peering points (PAIX-PA, Equinix, etc.); and specific R&E network peers including CA*net4, GÉANT, GLORIAD, SINet, AARNet, KREONET2, TANet2, SingAREN, KAREN/REANNZ, NLR, Abilene/Internet2, and USLHCnet (DOE+CERN funded). Link capacities range from 45 Mb/s and less, through OC3 (155 Mb/s) and OC12/GigE, up to the 10 Gb/s IP core, 10 Gb/s SDN core (Internet2, NLR), and MAN rings (≥ 10 Gb/s). Geography is only representational.]
For Scale: the large geographic extent of ESnet (roughly 2700 miles / 4300 km east-west and 1625 miles / 2545 km north-south) makes it a challenge for a small organization to build, maintain, and operate the network.
ESnet History
- ESnet0/MFENet (mid-1970s-1986): 56 Kbps microwave and satellite links.
- ESnet1 (1986-1995): 56 Kbps to 45 Mbps X.25; ESnet formed to serve the Office of Science.
- ESnet2 (1995-2000): IP over 155 Mbps ATM; partnership with Sprint to build the first national footprint ATM network.
- ESnet3 (2000-2007): IP over 2.5 and 10 Gbps SONET; partnership with Qwest to build a national Packet over SONET network and optical channel Metropolitan Area fiber.
- ESnet4 (2007-2012): IP and virtual circuits on a configurable optical infrastructure with at least 5-6 optical channels of 10-100 Gbps each; partnership with Internet2 and the US Research & Education community to build a dedicated national optical network.
Operating Science Mission Critical Infrastructure
Disaster Recovery and Stability. Deploying full replication of the NOC databases and servers and Science Services databases in the NYC Qwest carrier hub. [Diagram: duplicate infrastructure (DNS, remote engineers) distributed across LBNL, PPPL, BNL, AMES, and TWC, and the SEA, SNV, ALB, ELP, CHI, ATL, NYC, and DC hubs.]
ESnet is a Highly Reliable Infrastructure: availability bands of "5 nines" (>99.995%), "4 nines" (>99.95%), and "3 nines" (>99.5%), with dually connected sites achieving the highest availability. Note: these availability measures cover only the ESnet infrastructure; they do not include site-related problems. Some sites, e.g. PNNL and LANL, provide circuits from the site to an ESnet hub, so the ESnet-site demarc is at the ESnet hub (there is no ESnet equipment at the site). In this case, circuit outages between the ESnet equipment and the site are considered site issues and are not included in the ESnet availability metric.
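The "nines" bands translate to allowed downtime per year as a simple back-of-envelope calculation (this arithmetic is an illustration, not from the slide):

```python
# Convert an availability target (percent) to allowed downtime per year.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def downtime_minutes(availability_pct: float) -> float:
    """Minutes of allowed downtime per year at a given availability percentage."""
    return MINUTES_PER_YEAR * (1 - availability_pct / 100)

for label, pct in [("5 nines", 99.995), ("4 nines", 99.95), ("3 nines", 99.5)]:
    print(f"{label} ({pct}%): about {downtime_minutes(pct):.0f} min/year down")
```

So a "5 nines" site can be down only about 26 minutes per year, while a "3 nines" site can be down almost two days.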
ESnet is an Organization Structured for the Service. Functional areas (the slide shows staffing figures of 3.5, 5.5, 3.0, 7.4, 8.3, and 5.5 across them): applied R&D for new network services (circuit services and end-to-end monitoring); network operations and user support (24x7x365, end-to-end problem resolution); network engineering, routing and network services, WAN security; science collaboration services (Public Key Infrastructure certification authorities, AV conferencing, email lists, Web services); internal infrastructure, disaster recovery, security; management, accounting, compliance; deployment and WAN maintenance.
ESnet FY06 Budget is Approximately $26.6M (approximate budget categories).
Funding sources (total funds: $26.6M): SC funding $20.1M; Other DOE $3.8M; SC Special Projects $1.2M; SC R&D $0.5M; Carryover $1.0M.
Expenses (total: $26.6M): circuits & hubs $12.7M; internal infrastructure, security, disaster recovery $3.4M; engineering & research $2.9M; WAN equipment $2.0M; collaboration services $1.6M; special projects (Chicago and LI MANs) $1.2M; operations $1.1M; management and compliance $0.7M; target carryover $1.0M.
A Changing Science Environment is the Key Driver of the Next Generation ESnet
The LHC will be the largest scientific experiment and will generate the most data the scientific community has ever tried to manage. The data management model involves a world-wide collection of data centers that store, manage, and analyze the data and that are integrated through network connections with typical speeds in the 10+ Gbps range: closely coordinated and interdependent distributed systems that must have predictable intercommunication for effective functioning. [ICFA SCIC]
FNAL Traffic is Representative of All CMS Traffic. Accumulated data (in terabytes) received by CMS data centers ("Tier 1" sites) and many analysis centers ("Tier 2" sites) during the past 12 months totals 15 petabytes. [LHC/CMS]
When a few large data sources/sinks dominate traffic, it is not surprising that overall network usage follows the patterns of the very large users. This trend will reverse in the next few weeks as the next round of LHC data challenges kicks off. (Chart: FNAL outbound traffic.)
What Does This Aggregate Traffic Look Like at a Site? FNAL outbound CMS traffic: max = 1064 MBy/s (8.5 Gb/s), average = 394 MBy/s (3.2 Gb/s).
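The byte-rate and bit-rate figures above are consistent under the usual 8 bits/byte conversion; a quick check (this helper is illustrative, not from the slide):

```python
def mbytes_per_sec_to_gbps(mby_s: float) -> float:
    """Convert megabytes/second to gigabits/second (8 bits per byte, 1000 Mb = 1 Gb)."""
    return mby_s * 8 / 1000

print(round(mbytes_per_sec_to_gbps(1064), 1))  # 8.5  (FNAL peak)
print(round(mbytes_per_sec_to_gbps(394), 1))   # 3.2  (FNAL average)
```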
Total ESnet Traffic Exceeds 2 Petabytes/Month: 2.7 PBytes in July 2007, up from 1 PByte in April 2006. ESnet traffic historically has increased 10x every 47 months, and overall traffic tracks the very large science use of the network.
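The historical 10x-every-47-months trend can be extrapolated as simple exponential growth; a sketch using the slide's July 2007 figure (the function name and the assumption of steady growth are mine):

```python
def projected_traffic_pb(months_ahead: float, current_pb: float = 2.7,
                         tenfold_period_months: float = 47.0) -> float:
    """Projected monthly traffic (PB) after `months_ahead` months,
    assuming the historical trend of 10x growth every 47 months holds."""
    return current_pb * 10 ** (months_ahead / tenfold_period_months)

print(round(projected_traffic_pb(0), 1))    # 2.7  (July 2007 baseline)
print(round(projected_traffic_pb(47), 1))   # 27.0 (one tenfold period later)
```

On this curve, traffic doubles roughly every 14 months, which is why capacity planning in the rest of the talk is framed around a rapidly scalable optical core.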
Large-Scale Data Analysis Systems (Typified by the LHC) Generate Several Types of Requirements. This slide drawn from [ICFA SCIC].
Enabling Large-Scale Science
ESnet Response to the Requirements
Networking for Large-Scale Science
ESnet4 Network Architecture
- The multiple interconnected rings provide high reliability.
- The IP network physical infrastructure is packet over unswitched Ethernet.
- The Science Data Network (SDN) physical infrastructure is switched Ethernet circuits.
- Core-to-site-to-core dual connections serve those sites not on metro area rings.
[Diagram elements: ESnet sites, ESnet IP hubs, ESnet SDN hubs, metro area rings (MANs), circuit connections to other science networks (e.g. USLHCNet), and other IP networks.]
Approach to Satisfying Capacity Requirements: a Scalable Optical Infrastructure. ESnet4 Backbone Optical Circuit Configuration (December 2008): 20G circuits linking Seattle, Sunnyvale, San Diego, Los Angeles, El Paso, Houston, Albuquerque, Denver, Kansas City, Chicago, Cleveland, Boston, New York City, Washington DC, Atlanta, and Nashville. [Legend: 10 Gb/s SDN core (NLR); 10/2.5 Gb/s IP core (Qwest); 10 Gb/s IP core (Level3); 10 Gb/s SDN core (Level3); MAN rings (≥ 10 Gb/s); Lab supplied links; future ESnet hubs; ESnet hubs (IP and SDN devices).]
Typical ESnet and Internet2 Core Hub. [Diagram: east, west, and north/south fibers terminate on an Infinera DTN (BMM) on the Internet2/Level3/Infinera national optical infrastructure; a Ciena CoreDirector grooming device, a Juniper M320 (IP core), a T640, and an SDN core switch serve Internet2 and ESnet; dynamically allocated and routed waves (future) support a network testbed implemented as an optical overlay, with optical interfaces to R&E regional nets, ESnet metro-area networks, and various equipment and experimental control plane management systems.]
Conceptual Architecture of the Infinera DTN Digital Terminal. [Diagram: TOM and TAM tributary modules feed 10x10 Gb/s into DLMs (100 Gb/s each) through an intra- and inter-DLM switch fabric with OE/EO conversion; a BMM aggregates the 100 Gb/s streams onto the WAN, coordinated by a control processor and the OSC.]
ESnet Metropolitan Area Network Ring Architecture for High Reliability Sites. MAN fiber ring: 2-4 x 10 Gbps channels provisioned initially, with expansion capacity to 16-64. [Diagram: the ring connects a large science site (site gateway router, site LAN, site edge router) to the ESnet production IP core and ESnet SDN core (east and west legs of each) through ESnet MAN switches at the ESnet IP core hub; ESnet-managed virtual circuit services are tunneled through the IP backbone, ESnet-managed lambda/circuit services carry SDN circuits to site systems, independent port cards support multiple 10 Gb/s line interfaces (T320), and US LHCnet switches attach to the SDN core switches.]
Typical Medium Size ESnet4 Hub
- Juniper M320 core IP router (about $1.65M, list)
- Cisco 7609 Science Data Network switch (about $365K, list)
- Juniper M7i IP peering router (about $60K, list)
- 10 Gbps network tester
- Secure terminal server (telephone modem access)
- DC power controllers and local Ethernet
Note that the major ESnet sites are now directly on the ESnet "core" network. [Map: Layer 1 optical nodes at eventual ESnet points of presence; ESnet IP switch-only hubs, IP switch/router hubs, and SDN switch hubs; lab sites; the ESnet IP core, Science Data Network core, and existing SDN/NLR links; lab supplied links, LHC related links, MAN links, international IP connections, and Internet2 circuit numbers. Metro rings include the West Chicago MAN (FNAL, 600 W. Chicago, Starlight, ANL, USLHCNet), the Long Island MAN (BNL, 32 AoA NYC, USLHCNet), the Atlanta MAN (ORNL, 180 Peachtree, 56 Marietta (SOX), Wash. DC, Houston, Nashville), the San Francisco Bay Area MAN (LBNL, SLAC, JGI, LLNL, SNLL, NERSC), and JLab/ELITE/ODU/MATP in the Wash., DC area. Bandwidth into and out of FNAL is equal to, or greater than, the ESnet core bandwidth.]
ESnet4 IP + SDN, 2011 Configuration. [Map: the same national footprint with 3-5 waves per segment, showing Layer 1 optical nodes at eventual ESnet points of presence, ESnet IP switch-only hubs, IP switch/router hubs, SDN switch hubs, lab sites, the ESnet IP core and Science Data Network core, existing SDN/NLR links, lab supplied links, LHC related links, MAN links, international IP connections, and Internet2 circuit numbers.]
ESnet4 Planned Configuration: 40-50 Gbps in 2009-2010, 160-400 Gbps in 2011-2012. The core network fiber path is ~14,000 miles / 24,000 km (the footprint spans roughly 2700 miles / 4300 km east-west and 1625 miles / 2545 km north-south). [Map: production IP core (10 Gbps); SDN core (20-30-40 Gbps); MANs (20-60 Gbps) or backbone loops for site access; international connections to Europe (GÉANT), CERN (30 Gbps), Canada (CANARIE), Asia-Pacific, Australia, South America (AMPATH), and GLORIAD (Russia and China); IP core hubs, SDN (switch) hubs, primary DOE Labs, possible hubs, and high speed cross-connects with Internet2/Abilene.]
Multi-Domain Virtual Circuits as a Service
Virtual Circuit Service Functional Requirements
ESnet Virtual Circuit Service: OSCARS (On-demand Secured Circuits and Advanced Reservation System). [Diagram: a human user submits a request through the Web-Based User Interface (WBUI), or a user application submits it directly to the Authentication, Authorization, and Auditing Subsystem (AAAS); the Bandwidth Scheduler Subsystem and Reservation Manager process the request, the Path Setup Subsystem issues instructions to routers and switches to set up and tear down LSPs, and feedback is returned to the user.]
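The Bandwidth Scheduler's job, as described above, amounts to admission control over time intervals: accept a reservation only if the link's capacity is never exceeded while it is active. A minimal sketch of that check, not the actual OSCARS implementation; `Reservation`, `can_reserve`, the 10 Gb/s capacity, and hourly granularity are all illustrative assumptions:

```python
from dataclasses import dataclass

LINK_CAPACITY_GBPS = 10.0  # assumed capacity of one SDN circuit

@dataclass
class Reservation:
    start: int   # hours since some epoch, for simplicity
    end: int     # exclusive
    gbps: float

def can_reserve(existing: list[Reservation], req: Reservation) -> bool:
    """Admit the request only if, at every hour it covers, already-committed
    bandwidth plus the request stays within link capacity."""
    for t in range(req.start, req.end):
        committed = sum(r.gbps for r in existing if r.start <= t < r.end)
        if committed + req.gbps > LINK_CAPACITY_GBPS:
            return False
    return True

booked = [Reservation(0, 4, 6.0)]
print(can_reserve(booked, Reservation(2, 6, 5.0)))  # False: 11 Gb/s at hours 2-3
print(can_reserve(booked, Reservation(4, 8, 5.0)))  # True: no temporal overlap
```

A real scheduler must also pick a path before checking each link along it, which is where the Path Setup Subsystem and MPLS LSP signaling come in.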
Environment of Science is Inherently Multi-Domain. Example end-to-end path: FNAL (AS3152) [US] to ESnet (AS293) [US] to GEANT (AS20965) [Europe] to DFN (AS680) [Germany] to DESY (AS1754) [Germany].
Inter-domain Reservations: A Hard Problem
Environment of Science is Inherently Multi-Domain (continued). A broker coordinates the local domain circuit managers in each domain along the path: FNAL (AS3152) [US], ESnet (AS293) [US], GEANT (AS20965) [Europe], DFN (AS680) [Germany], and DESY (AS1754) [Germany].
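The broker-plus-local-manager arrangement suggests a hold-then-commit pattern: tentatively hold capacity in every domain along the path, and roll back if any domain refuses. This is a hypothetical sketch of that coordination logic, not OSCARS code; all names (`DomainCircuitManager`, `broker_reserve`) and the capacity figures are invented for illustration:

```python
class DomainCircuitManager:
    """Stands in for one domain's local circuit reservation system."""
    def __init__(self, name: str, free_gbps: float):
        self.name, self.free_gbps, self.held = name, free_gbps, 0.0

    def hold(self, gbps: float) -> bool:
        """Tentatively hold bandwidth; refuse if capacity is insufficient."""
        if self.free_gbps - self.held >= gbps:
            self.held += gbps
            return True
        return False

    def release(self, gbps: float) -> None:
        self.held -= gbps

def broker_reserve(path: list[DomainCircuitManager], gbps: float) -> bool:
    """Hold in each domain in order; on any refusal, roll back earlier holds."""
    held = []
    for dom in path:
        if dom.hold(gbps):
            held.append(dom)
        else:
            for d in held:
                d.release(gbps)
            return False
    return True

path = [DomainCircuitManager(n, cap) for n, cap in
        [("ESnet", 10.0), ("GEANT", 10.0), ("DFN", 5.0)]]
print(broker_reserve(path, 6.0))  # False: DFN lacks capacity, holds rolled back
print(broker_reserve(path, 4.0))  # True: every domain can commit
```

The hard parts the sketch omits are exactly what makes this an open problem on the slide: each domain has its own AAA, its own scheduling horizon, and no shared clock or failure model.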
OSCARS Update
 
Opportunities for Advanced Technology in Telecommunications
Opportunities for Advanced Technology in TelecommunicationsOpportunities for Advanced Technology in Telecommunications
Opportunities for Advanced Technology in Telecommunications
 
The Pacific Research Platform
The Pacific Research PlatformThe Pacific Research Platform
The Pacific Research Platform
 
Impact of Grid Computing on Network Operators and HW Vendors
Impact of Grid Computing on Network Operators and HW VendorsImpact of Grid Computing on Network Operators and HW Vendors
Impact of Grid Computing on Network Operators and HW Vendors
 
Grid optical network service architecture for data intensive applications
Grid optical network service architecture for data intensive applicationsGrid optical network service architecture for data intensive applications
Grid optical network service architecture for data intensive applications
 

More from Videoguy

Energy-Aware Wireless Video Streaming
Energy-Aware Wireless Video StreamingEnergy-Aware Wireless Video Streaming
Energy-Aware Wireless Video StreamingVideoguy
 
Microsoft PowerPoint - WirelessCluster_Pres
Microsoft PowerPoint - WirelessCluster_PresMicrosoft PowerPoint - WirelessCluster_Pres
Microsoft PowerPoint - WirelessCluster_PresVideoguy
 
Proxy Cache Management for Fine-Grained Scalable Video Streaming
Proxy Cache Management for Fine-Grained Scalable Video StreamingProxy Cache Management for Fine-Grained Scalable Video Streaming
Proxy Cache Management for Fine-Grained Scalable Video StreamingVideoguy
 
Free-riding Resilient Video Streaming in Peer-to-Peer Networks
Free-riding Resilient Video Streaming in Peer-to-Peer NetworksFree-riding Resilient Video Streaming in Peer-to-Peer Networks
Free-riding Resilient Video Streaming in Peer-to-Peer NetworksVideoguy
 
Instant video streaming
Instant video streamingInstant video streaming
Instant video streamingVideoguy
 
Video Streaming over Bluetooth: A Survey
Video Streaming over Bluetooth: A SurveyVideo Streaming over Bluetooth: A Survey
Video Streaming over Bluetooth: A SurveyVideoguy
 
Video Streaming
Video StreamingVideo Streaming
Video StreamingVideoguy
 
Reaching a Broader Audience
Reaching a Broader AudienceReaching a Broader Audience
Reaching a Broader AudienceVideoguy
 
Considerations for Creating Streamed Video Content over 3G ...
Considerations for Creating Streamed Video Content over 3G ...Considerations for Creating Streamed Video Content over 3G ...
Considerations for Creating Streamed Video Content over 3G ...Videoguy
 
ADVANCES IN CHANNEL-ADAPTIVE VIDEO STREAMING
ADVANCES IN CHANNEL-ADAPTIVE VIDEO STREAMINGADVANCES IN CHANNEL-ADAPTIVE VIDEO STREAMING
ADVANCES IN CHANNEL-ADAPTIVE VIDEO STREAMINGVideoguy
 
Impact of FEC Overhead on Scalable Video Streaming
Impact of FEC Overhead on Scalable Video StreamingImpact of FEC Overhead on Scalable Video Streaming
Impact of FEC Overhead on Scalable Video StreamingVideoguy
 
Application Brief
Application BriefApplication Brief
Application BriefVideoguy
 
Video Streaming Services – Stage 1
Video Streaming Services – Stage 1Video Streaming Services – Stage 1
Video Streaming Services – Stage 1Videoguy
 
Streaming Video into Second Life
Streaming Video into Second LifeStreaming Video into Second Life
Streaming Video into Second LifeVideoguy
 
Flash Live Video Streaming Software
Flash Live Video Streaming SoftwareFlash Live Video Streaming Software
Flash Live Video Streaming SoftwareVideoguy
 
Videoconference Streaming Solutions Cookbook
Videoconference Streaming Solutions CookbookVideoconference Streaming Solutions Cookbook
Videoconference Streaming Solutions CookbookVideoguy
 
Streaming Video Formaten
Streaming Video FormatenStreaming Video Formaten
Streaming Video FormatenVideoguy
 
iPhone Live Video Streaming Software
iPhone Live Video Streaming SoftwareiPhone Live Video Streaming Software
iPhone Live Video Streaming SoftwareVideoguy
 
Glow: Video streaming training guide - Firefox
Glow: Video streaming training guide - FirefoxGlow: Video streaming training guide - Firefox
Glow: Video streaming training guide - FirefoxVideoguy
 

More from Videoguy (20)

Energy-Aware Wireless Video Streaming
Energy-Aware Wireless Video StreamingEnergy-Aware Wireless Video Streaming
Energy-Aware Wireless Video Streaming
 
Microsoft PowerPoint - WirelessCluster_Pres
Microsoft PowerPoint - WirelessCluster_PresMicrosoft PowerPoint - WirelessCluster_Pres
Microsoft PowerPoint - WirelessCluster_Pres
 
Proxy Cache Management for Fine-Grained Scalable Video Streaming
Proxy Cache Management for Fine-Grained Scalable Video StreamingProxy Cache Management for Fine-Grained Scalable Video Streaming
Proxy Cache Management for Fine-Grained Scalable Video Streaming
 
Adobe
AdobeAdobe
Adobe
 
Free-riding Resilient Video Streaming in Peer-to-Peer Networks
Free-riding Resilient Video Streaming in Peer-to-Peer NetworksFree-riding Resilient Video Streaming in Peer-to-Peer Networks
Free-riding Resilient Video Streaming in Peer-to-Peer Networks
 
Instant video streaming
Instant video streamingInstant video streaming
Instant video streaming
 
Video Streaming over Bluetooth: A Survey
Video Streaming over Bluetooth: A SurveyVideo Streaming over Bluetooth: A Survey
Video Streaming over Bluetooth: A Survey
 
Video Streaming
Video StreamingVideo Streaming
Video Streaming
 
Reaching a Broader Audience
Reaching a Broader AudienceReaching a Broader Audience
Reaching a Broader Audience
 
Considerations for Creating Streamed Video Content over 3G ...
Considerations for Creating Streamed Video Content over 3G ...Considerations for Creating Streamed Video Content over 3G ...
Considerations for Creating Streamed Video Content over 3G ...
 
ADVANCES IN CHANNEL-ADAPTIVE VIDEO STREAMING
ADVANCES IN CHANNEL-ADAPTIVE VIDEO STREAMINGADVANCES IN CHANNEL-ADAPTIVE VIDEO STREAMING
ADVANCES IN CHANNEL-ADAPTIVE VIDEO STREAMING
 
Impact of FEC Overhead on Scalable Video Streaming
Impact of FEC Overhead on Scalable Video StreamingImpact of FEC Overhead on Scalable Video Streaming
Impact of FEC Overhead on Scalable Video Streaming
 
Application Brief
Application BriefApplication Brief
Application Brief
 
Video Streaming Services – Stage 1
Video Streaming Services – Stage 1Video Streaming Services – Stage 1
Video Streaming Services – Stage 1
 
Streaming Video into Second Life
Streaming Video into Second LifeStreaming Video into Second Life
Streaming Video into Second Life
 
Flash Live Video Streaming Software
Flash Live Video Streaming SoftwareFlash Live Video Streaming Software
Flash Live Video Streaming Software
 
Videoconference Streaming Solutions Cookbook
Videoconference Streaming Solutions CookbookVideoconference Streaming Solutions Cookbook
Videoconference Streaming Solutions Cookbook
 
Streaming Video Formaten
Streaming Video FormatenStreaming Video Formaten
Streaming Video Formaten
 
iPhone Live Video Streaming Software
iPhone Live Video Streaming SoftwareiPhone Live Video Streaming Software
iPhone Live Video Streaming Software
 
Glow: Video streaming training guide - Firefox
Glow: Video streaming training guide - FirefoxGlow: Video streaming training guide - Firefox
Glow: Video streaming training guide - Firefox
 

[.ppt]

  • 1. Networking for the Future of DOE Science. William E. Johnston, ESnet Department Head and Senior Scientist, wej@es.net, www.es.net. This talk is available at www.es.net. Energy Sciences Network, Lawrence Berkeley National Laboratory. Networking for the Future of Science. Institute of Computer Science (ICS) of the Foundation for Research and Technology – Hellas (FORTH), Feb. 15, 2008
  • 2.
  • 3.
  • 4. ESnet as a Large-Scale Science Data Transport Network. A few hundred of the 25,000,000 hosts that the Labs connect to every month account for 50% of all ESnet traffic (which is primarily high energy physics, nuclear physics, and climate data at this point). Most of ESnet’s traffic (>85%) goes to and comes from outside of ESnet. This reflects the highly collaborative nature of large-scale science (which is one of the main focuses of DOE’s Office of Science). [Map key: the R&E source or destination of ESnet’s top 100 sites (all R&E); the DOE Lab destination or source of each flow is not shown.]
  • 5. ESnet Provides Global High-Speed Internet Connectivity for DOE Facilities and Collaborators (12/2007). [Network map: ~45 end user sites: Office of Science sponsored (22), NNSA sponsored (13+), joint sponsored (3), other sponsored (NSF LIGO, NOAA), and laboratory sponsored (6). International R&E peers include CA*net4 (Canada), GÉANT (France, Germany, Italy, UK, etc.), GLORIAD (Russia, China), Kreonet2 (Korea), SINet (Japan), AARNet (Australia), TANet2/ASCC (Taiwan), SingAREN, KAREN/REANNZ, ODN Japan Telecom America, Russia (BINP), AMPATH (S. America), and CERN via USLHCnet (DOE+CERN funded); domestic peers include Abilene/Internet2, NLR, Starlight/StarTap, MREN, MAXGPoP, NYSERNet, MAN LAN, and commercial peering points (PAIX-PA, Equinix, etc.). Link legend: international (1-10 Gb/s); 10 Gb/s SDN core (I2, NLR); 10 Gb/s IP core; MAN rings (≥10 Gb/s); lab supplied links; OC12 / GigEthernet; OC3 (155 Mb/s); 45 Mb/s and less. Geography is only representational. Scale: 2700 miles / 4300 km; 1625 miles / 2545 km.]
  • 6. For Scale: the relatively large geographic scale of ESnet makes it a challenge for a small organization to build, maintain, and operate the network. (Scale bars: 2700 miles / 4300 km; 1625 miles / 2545 km.)
  • 7. ESnet History. ESnet0/MFENet (mid-1970s-1986): 56 Kbps microwave and satellite links. ESnet1 (1986-1995): 56 Kbps to 45 Mbps X.25; ESnet formed to serve the Office of Science. ESnet2 (1995-2000): IP over 155 Mbps ATM; partnership with Sprint to build the first national footprint ATM network. ESnet3 (2000-2007): IP over 2.5 and 10 Gbps SONET; partnership with Qwest to build a national Packet over SONET network and optical channel Metropolitan Area fiber. ESnet4 (2007-2012): IP and virtual circuits on a configurable optical infrastructure with at least 5-6 optical channels of 10-100 Gbps each; partnership with Internet2 and the US Research & Education community to build a dedicated national optical network.
  • 8.
  • 9.
  • 10. ESnet is a Highly Reliable Infrastructure: “5 nines” (>99.995%), “4 nines” (>99.95%), “3 nines” (>99.5%); dually connected sites. Note: these availability measures are only for ESnet infrastructure; they do not include site-related problems. Some sites, e.g. PNNL and LANL, provide circuits from the site to an ESnet hub, and therefore the ESnet-site demarc is at the ESnet hub (there is no ESnet equipment at the site). In this case, circuit outages between the ESnet equipment and the site are considered site issues and are not included in the ESnet availability metric.
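The “nines” tiers quoted above translate directly into allowed outage time per year. A minimal sketch of that arithmetic (the helper name is illustrative, not ESnet tooling):

```python
def max_downtime_minutes(availability_pct):
    """Maximum allowed outage per year (in minutes) for a given availability %."""
    minutes_per_year = 365 * 24 * 60  # 525,600
    return minutes_per_year * (1 - availability_pct / 100.0)

# The slide's tiers: >99.995% allows ~26 min/yr of outage,
# >99.95% allows ~263 min/yr, and >99.5% allows ~2,628 min/yr (~1.8 days).
for pct in (99.995, 99.95, 99.5):
    print(pct, round(max_downtime_minutes(pct)))
```

This is why the dually connected sites cluster in the higher tiers: a second path keeps single-circuit outages from counting against the site.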
  • 11. ESnet is an Organization Structured for the Service: applied R&D for new network services (circuit services and end-to-end monitoring); network operations and user support (24x7x365, end-to-end problem resolution); network engineering, routing and network services, WAN security; science collaboration services (Public Key Infrastructure certification authorities, AV conferencing, email lists, Web services); internal infrastructure, disaster recovery, security; management, accounting, compliance; deployment and WAN maintenance. (Numeric figures shown on the slide: 3.5, 5.5, 3.0, 7.4, 8.3, 5.5.)
  • 12. ESnet FY06 Budget is Approximately $26.6M (approximate budget categories). Total funds: $26.6M: SC funding $20.1M; Other DOE $3.8M; carryover $1.0M; SC R&D $0.5M; SC special projects $1.2M. Total expenses: $26.6M: circuits & hubs $12.7M; internal infrastructure, security, disaster recovery $3.4M; engineering & research $2.9M; WAN equipment $2.0M; collaboration services $1.6M; special projects (Chicago and LI MANs) $1.2M; operations $1.1M; management and compliance $0.7M; target carryover $1.0M.
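The budget figures above can be cross-checked: both the funding sources and the expense categories sum to the quoted $26.6M total. A quick sanity check (category names paraphrased from the slide):

```python
# All values in $M, taken from the FY06 budget slide above
funds = {"SC funding": 20.1, "Other DOE": 3.8, "Carryover": 1.0,
         "SC R&D": 0.5, "SC special projects": 1.2}
expenses = {"Circuits & hubs": 12.7, "Internal infra/security/DR": 3.4,
            "Engineering & research": 2.9, "WAN equipment": 2.0,
            "Collaboration services": 1.6, "Special projects (MANs)": 1.2,
            "Operations": 1.1, "Management & compliance": 0.7,
            "Target carryover": 1.0}
print(round(sum(funds.values()), 1))     # 26.6
print(round(sum(expenses.values()), 1))  # 26.6
```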
  • 13.
  • 14. The LHC will be the largest scientific experiment, and will generate the most data, that the scientific community has ever tried to manage. The data management model involves a world-wide collection of data centers that store, manage, and analyze the data and that are integrated through network connections with typical speeds in the 10+ Gbps range: “closely coordinated and interdependent distributed systems that must have predictable intercommunication for effective functioning” [ICFA SCIC].
  • 15. FNAL Traffic is Representative of All CMS Traffic. Accumulated data (terabytes) received by CMS data centers (“tier1” sites) and many analysis centers (“tier2” sites) during the past 12 months: 15 petabytes of data [LHC/CMS].
  • 16. When a Few Large Data Sources/Sinks Dominate Traffic, it is Not Surprising that Overall Network Usage Follows the Patterns of the Very Large Users. This Trend Will Reverse in the Next Few Weeks as the Next Round of LHC Data Challenges Kicks Off. (Chart: FNAL outbound traffic.)
  • 17. What Does this Aggregate Traffic Look Like at a Site? FNAL outbound CMS traffic: max = 1064 MBy/s (8.5 Gb/s), average = 394 MBy/s (3.2 Gb/s). (Chart axis: MBytes/sec.)
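The byte-rate to bit-rate conversions on this slide (1064 MBy/s ≈ 8.5 Gb/s) are just a factor of 8 plus a decimal unit change; a one-line sketch:

```python
def mbytes_to_gbits(mby_per_s):
    """Convert MBytes/s to Gbits/s (8 bits per byte, decimal network units)."""
    return mby_per_s * 8 / 1000.0

print(round(mbytes_to_gbits(1064), 1))  # 8.5 (the slide's peak)
print(round(mbytes_to_gbits(394), 1))   # 3.2 (the slide's average)
```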
  • 18. Total ESnet Traffic Exceeds 2 Petabytes/Month: 2.7 PBytes in July 2007, up from 1 PByte in April 2006. ESnet traffic historically has increased 10x every 47 months. Overall traffic tracks the very large science use of the network.
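The 10x-every-47-months growth rate quoted above implies a fixed monthly multiplier, which can be used to extrapolate traffic volumes. A sketch under that assumption (function name illustrative, not an ESnet planning tool):

```python
def projected_traffic(current_pb, months_ahead, tenfold_every=47):
    """Extrapolate monthly traffic assuming 10x growth every `tenfold_every` months."""
    monthly_factor = 10 ** (1.0 / tenfold_every)
    return current_pb * monthly_factor ** months_ahead

# From 2.7 PB/month (July 2007), the trend predicts ~27 PB/month 47 months later
print(round(projected_traffic(2.7, 47), 1))  # 27.0
```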
  • 19.
  • 20.
  • 21.
  • 22.
  • 23. ESnet4 Network Architecture. [Diagram legend: ESnet sites; ESnet IP hubs; ESnet SDN hubs; metro area rings (MANs); circuit connections to other science networks (e.g. USLHCNet); other IP networks.] The multiple interconnected rings provide high reliability. The IP network physical infrastructure is packet over unswitched Ethernet; the Science Data Network (SDN) physical infrastructure is switched Ethernet circuits. Sites not on metro area rings get a core-to-site-to-core dual connection.
  • 24. Approach to Satisfying Capacity Requirements: a Scalable Optical Infrastructure. ESnet4 Backbone Optical Circuit Configuration (December 2008). [Map: 20G segments linking Seattle, Sunnyvale, San Diego, Los Angeles, Denver, Salt Lake City, Albuquerque, El Paso, Houston, Kansas City, Chicago, Cleveland, Nashville, Atlanta, New York City, Boston, and Washington DC. Legend: 10 Gb/s SDN core (NLR); 10/2.5 Gb/s IP core (Qwest); 10 Gb/s IP core (Level3); 10 Gb/s SDN core (Level3); MAN rings (≥10 Gb/s); lab supplied links; ESnet hub (IP and SDN devices); future ESnet hub.]
  • 25.
  • 26.
  • 27. ESnet Metropolitan Area Network Ring Architecture for High Reliability Sites. MAN fiber ring: 2-4 x 10 Gbps channels provisioned initially, with expansion capacity to 16-64. ESnet-managed virtual circuit services are tunneled through the IP backbone (virtual circuits to site), and ESnet-managed λ / circuit services deliver SDN circuits to site systems, alongside the ESnet production IP service. [Diagram labels: site router, site gateway router, site edge router, site LAN, large science site; ESnet production IP core, ESnet SDN core (east/west), IP core (east/west), ESnet IP core hub, IP core router, ESnet switch, ESnet MAN switch, MAN site switches, SDN core switches, US LHCnet switches, T320; independent port card supporting multiple 10 Gb/s line interfaces.]
  • 28. Typical Medium Size ESnet4 Hub: DC power controllers; 10 Gbps network tester; Cisco 7609 Science Data Network switch (about $365K, list); Juniper M320 core IP router (about $1.65M, list); Juniper M7i IP peering router (about $60K, list); secure terminal server (telephone modem access); local Ethernet.
  • 29. Note that the major ESnet sites are now directly on the ESnet “core” network. [Map of ESnet4 hubs, wave counts per segment (3λ-5λ), and MANs: West Chicago MAN (FNAL, ANL, Starlight, USLHCNet at 600 W. Chicago); Long Island MAN (BNL, USLHCNet at 32 AoA, NYC); San Francisco Bay Area MAN (LBNL, SLAC, JGI, LLNL, SNLL, NERSC); Atlanta MAN (ORNL, 180 Peachtree, 56 Marietta (SOX)); MATP/ELITE (JLab, ODU, Wash. DC). Bandwidth into and out of FNAL is equal to, or greater than, the ESnet core bandwidth. Legend: layer 1 optical nodes at eventual ESnet points of presence; layer 1 optical nodes not currently in ESnet plans; ESnet IP switch-only hubs; ESnet IP switch/router hubs; ESnet SDN switch hubs; lab sites; ESnet IP core (1λ); ESnet Science Data Network core; ESnet SDN core, NLR links (existing); lab supplied links; LHC related links; MAN links; international IP connections; Internet2 circuit numbers.]
  • 30. ESnet4 IP + SDN, 2011 Configuration. [Map showing the planned 2011 wave counts (3λ-5λ per segment) across hubs including Seattle, Portland, Sunnyvale, LA, San Diego, Boise, Salt Lake City, Denver, Albuquerque, El Paso, KC, Tulsa, Houston, Baton Rouge, Chicago, Indianapolis, Nashville, Atlanta, Jacksonville, Raleigh, Wash. DC, Philadelphia, NYC, Boston, Cleveland, and Pittsburgh; legend as on the preceding map slide.]
  • 31. ESnet4 Planned Configuration: 40-50 Gbps in 2009-2010, 160-400 Gbps in 2011-2012. Core network fiber path is ~14,000 miles / 24,000 km. [Map legend: production IP core (10 Gbps); SDN core (20-30-40 Gbps); MANs (20-60 Gbps) or backbone loops for site access; international connections; IP core hubs; SDN (switch) hubs; primary DOE Labs; high-speed cross-connects with Internet2/Abilene; possible hubs. International connections: Canada (CANARIE), Europe (GÉANT), CERN (30 Gbps), Asia-Pacific, Australia, GLORIAD (Russia and China), South America (AMPATH). Scale: 2700 miles / 4300 km; 1625 miles / 2545 km.]
  • 32.
  • 33.
  • 34.
  • 35.
  • 36.
  • 37.
  • 38.
  • 39.
  • 40.
  • 41.
  • 42.
  • 43.
  • 44. DOEGrids CA (Active Certificates) Usage Statistics. * Report as of Jan 17, 2008. The US LHC ATLAS project adopts the ESnet CA service.
  • 45. DOEGrids CA Usage - Virtual Organization Breakdown * DOE-NSF collab. & Auto renewals ** OSG Includes (BNL, CDF, CIGI, CMS, CompBioGrid, DES, DOSAR, DZero, Engage, Fermilab, fMRI, GADU, geant4, GLOW, GPN, GRASE, GridEx, GROW, GUGrid, i2u2, ILC, iVDGL, JLAB, LIGO, mariachi, MIS, nanoHUB, NWICG, NYGrid, OSG, OSGEDU, SBGrid, SDSS, SLAC, STAR & USATLAS)
  • 46.
  • 48.
  • 49.
  • 50.
  • 51.