High-performance computing: network design principles
Summit X670
1. Summit X670 deployment roles (diagram): core backbone, top of rack, HPCC, campus aggregation.
Contact: extreme@muk.ua
2. Extreme Networks® Product Portfolio
(Portfolio chart spanning 10/100M, 1G, 10G, 40G, 1/10/40G, and 10/40/100G port speeds, across fixed, SummitStack™, and modular form factors.)
Network management: Ridgeline™
Fixed switches: Summit X150, X250e, X350, X440, X450a, X450e, X460, X480, X650, X670; ReachNXT™; EAS
Cell-site routers: E4G 200/400 (only the 400 model stacks)
40 GbE modules: VIM3-40G4X, VIM4-40G4X, 8900-40G6X-xm
Modular switches: BlackDiamond® 8800 with 8500-, 8900-, and C-series modules; BlackDiamond X
Wireless: single-radio AP, adaptive AP, wallplate AP, controller with AP, Summit® WM; Motorola ADSP
3. The New Summit X670 Switch
Trends
• Consolidation and convergence
• Growing need for 10 GbE and faster interfaces
Summit X670V
• 48 x 10 GbE + 4 x 40 GbE (or 64 x 10 GbE)
• MLAG and Selective QinQ support
• MPLS (H-VPLS) technology
• Low power consumption, redundant power supplies
Summit 40 GbE products
• 48 x 10 GbE + 4 x 40 GbE (or 64 x 10 GbE) in a 1 RU form factor (Summit X670V)
• 4 x 40 GbE module for the Summit® X650 and X480
• High-speed stacking at 160 Gbps or 320 Gbps over long distances via 40 GbE interfaces (Summit X670V/X650)
Investment protection
• 4 x 40 GbE module for the Summit X670V, X650, and X480
4. Summit X670 Series
Summit® X670V-48x
– 48 x 1 GbE/10 GbE ports
– One expansion slot for the VIM4-40G4X module:
  • 4 x 40 GbE ports, or
  • 16 x 10 GbE ports with breakout cables (64 x 10 GbE ports total in the switch)
  • SummitStack™-V320 stacking over the four 40 GbE ports
– SummitStack-V stacking over two 10 GbE ports
Summit X670-48x
– 48 x 1 GbE/10 GbE ports
– SummitStack-V stacking over two 10 GbE ports
5. Summit® X670V Switch Design
• 48-port dual-speed 1 GbE/10 GbE (SFP+), passive copper supported
• Expansion slot for an optional 4-port 40 GbE module (QSFP+)
• Motion sensor
• 2+1 fan tray
• AC/DC power supply, plus an optional redundant AC/DC power supply
6. Summit X670V-48x Model
Front panel: 48-port 10 GbE (SFP+). Rear panel: optional 4-port 40 GbE (QSFP+).
Broad functionality and scalability
– 128K L2 MAC addresses (4x more than the Summit® X650)
– 16K IPv4 routes
Data Center Bridging (DCB)
– Priority Flow Control (PFC)
– Enhanced Transmission Selection (ETS)
– Data Center Bridging Exchange Protocol (DCBX)
Cut-through switching, latency under 1 µs
Configurable 40 GbE ports
– Each 40 GbE port can be configured as 4 x 10 GbE ports
QSFP+ modules
– 40 GbE optics
– 40 GbE to 4 x 10 GbE adapter
– 40 GbE passive copper up to 3 meters
– 40 GbE active fiber up to 100 meters
7. Summit® X670 Switch Design
• 48-port dual-speed 1 GbE/10 GbE (SFP+), passive copper supported
• Motion sensor
• 2+1 fan tray
• AC/DC power supply, plus an optional redundant AC/DC power supply
8. Summit X670-48x Model
Front panel: 48-port 10 GbE (SFP+). Rear panel: no data ports.
Broad functionality and scalability
– 128K L2 MAC addresses (4x more than the Summit® X650)
– 16K IPv4 routes
Data Center Bridging (DCB)
– Priority Flow Control (PFC)
– Enhanced Transmission Selection (ETS)
– Data Center Bridging Exchange Protocol (DCBX)
Cut-through switching, latency under 1 µs
PHY-less design (PHY functions are integrated into the ASIC)
– Lower latency and power consumption
– No support for LRM or longer-reach modules; SFP+ passive copper up to 5 meters
– Better pricing
9. Summit X670 Motion Sensor
Summit® X670 series switches include a built-in motion sensor
– Detects moving objects for physical security
– Can switch off the LEDs when no motion is detected
– Logs motion events in EMS (Event Management System)
– Generates SNMP traps, syslog messages, etc.
10. Summit® X670V/X670 Reversible Airflow
The switch ships with two fan-tray options:
• Front-to-back (FB) airflow: standard installation, with the 10 GbE ports at the front of the rack
• Back-to-front (BF) airflow: switch installed facing the rear of the rack
PSU airflow must match the fan-tray airflow direction:
• 450W PSUs: FB airflow only (the same AC/DC PSUs as the Summit® X480)
• 550W PSUs: available in both FB and BF versions, for both AC and DC
Spare power supplies must match the airflow direction of the PSUs already installed.
Note: AC and DC power supplies are supported simultaneously in the same switch.
11. Switch Stacking Options
Summit X670V and X650 models only, with the 40 GbE module (VIM4-40G4X on the X670V):
– 4-port SummitStack™-V320
– 2-port SummitStack™-V160
– 2-port SummitStack-V80
– Compatible with Summit X460/X480 using SummitStack-V80 modules
Summit X670 and X670V models:
– 2-port SummitStack-V (ports 47 and 48 only)
– Compatible with Summit X650/X480/X450e/X450a/X460

Summit switch    | X670V / X650   | X480 | X460
SummitStack-V320 | Yes            | Yes  | No
SummitStack-V160 | Yes            | Yes  | No
SummitStack-V80  | Yes (use V160) | Yes  | Yes
SummitStack-V    | Yes            | Yes  | Yes
SummitStack      | Yes            | Yes  | Yes
12. Summit X670 Series Optics Support

SFP/SFP+ transceivers         | Summit® X670-48x (48 SFP+ ports) | Summit X670V-48x (48 SFP+ ports)
1000BASE-SX SFP               | Yes            | Yes
1000BASE-LX SFP               | Yes            | Yes
1000BASE-ZX SFP               | Yes            | Yes
1000BASE-LX100 SFP            | Yes            | Yes
10/100/1000BASE-T SFP         | Yes            | Yes, via reference sell
1000BX-D/U SFP                | Yes            | Yes
10GBASE-SR SFP+               | Yes            | Yes
10GBASE-LR SFP+               | Yes            | Yes
10GBASE-LRM SFP+ (new)        | No             | Yes
10GBASE-ER SFP+               | Yes            | Yes
10GBASE-CR SFP+ (1 m to 10 m) | Up to 5 meters | Up to 10 meters

QSFP+ transceivers            | Summit X670-48x | Summit X670V-48x (4 QSFP+ ports)
QSFP+ passive copper cable    | N/A             | Yes
QSFP+ active fiber cable      | N/A             | Yes
40GBASE-SR4 QSFP+ optic       | N/A             | Yes
13. Summit 40G Ethernet Plug-in Module
• VIM4-40G4X (Summit X670V)
– QSFP+ connectors
– 4 x 40G ports
– Each port can be configured as 4 x 10G
– Ports S3 and S4 are used for SummitStack-V160 stacking
– Port pairs S1,S3 and S2,S4 are used for SummitStack-V320 stacking
– LED 1 lights blue in 40G mode; LEDs 1-4 light green in 10G mode
14. Summit X670 Software Licenses
Advanced Edge (base license)
– Includes all L2/L3 switching with STP, LAG, M-LAG, EAPS edge mode, XNV™, CLEAR-Flow, RIPv1/v2/ng for IPv4/v6, OSPFv2/v3 edge mode, VRRP, PBR, PIM-SM edge mode, etc.
Core license (optional upgrade)
– BGP4/BGP4+, IS-IS for IPv4/v6, full OSPFv2/v3, MSDP, PIM-DM/SM/SSM, full EAPS mode, etc.
Feature packs (can be applied at either license level)
– Direct Attach™ Feature Pack (Direct Attach & VEPA)
– MPLS Feature Pack (MPLS, VPLS, H-VPLS)
15. Summit X670V and X650 Comparison
Summit® X670 offers equal or greater scalability than the Summit X650.

Feature          | Summit X670V                                  | Summit X650                              | Notes
Max ports        | 48 x 10 GbE + 4 x 40 GbE                      | 24 x 10 GbE + 8 x 10 GbE                 | 2x speed/density
10G port type    | SFP+                                          | SFP+ or 10GBASE-T                        | Summit X650 for the 10GBASE-T opportunity
Stacking         | SummitStack™-V320, V160, V80, V               | SummitStack-V320, V160, V80, V, 256, 512 | Virtual chassis
L2 MAC           | 128K                                          | 32K                                      | 4x scalability
L3 routes        | 16K LPM                                       | 12K LPM                                  | Equivalent
QoS              | 8 egress queues per port                      | 8 egress queues per port                 | Equivalent
ACLs             | 2K ingress, 1K egress                         | 2K ingress, 512 egress                   | 2x scalability
BW control       | Ingress policing and egress shaping           | Ingress policing and egress shaping      | Bidirectional w/o loopback
Power            | Dual, hot-swappable AC/DC, mixed support      | Dual, hot-swappable AC/DC                | More flexible with AC/DC mixed support
Fan/cooling      | 2+1 fan tray, front-to-back and back-to-front | Removable fan tray, front-to-back        | Higher availability and ideal airflow
Operating system | ExtremeXOS®                                   | ExtremeXOS                               | Consistent modular OS
16. The Extreme Networks Solution: M-LAG
M-LAG (Multi-switch Link Aggregation): a scalable, reliable, high-performance data center core.
(Solution building blocks: M-LAG, DCBX, modular OS, XNV™, Direct Attach™, OpenFlow, high-density 40GE, IDM.)
M-LAG aggregates two or more physical links that terminate on two switches into a single logical link. The attached device can be a server or a switch whose ports are configured as an ordinary LAG (Link Aggregation Group) or as NIC teaming.
17. Doubling Throughput
Less configuration and more bandwidth!
• MLAG: Multi-switch LAG
• Joins two switches into a virtual switch with synchronized forwarding tables
• Appears to other switches as a single switch, so a link aggregation group (LAG) can run across both switches simultaneously
18. The Extreme Networks Solution: DCBX
DCBX (Data Center Bridging Capabilities Exchange Protocol, IEEE 802.1Qaz): lets DCB devices within the data center exchange capability information.
DCB-capable devices use DCBX to exchange configuration information with directly connected peers. The protocol can also be used to detect misconfigured DCB parameters.
19. The Extreme Networks Solution: Modular OS
Modular operating system: high availability and uninterrupted operation.
A modular OS can load and unload modules and activate licenses without taking the switch out of service. It delivers high availability and reliability, which is ideal for data centers where servers and applications move dynamically.
20. The Extreme Networks Solution: XNV™
XNV™ (ExtremeXOS® Network Virtualization): a network that understands modern data center virtualization.
XNV is a set of software modules that give full network-level control of virtual machines throughout their life cycle, and provide the network administrator with a graphical tool for managing VMs in the data center.
21. The Extreme Networks Solution: Direct Attach™
Direct Attach™ (VEPA): reduces the number of network tiers by moving VM switching into the data center network.
Direct Attach is the next step in virtualization: it lets virtual machines connect to the network directly, with no switching in the server software layer, thereby returning switching to the network.
22. The Extreme Networks Solution: OpenFlow
OpenFlow: a powerful, standards-based means of controlling switching in the data center.
OpenFlow is a protocol that lets an external device program a switch's forwarding tables. It was proposed and initially developed at Stanford University to standardize the interface between a switch's forwarding plane and the control plane.
The Summit X670 offers many stacking options, and the VIM module gives the Summit X670V even more stacking flexibility. On the Summit X670V you can use SummitStack-V160 or SummitStack-V80 cables for two-port stacking, or upgrade to four-port stacking with SummitStack-V320 cables. Two-port SummitStack-V160 provides 160 gigabits of stacking bandwidth, while two-port SummitStack-V80 cuts that back to 80 gigabits for smaller, more economical deployments. All of these stacking methods interoperate with the Summit X650 and the Summit X480, which also support the 40-gigabit VIM modules.
Most optics work with both the Summit X670 and the Summit X670V. However, the 10/100/1000 tri-speed copper SFP works natively on the Summit X670 but is available for the X670V only via reference sell. The LRM and QSFP+ optics work only with the Summit X670V, while 10-gigabit passive copper cables can run up to 10 meters from the Summit X670V but only five meters from the Summit X670, a consequence of the X670's PHY-less design. Optics that work with both switch models include the GigE SFPs (SX, LX, ZX, LX100), the BX-D and BX-U bidirectional SFPs, and the 10-gigabit SR, LR, and ER SFP+ modules.
Another technology that's becoming very important in the data center is Multi-switch Link Aggregation, the multi-path capability on our switches. M-LAG is powerful because it is delivered through software, not hardware, and it is an easy way to provide multi-path capability in the data center. Our two-tier network architecture can support up to 4,608 10 GbE servers, using our BlackDiamond® X8 chassis-based products together with our Summit® X670 top-of-rack product, and we can scale to more servers by adding more tiers to the network.
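To make that concrete, here is a minimal sketch (illustrative Python, not Extreme Networks code; interface names and addresses are hypothetical) of why the attached server needs nothing beyond an ordinary LAG: it hashes each flow onto one member link, and the fact that its two links land on two different M-LAG peer switches is invisible to it.

```python
# Illustrative sketch of standard LAG behavior on a server dual-homed to an
# M-LAG pair. The server has no idea the two links end on different switches.

import hashlib

UPLINKS = ["eth0 -> switch-A", "eth1 -> switch-B"]  # one link to each M-LAG peer

def pick_uplink(src_ip: str, dst_ip: str, src_port: int, dst_port: int) -> str:
    """Hash the flow key onto a member link so that every packet of a given
    flow takes the same link (avoiding reordering), while distinct flows
    spread across both peer switches."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}".encode()
    digest = hashlib.sha256(key).digest()
    return UPLINKS[digest[0] % len(UPLINKS)]

print(pick_uplink("10.0.0.5", "10.0.1.9", 33000, 443))  # one flow -> one link
print(pick_uplink("10.0.0.5", "10.0.2.7", 33001, 80))   # another flow may take the other
```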
Another key area of data center infrastructure is Data Center Bridging, which is used for data and storage integration. DCB is a set of extensions to Ethernet that allow for the lossless transmission of data across an Ethernet network. Let's have a look at the DCB puzzle. The first piece is DCBX (Data Center Bridging Capabilities Exchange), a Layer 2 communications protocol that allows DCB devices to discover each other and exchange capability information. The protocol may also be used for misconfiguration detection and for configuration of the peer.

Ethernet does not currently have adequate facilities to control and manage the allocation of network bandwidth to different traffic sources or types (traffic differentiation), or to let management capabilities efficiently and fairly prioritize bandwidth across those sources and traffic types. Lacking these capabilities, data center managers must either overprovision network bandwidth for peak loads, accept customer complaints during those periods, or manage traffic prioritization at the source by limiting the amount of non-priority traffic entering the network.

Overcoming these limitations is the key to establishing Ethernet as the foundation for truly converged data center networks carrying LAN, storage, and inter-processor communications. Ethernet has succeeded through evolutionary enhancements because it remains backward compatible and allows plug-and-play deployment, so DCB needs to interoperate with traditional Ethernet devices that have no DCB capabilities. This plug-and-play functionality is provided to DCB devices by DCBX, defined by the IEEE 802.1Qaz task force. It gives Ethernet devices (bridges and end stations) the ability to detect the DCB capabilities of a peer device, and it allows configuration to be distributed from one node to another, which significantly simplifies the management of DCB nodes. The DCBX protocol uses LLDP services to exchange DCB capabilities.
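As a rough illustration of what DCBX actually puts on the wire, the sketch below encodes one of the IEEE 802.1Qaz TLVs carried over LLDP, the PFC Configuration TLV. It is a minimal sketch assuming the published 802.1Qaz encoding (organizationally specific LLDP TLV type 127, IEEE 802.1 OUI 00-80-C2, PFC subtype 0x0B), not vendor code, and it builds only the one TLV rather than a full LLDPDU.

```python
# Minimal encoder for the 802.1Qaz PFC Configuration TLV as carried in LLDP.
# A real implementation would append this to the LLDP TLV chain
# (Chassis ID, Port ID, TTL, ..., End of LLDPDU).

import struct

LLDP_ORG_SPECIFIC = 127        # organizationally specific LLDP TLV type
IEEE_8021_OUI = b"\x00\x80\xC2"
DCBX_PFC_SUBTYPE = 0x0B        # PFC Configuration TLV subtype (802.1Qaz)

def pfc_config_tlv(willing: bool, pfc_cap: int, enabled_priorities: set) -> bytes:
    """Encode which of the 8 traffic priorities get lossless (pause-able)
    treatment, plus the 'willing' bit used for peer configuration."""
    flags = (0x80 if willing else 0x00) | (pfc_cap & 0x0F)  # willing bit + capability
    enable_bitmap = 0
    for prio in enabled_priorities:                          # bit n = priority n
        enable_bitmap |= 1 << prio
    value = IEEE_8021_OUI + bytes([DCBX_PFC_SUBTYPE, flags, enable_bitmap])
    header = (LLDP_ORG_SPECIFIC << 9) | len(value)           # 7-bit type, 9-bit length
    return struct.pack("!H", header) + value

# Advertise: willing to accept peer config, 8 lossless classes supported,
# PFC enabled on priority 3 (the class typically used for FCoE storage traffic).
print(pfc_config_tlv(True, 8, {3}).hex())
```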
Next is ExtremeXOS Network Virtualization or, as we call it, XNV™. This key piece of the best-of-breed data center solution revolves around virtualization. Extreme Networks has a unique virtualization solution and believes that virtualization needs to be hypervisor-agnostic: the networking infrastructure must integrate seamlessly with the virtualization environment no matter which hypervisor is in use, and it must also support environments that run multiple hypervisors. Extreme Networks XNV virtualization management is a set of loadable software modules that put an unprecedented level of visibility, control, and automation of virtual machines into the hands of the network administrator. XNV tightly integrates the server virtualization environment with the networking infrastructure, does so in a fully automated, zero-touch manner, and works across the different virtualization technologies. XNV is now in its first release, which supports VMware and Citrix and tightly integrates the network infrastructure with VMware- and Citrix-based server virtualization. The forthcoming second release of XNV is planned to support both KVM and the Microsoft hypervisor, giving Extreme Networks a hypervisor-agnostic approach to integrating the network with the server virtualization environment.
Another important technology is Virtual Ethernet Port Aggregator, or VEPA. This technology enables data center operators to move switching functionality out of the hypervisor and back into the network. It is being ratified in the IEEE standards organizations as we speak, and commercial products are starting to appear in the market. Extreme Networks is one of the first in the industry to support VEPA aggregation in its switching platform, and it is available across our data center switching ports. The second key component specific to Extreme Networks is a technology we call Direct Attach™. The Direct Attach architecture is Extreme Networks' implementation of virtual machine switching performed in the network. Various vendors have taken the path of implementing virtual machine switching within the server through the hypervisor, typically called a vSwitch or virtual switch, which adds an extra layer of complexity and switching to the network infrastructure. Extreme Networks' Direct Attach approach instead moves virtual machine switching back into the network and out of the server domain. This has many implications and benefits, particularly around performance, security, and manageability in the data center environment. These are two data center terms specific to Extreme Networks that are important to be familiar with.
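The one bridge-behavior change VEPA depends on is "reflective relay": a standard 802.1D bridge never forwards a frame back out the port it arrived on, yet that hairpin is exactly what two VMs behind one server port need if the adjacent switch, rather than a vSwitch, is to connect them. The toy Python sketch below (purely conceptual; the MAC table contents are invented) shows the difference that one flag makes.

```python
# Conceptual sketch of reflective relay, the hairpin behavior VEPA enables
# on the server-facing port of the adjacent physical switch.

MAC_TABLE = {            # learned MAC -> switch port
    "vm-a": "port1",     # VM A, behind the server NIC on port1
    "vm-b": "port1",     # VM B, behind the same server NIC
    "host-c": "port7",   # an ordinary host elsewhere in the network
}

def forward(dst_mac: str, ingress_port: str, reflective_relay: bool) -> str:
    egress = MAC_TABLE.get(dst_mac)
    if egress is None:
        return "flood (unknown destination)"
    if egress == ingress_port and not reflective_relay:
        return "drop (802.1D: never forward out the ingress port)"
    return f"send out {egress}"

print(forward("vm-b", "port1", reflective_relay=False))  # drop: would need a vSwitch
print(forward("vm-b", "port1", reflective_relay=True))   # hairpin back to port1
```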
Extreme Networks is working very closely with OpenStack to provide the networking layer of the OpenStack infrastructure. As services evolve from pure CaaS (Compute-as-a-Service) or SaaS (Storage-as-a-Service) to include more networking functionality, Extreme Networks is working with them to provide that network-level functionality. The other technology gaining momentum in the marketplace is OpenFlow. OpenFlow enables the separation of the data and control planes and the management of data center services through an OpenFlow controller. The technology is becoming very popular in service provider environments that require a large degree of scale, because it provides easy management and a clean separation of data and control plane functions. Extreme Networks is working very closely with the OpenFlow community on integrating its switching functionality into OpenFlow controller environments. A related trend in the data center is the move toward an open, best-of-breed, standards-based environment: the Open Data Center Alliance, a consortium of over 50 of the Fortune 500 companies, is essentially defining what services they want in an open data center architecture.
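To ground the control/data-plane split, here is a minimal Python sketch of the very start of an OpenFlow 1.0 control channel from the controller's point of view: the switch opens a TCP session (classically to port 6633; TLS is also common), the two sides exchange HELLO messages, and the controller asks the switch to describe its datapath. Message type codes follow the OpenFlow 1.0 spec; this is a sketch, not a production controller, and it assumes a real OF 1.0 switch is pointed at it.

```python
# Sketch of the first messages on an OpenFlow 1.0 control channel.

import socket
import struct

OFP_VERSION = 0x01            # OpenFlow 1.0
OFPT_HELLO = 0                # message type codes from the OF 1.0 spec
OFPT_FEATURES_REQUEST = 5

def ofp_header(msg_type: int, xid: int, payload: bytes = b"") -> bytes:
    """Every OpenFlow message starts with version, type, total length, xid."""
    return struct.pack("!BBHI", OFP_VERSION, msg_type, 8 + len(payload), xid) + payload

def wait_for_switch(port: int = 6633) -> bytes:
    """Block until a switch connects, run the HELLO exchange, and return the
    raw FEATURES_REPLY (datapath id, ports, capabilities) for later parsing."""
    with socket.create_server(("0.0.0.0", port)) as server:
        conn, _addr = server.accept()                       # the switch dials in
        with conn:
            conn.sendall(ofp_header(OFPT_HELLO, xid=1))     # version negotiation
            conn.recv(4096)                                 # the switch's HELLO
            conn.sendall(ofp_header(OFPT_FEATURES_REQUEST, xid=2))
            return conn.recv(4096)

# features = wait_for_switch()  # uncomment with a real OF 1.0 switch pointed here
```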