Nexus 9000 Update

An exciting Introduction to Cisco Nexus 9000 Series Switches

Speaker notes
  • Single Management Information Tree
  • 48-port
  • Merchant+: scales to 200k 10G hosts, 1M+ IP addresses, IPv4 and IPv6, 64k tenants. Various fabric configurations (2, 3, or 6 fabric modules) achieve cost-effective configurations for access and aggregation. Encapsulation normalization and label routing. Mechanical design: no mid/back plane, so airflow is efficient, there are no hot spots, and less power and cooling are needed; increased reliability by separating the Supervisor and System Controller, keeping the chassis “control plane” separate from the switch “control plane”; increased performance through a distributed CPU architecture (Supervisor, line card, fabric module); dimensioned for next-generation SerDes speeds of 25-30G (vs. 11.5G today), doubling bandwidth; 2x power headroom (up to 8 power supplies total). Object-oriented programmable OS with two operation models, NX-OS and ACI. Next-generation development and verification methodology: fully automated end-to-end performance and stress testing of the data plane and control plane.
  • 90%+ efficient at 20%/50%/100% of load
  • The Cisco Nexus 9500 Series support redundant half-width supervisor engines that are responsible for control-plane functions. The switch software, Enhanced NX-OS, runs on the supervisor modules. The redundant supervisor modules take active and standby roles, supporting stateful switchover in the event of supervisor module hardware failure and In-Service Software Upgrade (ISSU), which allows software upgrade and maintenance without impacting production services. The CPU complex of the Nexus 9500 supervisor is based on the Intel Romley platform with a quad-core Sandy Bridge Xeon processor and the Patsburg Platform Controller Hub chipset. The default DRAM size is 16 GB, field-upgradable to 48 GB. A built-in 64 GB SSD provides additional on-board non-volatile storage. The high-speed multi-core CPU and the large memory build the foundation for a fast and reliable control plane for the switch system. Control-plane protocols benefit from the ample computation horsepower, achieving fast initiation and near-instantaneous convergence upon network state changes. Additionally, the expandable DRAM and multi-core CPU provide sufficient compute power and resources to support cgroup-based Linux containers in which third-party applications can be installed and run in a well-contained environment. The on-board SSD provides extra storage for logs, image files and third-party applications. The supervisor engine has a serial console port (RJ-45) and a 10/100/1000 Ethernet management port (RJ-45) for out-of-band management. Two USB 2.0 interfaces support using external USB flash storage for image, syslog, and configuration file transfer and other uses. A pulse-per-second (PPS) clock input port on the supervisor module supports accurate timing synchronization. Communication between the supervisor and the fabric modules or line cards uses either the Ethernet Out-of-Band Channel (EOBC) or the Ethernet Protocol Channel (EPC). Both channels have a central hub on the System Controllers, and the supervisor modules have redundant paths to the System Controllers.
  • The System Controllers of the Cisco Nexus 9500 Series offload internal switching functions and power supply/fan tray access from the supervisor engines (Figure 4: Cisco Nexus 9500 Series System Controller). The System Controllers are the intra-system communication hubs. They host the two main control and management communication paths, the Ethernet Out-of-Band Channel (EOBC) and the Ethernet Protocol Channel (EPC), between supervisor engines, line cards and fabric modules. All intra-system management communication across modules takes place through the EOBC channel, which is provided by a switch chipset on the System Controllers that interconnects all modules, including supervisor engines, fabric modules and line cards. The EPC channel handles intra-system data-plane protocol communication. This pathway is provided by another redundant Ethernet switch chipset on the System Controllers. Unlike the EOBC channel, the EPC switch connects only fabric modules to supervisor engines: if protocol packets need to be sent to the supervisor, line cards use the internal data path to transfer packets to a fabric module first, and the fabric module then redirects them via the EPC channel to the supervisor engines. The System Controller also communicates with and manages power supply units and fan controllers via the system management bus (SMB). The Cisco Nexus 9500 Series support redundant System Controllers. When two System Controllers are present in a chassis, an arbitration process selects the active System Controller; the other assumes the secondary role to provide redundancy. The System Controller CPU is an Advanced RISC Machines (ARM) processor.
  • The removal of a fan tray to replace or service Fabric Modules does not require downtime….
  • A Nexus 9500 Series switch can have up to six fabric modules that all function in active mode. Each fabric module carries multiple network forwarding engines: 2 for a Nexus 9508 Switch and 4 for a Nexus 9516 Switch (Figure 8). The fabric module of the Nexus 9500 Series performs the following important functions in the modular chassis architecture. It provides high-speed non-blocking data forwarding connectivity for line cards; all links on the network forwarding engines are active data paths. Each fabric module can provide up to 8 40Gbps links to every line card slot, so a Nexus 9500 chassis deployed with 6 fabric modules can obtain 48 40Gbps fabric paths for every line card slot, equivalent to 3.84 Tbps full-duplex bandwidth per slot. The fabric modules also perform distributed LPM (Longest Prefix Match) routing lookup for IPv4 and IPv6 traffic; LPM forwarding information is stored on the fabric modules of a Nexus 9500 Series switch, supporting up to 128K IPv4 prefixes or 32K IPv6 prefixes. Finally, they perform distributed multicast lookup and packet replication to send copies of multicast packets to the receiving egress line cards.
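The per-slot bandwidth quoted in this note follows directly from the link counts. A minimal sketch of the arithmetic, using only the values stated above (nothing here is measured or vendor-supplied code):

```python
# Per-slot fabric bandwidth for a Nexus 9500 chassis, from the figures in the
# note above: 8 x 40 Gbps fabric links per fabric module per line-card slot,
# up to 6 fabric modules installed.
LINKS_PER_FM_PER_SLOT = 8
LINK_SPEED_GBPS = 40
FABRIC_MODULES = 6

fabric_paths_per_slot = LINKS_PER_FM_PER_SLOT * FABRIC_MODULES   # 48
per_direction_gbps = fabric_paths_per_slot * LINK_SPEED_GBPS     # 1920
full_duplex_tbps = 2 * per_direction_gbps / 1000                 # 3.84

print(f"{fabric_paths_per_slot} x 40G fabric paths per slot")
print(f"{per_direction_gbps} Gbps per direction, {full_duplex_tbps:.2f} Tbps full duplex")
```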
  • While the switch control plane runs centrally on the supervisor engines, packet lookup and forwarding in the data plane are conducted in a highly distributed fashion involving both line cards and fabric modules. Both line cards and fabric modules of the Cisco Nexus 9500 Series are equipped with multiple network forwarding engines that perform packet lookup, processing and forwarding. Each network forwarding engine is built with 32 40GE links and a core switching capacity of 1.44 billion packets per second. The 1.44 Bpps core capacity limits the network forwarding engine to line-rate forwarding on up to 24 40GE ports for packets smaller than 256 bytes. Since many modern data center applications use small packets, it is essential to support line-rate performance even for the smallest 64-byte packet. To achieve this, Nexus 9500 Series line cards are designed to use only up to 24 ports on each network forwarding engine: 12 40GE ports as fabric ports for connectivity to fabric modules, and the other 12 as front panel interfaces to support 1, 10, 40 and future 100GE user data ports. The number of network forwarding engines on a line card varies with the line card type and the forwarding capacity required to guarantee wire-rate performance for all packet sizes. The fabric module for the Nexus 9508 has two network forwarding engines and the one for the Nexus 9516 has four, so that with up to 6 fabric modules, all line cards in a Nexus 9500 chassis receive adequate data path bandwidth for forwarding traffic at line rate. With sufficient data path bandwidth and packet forwarding capacity provided by the network forwarding engines on line cards and fabric modules, the Nexus 9500 Series switches are true non-blocking switch platforms. The network forwarding engines use a combination of dedicated TCAM table space and shared hash table memory to store Layer-2 and Layer-3 forwarding information (Figure 6). Layer-2 switching uses a MAC address table, Layer-3 host routing needs a Host/ARP table, and Layer-3 prefix routing requires an LPM prefix table. A 16K traditional TCAM table is dedicated to the LPM table, while the L2 MAC, L3 Host and L3 LPM tables share a hash table memory called the Unified Forwarding Table (UFT). Inside the UFT, 32K entries are reserved for L2 MAC addresses and 16K are reserved for L3 Host entries; the remaining 256K entries are shared resources. This shared hash table can be programmed in several modes to provide different memory allocations for the three tables. The memory sharing and the programmable modes allow greater flexibility of memory allocation for different deployment scenarios and increase the efficiency of memory utilization. To further increase system-wide forwarding scalability, the Nexus 9500 Series switches use the UFT tables on line cards and fabric modules for different forwarding lookup functions, each programmed in the mode that maximizes scalability for its given function. The UFT on line cards stores the L2 MAC table and the L3 Host table, so line cards are responsible for L2 switching lookup and L3 host routing lookup. The UFT on fabric modules hosts the L3 LPM table, so fabric modules perform L3 LPM routing lookup.
Both line cards and fabric modules have multicast tables and take part in distributed multicast lookup and packet replication. Figure 7 depicts the system-wide forwarding scalability of Nexus 9500 Series switches.
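The 24-port limit mentioned above is just packet-rate arithmetic. A small worked check, assuming standard Ethernet per-frame overhead (preamble, SFD and inter-frame gap); the figures come from the note, not from measurement:

```python
# Why a 1.44 Bpps forwarding engine can run 24 x 40GE (but not all 32 links)
# at line rate for 64-byte frames.
FRAME_BYTES = 64
OVERHEAD_BYTES = 20          # 7B preamble + 1B SFD + 12B inter-frame gap
PORT_GBPS = 40
ENGINE_CAPACITY_PPS = 1.44e9

bits_per_frame = (FRAME_BYTES + OVERHEAD_BYTES) * 8    # 672 bits
pps_per_port = PORT_GBPS * 1e9 / bits_per_frame        # ~59.5 Mpps

for ports in (24, 32):
    required = ports * pps_per_port
    verdict = "fits within" if required <= ENGINE_CAPACITY_PPS else "exceeds"
    print(f"{ports} ports: {required / 1e9:.2f} Bpps, {verdict} 1.44 Bpps")
```

For 24 ports the requirement is about 1.43 Bpps, just inside the engine's 1.44 Bpps budget, which is why each engine drives 12 front-panel plus 12 fabric ports.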
  • VxLAN Bridging/Gateway/Routing
  • Network forwarding engines on a Nexus 9500 Series switch line card perform ingress-pipeline and egress-pipeline packet processing. Depending on the ingress and egress port locations on the switch, the two pipelines can be on the same or different network forwarding engines or line cards. As noted above, network forwarding engines on line cards conduct L2 switching lookup and L3 host routing lookup. Line cards are equipped with a sufficient number of network forwarding engines and fabric ports to support line-rate forwarding for all IP packet sizes. In addition to this data-plane capacity, Nexus 9500 Series line cards have a built-in dual-core CPU to offload or speed up control-plane tasks, such as programming the hardware table resources, collecting and sending line card counters and statistics, and offloading BFD protocol handling from the supervisors. This greatly improves system performance.
  • The N9K-X9636PQ line card provides 36 40GE QSFP front panel ports. It has three network forwarding engines for packet forwarding, each supporting 12 40GE front panel ports. The trace lengths from a network forwarding engine to the 12 QSFP optics it supports are all under 7 inches, which eliminates the need for PHYs or re-timers, so the entire line card has a PHY-less design. This reduces the data transport latency on the port by roughly 100 ns, decreases port power consumption, and improves reliability because the line card has fewer active components.
  • 48x 1/10G SFP+ Line Card (N9K-X9564PX): This line card provides 48 1G or 10G SFP+ ports and 4 40G QSFP ports. The port speed flexibility allows multidimensional data center deployments, so Nexus 9500 Series Switches can fit in data center access, aggregation and core. There are two network forwarding engines on this line card: one supports the 48 1/10G front panel ports and the other supports the 4 40G ports. Due to the longer trace lengths between the first network forwarding engine and some of the 48 1/10G ports, ports 1-4 and 37-48 have to be implemented with re-timers; ports 5-36 are driven directly by the network forwarding engine to keep the benefits of a PHY-less design. Because of the port speed flexibility, the front panel ports can operate at different speeds. Speed mismatch is one of the top causes of port congestion and packet buffering. To accommodate this additional need for buffer, the line card is built with two extended processors, each offering an additional 40MB of packet buffer space while operating in a pass-through mode along the data path between a network forwarding engine and the fabric modules. Transit traffic between front panel ports and fabric modules, as well as locally switched traffic from a 10G port to a 1G port, can take advantage of the extended buffer space. 48x 1/10GBaseT Line Card (N9K-X9564TX): This line card provides 48 1G/10GBaseT ports and 4 40G QSFP ports. It has a similar architecture to the N9K-X9564PX except that all 48 1G/10GBaseT ports are implemented with 10GBaseT PHYs to convert to the 1G/10GBaseT physical media.
  • System will have the following hardware features: 96 10GBaseT ports; 8 40G QSFP ports; 1 1000BaseT management port; 1 RS232 console port; 2 USB 2.0 ports; front-to-back and back-to-front airflow options; 2+2 redundant power supply options (both AC and DC versions of the power supplies are to be available); 2+1 redundant 80mm fans; 2 RU height; less than 20 inches in depth; uses Packet Processor and Extended Packet Processor ASICs.
  • Sochi system will have the following hardware features: 96 10GBaseT ports; 8 40G QSFP ports; 1 1000BaseT management port; 1 RS232 console port; 2 USB 2.0 ports; front-to-back and back-to-front airflow options; 2+2 redundant power supply options (both AC and DC versions of the power supplies are to be available at FCS); 2+1 redundant 80mm fans; 2 RU height; less than 20 inches in depth; uses Broadcom Trident NFE and Cisco NorthStar ASICs.
  • With its high port density and performance for 1/10/40 GE connectivity, the Nexus 9500 Series caters to the next-generation data center infrastructure that has 1/10GE at the access/leaf and 40GE links at the aggregation/spine to provide more scalable bandwidth for data center applications. But migrating a current 10GE infrastructure to 40GE involves more than a network platform upgrade. The existing short-reach 40GE optics transceivers, either SR4 or CSR4, feature independent transmitter and receiver sections, each with 4 fiber strands in parallel, so a duplex 40G connection requires 8 fiber strands. The current 10GE cabling infrastructure, on the other hand, uses 2 MMF fiber strands for one 10GE connection. So changing connectivity from 10GE to 40GE involves a forklift upgrade or rebuild of the cabling infrastructure. Cost aside, the interruption to services makes it almost impossible to migrate a brownfield data center to 40GE. Cisco QSFP Bi-Directional transceiver technology solves this problem by transmitting full-duplex 40G over two strands of MMF fiber with LC connectors. In other words, the QSFP BiDi transceiver allows 40GE connectivity to re-use the existing 10G fibers and the existing fiber trunks without expansion or rebuild.
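The cabling savings described in this note are easy to quantify. A small illustrative sketch; the link count is a made-up example, and the strand-per-link figures are taken from the note (SR4/CSR4: 8 strands per duplex 40G link, BiDi: 2 strands, the same as a duplex 10G link):

```python
# Compare MMF strand counts for a given number of duplex 40G links.
# links_40g is a hypothetical example value, not a figure from the deck.
STRANDS_SR4_PER_LINK = 8    # 4 transmit + 4 receive strands
STRANDS_BIDI_PER_LINK = 2   # same as existing duplex 10G links

links_40g = 96
print(f"SR4/CSR4 : {links_40g * STRANDS_SR4_PER_LINK} strands for {links_40g} x 40G links")
print(f"QSFP BiDi: {links_40g * STRANDS_BIDI_PER_LINK} strands (re-uses the 10G fiber plant)")
```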
  • As the building blocks of the data center (compute, storage, applications, network) become increasingly intertwined, businesses are looking at ways to further increase agility, flexibility and scale, while reducing time to deployment and easing maintenance tasks. Due to the scale of deployments, the compute world has seen many advancements in this area, with networking left behind as a “black box” in the data center. Hence, there is much interest in opening up the networking platforms, enabling them with automation and programmatic tools from the compute world…
  • Transcript

    • 1. NEXUS 9000 An exciting Introduction to Cisco Nexus 9000 Series Switches Cisco’s NX-OS Mode The next generation of data center switching John Lawrence Data Center Consulting Systems Engineer Mobile: 303-886-4507 © 2013 Cisco and/or its affiliates. All rights reserved.
    • 2. • Overall Design Goals • Nexus 9500 Modular Switch • Airflow and Power • Out of Band Management Paths • Fabric Scale and Connectivity • Nexus 9300 Fixed Switch • Cost Effective 40Gb Optics • Use Cases and Designs APIC Check us out Online Agility Simplicity Automation and Visibility © 2013 Cisco and/or its affiliates. All rights reserved. Performance and Scale Security Open www.cisco.com/go/aci
    • 3. Open Ecosystem Application/ Workload Orchestration & Scheduler Unified Northbound API Unified Information Model & API Object Oriented Data Model, Single MIT (mgmt information tree), Policy Engine Object Oriented Switch OS Hardware Innovation and 40G optimization © 2013 Cisco and/or its affiliates. All rights reserved. APIC iNXOS
    • 4. SCALABLE 1/10 /40/100 GE PERFORMANCE Nexus® 9300 FCS Q4 2013 FCS Q1 2014 FCS Q1 2014 Nexus 9500 FCS Q4 2013 Aggregation line card 36 40G QSFP+ FCS Q1 2014 ACI Ready Leaf Line Card 48 1/10G-T & 4 QSFP+ FCS Q1 2014 ACI-ready Leaf line card 48 1/10G SFP+ & 4 QSFP+ 48 1/10G SFP+ & 12 QSFP+ 96 1/10G-T & 8 QSFP+ 12-port QSFP+ GEM FCS Q4 2013 C9500 8-Slot FLEXIBLE FORM FACTORS CAN ENABLE VARIABLE DATA CENTER DESIGN AND SCALING PERFORMANCE PORTS PRICE POWER PROGRAMMABILITY
    • 5. NXOS Mode and ACI  Merchant+ Foundation  State of the Art Mechanical Design  Object Oriented Programmable Operating System  Next Generation Development and Verification Methodology  Two Modes of Operation  Standalone (NX-OS)  Fabric Mode (NX-OS + ACI) © 2013 Cisco and/or its affiliates. All rights reserved.
    • 6. NXOS Mode Only • Modern: 64-bit Linux 3.4.10 Kernel • Comprehensive Purpose Built DC Feature Set: L2/L3/VXLAN, ISSU, Patching* (Cold, Hot), Online Diagnostics (GOLD) etc. • Trimmed base image: (approximately 50%) and single binary image - common image across all platforms for deployed mode • Fault Containment: Complete process isolation for both features & services • Resiliency: Restartable user-space network stack & driver, stateful restart of processes – no switch reboot etc. • Open Management and Programmable Infrastructure: CLI, SNMP, NetConf/XML, OnePK, Open Containers, JSON etc. © 2013 Cisco and/or its affiliates. All rights reserved.
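The “Open Management and Programmable Infrastructure” bullet on slide 6 is easiest to picture with a concrete call. The sketch below uses the NX-API JSON-RPC interface that standalone NX-OS exposes on the Nexus 9000 (it must first be enabled with the NX-OS command feature nxapi); the switch address and credentials are placeholders, and since NX-API is not named on the slide, treat the details as an illustrative assumption rather than the deck's prescribed method:

```python
# Hedged sketch: run a CLI command on a Nexus 9000 (standalone NX-OS) over
# the NX-API JSON-RPC endpoint and get structured JSON back.
# Hostname and credentials below are placeholders.
import requests

url = "https://nexus9k.example.com/ins"
headers = {"content-type": "application/json-rpc"}
payload = [{
    "jsonrpc": "2.0",
    "method": "cli",
    "params": {"cmd": "show version", "version": 1},
    "id": 1,
}]

resp = requests.post(url, json=payload, headers=headers,
                     auth=("admin", "password"), verify=False)
resp.raise_for_status()
print(resp.json())   # parsed "show version" output as JSON
```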
    • 7. NXOS Mode Only Single Image  NXOS image vs. system/kickstart  Same image across Fixed/Modular Integrated Fault Monitoring, Detection and Recovery  Process crashes  Kernel crashes  Hardware Failures The High Availability Infrastructure (the network cannot go down)  Stateful Process re-startability  Stateful Switchover  Software Patching (Cold and/or In-Service)  In Service Software Upgrade (ISSU) Solutions – Fixed and Modular [Diagram: software stack of Feature API, Management Infrastructure API, Hardware Drivers API, Netstack, 64-bit Kernel] © 2013 Cisco and/or its affiliates. All rights reserved.
    • 8. NXOS Mode Only Many customers spend extensive time and effort to test and qualify software prior to deployment. In today’s environments, if a defect is found, root-caused, and the fix integrated, it is rolled out through a maintenance release, so customers need to restart their qualification cycle, wasting time and pushing out deployment dates… Bug Found, Diagnose, Root Cause Begin Code Test & Qualification Cycle Maint. Released Restart Qual Cycle Defect Resolved, integrated into Maint. Target Deployment 6 Months 10 Months © 2013 Cisco and/or its affiliates. All rights reserved. Actual Deployment
    • 9. NXOS Mode Only The Nexus 9000 standalone platform introduces new patching capabilities that allow fixes for specific defects to be rolled out in an independent package that can be applied to existing base software binaries. This will help reduce customer code certification times, leading to greater customer satisfaction… NX-OS with Patching Bug Found, Diagnose, Root Cause Begin Code Test & Qualification Cycle Continue Qualify With additional tests Defect Resolved, Patch Released 6 Months 7 Months © 2013 Cisco and/or its affiliates. All rights reserved. Actual Deployment Target Deployment
    • 10. NXOS Mode Only • Upgrade a service executable or library in an NX-OS image • Version and Compatibility control • Allows Reverting from a Patch • Integration with server management tools like Chef/Puppet Image Server Admin Puppet Enterprise Console Puppet Master © 2013 Cisco and/or its affiliates. All rights reserved. Devices
    • 11. NXOS Mode Only • To effectively scale, a distributed management approach is optimal. • In distributed systems, maintaining consistency between resources is key to scaling safely. • Why is my traffic not going through this path... • My routing table looks good .. How do I debug • Link consistency • Port Channel consistency • IPv4, IPv6 route consistency • VLAN membership consistency • STP state consistency © 2013 Cisco and/or its affiliates. All rights reserved. sys05-eor2# show forwarding ipv6 inconsistency module 22 IPV6 Consistency check : table_id(0x80000001) slot(22) Execution time : 239 ms () No inconsistent adjacencies. Inconsistent routes: 1. slot(22), vrf(default), prefix (2001:65:80::/45), Route missing in FIB Hardware. sys05-eor2#
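Slide 11's consistency checker is also useful in automation: the report shown above can be parsed and fed into monitoring. A small sketch; the output text is copied from the slide, and the exact formatting may vary by release, so the regular expression is an assumption:

```python
# Parse the "show forwarding ipv6 inconsistency" output from slide 11 and
# list inconsistent routes. Output text is copied from the slide; real
# output formatting may differ between releases.
import re

output = """IPV6 Consistency check : table_id(0x80000001) slot(22)
Execution time : 239 ms ()
No inconsistent adjacencies.
Inconsistent routes:
1. slot(22), vrf(default), prefix (2001:65:80::/45), Route missing in FIB Hardware.
"""

route_re = re.compile(
    r"slot\((?P<slot>\d+)\), vrf\((?P<vrf>[^)]+)\), "
    r"prefix \((?P<prefix>[^)]+)\), (?P<reason>.+?)\.$",
    re.MULTILINE,
)

for m in route_re.finditer(output):
    print(f"slot {m['slot']} vrf {m['vrf']} prefix {m['prefix']}: {m['reason']}")
```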
    • 12. • Overall Design Goals • Nexus 9500 Modular Switch • Airflow and Power • Out of Band Management Paths • Fabric Scale and Connectivity • Nexus 9300 Fixed Switch • Cost Effective 40Gb Optics • Use Cases and Designs APIC Check us out Online Agility Simplicity Automation and Visibility © 2013 Cisco and/or its affiliates. All rights reserved. Performance and Scale Security Open www.cisco.com/go/aci
    • 13. NXOS Mode and ACI Nexus 9508 Front View Nexus 9508 Rear View 8 Line Card Slots Max 3.84 Tbps/Slot duplex 3 Fan Trays 3 or 6 Fabric Modules (behind fan trays) Redundant Supervisor Engines Redundant System Controller Cards 3000W AC Power Supplies 2+0, 2+1, 2+2 Redundancy Support for up to 8 power supplies C97-730019-01 © 2013 Cisco and/or its affiliates. All rights reserved. Designed for:  Power Efficiency  Cooling Efficiency  Reliability  Future Scale No Mid-plane for LC to FM connectivity Cisco Confidential 13
    • 14. NXOS Mode and ACI [Front view with dimension callouts: 13 RU height, 17.5 in wide, 30 in deep, 19 in rack] • Maximum three chassis per rack, assuming 18KW per rack • Up to 3,456 10G line rate ports per rack • Up to 864 40G line rate ports per rack • Designed for at least 2.5x speed increase in next gen ASICs C97-730019-01 © 2013 Cisco and/or its affiliates. All rights reserved. Cisco Confidential 14
    • 15. NXOS Mode and ACI • Single 20A input at 220V • • • Support for range of international cabling options 92%+ Efficiency Range of PS configurations • • • • Minimum 1 PSU, Maximum 8 PSU (2) PSU for fully loaded chassis N+1 redundancy N+N grid redundancy • 2x head room for future port densities, bandwidth, and optics • Power Efficiency • Platinum rated power supplies, 92% power efficiency across all workloads • 3.5W per 10 Gbps Port • 14W per 40 Gbps Port 3000W AC PSU 80 Plus Platinum is equivalent to Climate Saver/ Green Grid Platinum rating © 2013 Cisco and/or its affiliates. All rights reserved.
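A quick sanity check that ties the per-port power figures on slide 15 to the per-rack densities on slide 14. The arithmetic below uses only the numbers on those two slides; the only assumption is that port power scales linearly:

```python
# Per-rack power implied by the quoted per-port figures (slide 15) and the
# per-rack port densities (slide 14, three chassis per rack).
PORTS_10G_PER_RACK = 3456
PORTS_40G_PER_RACK = 864
WATTS_PER_10G = 3.5
WATTS_PER_40G = 14
RACK_BUDGET_KW = 18          # per-rack assumption from slide 14

kw_10g = PORTS_10G_PER_RACK * WATTS_PER_10G / 1000   # ~12.1 kW
kw_40g = PORTS_40G_PER_RACK * WATTS_PER_40G / 1000   # ~12.1 kW

print(f"All-10G rack: {kw_10g:.1f} kW of the {RACK_BUDGET_KW} kW budget")
print(f"All-40G rack: {kw_40g:.1f} kW of the {RACK_BUDGET_KW} kW budget")
```

Both work out to roughly 12 kW, comfortably inside the 18 kW per-rack assumption on slide 14.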
    • 16. NXOS Mode and ACI • Redundant half-width supervisor engine • Sandy Bridge, Quad Core, 1.8GHz • 16GB Memory • RAM upgradable to 48GB • 64 GB SSD (default) • Common for 4, 8 and 16 slot chassis • Performance/ Scale Focused • Range of Management Interfaces © 2013 Cisco and/or its affiliates. All rights reserved. Console Port (2) USB Ports Management Port External Clock Input Pulse per Second (Precision Time Protocol)
    • 17. NXOS Mode and ACI • Redundant half-width system controller • • • Offload supervisor from switch “control plane” tasks Increased System Resiliency Increased Scale • Common for 4, 8 and 16 slot chassis • Performance/ Scale focused • • • • Dual Core ARM Processor, 1.3GHz Central Point of Chassis Control Ethernet Out-of-Band Channel (EOBC) switch between Supervisors and Line cards Ethernet Protocol Channel (EPC) switch • 1Gbps switch for Intra-node Data Plane communication (Protocol Packets) © 2013 Cisco and/or its affiliates. All rights reserved. • Manages / Monitors • Power Supplies via SMB (System Management Bus) • Fan Trays
    • 18. NXOS Mode and ACI • Chassis has complete Front-to-Back Airflow • Airflow direction is NOT Reversible • Fan Trays are fully redundant • Fan Trays must be removed in order to service Fabric Modules • Designed for speed increase in multiple next gen ASICs [Photos: Front View; Rear View with Fan Tray Removed showing Fabric Modules] © 2013 Cisco and/or its affiliates. All rights reserved.
    • 19. NXOS Mode and ACI Fan Trays: • 3 Fan Trays • (3) dual fans per tray • Dynamic speed control driven by temperature sensors • Straight Airflow across Line Cards and Fabric Modules • N+1 Redundancy per Tray • Hot Swappable Fabric Modules: • Up to 6 Fabric Modules • Different cost points for 1/10G access and 40G aggregation • Flexibility for future generation of fabric modules • Quad Core ARM CPU 1.3 GHz for Supervisor offload • All Modules Forward Traffic • Smooth degradation during replacement © 2013 Cisco and/or its affiliates. All rights reserved.
    • 20. NXOS Mode and ACI 8-Slot Chassis Fabric Module: • (2) Packet Processors • (32) 40G Hi-Gig2 links per Packet Processor, 64 x 40G Hi-Gig2 per Fabric Module • (8) 40G Hi-Gig2 links assigned to each I/O slot • Unified Forwarding Table on the Packet Processor is programmed to support 128K LPM (Longest Prefix Match) routing entries [Diagram: Fabric Module with two Packet Processors connecting to the Line Cards and the Fabric] © 2013 Cisco and/or its affiliates. All rights reserved.
    • 21. NXOS Mode and ACI • A Fabric Module in an 8-Slot Chassis can provide up to 320Gbps to each Line Card slot. • With 6 Fabric Modules, each Line Card slot can have up to 1.92Tbps forwarding bandwidth in both directions. Fabric 1 PP PP 320 Gbps (8x 40Gbps) Fabric 2 PP PP 320 Gbps (8x 40Gbps) Fabric 3 PP PP 320 Gbps (8x 40Gbps) Fabric 4 PP PP 320 Gbps (8x 40Gbps) Fabric 5 PP PP 320 Gbps (8x 40Gbps) Fabric 6 PP PP 320 Gbps (8x 40Gbps) 320 Gbps 640 Gbps 960 Gbps 1.28 Tbps Line Card Slot © 2013 Cisco and/or its affiliates. All rights reserved. 1.60 Tbps 1.92 Tbps
    • 22. 40G Aggregation 36 ports 40G QSFP+ (Non Blocking) NXOS Mode 1/10G Access and 10/40G Aggregation 48 ports 10G SFP+ & 4 ports 40G QSFP+ 48 ports 1/10G-T & 4 ports 40G QSFP+ (non blocking) ACI Access 36 ports 40G QSFP+ (1.5:1 oversubscribed) ACI Access Ready 40G Fabric Spine ACI Spine 36 ports 40G QSFP+ (Non Blocking) ACI Access Ready © 2013 Cisco and/or its affiliates. All rights reserved.
    • 23. NXOS Mode Only • (36) 40G QSFP ports • 1.44 Tbps full-duplex fabric connectivity • L2/L3 line rate performance for all packet sizes • Distributed multicast replication • Supported in 4 and 8 slot chassis • Part Number: N9K-X9636PQ © 2013 Cisco and/or its affiliates. All rights reserved. A 36-Port 40G QSFP+ I/O Module needs 6 Fabric Modules to operate at line rate.
    • 24. NXOS Mode and ACI • • 48p 1G SFP/10G /SFP+ and 4p 40G QSFP • 48p 1G/10G Base-T ports and 4p 40G QSFP • • 1280 Gbps full-duplex fabric connectivity 36p 40G 1.5:1 oversubscribed option 1.92 Tbps full-duplex fabric connectivity • Distributed multicast replication • Requires 3 Fabric Modules • Supported in 4, 8, 16 slot Chassis (4) 40G Uplinks N9K-X9564PX 48-Port 1/10G F + 4-Port 40G N9K-X9564TX 48-Port 1/10G T + 4-Port 40G N9K-X9636PQ 36-Port 40G © 2013 Cisco and/or its affiliates. All rights reserved. (4) 40G Uplinks
    • 25. ACI Spine Only • (36) 40G QSFP ports • 1.44 Tbps full-duplex fabric connectivity • L2/L3 line rate performance for all packet sizes • Distributed multicast replication • Supported in 4, 8 and 16 slot chassis © 2013 Cisco and/or its affiliates. All rights reserved. A 36-Port 40G QSFP+ I/O Module needs 6 Fabric Modules to operate at line rate.
    • 26. NXOS Mode and ACI Fabric Extender Design Options Nexus 2248TP Nexus 2248TP-E Nexus 2232PP-10G Nexus 2232TM-10G Supported Fabric Extenders © 2013 Cisco and/or its affiliates. All rights reserved. Nexus B22-HP Nexus B22-Dell
    • 27. Unprecedented, Full Line-Rate Performance: Proven with RFC 2544, RFC 2889, RFC 3918 throughput tests Results were taken using a fully loaded Nexus® 9508 switch with 288 40 GE ports: • All ports are line rate at 100% unicast traffic load • All ports are line rate at 100% multicast traffic load (frames) • Full line rate for all packet sizes (64 to approx. 9216 Bytes) [Chart: TX/RX throughput in packets per second across frame sizes from 64 to 9216 bytes] C97-730019-01 © 2013 Cisco and/or its affiliates. All rights reserved. Cisco Confidential 27
    • 28. Low Latency (Same for Both Unicast and Multicast) Proven with RFC 2544, RFC 2889, RFC 3918 throughput tests Results were taken using a fully-loaded Nexus® 9508 switch with 288 40 GE ports: • Unicast latency at 100% traffic load: 1.6 usec (64-Byte packets), 3.5 usec (9216-Byte packets) • Multicast latency at 100% traffic load: 1.6 usec (64-Byte packets), 3.5 usec (9216-Byte packets) C97-730019-01 © 2013 Cisco and/or its affiliates. All rights reserved. Cisco Confidential 28
    • 29. • Overall Design Goals • Nexus 9500 Modular Switch • Airflow and Power • Out of Band Management Paths • Fabric Scale and Connectivity • Nexus 9300 Fixed Switch • Cost Effective 40Gb Optics • Use Cases and Designs APIC Check us out Online Agility Simplicity Automation and Visibility © 2013 Cisco and/or its affiliates. All rights reserved. Performance and Scale Security Open www.cisco.com/go/aci
    • 30. NXOS Mode and ACI Nexus 9300 – Common Nexus 9396PQ • (48) port 10G SFP+ • (12) port 40G QSFP+ • 2-RU Nexus 93128TX • (96) port 1/10G-T • (8) port 40G QSFP+ • 3-RU © 2013 Cisco and/or its affiliates. All rights reserved. • Redundant Fan and Power Supply • Front-to-Back and Back-to-Front airflow • Dual or Quad Gladden 1.6 GHz Core CPU • 16 GB DRAM • Default 64GB SSD Nexus ACI Ready Access GEM • (12) port 40G QSFP+ • Additional 40MB buffer • 3.5x of merchant Packet Processor • Full VxLAN Bridging and Routing Capability
    • 31. NXOS Mode and ACI Non-Blocking 3:1 Oversubscribed Uplinks Nexus 9396PQ Nexus 93128TX Generic Expansion Module (GEM) • • 12 port 40G QSFP (FCS) 6 port 40G QSFP (Post FCS) Fan and Power Supply • • • • Redundant (1+1) Dual 650W and 1200W AC PS option Redundant (2+1) Fan Trays Hot Swappable Cold air intake (blue) and Hot air exhaust (red) options * 80 Plus Platinum is equivalent to Climate Saver/ Green Grid Platinum rating © 2013 Cisco and/or its affiliates. All rights reserved.
    • 32. Cisco Nexus® 9396PX GEM module with 12 40 Gbps QSFP+ ports • 2 RU height • 48 1 Gb SFP/10 Gbps SFP+ ports • 12 40 Gbps QSFP ports (on GEM module) • 1 100/1000baseT management port • 1 RS232 console port Console Management Port USB Ports 48 1Gbps SFP/10Gbps SFP+ ports • 2 USB 2.0 ports • Front-to-back and back-to-front airflow options • 1+1 redundant power supply options • 2+1 redundant fans • Non-blocking architecture with line-rate performance on all ports for all packet sizes C97-730019-01 © 2013 Cisco and/or its affiliates. All rights reserved. Power supply (2+1) fan trays Power supply Cisco Confidential 32
    • 33. GEM module with 12 40 Gbps QSFP+ ports (8 active uplinks) Cisco Nexus® 93128TX • 3 RU height • 96 1/10 Gbps BaseT ports • 8 40 Gbps QSFP ports (on GEM module) • 1 100/1000baseT management port • 1 RS232 console port • 2 USB 2.0 ports Console management port USB ports 96 1 GBaseT/10 GBaseT ports • Front-to-back and back-to-front airflow options • 1+1 redundant power supply options • 2+1 redundant fans C97-730019​-01 © 2013 Cisco and/or its affiliates. All rights reserved. Power supply (2+1) fan trays Power supply Cisco Confidential 33
    • 34. • 12-port 40 Gbps QSFP (FCS) • Additional 40 MB buffer (3.5 times that of the Broadcom NFE) • Full VXLAN gateway, bridging, and routing capability • Common for Nexus® 9396 and Nexus 93128 Switches • Four ports will be disabled when installed in a Cisco® Nexus 93128 Switch. • A white LED under each QSFP port pair indicates port-pair availability. • The LED will be on if the port pair is available. Generic Expansion Module • Redundant (1+1) 650 W and 1200 W AC PS options • 80-Plus-Platinum-certified power supplies* • Redundant (2+1) hot-swappable fan trays • Cold-air intake (blue) and hot-air exhaust (red) options to support front-to-back or back-to-front airflow * 80 Plus Platinum is equivalent to a Climate Saver or Green Grid Platinum rating C97-730019-01 © 2013 Cisco and/or its affiliates. All rights reserved. Fan and Power Supply Cisco Confidential 34
    • 35. • Overall Design Goals • Nexus 9500 Modular Switch • Airflow and Power • Out of Band Management Paths • Fabric Scale and Connectivity • Nexus 9300 Fixed Switch • Cost Effective 40Gb Optics • Use Cases and Designs APIC Check us out Online Agility Simplicity Automation and Visibility © 2013 Cisco and/or its affiliates. All rights reserved. Performance and Scale Security Open www.cisco.com/go/aci
    • 36. NXOS Mode and ACI Challenge • 40G Optics are significant portion of CAPEX • 40G Optics require new cabling Solution • Re-use existing 10G MMF cabling infrastructure • Re-use patch cables (same LC connector) Cisco 40G SR-BiDi QSFP • QSFP pluggable, MSA compliant • Dual LC Connector • Support for 100m on OM3 and 125m+ on OM4 • Transmit/Receive on 2 wavelengths at 20G each Available end of CY13 and supported across all Cisco QSFP ports © 2013 Cisco and/or its affiliates. All rights reserved.
    • 37. • Overall Design Goals • Nexus 9500 Modular Switch • Airflow and Power • Out of Band Management Paths • Fabric Scale and Connectivity • Nexus 9300 Fixed Switch • Cost Effective 40Gb Optics • Use Cases and Designs APIC Check us out Online Agility Simplicity Automation and Visibility © 2013 Cisco and/or its affiliates. All rights reserved. Performance and Scale Security Open www.cisco.com/go/aci
    • 38. NXOS Mode Only DC Edge DC Core DC Aggr. DC Aggr. Pod 1 DC Access Row Y Row 1 Racks N5548 (X) Racks (Y) Rows © 2013 Cisco and/or its affiliates. All rights reserved. DC Access Row Y Row 1 N5548 (X) N2000 N2000 Layer 3 Layer 2 Pod N Layer 3 Layer 2 (N) PoDs Racks N5548 (X) Racks N5548 (X) N2000 N2000 (Y) Rows Nexus 9000 Boot Camp
    • 39. NXOS Mode Only DC Edge DC Core N9500 N9500 DC Aggr. DC Aggr. Pod 1 N9500 N9500 Pod N Layer 3 Layer 2 DC Access Row Y Row 1 N9500 DC Access Row Y Row 1 (N) PoDs Racks Racks (X) (X) N2000 N2000 (Y) Rows Racks Layer 3 Layer 2 N9500 N9300 (X) Racks N9300 (X) N2000 N2000 (Y) Rows The same design can be built with Nexus9500 at the Core and Nexus9300 with Fabric Extenders at the edge. © 2013 Cisco and/or its affiliates. All rights reserved. Nexus 9000 Boot Camp
    • 40. NXOS Mode Only DC Edge DC Core DC Aggr. DC Aggr. Pod 1 UCS FI Layer 3 Layer 2 Pod N Layer 3 Layer 2 DC Access UCS Access (N) PoDs (Y) Rows © 2013 Cisco and/or its affiliates. All rights reserved. (Y) Rows Nexus 9000 Boot Camp
    • 41. NXOS Mode Only DC Edge DC Core N9500 N9500 DC Aggr. DC Aggr. Pod 1 UCS FI N9500 N9500 Pod N Layer 3 Layer 2 N9300 UCS Access (N) PoDs FEX (Y) Rows N9500 N9500 Layer 3 Layer 2 N9300 Blade Server Access FEX (Y) Rows The same design can be built with Nexus9500 at the Core and Nexus9300 with Fabric Extenders at the edge © 2013 Cisco and/or its affiliates. All rights reserved. Nexus 9000 Boot Camp
    • 42. • The Nexus 9500 Chassis provides scalable performance with consistent parts across the Nexus 9504, 9508, and 9516 platforms. • The platform is future-proofed for higher-power ASICs. • The Nexus 9500 has no mid-plane to further improve front-to-back airflow. • The Merchant+ strategy minimizes the number of ASICs and lowers power consumption while providing value-added network options. • The QSFP BiDi optics allow 40Gb connectivity over standard OM3/OM4 cable plants. © 2013 Cisco and/or its affiliates. All rights reserved.
    • 43. © 2013 Cisco and/or its affiliates. All rights reserved.
