SOLUTION DESIGN GUIDE
Brocade Data Center Fabric Architectures
53-1004601-02
06 October 2016
© 2016, Brocade Communications Systems, Inc. All Rights Reserved.
Brocade, the B-wing symbol, and MyBrocade are registered trademarks of Brocade Communications Systems, Inc., in the United States and in other
countries. Other brands, product names, or service names mentioned of Brocade Communications Systems, Inc. are listed at www.brocade.com/en/legal/
brocade-Legal-intellectual-property/brocade-legal-trademarks.html. Other marks may belong to third parties.
Notice: This document is for informational purposes only and does not set forth any warranty, expressed or implied, concerning any equipment,
equipment feature, or service offered or to be offered by Brocade. Brocade reserves the right to make changes to this document at any time, without
notice, and assumes no responsibility for its use. This informational document describes features that may not be currently available. Contact a Brocade
sales office for information on feature and product availability. Export of technical data contained in this document may require an export license from the
United States government.
The authors and Brocade Communications Systems, Inc. assume no liability or responsibility to any person or entity with respect to the accuracy of this
document or any loss, cost, liability, or damages arising from the information contained herein or the computer programs that accompany it.
The product described by this document may contain open source software covered by the GNU General Public License or other open source license
agreements. To find out which open source software is included in Brocade products, view the licensing terms applicable to the open source software, and
obtain a copy of the programming source code, please visit http://www.brocade.com/support/oscd.
Contents
Preface...................................................................................................................................................................................................................................5
Document History......................................................................................................................................................................................................................................5
About the Author........................................................................................................................................................................................................................................5
Overview........................................................................................................................................................................................................................................................5
Purpose of This Document....................................................................................................................................................................................................................5
About Brocade............................................................................................................................................................................................................................................ 6
Data Center Networking Architectures...........................................................................................................................................................................7
Throughput and Traffic Patterns...........................................................................................................................................................................................................7
Scale Requirements of Cloud Networks...........................................................................................................................................................................................9
Traffic Isolation, Segmentation, and Application Continuity..................................................................................................................................................... 9
Data Center Networks: Building Blocks.......................................................................................................................................................................11
Brocade VDX and SLX Platforms....................................................................................................................................................................................................11
VDX 6740........................................................................................................................................................................................................................................11
VDX 6940........................................................................................................................................................................................................................................12
VDX 8770........................................................................................................................................................................................................................................13
SLX 9850.........................................................................................................................................................................................................................................14
Networking Endpoints...........................................................................................................................................................................................................................15
Single-Tier Topology............................................................................................................................................................................................................................. 16
Design Considerations................................................................................................................................................................................................................ 17
Oversubscription Ratios..............................................................................................................................................................................................................17
Port Density and Speeds for Uplinks and Downlinks.....................................................................................................................................................17
Scale and Future Growth............................................................................................................................................................................................................ 17
Ports on Demand Licensing.....................................................................................................................................................................................................18
Leaf-Spine Topology (Two Tiers)......................................................................................................................................................................................................18
Design Considerations................................................................................................................................................................................................................ 19
Oversubscription Ratios..............................................................................................................................................................................................................19
Leaf and Spine Scale....................................................................................................................................................................................................................19
Port Speeds for Uplinks and Downlinks.............................................................................................................................................................................. 20
Scale and Future Growth............................................................................................................................................................................................................ 20
Ports on Demand Licensing.....................................................................................................................................................................................................20
Deployment Model....................................................................................................................................................................................................................... 20
Data Center Points of Delivery.................................................................................................................................................................................................21
Optimized 5-Stage Folded Clos Topology (Three Tiers)....................................................................................................................................................... 21
Design Considerations................................................................................................................................................................................................................ 22
Oversubscription Ratios..............................................................................................................................................................................................................23
Deployment Model....................................................................................................................................................................................................................... 23
Edge Services and Border Switches Topology.......................................................................................................................................................................... 23
Design Considerations................................................................................................................................................................................................................ 24
Oversubscription Ratios..............................................................................................................................................................................................................24
Data Center Core/WAN Edge Handoff.................................................................................................................................................................................24
Data Center Core and WAN Edge Routers.........................................................................................................................................................................25
Building Data Center Sites with Brocade VCS Fabric Technology........................................................................................................................27
Data Center Site with Leaf-Spine Topology.................................................................................................................................................................................28
Scale....................................................................................................................................................................................................................................................30
Scaling the Data Center Site with a Multi-Fabric Topology Using VCS Fabrics...........................................................................................................31
Scale....................................................................................................................................................................................................................................................34
Building Data Center Sites with Brocade IP Fabric...................................................................................................................................................37
Data Center Site with Leaf-Spine Topology.................................................................................................................................................................................37
Scale....................................................................................................................................................................................................................................................39
Scaling the Data Center Site with an Optimized 5-Stage Folded Clos............................................................................................................................40
Scale....................................................................................................................................................................................................................................................41
Building Data Center Sites with Layer 2 and Layer 3 Fabrics................................................................................................................................ 45
Scaling a Data Center Site with a Data Center Core................................................................................................................................................. 47
Control-Plane and Hardware-Scale Considerations.................................................................................................................................................49
Control-Plane Architectures................................................................................................................................................................................................................50
Single-Tier Data Center Sites....................................................................................................................................................................................................50
Brocade VCS Fabric.....................................................................................................................................................................................................................51
Multi-Fabric Topology Using VCS Technology................................................................................................................................................................ 54
Brocade IP Fabric..........................................................................................................................................................................................................................56
Routing Protocol Architecture for Brocade IP Fabric and Multi-Fabric Topology Using VCS Technology..................................................59
eBGP-based Brocade IP Fabric and Multi-Fabric Topology............................................................................................................................................... 59
iBGP-based Brocade IP Fabric and Multi-Fabric Topology.................................................................................................................................................60
Choosing an Architecture for Your Data Center.........................................................................................................................................................63
High-Level Comparison Table ......................................................................................................................................................................................................... 63
Deployment Scale Considerations...................................................................................................................................................................................................64
Fabric Architecture..................................................................................................................................................................................................................................65
Recommendations................................................................................................................................................................................................................................. 65
Preface
• Document History................................................................................................................................................................................................5
• About the Author...................................................................................................................................................................................................5
• Overview...................................................................................................................................................................................................................5
• Purpose of This Document.............................................................................................................................................................................. 5
• About Brocade.......................................................................................................................................................................................................6
Document History
Date                 Part Number      Description
February 9, 2016                      Initial release with DC fabric architectures, network virtualization, Data Center Interconnect, and automation content.
September 13, 2016   53-1004601-01    Initial release of solution design guide for DC fabric architectures.
October 06, 2016     53-1004601-02    Replaced the figures for the Brocade VDX 6940-36Q and the Brocade VDX 6940-144S.
About the Author
Anuj Dewangan is the lead Technical Marketing Engineer (TME) for Brocade's data center switching products. He holds a CCIE in
Routing and Switching and has several years of experience in the networking industry with roles in software development, solution
validation, technical marketing, and product management. At Brocade, his focus is creating reference architectures, working with
customers and account teams to address their challenges with data center networks, creating product and solution collateral, and helping
define products and solutions. He regularly speaks at industry events and has authored several white papers and solution design guides
on data center networking.
The author would like to acknowledge Jeni Lloyd and Patrick LaPorte for their in-depth review of this solution guide and for providing
valuable insight, edits, and feedback.
Overview
Based on the principles of the New IP, Brocade is building on the Brocade® VDX® and Brocade® SLX® platforms by delivering cloud-optimized network and network virtualization architectures, along with new automation innovations, to meet customer demand for higher levels of scale, agility, and operational efficiency.
The scalable and highly automated Brocade data center fabric architectures described in this solution design guide make it easy for
infrastructure planners to architect, automate, and integrate with current and future data center technologies while they transition to their
own cloud-optimized data center on their own time and terms.
Purpose of This Document
This guide helps network architects, virtualization architects, and network engineers to make informed design, architecture, and
deployment decisions that best meet their technical and business objectives. Network architecture and deployment options for scaling
from tens to hundreds of thousands of servers are discussed in detail.
About Brocade
Brocade® (NASDAQ: BRCD) networking solutions help the world's leading organizations transition smoothly to a world where
applications and information reside anywhere. This vision is designed to deliver key business benefits such as unmatched simplicity,
non-stop networking, application optimization, and investment protection.
Innovative Ethernet and storage networking solutions for data center, campus, and service provider networks help reduce complexity and
cost while enabling virtualization and cloud computing to increase business agility.
To help ensure a complete solution, Brocade partners with world-class IT companies and provides comprehensive education, support,
and professional services offerings (www.brocade.com).
Data Center Networking Architectures
• Throughput and Traffic Patterns..................................................................................................................................................................... 7
• Scale Requirements of Cloud Networks..................................................................................................................................................... 9
• Traffic Isolation, Segmentation, and Application Continuity................................................................................................................9
Data center networking architectures have evolved with the changing requirements of the modern data center and cloud environments.
This evolution has been triggered by a combination of industry technology trends, such as server virtualization, and architectural changes in the applications being deployed in data centers. These technological and architectural changes are affecting the way private and public cloud networks are designed. As these changes proliferate in traditional data centers, the need to adopt modern data center architectures keeps growing.
Throughput and Traffic Patterns
Traditional data center network architectures were a derivative of the three-tier topology, prevalent in enterprise campus environments.
The tiers are defined as Access, Aggregation, and Core. Figure 1 shows an example of a data center network built using a traditional
three-tier topology.
FIGURE 1 Three-Tier Data Center Architecture
The three-tier topology was architected with the requirements of an enterprise campus in mind. In a campus network, the basic
requirement of the access layer is to provide connectivity to workstations. These workstations exchange traffic either with an enterprise
data center for business application access or with the Internet. As a result, most traffic in this network traverses in and out through the
tiers in the network. This traffic pattern is commonly referred to as north-south traffic.
The throughput requirements for traffic in a campus environment are lower than those of a data center network, where server
virtualization has increased the application density and subsequently the data throughput to and from the servers. In addition, cloud
applications are often multitiered and hosted at different endpoints connected to the network. The communication between these
application tiers is a major contributor to the overall traffic in a data center. The multitiered nature of the applications deployed in a data
center drives traffic patterns in a data center network to be more east-west than north-south. In fact, some of the very large data centers
hosting multitiered applications report that more than 90 percent of their overall traffic occurs between the application tiers.
Because of high throughput requirements and the east-west traffic patterns, the networking access layer that connects directly to the
servers exchanges a much higher proportion of traffic with the upper layers of the networking infrastructure, as compared to an enterprise
campus network.
These reasons have driven the data center network architecture evolution into scale-out architectures. Figure 2 illustrates a leaf-spine
topology, which is an example of a scale-out architecture. These scale-out architectures are built to maximize the throughput of traffic
exchange between the leaf layer and the spine layer.
FIGURE 2 Scale-Out Architecture: Ideal for East-West Traffic Patterns Common with Web-Based or Cloud-Based Application Designs
Compared to a three-tier network, where the aggregation layer is restricted to two devices (typically because of technologies like Multi-Chassis Trunking (MCT), in which exactly two devices can participate in the port channels facing the access-layer switches), the spine layer can have multiple devices and therefore provides higher port density for connecting the leaf-layer switches. This allows more interfaces from each leaf to connect into the spine layer, providing higher throughput from each leaf to the spine layer. The characteristics of a leaf-spine topology are discussed in more detail in subsequent sections.
The traditional three-tier data center architecture is still prevalent in environments where traffic throughput requirements between the networking layers can be satisfied through high-density platforms at the aggregation layer. For certain use cases like co-location data centers, where customer traffic is restricted to racks or managed areas within the data center, a three-tier architecture may be more suitable. Similarly, enterprises hosting nonvirtualized and single-tiered applications may find the three-tier data center architecture more suitable.
Scale Requirements of Cloud Networks
Another trend in recent years has been the consolidation of disaggregated infrastructures into larger central locations. With the changing
economics and processes of application delivery, there has also been a shift of application workloads to public cloud provider networks.
Enterprises have looked to consolidate and host private cloud services. Meanwhile, software cloud services, as well as infrastructure and
platform service providers, have grown at a rapid pace. With this increasing shift of applications to the private and public cloud, the scale
of the network deployment has increased drastically. Advanced scale-out architectures allow networks to be deployed at many multiples
of the scale of a leaf-spine topology. An example of Brocade's advanced scale-out architecture is shown in Figure 3.
FIGURE 3 Example of Brocade's Advanced Scale-Out Architecture (Optimized 5-Stage Clos)
Brocade's advanced scale-out architectures allow data centers to be built at very high scales of ports and racks. Advanced scale-out
architectures using an optimized 5-stage Clos topology are described later in more detail.
A consequence of server virtualization enabling physical servers to host several virtual machines (VMs) is that the scale requirements for the control and data planes for networking parameters like MAC addresses, IP addresses, and Address Resolution Protocol (ARP) tables have multiplied. These virtualized servers must also support much higher throughput than servers in a traditional enterprise environment, driving the evolution of Ethernet standards to 10 Gigabit Ethernet (10 GbE), 25 GbE, 40 GbE, 50 GbE, and 100 GbE.
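To illustrate how quickly this state multiplies, the following minimal sketch estimates the MAC, ARP, and host-route scale that a fabric must accommodate for a given population of virtualized servers. The rack count, VM density, and vNIC figures are illustrative assumptions only, not sizing guidance from this guide.

# Rough estimate of control-plane scale introduced by server virtualization.
# All input values are illustrative assumptions.
servers_per_rack = 40        # physical servers per compute rack (assumed)
racks = 20                   # compute racks connected to the fabric (assumed)
vms_per_server = 30          # average VM density per virtualized server (assumed)
vnics_per_vm = 2             # virtual NICs (MAC/IP pairs) per VM (assumed)

physical_hosts = servers_per_rack * racks
virtual_endpoints = physical_hosts * vms_per_server * vnics_per_vm

# Each virtual endpoint typically contributes a MAC entry, an ARP entry, and
# often a host route to the fabric control and data planes.
print("Physical hosts:           ", physical_hosts)        # 800
print("MAC/ARP/host-route scale: ", virtual_endpoints)     # 48,000

In this example, 800 physical hosts translate into roughly 48,000 virtual endpoints, a 60-fold increase over the nonvirtualized MAC and ARP scale, which is why table sizes are a first-order platform selection criterion.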
Traffic Isolation, Segmentation, and Application Continuity
For multitenant cloud environments, providing traffic isolation between the network tenants is a priority. This isolation must be achieved at
all networking layers. In addition, many environments must support overlapping IP addresses and VLAN numbering for the tenants of
the network. Providing traffic segmentation through enforcement of security and traffic policies for each cloud tenant's application tiers is
a requirement as well.
In order to support application continuity and infrastructure high availability, it is commonly required that the underlying networking
infrastructure be extended within and across one or more data center sites. Extension of Layer 2 domains is a specific requirement in
many cases. Examples of this include virtual machine mobility across the infrastructure for high availability; resource load balancing and
fault tolerance needs; and creation of application-level clustering, which commonly relies on shared broadcast domains for clustering
operations like cluster node discovery and many-to-many communication. The need to extend tenant Layer 2 and Layer 3 domains
while still supporting a common infrastructure Layer 3 environment across the infrastructure and also across sites is creating new
challenges for network architects and administrators.
The remainder of this solution design guide describes data center networking architectures that meet the requirements identified above
for building cloud-optimized networks that address current and future needs for enterprises and service provider clouds. This guide
focuses on the design considerations and choices for building a data center site using Brocade platforms and technologies. Refer to the
Brocade Data Center Fabric Architectures for Network Virtualization Solution Design Guide for a discussion on multitenant
infrastructures and overlay networking that builds on the architectural concepts defined here.
Data Center Networks: Building Blocks
• Brocade VDX and SLX Platforms...............................................................................................................................................................11
• Networking Endpoints..................................................................................................................................................................................... 15
• Single-Tier Topology........................................................................................................................................................................................16
• Leaf-Spine Topology (Two Tiers)................................................................................................................................................................ 18
• Optimized 5-Stage Folded Clos Topology (Three Tiers)..................................................................................................................21
• Edge Services and Border Switches Topology.....................................................................................................................................23
This section discusses the building blocks that are used to build the network and network virtualization architectures for a data center site.
These building blocks consist of the various elements that fit into an overall data center site deployment. The goal is to define largely independent elements that can be assembled as needed, depending on the scale requirements of the networking infrastructure.
Brocade VDX and SLX Platforms
The first building block for the networking infrastructure is the Brocade networking platforms, which include Brocade VDX® switches and Brocade SLX® routers. This section provides a high-level summary of each of these two platform families.
Brocade VDX switches with IP fabrics and VCS fabrics provide automation, resiliency, and scalability. Industry-leading Brocade VDX
switches are the foundation for high-performance connectivity in data center fabric, storage, and IP network environments. Available in
fixed and modular forms, these highly reliable, scalable, and available switches are designed for a wide range of environments, enabling a
low Total Cost of Ownership (TCO) and fast Return on Investment (ROI).
VDX 6740
The Brocade VDX 6740 series of switches provides the advanced feature set that data centers require while delivering the high
performance and low latency that virtualized environments demand. Together with Brocade data center fabrics, these switches transform
data center networks to support the New IP by enabling cloud-based architectures that deliver new levels of scale, agility, and operational
efficiency. These highly automated, software-driven, and programmable data center fabric design solutions support a breadth of network
virtualization options and scale for data center environments ranging from tens to thousands of servers. Moreover, they make it easy for
organizations to architect, automate, and integrate current and future data center technologies while they transition to a cloud model that
addresses their needs, on their timetable and on their terms. The Brocade VDX 6740 Switch offers 48 10-Gigabit-Ethernet (GbE) Small
Form Factor Pluggable Plus (SFP+) ports and 4 40-GbE Quad SFP+ (QSFP+) ports in a 1U form factor. Each 40-GbE QSFP+ port can
be broken out into four independent 10-GbE SFP+ ports, providing an additional 16 10-GbE SFP+ ports, which can be licensed with
Ports on Demand (PoD).
FIGURE 4 VDX 6740
FIGURE 5 VDX 6740T
FIGURE 6 VDX 6740T-1G
VDX 6940
The Brocade VDX 6940-36Q is a fixed 40 Gigabit Ethernet (GbE)-optimized switch in a 1U form factor. It offers 36 40-GbE QSFP+ ports and can be deployed as a spine or leaf switch. Each 40-GbE port can be broken out into four independent 10-GbE SFP+ ports, providing a total of 144 10-GbE SFP+ ports. Deployed as a spine, it provides options to connect 40-GbE or 10-GbE uplinks from leaf switches. By deploying this high-density, compact switch, data center administrators can reduce their TCO through savings on power, space, and cooling. In a leaf deployment, 10-GbE and 40-GbE ports can be mixed, offering flexible design options to cost-effectively support demanding data center and service provider environments. As with other Brocade VDX platforms, the Brocade VDX 6940-36Q offers a Ports on Demand (PoD) licensing model. The Brocade VDX 6940-36Q is available with 24 ports or 36 ports. The 24-port model offers a lower entry point for organizations that want to start small and grow their networks over time. By installing a software license, organizations can upgrade their 24-port switch to the maximum 36-port switch.
The Brocade VDX 6940-144S Switch is 10-GbE optimized with 40-GbE or 100-GbE uplinks in a 2U form factor. It offers 96 native 1/10-GbE SFP/SFP+ ports and 12 40-GbE QSFP+ ports, or 4 100-GbE QSFP28 ports.
FIGURE 7 VDX 6940-36Q
FIGURE 8 VDX 6940-144S
VDX 8770
The Brocade VDX 8770 switch is designed to scale and support complex environments with dense virtualization and dynamic traffic
patterns—where more automation is required for operational scalability. The 100-GbE-ready Brocade VDX 8770 dramatically increases
the scale that can be achieved in Brocade data center fabrics, with 10-GbE and 40-GbE wire-speed switching, numerous line card
options, and the ability to connect over 8,000 server ports in a single switching domain. Available in 4-slot and 8-slot versions, the
Brocade VDX 8770 is a highly scalable, low-latency modular switch that supports the most demanding data center networks.
FIGURE 9 VDX 8770-4
FIGURE 10 VDX 8770-8
SLX 9850
The Brocade® SLX™ 9850 Router is designed to deliver the cost-effective density, scale, and performance needed to address the ongoing explosion of network bandwidth, devices, and services today and in the future. This flexible platform powered by Brocade SLX-OS provides carrier-class advanced features leveraging proven Brocade routing technology that is used in the most demanding data center, service provider, and enterprise networks today and is delivered on best-in-class forwarding hardware. The extensible architecture of the Brocade SLX 9850 is designed for investment protection to readily support future needs for greater bandwidth, scale, and forwarding capabilities.
Additionally, the Brocade SLX 9850 helps address the increasing agility and analytics needs of digital businesses with network automation and network visibility innovation supported through Brocade Workflow Composer™ and the Brocade SLX Insight Architecture™.
FIGURE 11 Brocade SLX-9850-4
FIGURE 12 Brocade SLX-9850-8
Networking Endpoints
The next building blocks are the networking endpoints to connect to the networking infrastructure. These endpoints include the compute
servers and storage devices, as well as network service appliances such as firewalls and load balancers.
FIGURE 13 Networking Endpoints and Racks
Figure 13 shows the different types of racks used in a data center infrastructure:
• Infrastructure and management racks—These racks host the management infrastructure, which includes any management
appliances or software used to manage the infrastructure. Examples of these are server virtualization management software like
VMware vCenter or Microsoft SCVMM, orchestration software like OpenStack or VMware vRealize Automation, network
controllers like the Brocade SDN Controller or VMware NSX, and network management and automation tools like Brocade
Network Advisor. Infrastructure racks also host endpoints such as IP-based physical or virtual storage appliances.
• Compute racks—Compute racks host the workloads for the data centers. These workloads can be physical servers, or they can
be virtualized servers when the workload is made up of virtual machines (VMs). The compute endpoints can be single-homed or multihomed to the network.
• Edge racks—Network services like perimeter firewalls, load balancers, and NAT devices connected to the network are
consolidated in edge racks. The role of the edge racks is to host the edge services, which can be physical appliances or virtual
machines.
These definitions of infrastructure/management, compute, and edge racks are used throughout this solution design guide.
Single-Tier Topology
The second building block is a single-tier network topology to connect endpoints to the network. Because of the existence of only one
tier, all endpoints connect to this tier of the network. An example of a single-tier topology is shown in Figure 14. The single-tier switches
are shown as a virtual Link Aggregation Group (vLAG) pair. However, the single-tier switches can also be part of a Multi-Chassis Trunking
(MCT) pair. The Brocade VDX supports vLAG pairs, whereas the Brocade SLX 9850 supports MCT.
The topology in Figure 14 shows the management/infrastructure, compute, and edge racks connected to a pair of switches participating
in multiswitch port channeling. This pair of switches is called a vLAG pair.
FIGURE 14 Single Networking Tier
The single-tier topology scales the least among all the topologies described in this guide, but it provides the best choice for smaller
deployments, as it reduces the Capital Expenditure (CapEx) costs for the network in terms of the size of the infrastructure deployed. It
also reduces the optics and cabling costs for the networking infrastructure.
Design Considerations
The design considerations for deploying a single-tier topology are summarized in this section.
Oversubscription Ratios
It is important for network architects to understand the expected traffic patterns in the network. To this effect, the oversubscription ratios
at the vLAG pair/MCT should be well understood and planned for.
The north-south oversubscription at the vLAG pair/MCT is described as the ratio of the aggregate bandwidth of all downlinks from the
vLAG pair/MCT that are connected to the endpoints to the aggregate bandwidth of all uplinks that are connected to the data center
core/WAN edge router (described in a later section). The north-south oversubscription dictates the proportion of traffic between the
endpoints versus the traffic entering and exiting the single-tier topology.
It is also important to understand the bandwidth requirements for the inter-rack traffic. This is especially true for all north-south
communication through the services hosted in the edge racks. All such traffic flows through the vLAG pair/MCT to the edge racks, and if
the traffic needs to exit, it flows back to the vLAG/MCT switches. Thus, the ratio of the aggregate bandwidth connecting the compute racks to the aggregate bandwidth connecting the edge racks is an important consideration.
Another consideration is the bandwidth of the link that interconnects the vLAG pair/MCT. In case of multihomed endpoints and no failure,
this link should not be used for data-plane forwarding. However, if there are link failures in the network, this link may be used for data-
plane forwarding. The bandwidth requirement for this link depends on the redundancy design for link failures. For example, a design to
tolerate up to two 10-GbE link failures has a 20-GbE interconnection between the Top of Rack/End of Row (ToR/EoR) switches.
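As a concrete illustration of these ratios, the short sketch below computes the north-south oversubscription for one switch of a vLAG pair/MCT from its downlink and uplink counts; the port counts and speeds are example assumptions rather than a recommended design.

# North-south oversubscription at a single-tier (vLAG pair/MCT) switch.
# Port counts and speeds are example assumptions.
downlinks, downlink_gbps = 48, 10   # endpoint-facing ports and speed (GbE)
uplinks, uplink_gbps = 4, 40        # ports toward the data center core/WAN edge

downlink_bw = downlinks * downlink_gbps   # 480 Gbps toward endpoints
uplink_bw = uplinks * uplink_gbps         # 160 Gbps toward the core/WAN edge

print("North-south oversubscription = %d:1" % (downlink_bw / uplink_bw))   # 3:1

With these assumed port counts, endpoint-facing bandwidth exceeds core-facing bandwidth by 3:1, which implies that the design expects about three times as much traffic between endpoints as traffic entering or exiting the single-tier topology.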
Port Density and Speeds for Uplinks and Downlinks
In a single-tier topology, the uplink and downlink port density of the vLAG pair/MCT determines the number of endpoints that can be
connected to the network, as well as the north-south oversubscription ratios.
Another key consideration for single-tier topologies is the choice of port speeds for the uplink and downlink interfaces. Brocade VDX and
SLX Series platforms support 10-GbE, 40-GbE, and 100-GbE interfaces, which can be used for uplinks and downlinks (25-GbE
interfaces will be supported in the future with the Brocade SLX 9850). The choice of the platform for the vLAG pair/MCT depends on
the interface speed and density requirements.
Scale and Future Growth
A design consideration for single-tier topologies is the need to plan for more capacity in the existing infrastructure and more endpoints in
the future.
Adding more capacity between existing endpoints and vLAG switches can be done by adding new links between them. Any future
expansion in the number of endpoints connected to the single-tier topology should be accounted for during the network design, as this
requires additional ports in the vLAG switches.
Other key considerations are whether to connect the vLAG/MCT pair to external networks through data center core/WAN edge routers
and whether to add a networking tier for higher scale. These designs require additional ports at the ToR/EoR. Multitier designs are
described in a later section of this guide.
Ports on Demand Licensing
Ports on Demand licensing allows you to expand your capacity at your own pace, in that you can invest in a higher port density platform,
yet license only a subset of the available ports—the ports that you are using for current needs. This allows for an extensible and future-
proof network architecture without the additional upfront cost for unused ports on the switches.
Leaf-Spine Topology (Two Tiers)
The two-tier leaf-spine topology has become the de facto standard for networking topologies when building medium- to large-scale
data center infrastructures. An example of leaf-spine topology is shown in Figure 15.
FIGURE 15 Leaf-Spine Topology
The leaf-spine topology is adapted from traditional Clos telecommunications networks. This topology is also known as the "3-stage folded Clos": the ingress and egress stages proposed in the original Clos architecture are folded together to form the leafs, with the middle stage forming the spine.
The role of the leaf is to provide connectivity to the endpoints in the network. These endpoints include compute servers and storage
devices, as well as other networking devices like routers and switches, load balancers, firewalls, or any other networking endpoint—
physical or virtual. As all endpoints connect only to the leafs, policy enforcement including security, traffic path selection, Quality of
Service (QoS) markings, traffic scheduling, policing, shaping, and traffic redirection are implemented at the leafs. The Brocade VDX 6740
and 6940 family of switches is used as leaf switches.
The role of the spine is to provide interconnectivity between the leafs. Network endpoints do not connect to the spines. As most policy
implementation is performed at the leafs, the major role of the spine is to participate in the control-plane and data-plane operations for
traffic forwarding between the leafs. Brocade VDX or SLX platform families are used as the spine switches depending on the scale and
feature requirements.
As a design principle, the following requirements apply to the leaf-spine topology:
• Each leaf connects to all spines in the network.
• The spines are not interconnected with each other.
• The leafs are not interconnected with each other for data-plane purposes. (The leafs may be interconnected for control-plane
operations such as forming a server-facing vLAG.)
The following are some of the key benefits of a leaf-spine topology:
• Because each leaf is connected to every spine, there are multiple redundant paths available for traffic between any pair of leafs.
Link failures cause other paths in the network to be used.
• Because of the existence of multiple paths, Equal-Cost Multipathing (ECMP) can be leveraged for flows traversing between
pairs of leafs. With ECMP, each leaf has equal-cost routes to reach destinations in other leafs, equal to the number of spines in
the network.
• The leaf-spine topology provides a basis for a scale-out architecture. New leafs can be added to the network without affecting
the provisioned east-west capacity for the existing infrastructure.
• New spines and new uplink ports on the leafs can be provisioned to increase the capacity of the leaf-spine fabric.
• The role of each tier in the network is well defined (as discussed previously), providing modularity in the networking functions
and reducing architectural and deployment complexities.
• The leaf-spine topology provides granular control over subscription ratios for traffic flowing within a rack, between racks, and
outside the leaf-spine topology.
Design Considerations
There are several design considerations for deploying a leaf-spine topology. This section summarizes the key considerations.
Oversubscription Ratios
It is important for network architects to understand the expected traffic patterns in the network. To this effect, the oversubscription ratios
at each layer should be well understood and planned for.
For a leaf switch, the ports connecting to the endpoints are defined as downlink ports, and the ports connecting to the spines are defined
as uplink ports. The north-south oversubscription ratio at the leafs is the ratio of the aggregate bandwidth for the downlink ports and the
aggregate bandwidth for the uplink ports.
For a spine switch in a leaf-spine topology, the east-west oversubscription ratio is defined per pair of leaf switches connecting to the
spine switch. For a given pair of leaf switches connecting to the spine switch, the east-west oversubscription ratio at the spine is the ratio
of the aggregate bandwidth of the uplinks of the first switch and the aggregate bandwidth of the uplinks of the second switch. In a
majority of deployments, this ratio is 1:1, making the east-west oversubscription ratio at the spine nonblocking. Exceptions to the
nonblocking east-west oversubscriptions should be well understood and depend on the traffic patterns of the endpoints that are
connected to the respective leafs.
The oversubscription ratios described here govern the ratio of the traffic bandwidth between endpoints connected to the same leaf switch
and the traffic bandwidth between endpoints connected to different leaf switches. For example, if the north-south oversubscription ratio is
3:1 at the leafs and 1:1 at the spines, then the bandwidth of traffic between endpoints connected to the same leaf switch should be three
times the bandwidth between endpoints connected to different leafs. From a network endpoint perspective, the network
oversubscriptions should be planned so that the endpoints connected to the network have the required bandwidth for communications.
Specifically, endpoints that are expected to use higher bandwidth should be localized to the same leaf switch (or the same leaf switch pair
when endpoints are multihomed).
The ratio of the aggregate bandwidth of all spine downlinks connected to the leafs and the aggregate bandwidth of all downlinks
connected to the border leafs (described in Edge Services and Border Switches Topology on page 23) defines the north-south
oversubscription at the spine. The north-south oversubscription dictates the traffic destined to the services that are connected to the
border leaf switches and that exit the data center site.
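A worked example of the ratios defined in this section follows; it is a minimal sketch assuming a 48 x 10 GbE / 4 x 40 GbE leaf profile and an assumed split of spine ports between leafs and border leafs, not a prescribed configuration.

# Worked example of leaf-spine oversubscription ratios (assumed port counts).
leaf_downlink_bw = 48 * 10    # Gbps of endpoint-facing bandwidth per leaf
leaf_uplink_bw = 4 * 40       # Gbps from each leaf toward the spines
print("Leaf north-south oversubscription  = %d:1" % (leaf_downlink_bw / leaf_uplink_bw))    # 3:1

spine_to_leaf_bw = 24 * 40    # Gbps of spine downlinks toward the leafs
spine_to_border_bw = 8 * 40   # Gbps of spine downlinks toward the border leafs
print("Spine north-south oversubscription = %d:1" % (spine_to_leaf_bw / spine_to_border_bw))  # 3:1

With the 3:1 leaf ratio shown here and a nonblocking 1:1 east-west ratio at the spines, endpoints that exchange heavy traffic should be placed on the same leaf switch (or leaf switch pair) whenever possible, as noted above.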
Leaf and Spine Scale
Because the endpoints in the network connect only to the leaf switches, the number of leaf switches in the network depends on the
number of interfaces required to connect all the endpoints. The port count requirement should also account for multihomed endpoints.
Because each leaf switch connects to all spines, the port density on the spine switch determines the maximum number of leaf switches
in the topology. A higher oversubscription ratio at the leafs reduces the leaf scale requirements, as well.
The number of spine switches in the network is governed by a combination of the throughput required between the leaf switches, the
number of redundant/ECMP paths between the leafs, and the port density in the spine switches. Higher throughput in the uplinks from
the leaf switches to the spine switches can be achieved by increasing the number of spine switches or bundling the uplinks together in
port-channel interfaces between the leafs and the spines.
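The scale relationships described above reduce to simple arithmetic; the sketch below assumes a 36-port spine and a 48-port leaf profile purely for illustration.

# Topology scale of a leaf-spine (3-stage folded Clos) fabric.
# Platform port counts are assumptions for illustration.
spine_count = 4        # spine switches; also the ECMP path count between leafs
spine_ports = 36       # leaf-facing ports per spine switch
leaf_downlinks = 48    # endpoint-facing ports per leaf switch

# Each leaf consumes one port on every spine, so spine port density caps the leaf count.
max_leafs = spine_ports
max_endpoint_ports = max_leafs * leaf_downlinks

print("ECMP paths between any two leafs:", spine_count)         # 4
print("Maximum leaf switches:           ", max_leafs)           # 36
print("Maximum endpoint-facing ports:   ", max_endpoint_ports)  # 1,728

Multihomed endpoints consume two of these endpoint-facing ports each, so the usable endpoint count is correspondingly lower, as the port-count guidance above indicates.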
Port Speeds for Uplinks and Downlinks
Another consideration for leaf-spine topologies is the choice of port speeds for the uplink and downlink interfaces. Brocade VDX
switches support 10-GbE, 40-GbE, and 100-GbE interfaces, which can be used for uplinks and downlinks. The choice of platform for
the leaf and spine depends on the interface speed and density requirements.
Scale and Future Growth
Another design consideration for leaf-spine topologies is the need to plan for more capacity in the existing infrastructure and to plan for
more endpoints in the future.
Adding more capacity between existing leaf and spine switches can be done by adding spine switches or adding new interfaces between
existing leaf and spine switches. In either case, the port density requirements for the leaf and the spine switches should be accounted for
during the network design process.
If new leaf switches need to be added to accommodate new endpoints in the network, ports at the spine switches are required to connect
the new leaf switches.
In addition, you must decide whether to connect the leaf-spine topology to external networks through border leaf switches or whether to
add an additional networking tier for higher scale. Such designs require additional ports at the spine. These designs are described in
another section of this guide.
Ports on Demand Licensing
Remember that Ports on Demand licensing allows you to expand your capacity at your own pace in that you can invest in a higher port
density platform, yet license only the ports on the Brocade VDX switch that you are using for current needs. This allows for an extensible
and future-proof network architecture without additional cost.
Deployment Model
The links between the leaf and spine can be either Layer 2 or Layer 3 links.
If the links between the leaf and spine are Layer 2 links, the deployment is known as a Layer 2 (L2) leaf-spine deployment or a Layer 2
Clos deployment. You can deploy Brocade VDX switches in a Layer 2 deployment by using Brocade VCS® Fabric technology. With
Brocade VCS Fabric technology, the switches in the leaf-spine topology cluster together and form a fabric that provides a single point for
management, a distributed control plane, embedded automation, and multipathing capabilities from Layer 1 to Layer 3. The benefits of
deploying a VCS fabric are described later in this design guide.
If the links between the leaf and spine are Layer 3 links, the deployment is known as a Layer 3 (L3) leaf-spine deployment or a Layer 3
Clos deployment. You can deploy Brocade VDX and SLX platforms in a Layer 3 deployment by using Brocade IP fabric technology.
Brocade VDX switches can be deployed in the spine and leaf Places in the Network (PINs), whereas the Brocade SLX 9850 can be
deployed in the spine PIN. Brocade IP fabrics provide a highly scalable, programmable, standards-based, and interoperable networking
infrastructure. The benefits of Brocade IP fabrics are described later in this guide.
Data Center Points of Delivery
Figure 16 shows a building block for a data center site. This building block is called a data center point of delivery (PoD). The data center
PoD consists of the networking infrastructure in a leaf-spine topology along with the endpoints grouped together in management/
infrastructure and compute racks. The idea of a PoD is to create a simple, repeatable, and scalable unit for building a data center site at
scale.
FIGURE 16 A Data Center PoD
Optimized 5-Stage Folded Clos Topology (Three Tiers)
Multiple leaf-spine topologies can be aggregated for higher scale in an optimized 5-stage folded Clos topology. This topology adds a
new tier to the network known as the super-spine. The role of the super-spine is to provide connectivity between the spine switches
across multiple data center PoDs. Figure 17 shows four super-spine switches connecting the spine switches across multiple data center
PoDs.
FIGURE 17 An Optimized 5-Stage Folded Clos with Data Center PoDs
The connection between the spines and the super-spines follows the Clos principles:
• Each spine connects to all super-spines in the network.
• Neither the spines nor the super-spines are interconnected with each other.
Similarly, all the benefits of a leaf-spine topology—namely, multiple redundant paths, ECMP, scale-out architecture, and control over
traffic patterns—are realized in the optimized 5-stage folded Clos topology as well.
With an optimized 5-stage Clos topology, a PoD is a simple and replicable unit. Each PoD can be managed independently, including
firmware versions and network configurations. This topology also allows the data center site capacity to scale up by adding new PoDs or
to scale down by removing existing PoDs, without affecting the existing infrastructure, providing elasticity in scale and isolation of failure
domains.
Brocade VDX switches are used for the leaf PIN, whereas depending on scale and features being deployed, either Brocade VDX or SLX
platforms can be deployed at the spine and super-spine PINs.
This topology also provides a basis for interoperation of different deployment models of Brocade VCS fabrics and IP fabrics. This is
described later in this guide.
Design Considerations
The design considerations of oversubscription ratios, port speeds and density, spine and super-spine scale, planning for future growth,
and Brocade Ports on Demand licensing, which were described for the leaf-spine topology, apply to the optimized 5-stage folded Clos
topology as well. Some key considerations are highlighted below.
Oversubscription Ratios
Because the spine switches now have uplinks connecting to the super-spine switches, the north-south oversubscription ratios for the
spine switches dictate the ratio of aggregate bandwidth of traffic switched east-west within a data center PoD to the aggregate bandwidth
of traffic exiting the data center PoD. This is a key consideration from the perspective of network infrastructure and services placement,
application tiers, and (in the case of service providers) tenant placement. In cases of north-south oversubscription at the spines,
endpoints should be placed to optimize traffic within a data center PoD.
At the super-spine switches, the east-west oversubscription ratio is the ratio of the aggregate downlink bandwidth toward one data
center PoD to the aggregate downlink bandwidth toward another. In most cases, this ratio is 1:1.
The ratio of the aggregate bandwidth of all super-spine downlinks connected to the spines and the aggregate bandwidth of all downlinks
connected to the border leafs (described in Edge Services and Border Switches Topology on page 23) defines the north-south
oversubscription at the super-spine. The north-south oversubscription dictates the traffic destined to the services connected to the
border leaf switches and exiting the data center site.
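These ratios reduce to simple bandwidth arithmetic. The helper below is a minimal sketch, with hypothetical port counts and speeds chosen only for illustration, that computes the north-south oversubscription ratio at a spine or super-spine as aggregate downlink bandwidth divided by aggregate uplink bandwidth.

```python
def oversubscription(downlink_ports, downlink_gbps, uplink_ports, uplink_gbps):
    """Ratio of aggregate downlink bandwidth to aggregate uplink bandwidth (n:1)."""
    return (downlink_ports * downlink_gbps) / (uplink_ports * uplink_gbps)

# Hypothetical spine with 24 x 40-GbE toward the leaves and 8 x 40-GbE toward
# the super-spines: traffic leaving the PoD is oversubscribed 3:1.
print(oversubscription(24, 40, 8, 40))   # 3.0

# The 1:1 case recommended in this guide: equal downlink and uplink bandwidth.
print(oversubscription(18, 40, 18, 40))  # 1.0
```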
Deployment Model
The Layer 3 gateways for the endpoints connecting to the networking infrastructure can be at the leaf, at the spine, or at the super-spine.
With Brocade IP fabric architecture (described later in this guide), the Layer 3 gateways are present at the leaf layer. So the links between
the leafs, spines, and super-spines are Layer 3.
With Brocade multi-fabric topology using VCS fabric architecture (described later in this guide), there is a choice of the Layer 3 gateway
at the spine layer or at the super-spine layer. In either case, the links between the leafs and spines are Layer 2 links. If the Layer 3
gateway is at the spine layer, the links between the spine and super-spine are Layer 3; otherwise, those links are Layer 2 as well. These
Layer 2 links are IEEE 802.1Q VLAN trunks, optionally carried over Link Aggregation Control Protocol (LACP) aggregated links. These
architectures are described later in this guide.
Edge Services and Border Switches Topology
For two-tier and three-tier data center topologies, the role of the border switches in the network is to provide external connectivity to the
data center site. In addition, as all traffic enters and exits the data center through the border leaf switches, they present the ideal location in
the network to connect network services like firewalls, load balancers, and edge VPN routers.
The topology for interconnecting the border switches depends on the number of network services that need to be attached and the
oversubscription ratio at the border switches. Figure 18 shows a simple topology for border switches, where the service endpoints
connect directly to the border switches. Border switches in this simple topology are referred to as "border leaf switches" because the
service endpoints connect to them directly.
FIGURE 18 Edge Services PoD
If more services or higher bandwidth for exiting the data center site is needed, multiple sets of border leaf switches can be deployed. The
border switches and the edge racks together form the edge services PoD.
Brocade VDX switches are used for the border leaf PIN. The border leaf switches can also participate in a vLAG pair. This allows the edge
service appliances and servers to dual-home into the border leaf switches for redundancy and higher throughput.
Design Considerations
The following sections describe the design considerations for border switches.
Oversubscription Ratios
The border leaf switches have uplink connections to spines in the leaf-spine topology and to super-spines in the 3-tier topology. They
also have uplink connections to the data center core/WAN edge routers as described in the next section.
The ratio of the aggregate bandwidth of the uplinks connecting to the spines/super-spines and the aggregate bandwidth of the uplink
connecting to the core/edge routers determines the oversubscription ratio for traffic exiting the data center site.
The north-south oversubscription ratios for the services connected to the border leafs are another consideration. Because many of the
services connected to the border leafs may have public interfaces that face external entities like core/edge routers and internal interfaces
that face the internal network, the north-south oversubscription for each of these connections is an important design consideration.
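The same arithmetic applies at the border leaf. The sketch below, using hypothetical link counts, computes the site-exit oversubscription as the ratio of the bandwidth facing the spines/super-spines to the bandwidth facing the data center core/WAN edge routers.

```python
def exit_oversubscription(fabric_uplinks, fabric_gbps, core_uplinks, core_gbps):
    """Oversubscription for traffic exiting the site through the border leafs:
    aggregate bandwidth toward the spines/super-spines versus aggregate
    bandwidth toward the data center core/WAN edge routers."""
    return (fabric_uplinks * fabric_gbps) / (core_uplinks * core_gbps)

# Hypothetical border leaf pair: 16 x 40-GbE toward the super-spines and
# 4 x 100-GbE toward the core/WAN edge -> 1.6:1 for site-exit traffic.
print(exit_oversubscription(16, 40, 4, 100))  # 1.6
```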
Data Center Core/WAN Edge Handoff
The uplinks to the data center core/WAN edge routers from the border leafs carry the traffic entering and exiting the data center site. The
data center core/WAN edge handoff can be Layer 2 and/or Layer 3 in combination with overlay protocols.
The handoff between the border leafs and the data center core/WAN edge may provide domain isolation for the control- and data-plane
protocols running in the internal network and built using one-tier, two-tier, or three-tier topologies. This helps in providing independent
administrative, fault-isolation, and control-plane domains for isolation, scale, and security between the different domains of a data center
site.
Data Center Core and WAN Edge Routers
The border leaf switches connect to the data center core/WAN edge devices in the network to provide external connectivity to the data
center site. Figure 19 shows an example of the connectivity between the vLAG/MCT pair from a single-tier topology, spine switches
from a two-tier topology, border leafs, a collapsed data center core/WAN edge tier, and external networks for Internet and data center
interconnection.
FIGURE 19 Collapsed Data Center Core and WAN Edge Routers Connecting Internet and DCI Fabric to the Border Leaf in the Data
Center Site
Building Data Center Sites with Brocade VCS Fabric Technology
• Data Center Site with Leaf-Spine Topology........................................................................................................................................... 28
• Scaling the Data Center Site with a Multi-Fabric Topology Using VCS Fabrics......................................................................31
Brocade VCS fabrics are Ethernet fabrics built for modern data center infrastructure needs. With Brocade VCS Fabric technology, up to
48 Brocade VDX switches can participate in a VCS fabric. The data plane of the VCS fabric is based on the Transparent Interconnection
of Lots of Links (TRILL) standard, supported by Layer 2 routing protocols that propagate topology information within the fabrics. This
ensures that there are no loops in the fabrics, and there is no need to run Spanning Tree Protocol (STP). Also, none of the links are
blocked. Brocade VCS Fabric technology provides a compelling solution for deploying a Layer 2 Clos topology.
Brocade VCS Fabric technology provides the following benefits:
• TRILL-based Ethernet fabric—Brocade VCS Fabric technology, which is based on the TRILL standard, uses a Layer 2 routing
protocol within the fabric. This ensures that all links are always utilized within the VCS fabric, and there is no need for loop-
prevention protocols like Spanning Tree that block links and provide inefficient utilization of the networking infrastructure.
• Active-Active vLAG—VCS fabrics allow for active-active port channels between networking endpoints and multiple VDX
switches participating in a VCS fabric, enabling redundancy and increased throughput.
• Single point of management—With all switches in a VCS fabric participating in a logical chassis, the entire topology can be
managed as a single switch. This drastically reduces the configuration, validation, monitoring, and troubleshooting complexity of
the fabric.
• Distributed MAC address learning—With Brocade VCS Fabric technology, the MAC addresses that are learned at the edge
ports of the fabric are distributed to all nodes participating within the fabric. This means that the MAC address learning within
the fabric does not rely on flood-and-learn mechanisms, and flooding related to unknown unicast frames is avoided.
• Multipathing from Layer 1 to Layer 3—Brocade VCS Fabric technology provides efficiency and resiliency through the use of
multipathing from Layer 1 to Layer 3:
– At Layer 1, Brocade Trunking (BTRUNK) enables frame-based load balancing between a pair of switches that are part of
the VCS fabric. This provides near-identical link utilization for links participating in a BTRUNK and ensures that thick (or
“elephant”) flows do not congest an inter-switch link (ISL).
– Because of the existence of a Layer 2 routing protocol, Layer 2 ECMP is performed between multiple next hops. This is
critical in a Clos topology, where all spines are ECMP next hops for a leaf that sends traffic to an endpoint connected to
another leaf.
– Layer 3 ECMP using Layer 3 routing protocols ensures that traffic is load-balanced between Layer 3 next hops.
• Distributed control plane—Control-plane and data-plane state information is shared across devices in the VCS fabric, which
enables fabric-wide MAC address learning, multiswitch port channels (vLAG), Distributed Spanning Tree (DiST), and gateway
redundancy protocols like Virtual Router Redundancy Protocol–Extended (VRRP-E) and Fabric Virtual Gateway (FVG), among
others. These enable the VCS fabric to function like a single switch to interface with other entities in the infrastructure—thus
appearing as a single control-plane entity to other devices in the network.
• Embedded automation—Brocade VCS Fabric technology provides embedded turnkey automation built into Brocade Network
OS. These automation features enable zero-touch provisioning of new switches into an existing fabric. Brocade VDX switches
also provide multiple management methods, including the command-line interface (CLI), Simple Network Management
Protocol (SNMP), REST, and Network Configuration Protocol (NETCONF) interfaces.
• Multitenancy at Layers 2 and 3—With Brocade VCS Fabric technology, multitenancy features at Layers 2 and 3 enable traffic
isolation and segmentation across the fabric. Brocade VCS Fabric technology allows an extended range of up to 8,000 Layer 2
domains within the fabric, while isolating overlapping IEEE-802.1Q-based tenant networks into separate Layer 2 domains.
Virtual Routing and Forwarding (VRF) instances, multi-VRF routing protocols, and BGP EVPN enable large-scale Layer 3
multitenancy.
• Ecosystem integration and virtualization features—Brocade VCS Fabric technology integrates with leading industry solutions
and products like OpenStack; VMware products like vSphere, NSX, and vRealize; common infrastructure programming tools like
Python; and Brocade tools like Brocade Network Advisor. Brocade VCS Fabric technology is virtualization-aware and helps
dramatically reduce administrative tasks and enable seamless VM migration with features like Automatic Migration of Port
Profiles (AMPP), which automatically adjusts port-profile information as a VM moves from one server to another.
• Advanced storage features—Brocade VDX switches provide rich storage protocols and features like Fibre Channel over
Ethernet (FCoE), Data Center Bridging (DCB), Monitoring and Alerting Policy Suite (MAPS), and Auto-NAS (Network Attached
Storage), among others, to enable advanced storage networking.
The benefits and features listed above simplify Layer 2 Clos deployment with Brocade VDX switches and Brocade VCS Fabric technology.
The next section describes data center site designs that use Layer 2 Clos built with Brocade VCS Fabric technology.
Data Center Site with Leaf-Spine Topology
Figure 20 shows a data center site built using a leaf-spine topology deployed using Brocade VCS Fabric technology. In this topology, the
spines are connected to the data center core/WAN edge devices directly. The spine PIN in this topology is sometimes referred to as the
"border spine" because it performs both the spine function of east-west traffic switches and the border function of providing an interface
to the data center core/WAN edge.
FIGURE 20 Data Center Site Built with a Leaf-Spine Topology and Brocade VCS Fabric Technology with Border Spine Switches
Figure 21 shows a data center site built using a leaf-spine topology deployed using Brocade VCS Fabric technology. In this topology,
border leaf switches are added along with the edge services PoD for external connectivity and hosting edge services.
FIGURE 21 Data Center Site Built with a Leaf-Spine Topology and Brocade VCS Fabric Technology with Border Leaf Switches
The border leafs in the edge services PoD are built using a separate VCS fabric. The border leafs are connected to the spine switches in
the data center PoD and also to the data center core/WAN edge routers. These links can be either Layer 2 or Layer 3 links, depending on
the requirements of the deployment and the handoff required to the data center core/WAN edge routers. There can be more than one
edge services PoD in the network, depending on the service needs and the bandwidth requirement for connecting to the data center
core/WAN edge routers.
As an alternative to the topology shown in Figure 21, the border leaf switches in the edge services PoD and the data center PoD can be
part of the same VCS fabric, to extend the fabric benefits to the entire data center site. This model is shown in Brocade VCS Fabric on
page 51.
The data center PoDs shown in Figure 20 and Figure 21 are built using Brocade VCS Fabric technology. With Brocade VCS Fabric
technology, we recommend interconnecting the spines with each other (not shown in the figures) to ensure the best traffic path during
failure scenarios.
Scale
Table 1 provides sample scale numbers for 10-GbE ports with key combinations of Brocade VDX platforms at the leaf and spine Places
in the Network (PINs) in a Brocade VCS fabric.
TABLE 1 Scale Numbers for a Data Center Site with a Leaf-Spine Topology Implemented with Brocade VCS Fabric Technology
Leaf Switch | Spine Switch | Leaf Oversubscription Ratio | Leaf Count | Spine Count | VCS Fabric Size (Number of Switches) | 10-GbE Port Count
6740, 6740T, 6740T-1G | 6940-36Q | 3:1 | 36 | 4 | 40 | 1,728
6740, 6740T, 6740T-1G | 8770-4 | 3:1 | 44 | 4 | 48 | 2,112
6940-144S | 6940-36Q | 2:1 | 36 | 12 | 48 | 3,456
6940-144S | 8770-4 | 2:1 | 36 | 12 | 48 | 3,456
The following assumptions are made:
• Links between the leafs and the spines are 40 GbE.
• The Brocade VDX 6740 Switch platforms use 4 × 40-GbE uplinks. The Brocade VDX 6740 platform family includes the
Brocade VDX 6740 Switch, the Brocade VDX 6740T Switch, and the Brocade VDX 6740T-1G Switch. (The Brocade VDX
6740T-1G requires a Capacity on Demand license to upgrade to 10GBASE-T ports.)
• The Brocade VDX 6940-144S platforms use 12 × 40-GbE uplinks.
• The Brocade VDX 8770-4 Switch uses 27 × 40-GbE line cards for its 40-GbE interfaces.
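Under these assumptions, each row of Table 1 follows directly from the port counts. The sketch below reproduces the first row, assuming 48 × 10-GbE access ports on the VDX 6740 family (consistent with the 3:1 ratio over 4 × 40-GbE uplinks) and 36 × 40-GbE ports on the VDX 6940-36Q spine; these figures are illustrative derivations, not a platform specification.

```python
def leaf_spine_scale(leaf_access_ports, leaf_uplinks, spine_ports, spine_count):
    """Derive one row of a leaf-spine scale table.

    Each leaf consumes one port on every spine, so the leaf count is bounded
    by the spine port count; every access port on every leaf is a usable
    10-GbE port.
    """
    leaf_count = spine_ports                       # one leaf link per spine port
    fabric_size = leaf_count + spine_count         # total switches in the fabric
    port_count = leaf_count * leaf_access_ports    # 10-GbE ports offered to endpoints
    oversub = (leaf_access_ports * 10) / (leaf_uplinks * 40)
    return oversub, leaf_count, fabric_size, port_count

# First row of Table 1: VDX 6740 leaf (48 x 10 GbE, 4 x 40 GbE up),
# VDX 6940-36Q spine (36 x 40 GbE), 4 spines.
print(leaf_spine_scale(48, 4, 36, 4))  # (3.0, 36, 40, 1728)
```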
Scaling the Data Center Site with a Multi-Fabric Topology Using VCS Fabrics
If multiple VCS fabrics are needed at a data center site, the optimized 5-stage Clos topology is used to increase scale by interconnecting
the data center PoDs built using leaf-spine topology with Brocade VCS Fabric technology. This deployment architecture is referred to as
a multi-fabric topology using VCS fabrics.
In a multi-fabric topology using VCS fabrics, individual data center PoDs resemble a leaf-spine topology deployed using Brocade VCS
Fabric technology. Note that we recommend that the spines be interconnected in a data center PoD built using Brocade VCS Fabric
technology.
A new super-spine tier is used to interconnect the spine switches across the data center PoDs. In addition, the border leaf switches are
also connected to the super-spine switches. There are two deployment options available to build a multi-fabric topology using VCS fabrics.
In the first deployment option, the links between the spine and super-spine are Layer 2. In order to achieve a loop-free environment and
avoid loop-prevention protocols between the spine and super-spine tiers, the super-spine devices participate in a VCS fabric as well. The
connections between the spine and the super-spines are bundled together in (dual-sided) vLAGs to create a loop-free topology. The
standard VLAN range of 1 to 4094 can be extended between the DC PoDs using IEEE 802.1Q tags over the dual-sided vLAGs. This is
illustrated in Figure 22.
FIGURE 22 Multi-Fabric Topology with VCS Technology—With L2 Links Between Spine and Super-Spine and DC Core/WAN Edge
Connected to Super-Spine
In this topology, the super-spines connect directly into the data center core/WAN edge, which provides external connectivity to the
network. Alternately, Figure 23 shows the border leafs connecting directly to the data center core/WAN edge. In this topology, if the
Layer 3 boundary is at the super-spine, the links between the super-spine and the border leafs carry Layer 3 traffic as well.
FIGURE 23 Multi-Fabric Topology with VCS Technology—With L2 Links Between Spine and Super-Spine and DC Core/WAN Edge
Connected to Border Leafs
In the second deployment option, the links between the spine and super-spine are Layer 3. In cases where the Layer 3 gateways for the
VLANs in the VCS fabrics are at the spine layer, this model provides routing between the data center PoDs. Because the links are
Layer 3, the topology is inherently loop-free. Here the Brocade SLX 9850 is an option for the super-spine PIN. This is illustrated
in Figure 24.
FIGURE 24 Multi-Fabric Topology with VCS Technology—With L3 Links Between Spine and Super-Spine
If Layer 2 extension is required between the DC PoDs, Virtual Fabric Extension (VF-Extension) technology can be used. With VF-
Extension, the spine switches (VDX 6740 and VDX 6940 only) can be configured as VXLAN Tunnel Endpoints (VTEPs). Subsequently,
the VXLAN protocol can be used to extend the Layer 2 VLANs as well as the virtual fabrics between the VCS fabrics of the DC PoDs.
This is described in more detail in the Brocade Data Center Fabric Architectures for Network Virtualization Solution Design Guide.
Figure 23 and Figure 24 show only one edge services PoD, but there can be multiple such PoDs depending on the edge service
endpoint requirements, the oversubscription for traffic that is exchanged with the data center core/WAN edge, and the related handoff
mechanisms.
Scale
Table 2 provides sample scale numbers for 10-GbE ports with key combinations of Brocade VDX and SLX platforms at the leaf, spine,
and super-spine PINs for an optimized 5-stage Clos built with Brocade VCS fabrics. The following assumptions are made:
• Links between the leafs and the spines are 40 GbE. Links between the spines and super-spines are also 40 GbE.
• The Brocade VDX 6740 platforms use 4 × 40-GbE uplinks. The Brocade VDX 6740 platform family includes the Brocade
VDX 6740, Brocade VDX 6740T, and Brocade VDX 6740T-1G. (The Brocade VDX 6740T-1G requires a Capacity on
Demand license to upgrade to 10GBASE-T ports.) Four spines are used to connect the uplinks.
• The Brocade 6940-144S platforms use 12 × 40-GbE uplinks. Twelve spines are used to connect the uplinks.
• The north-south oversubscription ratio at the spines is 1:1. In other words, the bandwidth of uplink ports is equal to the
bandwidth of downlink ports at the spines. A larger port scale can be realized with a higher oversubscription ratio at the spines.
However, a 1:1 oversubscription ratio is used here and is also recommended.
• One spine plane is used for the scale calculations. This means that all spine switches in each data center PoD connect to all
super-spine switches in the topology. This topology is consistent with the optimized 5-stage Clos topology.
• Brocade VDX 8770 platforms use 27 × 40-GbE line cards in performance mode (uses 18 × 40-GbE per line card) for
connections between spines and super-spines. The Brocade VDX 8770-4 supports 72 × 40-GbE ports in performance mode.
The Brocade VDX 8770-8 supports 144 × 40-GbE ports in performance mode.
• The link between the spines and the super-spines is assumed to be Layer 3, and 32-way Layer 3 ECMP is utilized for spine to
super-spine connections. This gives a maximum of 32 super-spines for the multi-fabric topology using Brocade VCS Fabric
technology. Refer to the release notes for your platform to check the ECMP support scale.
NOTE
For a larger port scale for the multi-fabric topology using Brocade VCS Fabric technology, multiple spine planes are used.
Architectures with multiple spine planes are described later.
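As a worked check of the first row of Table 2 under these assumptions, the sketch below derives the per-PoD leaf count, the super-spine count, the number of PoDs, and the 10-GbE port count from the spine and super-spine port densities (36 × 40 GbE on the VDX 6940-36Q; 48 × 10-GbE access ports assumed for the VDX 6740 family), with a 1:1 north-south ratio at the spine and a single spine plane.

```python
def multi_fabric_scale(leaf_access_ports, spine_ports, spines_per_pod,
                       super_spine_ports):
    """Derive one row of Table 2 for a single spine plane with a 1:1
    north-south ratio at the spine."""
    leaves_per_pod = spine_ports // 2          # half the spine ports face the leaves
    super_spines = spine_ports // 2            # the other half face the super-spines
    pods = super_spine_ports // spines_per_pod # each PoD consumes one super-spine
                                               # port per spine in that PoD
    port_count = pods * leaves_per_pod * leaf_access_ports
    return leaves_per_pod, super_spines, pods, port_count

# First row of Table 2: VDX 6740 leaf (48 x 10 GbE), VDX 6940-36Q spine and
# super-spine (36 x 40 GbE each), 4 spines per PoD.
print(multi_fabric_scale(48, 36, 4, 36))  # (18, 18, 9, 7776)
```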
TABLE 2 Sample Scale Numbers for a Data Center Site Built as a Multi-Fabric Topology Using Brocade VCS Fabric Technology

Leaf Switch | Spine Switch | Super-Spine Switch | Leaf Oversubscription Ratio | Leaf Count per Data Center PoD | Spine Count per Data Center PoD | Number of Super-Spines | Number of Data Center PoDs | 10-GbE Port Count
VDX 6740, VDX 6740T, VDX 6740T-1G | VDX 6940-36Q | VDX 6940-36Q | 3:1 | 18 | 4 | 18 | 9 | 7,776
VDX 6940-144S | VDX 6940-36Q | VDX 6940-36Q | 2:1 | 18 | 12 | 18 | 3 | 5,184
VDX 6740, VDX 6740T, VDX 6740T-1G | VDX 8770-4 | VDX 6940-36Q | 3:1 | 32 | 4 | 32 | 9 | 13,824
VDX 6940-144S | VDX 8770-4 | VDX 6940-36Q | 2:1 | 32 | 12 | 32 | 3 | 9,216
VDX 6740, VDX 6740T, VDX 6740T-1G | VDX 6940-36Q | VDX 8770-4 | 3:1 | 18 | 4 | 18 | 18 | 15,552
VDX 6940-144S | VDX 6940-36Q | VDX 8770-4 | 2:1 | 18 | 12 | 18 | 6 | 10,368
VDX 6740, VDX 6740T, VDX 6740T-1G | VDX 8770-4 | VDX 8770-4 | 3:1 | 32 | 4 | 32 | 18 | 27,648
VDX 6940-144S | VDX 8770-4 | VDX 8770-4 | 2:1 | 32 | 12 | 32 | 6 | 18,432
VDX 6740, VDX 6740T, VDX 6740T-1G | VDX 6940-36Q | VDX 8770-8 | 3:1 | 18 | 4 | 18 | 36 | 31,104
VDX 6940-144S | VDX 6940-36Q | VDX 8770-8 | 2:1 | 18 | 12 | 18 | 12 | 20,736
VDX 6740, VDX 6740T, VDX 6740T-1G | VDX 8770-4 | VDX 8770-8 | 3:1 | 32 | 4 | 32 | 36 | 55,296
VDX 6940-144S | VDX 8770-4 | VDX 8770-8 | 2:1 | 32 | 12 | 32 | 12 | 36,864
VDX 6740, VDX 6740T, VDX 6740T-1G | VDX 6940-36Q | SLX 9850-4 | 3:1 | 18 | 4 | 18 | 60 | 51,840
VDX 6940-144S | VDX 6940-36Q | SLX 9850-4 | 2:1 | 18 | 12 | 18 | 20 | 34,560
VDX 6740, VDX 6740T, VDX 6740T-1G | VDX 8770-4 | SLX 9850-4 | 3:1 | 32 | 4 | 32 | 60 | 92,160
VDX 6940-144S | VDX 8770-4 | SLX 9850-4 | 2:1 | 32 | 12 | 32 | 20 | 61,440
VDX 6740, VDX 6740T, VDX 6740T-1G | VDX 6940-36Q | SLX 9850-8 | 3:1 | 18 | 4 | 18 | 120 | 103,680
VDX 6940-144S | VDX 6940-36Q | SLX 9850-8 | 2:1 | 18 | 12 | 18 | 40 | 69,120
VDX 6740, VDX 6740T, VDX 6740T-1G | VDX 8770-4 | SLX 9850-8 | 3:1 | 32 | 4 | 32 | 120 | 184,320
VDX 6940-144S | VDX 8770-4 | SLX 9850-8 | 2:1 | 32 | 12 | 32 | 40 | 122,880
Building Data Center Sites with Brocade IP Fabric
• Data Center Site with Leaf-Spine Topology........................................................................................................................................... 37
• Scaling the Data Center Site with an Optimized 5-Stage Folded Clos.......................................................................................40
Brocade IP fabric provides a Layer 3 Clos deployment architecture for data center sites. With Brocade IP fabric, all links in the Clos
topology are Layer 3 links. The Brocade IP fabric includes the networking architecture, the protocols used to build the network, turnkey
automation features used to provision, validate, remediate, troubleshoot, and monitor the networking infrastructure, and the hardware
differentiation with Brocade VDX and SLX platforms. The following sections describe these aspects of building data center sites with
Brocade IP fabrics.
Because the infrastructure is built on IP, advantages like loop-free communication using industry-standard routing protocols, ECMP, very
high solution scale, and standards-based interoperability are leveraged.
The following are some of the key benefits of deploying a data center site with Brocade IP fabrics:
• Highly scalable infrastructure—Because the Clos topology is built using IP protocols, the scale of the infrastructure is very high.
These port and rack scales are documented with descriptions of the Brocade IP fabric deployment topologies.
• Standards-based and interoperable protocols—Brocade IP fabric is built using industry-standard protocols like the Border
Gateway Protocol (BGP) and Open Shortest Path First (OSPF). These protocols are well understood and provide a solid
foundation for a highly scalable solution. In addition, industry-standard overlay control- and data-plane protocols like BGP-
EVPN and Virtual Extensible Local Area Network (VXLAN) are used to extend the Layer 2 domain and extend tenancy domains
by enabling Layer 2 communications and VM mobility.
• Active-active vLAG pairs—By supporting vLAG pairs on leaf switches, dual-homing of the networking endpoints is supported.
This provides higher redundancy. Also, because the links are active-active, vLAG pairs provide higher throughput to the
endpoints. vLAG pairs are supported for all 10-GbE, 40-GbE, and 100-GbE interface speeds, and up to 32 links can
participate in a vLAG.
• Support for unnumbered interfaces—Using the IP unnumbered interface support available in Brocade Network OS on Brocade
VDX switches, only one IP address per switch is required to configure routing-protocol peering. This significantly reduces IP
address planning and consumption and simplifies operations (see the sketch after this list).
• Turnkey automation—Brocade automated provisioning dramatically reduces the deployment time of network devices and
network virtualization. Prepackaged, server-based automation scripts provision Brocade IP fabric devices for service with
minimal effort.
• Programmable automation—Brocade server-based automation provides support for common industry automation tools such
as Python, Ansible, Puppet, and YANG model-based REST and NETCONF APIs. The prepackaged PyNOS scripting library
and editable automation scripts execute predefined provisioning tasks, while allowing customization for addressing unique
requirements to meet technical or business objectives when the organization is ready.
• Ecosystem integration—The Brocade IP fabric integrates with leading industry solutions and products like VMware vCenter,
NSX, and vRealize. Cloud orchestration and control are provided through OpenStack and OpenDaylight-based Brocade SDN
Controller support.
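As a rough illustration of the address savings from unnumbered interfaces, the sketch below compares loopback-only addressing with per-link /31 addressing for a hypothetical leaf-spine PoD; the switch counts are illustrative only.

```python
def fabric_ip_addresses(leaves, spines, unnumbered):
    """Rough count of IPv4 addresses needed to bring up routing peering in a
    leaf-spine fabric: one loopback per switch, plus (for numbered links)
    a /31 (2 addresses) on every leaf-spine link."""
    loopbacks = leaves + spines
    if unnumbered:
        return loopbacks
    return loopbacks + 2 * leaves * spines

# Hypothetical 36-leaf, 4-spine PoD.
print(fabric_ip_addresses(36, 4, unnumbered=False))  # 328 addresses
print(fabric_ip_addresses(36, 4, unnumbered=True))   # 40 addresses
```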
Data Center Site with Leaf-Spine Topology
A data center PoD built with IP fabrics supports dual-homing of network endpoints using multiswitch port channel interfaces formed
between a pair of Brocade VDX switches participating in a vLAG. This pair of leaf switches is called a vLAG pair (see Figure 25).
FIGURE 25 An IP Fabric Data Center PoD Built with Leaf-Spine Topology and vLAG Pairs for Dual-Homed Network Endpoint
The Brocade VDX switches in a vLAG pair have a link between them for control-plane purposes to create and manage the multiswitch
port-channel interfaces. When network virtualization with BGP EVPN is used, these links also carry switched traffic in case of downlink
failures or single-homed endpoints. Oversubscription of the inter-switch link (ISL) is an important consideration for these scenarios.
Figure 26 shows a data center site deployed using a leaf-spine topology and an edge services PoD. Here the network endpoints are
illustrated as single-homed, but dual homing is enabled through vLAG pairs where required.
FIGURE 26 Data Center Site Built with Leaf-Spine Topology and an Edge Services PoD
The links between the leafs, spines, and border leafs are all Layer 3 links. The border leafs are connected to the spine switches in the data
center PoD and also to the data center core/WAN edge routers. The uplinks from the border leaf to the data center core/WAN edge can
be either Layer 2 or Layer 3, depending on the requirements of the deployment and the handoff required to the data center core/WAN
edge routers.
There can be more than one edge services PoD in the network, depending on service needs and the bandwidth requirement for
connecting to the data center core/WAN edge routers.
Scale
Table 3 provides sample scale numbers for 10-GbE ports with key combinations of Brocade VDX and SLX platforms at the leaf and
spine PINs in a Brocade IP fabric with 40-GbE links between leafs and spines.
The following assumptions are made:
• Links between the leafs and the spines are 40 GbE.
• The Brocade VDX 6740 platforms use 4 × 40-GbE uplinks. The Brocade VDX 6740 platform family includes the Brocade
VDX 6740, Brocade VDX 6740T, and Brocade VDX 6740T-1G. (The Brocade VDX 6740T-1G requires a Capacity on
Demand license to upgrade to 10GBASE-T ports.)
• The Brocade VDX 6940-144S platforms use 12 × 40-GbE uplinks.
• The Brocade VDX 8770 platforms use 27 × 40-GbE line cards in performance mode (18 × 40-GbE per line card) for
connections between leafs and spines. The Brocade VDX 8770-4 supports 72 × 40-GbE ports in performance mode. The
Brocade VDX 8770-8 supports 144 × 40-GbE ports in performance mode.
NOTE
For a larger port scale in Brocade IP fabrics in a 3-stage folded Clos, the Brocade VDX 8770-4 or 8770-8 can be used as a
leaf switch.
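Because the leaf count in a Layer 3 leaf-spine PoD is bounded by the spine's 40-GbE port density, moving to a denser spine scales the fabric directly. The sketch below reproduces the two SLX 9850-8 rows of Table 3, taking the 480 × 40-GbE leaf-facing figure from the table itself and assuming 48 and 96 × 10-GbE access ports for the VDX 6740 family and VDX 6940-144S, respectively (consistent with the 3:1 and 2:1 ratios shown).

```python
def ip_fabric_row(leaf_access_ports, spine_40g_ports, spine_count):
    """One row of Table 3: the leaf count equals the spine's 40-GbE port count."""
    leaf_count = spine_40g_ports
    return leaf_count, leaf_count + spine_count, leaf_count * leaf_access_ports

# SLX 9850-8 spine (480 x 40 GbE toward the leaves, per Table 3):
print(ip_fabric_row(48, 480, 4))   # VDX 6740 leaves:      (480, 484, 23040)
print(ip_fabric_row(96, 480, 12))  # VDX 6940-144S leaves: (480, 492, 46080)
```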
TABLE 3 Scale Numbers for a Leaf-Spine Topology with Brocade IP Fabrics in a Data Center Site with 40-GbE Links Between Leafs and Spines

Leaf Switch | Spine Switch | Leaf Oversubscription Ratio | Leaf Count | Spine Count | IP Fabric Size (Number of Switches) | 10-GbE Port Count
VDX 6740, VDX 6740T, VDX 6740T-1G | VDX 6940-36Q | 3:1 | 36 | 4 | 40 | 1,728
VDX 6740, VDX 6740T, VDX 6740T-1G | VDX 8770-4 | 3:1 | 72 | 4 | 76 | 3,456
VDX 6740, VDX 6740T, VDX 6740T-1G | VDX 8770-8 | 3:1 | 144 | 4 | 148 | 6,912
VDX 6740, VDX 6740T, VDX 6740T-1G | SLX 9850-4 | 3:1 | 240 | 4 | 244 | 11,520
VDX 6740, VDX 6740T, VDX 6740T-1G | SLX 9850-8 | 3:1 | 480 | 4 | 484 | 23,040
VDX 6940-144S | VDX 6940-36Q | 2:1 | 36 | 12 | 48 | 3,456
VDX 6940-144S | VDX 8770-4 | 2:1 | 72 | 12 | 84 | 6,912
VDX 6940-144S | VDX 8770-8 | 2:1 | 144 | 12 | 156 | 13,824
VDX 6940-144S | SLX 9850-4 | 2:1 | 240 | 12 | 252 | 23,040
VDX 6940-144S | SLX 9850-8 | 2:1 | 480 | 12 | 492 | 46,080
Table 4 provides sample scale numbers for 10-GbE ports with key combinations of Brocade VDX and SLX platforms at the leaf and
spine PINs in a Brocade IP fabric with 100-GbE links between leafs and spines.
The following assumptions are made:
• Links between the leafs and the spines are 100 GbE.
• The Brocade VDX 6940-144S platforms use 4 × 100-GbE uplinks.
TABLE 4 Scale Numbers for a Leaf-Spine Topology with Brocade IP Fabrics in a Data Center Site with 100-GbE Links Between Leafs and Spines

Leaf Switch | Spine Switch | Leaf Oversubscription Ratio | Leaf Count | Spine Count | IP Fabric Size (Number of Switches) | 10-GbE Port Count
VDX 6940-144S | VDX 8770-4 | 2.4:1 | 24 | 12 | 36 | 2,304
VDX 6940-144S | VDX 8770-8 | 2.4:1 | 48 | 12 | 60 | 4,608
VDX 6940-144S | SLX 9850-4 | 2.4:1 | 144 | 12 | 156 | 13,824
VDX 6940-144S | SLX 9850-8 | 2.4:1 | 288 | 12 | 300 | 27,648
Scaling the Data Center Site with an Optimized 5-Stage Folded Clos
If a higher scale is required, the optimized 5-stage folded Clos topology is used to interconnect the data center PoDs built using a
Layer 3 leaf-spine topology. An example topology is shown in Figure 27.
FIGURE 27 Data Center Site Built with an Optimized 5-Stage Folded Clos Topology and IP Fabric PoDs
Figure 27 shows only one edge services PoD, but there can be multiple such PoDs, depending on the edge service endpoint
requirements, the amount of oversubscription for traffic exchanged with the data center core/WAN edge, and the related handoff
mechanisms.
Scale
Figure 28 shows a variation of the optimized 5-stage Clos. This variation includes multiple super-spine planes. Each spine in a data
center PoD connects to a separate super-spine plane.
FIGURE 28 Optimized 5-Stage Clos with Multiple Super-Spine Planes
The number of super-spine planes is equal to the number of spines in the data center PoDs. The number of uplink ports on the spine
switch is equal to the number of switches in a super-spine plane. Also, the number of data center PoDs is equal to the port density of the
super-spine switches. Introducing super-spine planes to the optimized 5-stage Clos topology greatly increases the number of data
center PoDs that can be supported. For the purposes of port scale calculations of the Brocade IP fabric in this section, the optimized
5-stage Clos with multiple super-spine plane topology is considered.
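These relationships can be checked with a few lines of arithmetic. The sketch below reproduces the first row of Table 5, assuming 48 × 10-GbE access ports on the VDX 6740 family and 36 × 40-GbE ports on the VDX 6940-36Q used as both spine and super-spine, with a 1:1 north-south ratio at the spine and one super-spine plane per spine.

```python
def multi_plane_scale(leaf_access_ports, spine_ports, spines_per_pod,
                      super_spine_ports):
    """Port scale for the optimized 5-stage Clos with one super-spine plane
    per spine, assuming a 1:1 north-south ratio at the spine."""
    leaves_per_pod = spine_ports // 2       # half of the spine ports face the leaves
    super_spines_per_plane = spine_ports // 2
    planes = spines_per_pod                 # one plane per spine in each PoD
    pods = super_spine_ports                # each PoD uses one port per super-spine
    port_count = pods * leaves_per_pod * leaf_access_ports
    return planes, super_spines_per_plane, pods, port_count

# First row of Table 5: VDX 6740 leaves (48 x 10 GbE), VDX 6940-36Q spine and
# super-spine (36 x 40 GbE), 4 spines per PoD.
print(multi_plane_scale(48, 36, 4, 36))  # (4, 18, 36, 31104)
```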
Table 5 provides sample scale numbers for 10-GbE ports with key combinations of Brocade VDX and SLX platforms at the leaf, spine,
and super-spine PINs for an optimized 5-stage Clos with multiple super-spine planes built with Brocade IP fabric with 40-GbE
interfaces between leafs, spines, and super-spines. The following assumptions are made:
• Links between the leafs and the spines are 40 GbE. Links between spines and super-spines are also 40 GbE.
• The Brocade VDX 6740 platforms use 4 × 40-GbE uplinks. The Brocade VDX 6740 platform family includes the Brocade
VDX 6740, the Brocade VDX 6740T, and the Brocade VDX 6740T-1G. (The Brocade VDX 6740T-1G requires a Capacity
on Demand license to upgrade to 10GBASE-T ports.) Four spines are used for connecting the uplinks.
• The Brocade VDX 6940-144S platforms use 12 × 40-GbE uplinks. Twelve spines are used for connecting the uplinks.
• The north-south oversubscription ratio at the spines is 1:1. In other words, the bandwidth of uplink ports is equal to the
bandwidth of downlink ports at the spines. The number of physical ports utilized from the spine toward the super-spine is equal
to the number of ECMP paths supported. However, a 1:1 oversubscription ratio is used here and is also recommended.
• The Brocade VDX 8770 platforms use 27 × 40-GbE line cards in performance mode (18 × 40 GbE) for connections between
spines and super-spines. The Brocade VDX 8770-4 supports 72 × 40-GbE ports in performance mode. The Brocade VDX
8770-8 supports 144 × 40-GbE ports in performance mode.
• 32-way Layer 3 ECMP is utilized for spine-to-super-spine connections. This gives a maximum of 32 super-spines for the
Brocade IP fabric. Refer to the platform release notes to check the ECMP support scale.
TABLE 5 Scale Numbers for an Optimized 5-Stage Folded Clos Topology with Multiple Super-Spine Planes Built with Brocade IP Fabric with 40 GbE Between Leaf, Spine, and Super-Spine

Leaf Switch | Spine Switch | Super-Spine Switch | Leaf Oversubscription Ratio | Leaf Count per Data Center PoD | Spine Count per Data Center PoD | Number of Super-Spine Planes | Number of Super-Spines in Each Super-Spine Plane | Number of Data Center PoDs | 10-GbE Port Count
VDX 6740, VDX 6740T, VDX 6740T-1G | VDX 6940-36Q | VDX 6940-36Q | 3:1 | 18 | 4 | 4 | 18 | 36 | 31,104
VDX 6940-144S | VDX 6940-36Q | VDX 6940-36Q | 2:1 | 18 | 12 | 12 | 18 | 36 | 62,208
VDX 6740, VDX 6740T, VDX 6740T-1G | VDX 6940-36Q | VDX 8770-4 | 3:1 | 18 | 4 | 4 | 18 | 72 | 62,208
VDX 6940-144S | VDX 6940-36Q | VDX 8770-4 | 2:1 | 18 | 12 | 12 | 18 | 72 | 124,416
VDX 6740, VDX 6740T, VDX 6740T-1G | VDX 6940-36Q | VDX 8770-8 | 3:1 | 18 | 4 | 4 | 18 | 144 | 124,416
VDX 6940-144S | VDX 6940-36Q | VDX 8770-8 | 2:1 | 18 | 12 | 12 | 18 | 144 | 248,832
VDX 6740, VDX 6740T, VDX 6740T-1G | VDX 6940-36Q | SLX 9850-4 | 3:1 | 18 | 4 | 4 | 18 | 240 | 207,360
VDX 6940-144S | VDX 6940-36Q | SLX 9850-4 | 2:1 | 18 | 12 | 12 | 18 | 240 | 414,720
VDX 6740, VDX 6740T, VDX 6740T-1G | VDX 6940-36Q | SLX 9850-8 | 3:1 | 18 | 4 | 4 | 18 | 480 | 414,720
VDX 6940-144S | VDX 6940-36Q | SLX 9850-8 | 2:1 | 18 | 12 | 12 | 18 | 480 | 829,440
VDX 6740, VDX 6740T, VDX 6740T-1G | VDX 8770-4 | VDX 8770-4 | 3:1 | 32 | 4 | 4 | 32 | 72 | 110,592
VDX 6940-144S | VDX 8770-4 | VDX 8770-4 | 2:1 | 32 | 12 | 12 | 32 | 72 | 221,184
VDX 6740, VDX 6740T, VDX 6740T-1G | VDX 8770-4 | VDX 8770-8 | 3:1 | 32 | 4 | 4 | 32 | 144 | 221,184
VDX 6940-144S | VDX 8770-4 | VDX 8770-8 | 2:1 | 32 | 12 | 12 | 32 | 144 | 442,368
VDX 6740, VDX 6740T, VDX 6740T-1G | VDX 8770-8 | VDX 8770-8 | 3:1 | 32 | 4 | 4 | 32 | 144 | 221,184
VDX 6940-144S | VDX 8770-8 | VDX 8770-8 | 2:1 | 32 | 12 | 12 | 32 | 144 | 442,368
VDX 6740, VDX 6740T, VDX 6740T-1G | SLX 9850-4 | SLX 9850-4 | 3:1 | 32 | 4 | 4 | 32 | 240 | 368,640
VDX 6940-144S | SLX 9850-4 | SLX 9850-4 | 2:1 | 32 | 12 | 12 | 32 | 240 | 737,280
VDX 6740, VDX 6740T, VDX 6740T-1G | SLX 9850-4 | SLX 9850-8 | 3:1 | 32 | 4 | 4 | 32 | 480 | 737,280
VDX 6940-144S | SLX 9850-4 | SLX 9850-8 | 2:1 | 32 | 12 | 12 | 32 | 480 | 1,474,560
Table 6 provides sample scale numbers for 10-GbE ports with key combinations of Brocade VDX and SLX platforms at the leaf, spine,
and super-spine PINs for an optimized 5-stage Clos with multiple super-spine planes built with Brocade IP fabric with 100-GbE
interfaces between the leafs, spines, and super spines. The following assumptions are made:
• Links between the leafs and the spines are 100 GbE. Links between spines and super-spines are also 100 GbE.
• The Brocade VDX 6940-144S platforms use 4 × 100-GbE uplinks. Four spines are used for connecting the uplinks.
• The north-south oversubscription ratio at the spines is 1:1. In other words, the bandwidth of uplink ports is equal to the
bandwidth of downlink ports at the spines. The number of physical ports utilized from spine toward super-spine is equal to the
number of ECMP paths supported. However, a 1:1 subscription ratio is used here and is also recommended.
• 32-way Layer 3 ECMP is utilized for spine-to-super-spine connections. This gives a maximum of 32 super-spines for the
Brocade IP fabric. Refer to the platform release notes to check the ECMP support scale.
TABLE 6 Scale Numbers for an Optimized 5-Stage Folded Clos Topology with Multiple Super-Spine Planes Built with Brocade IP Fabric with 100 GbE Between Leaf, Spine, and Super-Spine

Leaf Switch | Spine Switch | Super-Spine Switch | Leaf Oversubscription Ratio | Leaf Count per Data Center PoD | Spine Count per Data Center PoD | Number of Super-Spine Planes | Number of Super-Spines in Each Super-Spine Plane | Number of Data Center PoDs | 10-GbE Port Count
VDX 6940-144S | VDX 8770-4 | VDX 8770-4 | 2.4:1 | 12 | 4 | 4 | 12 | 24 | 27,648
VDX 6940-144S | VDX 8770-4 | VDX 8770-8 | 2.4:1 | 12 | 4 | 4 | 12 | 48 | 55,296
VDX 6940-144S | VDX 8770-8 | VDX 8770-8 | 2.4:1 | 24 | 4 | 4 | 24 | 48 | 110,592
VDX 6940-144S | SLX 9850-4 | SLX 9850-4 | 2.4:1 | 32 | 4 | 4 | 32 | 144 | 442,368
VDX 6940-144S | SLX 9850-4 | SLX 9850-8 | 2.4:1 | 32 | 4 | 4 | 32 | 288 | 884,736
Even higher scale can be achieved by physically connecting all available ports on the switching platform and using BGP policies to
enforce the maximum ECMP scale supported by the platform. This increases the port scale of the topology while still making full use
of the supported ECMP scale. Note that this arrangement still provides a nonblocking 1:1 north-south subscription at the
spine in most scenarios.
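A worked example of this arrangement, reproducing the first row of Table 7: with a VDX 8770-8 (144 × 40 GbE in performance mode) as both spine and super-spine, all 72 spine uplinks are cabled while BGP policy limits the programmed ECMP next hops to the assumed 32-way platform maximum. The 48 × 10-GbE access-port figure for the VDX 6740 family is an assumption consistent with the 3:1 ratio.

```python
def policy_capped_scale(leaf_access_ports, spine_ports, super_spine_ports,
                        ecmp_max=32):
    """Scale when every spine port is cabled and BGP policy limits the
    number of active ECMP next hops to the platform maximum (assumed 32)."""
    leaves_per_pod = spine_ports // 2            # 1:1 north-south at the spine
    super_spines_per_plane = spine_ports // 2    # may exceed the ECMP maximum
    active_paths = min(super_spines_per_plane, ecmp_max)
    pods = super_spine_ports
    port_count = pods * leaves_per_pod * leaf_access_ports
    return super_spines_per_plane, active_paths, pods, port_count

# First row of Table 7: VDX 6740 leaves (48 x 10 GbE), VDX 8770-8 spine and
# super-spine (144 x 40 GbE in performance mode).
print(policy_capped_scale(48, 144, 144))  # (72, 32, 144, 497664)
```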
In Table 7, the scale for a 5-stage folded Clos with 40-GbE interfaces between leaf, spine, and super-spine is shown assuming that BGP
policies are used to enforce the ECMP maximum scale.
TABLE 7 Scale Numbers for an Optimized 5-Stage Folded Clos Topology with Multiple Super-Spine Planes, BGP Policy-Enforced ECMP Maximum, and 40 GbE Between Leafs, Spines, and Super-Spines

Leaf Switch | Spine Switch | Super-Spine Switch | Leaf Oversubscription Ratio | Leaf Count per Data Center PoD | Spine Count per Data Center PoD | Number of Super-Spine Planes | Number of Super-Spines in Each Super-Spine Plane | Number of Data Center PoDs | 10-GbE Port Count
VDX 6740, VDX 6740T, VDX 6740T-1G | VDX 8770-8 | VDX 8770-8 | 3:1 | 72 | 4 | 4 | 72 | 144 | 497,664
VDX 6940-144S | VDX 8770-8 | VDX 8770-8 | 2:1 | 72 | 4 | 4 | 72 | 144 | 995,328
VDX 6740, VDX 6740T, VDX 6740T-1G | SLX 9850-4 | SLX 9850-4 | 3:1 | 120 | 4 | 4 | 120 | 240 | 1,382,400
VDX 6940-144S | SLX 9850-4 | SLX 9850-4 | 2:1 | 120 | 12 | 12 | 120 | 240 | 2,764,800
VDX 6740, VDX 6740T, VDX 6740T-1G | SLX 9850-4 | SLX 9850-8 | 3:1 | 120 | 4 | 4 | 120 | 480 | 2,764,800
VDX 6940-144S | SLX 9850-4 | SLX 9850-8 | 2:1 | 120 | 12 | 12 | 120 | 480 | 5,529,600
VDX 6740, VDX 6740T, VDX 6740T-1G | SLX 9850-8 | SLX 9850-8 | 3:1 | 240 | 4 | 4 | 240 | 480 | 5,529,600
VDX 6940-144S | SLX 9850-8 | SLX 9850-8 | 2:1 | 240 | 12 | 12 | 240 | 480 | 11,059,200
In Table 8, the scale for a 5-stage folded Clos with 100-GbE interfaces between leaf, spine, and super spine is shown assuming that
BGP policies are used to enforce the ECMP maximum scale.
TABLE 8 Scale Numbers for an Optimized 5-Stage Folded Clos Topology with Multiple Super-Spine Planes, BGP Policy-Enforced ECMP Maximum, and 100 GbE Between Leafs, Spines, and Super-Spines

Leaf Switch | Spine Switch | Super-Spine Switch | Leaf Oversubscription Ratio | Leaf Count per Data Center PoD | Spine Count per Data Center PoD | Number of Super-Spine Planes | Number of Super-Spines in Each Super-Spine Plane | Number of Data Center PoDs | 10-GbE Port Count
VDX 6940-144S | SLX 9850-4 | SLX 9850-4 | 2.4:1 | 72 | 4 | 4 | 72 | 144 | 995,328
VDX 6940-144S | SLX 9850-4 | SLX 9850-8 | 2.4:1 | 72 | 4 | 4 | 72 | 288 | 1,990,656
VDX 6940-144S | SLX 9850-8 | SLX 9850-8 | 2.4:1 | 144 | 4 | 4 | 144 | 288 | 3,981,312
Scaling the Data Center Site with an Optimized 5-Stage Folded Clos
Brocade Data Center Fabric Architectures
44 53-1004601-02
brocade-dc-fabric-architectures-sdg
brocade-dc-fabric-architectures-sdg
brocade-dc-fabric-architectures-sdg
brocade-dc-fabric-architectures-sdg
brocade-dc-fabric-architectures-sdg
brocade-dc-fabric-architectures-sdg
brocade-dc-fabric-architectures-sdg
brocade-dc-fabric-architectures-sdg
brocade-dc-fabric-architectures-sdg
brocade-dc-fabric-architectures-sdg
brocade-dc-fabric-architectures-sdg
brocade-dc-fabric-architectures-sdg
brocade-dc-fabric-architectures-sdg
brocade-dc-fabric-architectures-sdg
brocade-dc-fabric-architectures-sdg
brocade-dc-fabric-architectures-sdg
brocade-dc-fabric-architectures-sdg
brocade-dc-fabric-architectures-sdg
brocade-dc-fabric-architectures-sdg
brocade-dc-fabric-architectures-sdg
brocade-dc-fabric-architectures-sdg

More Related Content

Viewers also liked

Seminario 7
Seminario 7Seminario 7
Seminario 7
edunavrod
 
A/B тестирование в eCommerce
A/B тестирование в eCommerceA/B тестирование в eCommerce
A/B тестирование в eCommerce
Michail Гаркунов
 
STATUS: Strategic Agendas Template
STATUS: Strategic Agendas TemplateSTATUS: Strategic Agendas Template
STATUS: Strategic Agendas Template
URBASOFIA
 
2010 State of the company recruiting
2010 State of the company recruiting2010 State of the company recruiting
2010 State of the company recruiting
Jivko Jekov
 
Que suis-je ?
Que suis-je ?Que suis-je ?
Que suis-je ?
Pierrot Caron
 
консультация театр
консультация театрконсультация театр
консультация театр
virtualtaganrog
 
20161222 srws第五回 Risk of Bias 2.0 toolを用いた文献評価
20161222 srws第五回 Risk of Bias 2.0 toolを用いた文献評価20161222 srws第五回 Risk of Bias 2.0 toolを用いた文献評価
20161222 srws第五回 Risk of Bias 2.0 toolを用いた文献評価
SR WS
 
eco -ITSM Presentation .ppt
eco -ITSM Presentation .ppteco -ITSM Presentation .ppt
eco -ITSM Presentation .ppt
Macanta Consulting Pty Ltd
 
Pak Afghan relations
Pak Afghan relationsPak Afghan relations
Pak Afghan relations
Areej Fatima
 
Опыт международных продаж видеостримера Flussonic / Максим Лапшин (Erlyvideo)
Опыт международных продаж видеостримера Flussonic / Максим Лапшин (Erlyvideo)Опыт международных продаж видеостримера Flussonic / Максим Лапшин (Erlyvideo)
Опыт международных продаж видеостримера Flussonic / Максим Лапшин (Erlyvideo)
Ontico
 
Радости и гадости регрессионного тестирования вёрстки / Алексей Малейков (HTM...
Радости и гадости регрессионного тестирования вёрстки / Алексей Малейков (HTM...Радости и гадости регрессионного тестирования вёрстки / Алексей Малейков (HTM...
Радости и гадости регрессионного тестирования вёрстки / Алексей Малейков (HTM...
Ontico
 
Das Geheimnis Re-inkarnation
Das Geheimnis Re-inkarnationDas Geheimnis Re-inkarnation
Das Geheimnis Re-inkarnation
Gerold Szonn
 
DC/OS – больше чем PAAS, Никита Борзых (Express 42)
DC/OS – больше чем PAAS, Никита Борзых (Express 42)DC/OS – больше чем PAAS, Никита Борзых (Express 42)
DC/OS – больше чем PAAS, Никита Борзых (Express 42)
Ontico
 
Metodologias ativas
Metodologias ativas Metodologias ativas
Angular 2 не так уж и плох... А если задуматься, то и просто хорош / Алексей ...
Angular 2 не так уж и плох... А если задуматься, то и просто хорош / Алексей ...Angular 2 не так уж и плох... А если задуматься, то и просто хорош / Алексей ...
Angular 2 не так уж и плох... А если задуматься, то и просто хорош / Алексей ...
Ontico
 
Vue.js и его брат-близнец Vue-server.js / Андрей Солодовников (НГС)
Vue.js и его брат-близнец Vue-server.js / Андрей Солодовников (НГС)Vue.js и его брат-близнец Vue-server.js / Андрей Солодовников (НГС)
Vue.js и его брат-близнец Vue-server.js / Андрей Солодовников (НГС)
Ontico
 
Angular vs Angular 2 vs React. Сергей Александров
Angular vs Angular 2 vs React. Сергей АлександровAngular vs Angular 2 vs React. Сергей Александров
Angular vs Angular 2 vs React. Сергей Александров
EatDog
 
An Introduction to Biofuels
An Introduction to BiofuelsAn Introduction to Biofuels
An Introduction to Biofuels
Hawkesdale P12 College
 
Biodiesel Presentation
Biodiesel PresentationBiodiesel Presentation
Biodiesel Presentation
guest25c2e72
 
Getting Started with Socialfave
Getting Started with SocialfaveGetting Started with Socialfave
Getting Started with Socialfave
socialfave
 

Viewers also liked (20)

Seminario 7
Seminario 7Seminario 7
Seminario 7
 
A/B тестирование в eCommerce
A/B тестирование в eCommerceA/B тестирование в eCommerce
A/B тестирование в eCommerce
 
STATUS: Strategic Agendas Template
STATUS: Strategic Agendas TemplateSTATUS: Strategic Agendas Template
STATUS: Strategic Agendas Template
 
2010 State of the company recruiting
2010 State of the company recruiting2010 State of the company recruiting
2010 State of the company recruiting
 
Que suis-je ?
Que suis-je ?Que suis-je ?
Que suis-je ?
 
консультация театр
консультация театрконсультация театр
консультация театр
 
20161222 srws第五回 Risk of Bias 2.0 toolを用いた文献評価
20161222 srws第五回 Risk of Bias 2.0 toolを用いた文献評価20161222 srws第五回 Risk of Bias 2.0 toolを用いた文献評価
20161222 srws第五回 Risk of Bias 2.0 toolを用いた文献評価
 
eco -ITSM Presentation .ppt
eco -ITSM Presentation .ppteco -ITSM Presentation .ppt
eco -ITSM Presentation .ppt
 
Pak Afghan relations
Pak Afghan relationsPak Afghan relations
Pak Afghan relations
 
Опыт международных продаж видеостримера Flussonic / Максим Лапшин (Erlyvideo)
Опыт международных продаж видеостримера Flussonic / Максим Лапшин (Erlyvideo)Опыт международных продаж видеостримера Flussonic / Максим Лапшин (Erlyvideo)
Опыт международных продаж видеостримера Flussonic / Максим Лапшин (Erlyvideo)
 
Радости и гадости регрессионного тестирования вёрстки / Алексей Малейков (HTM...
Радости и гадости регрессионного тестирования вёрстки / Алексей Малейков (HTM...Радости и гадости регрессионного тестирования вёрстки / Алексей Малейков (HTM...
Радости и гадости регрессионного тестирования вёрстки / Алексей Малейков (HTM...
 
Das Geheimnis Re-inkarnation
Das Geheimnis Re-inkarnationDas Geheimnis Re-inkarnation
Das Geheimnis Re-inkarnation
 
DC/OS – больше чем PAAS, Никита Борзых (Express 42)
DC/OS – больше чем PAAS, Никита Борзых (Express 42)DC/OS – больше чем PAAS, Никита Борзых (Express 42)
DC/OS – больше чем PAAS, Никита Борзых (Express 42)
 
Metodologias ativas
Metodologias ativas Metodologias ativas
Metodologias ativas
 
Angular 2 не так уж и плох... А если задуматься, то и просто хорош / Алексей ...
Angular 2 не так уж и плох... А если задуматься, то и просто хорош / Алексей ...Angular 2 не так уж и плох... А если задуматься, то и просто хорош / Алексей ...
Angular 2 не так уж и плох... А если задуматься, то и просто хорош / Алексей ...
 
Vue.js и его брат-близнец Vue-server.js / Андрей Солодовников (НГС)
Vue.js и его брат-близнец Vue-server.js / Андрей Солодовников (НГС)Vue.js и его брат-близнец Vue-server.js / Андрей Солодовников (НГС)
Vue.js и его брат-близнец Vue-server.js / Андрей Солодовников (НГС)
 
Angular vs Angular 2 vs React. Сергей Александров
Angular vs Angular 2 vs React. Сергей АлександровAngular vs Angular 2 vs React. Сергей Александров
Angular vs Angular 2 vs React. Сергей Александров
 
An Introduction to Biofuels
An Introduction to BiofuelsAn Introduction to Biofuels
An Introduction to Biofuels
 
Biodiesel Presentation
Biodiesel PresentationBiodiesel Presentation
Biodiesel Presentation
 
Getting Started with Socialfave
Getting Started with SocialfaveGetting Started with Socialfave
Getting Started with Socialfave
 

Similar to brocade-dc-fabric-architectures-sdg

Opm costing
Opm costingOpm costing
Opm costing
Madhuri Uppala
 
Primavera help 2012
Primavera help 2012Primavera help 2012
Primavera help 2012
Dr Ezzat Mansour
 
Oracle 10g release 1
Oracle 10g release  1Oracle 10g release  1
Oracle 10g release 1
Rakesh Kumar Pandey
 
Shipping execution user guide r12
Shipping execution user guide r12Shipping execution user guide r12
Shipping execution user guide r12
aruna777
 
Oracle9
Oracle9Oracle9
Oracle9
Guille Anaya
 
Oracle® business intelligence
Oracle® business intelligenceOracle® business intelligence
Oracle® business intelligence
George Heretis
 
Developer’s guide for oracle data integrator
Developer’s guide for oracle data integratorDeveloper’s guide for oracle data integrator
Developer’s guide for oracle data integrator
Abhishek Srivastava
 
Web logic installation document
Web logic installation documentWeb logic installation document
Web logic installation document
Taoqir Hassan
 
using-advanced-controls (1).pdf
using-advanced-controls (1).pdfusing-advanced-controls (1).pdf
using-advanced-controls (1).pdf
Hussein Abdelrahman
 
E13882== ORACLE SOA COOK BOOK
E13882== ORACLE SOA COOK BOOKE13882== ORACLE SOA COOK BOOK
E13882== ORACLE SOA COOK BOOK
prathap kumar
 
Engineering user guide
Engineering user guideEngineering user guide
Engineering user guide
Rajesh Kumar
 
Oracle® application server
Oracle® application serverOracle® application server
Oracle® application server
FITSFSd
 
Oracle® application server forms and reports services installation guide
Oracle® application server forms and reports services installation guideOracle® application server forms and reports services installation guide
Oracle® application server forms and reports services installation guide
FITSFSd
 
Oracle_10g_PLSQL_Guia_Ref.pdf
Oracle_10g_PLSQL_Guia_Ref.pdfOracle_10g_PLSQL_Guia_Ref.pdf
Oracle_10g_PLSQL_Guia_Ref.pdf
Carlos Valente Albarracin
 
E29632
E29632E29632
E29632
ssfdsdsf
 
Whats new in Primavera Prime 15.2?
Whats new in Primavera Prime 15.2?Whats new in Primavera Prime 15.2?
Whats new in Primavera Prime 15.2?
p6academy
 
E10132
E10132E10132
Instalacion de apex
Instalacion de apexInstalacion de apex
Instalacion de apex
Diego Barrios
 
A73073
A73073A73073
Oracle database gateway 11g r2 installation and configuration guide
Oracle database gateway 11g r2 installation and configuration guideOracle database gateway 11g r2 installation and configuration guide
Oracle database gateway 11g r2 installation and configuration guide
Farrukh Muhammad
 

Similar to brocade-dc-fabric-architectures-sdg (20)

Opm costing
Opm costingOpm costing
Opm costing
 
Primavera help 2012
Primavera help 2012Primavera help 2012
Primavera help 2012
 
Oracle 10g release 1
Oracle 10g release  1Oracle 10g release  1
Oracle 10g release 1
 
Shipping execution user guide r12
Shipping execution user guide r12Shipping execution user guide r12
Shipping execution user guide r12
 
Oracle9
Oracle9Oracle9
Oracle9
 
Oracle® business intelligence
Oracle® business intelligenceOracle® business intelligence
Oracle® business intelligence
 
Developer’s guide for oracle data integrator
Developer’s guide for oracle data integratorDeveloper’s guide for oracle data integrator
Developer’s guide for oracle data integrator
 
Web logic installation document
Web logic installation documentWeb logic installation document
Web logic installation document
 
using-advanced-controls (1).pdf
using-advanced-controls (1).pdfusing-advanced-controls (1).pdf
using-advanced-controls (1).pdf
 
E13882== ORACLE SOA COOK BOOK
E13882== ORACLE SOA COOK BOOKE13882== ORACLE SOA COOK BOOK
E13882== ORACLE SOA COOK BOOK
 
Engineering user guide
Engineering user guideEngineering user guide
Engineering user guide
 
Oracle® application server
Oracle® application serverOracle® application server
Oracle® application server
 
Oracle® application server forms and reports services installation guide
Oracle® application server forms and reports services installation guideOracle® application server forms and reports services installation guide
Oracle® application server forms and reports services installation guide
 
Oracle_10g_PLSQL_Guia_Ref.pdf
Oracle_10g_PLSQL_Guia_Ref.pdfOracle_10g_PLSQL_Guia_Ref.pdf
Oracle_10g_PLSQL_Guia_Ref.pdf
 
E29632
E29632E29632
E29632
 
Whats new in Primavera Prime 15.2?
Whats new in Primavera Prime 15.2?Whats new in Primavera Prime 15.2?
Whats new in Primavera Prime 15.2?
 
E10132
E10132E10132
E10132
 
Instalacion de apex
Instalacion de apexInstalacion de apex
Instalacion de apex
 
A73073
A73073A73073
A73073
 
Oracle database gateway 11g r2 installation and configuration guide
Oracle database gateway 11g r2 installation and configuration guideOracle database gateway 11g r2 installation and configuration guide
Oracle database gateway 11g r2 installation and configuration guide
 

brocade-dc-fabric-architectures-sdg

  • 1. SOLUTION DESIGN GUIDE Brocade Data Center Fabric Architectures 53-1004601-02 06 October 2016
  • 2. © 2016, Brocade Communications Systems, Inc. All Rights Reserved. Brocade, the B-wing symbol, and MyBrocade are registered trademarks of Brocade Communications Systems, Inc., in the United States and in other countries. Other brands, product names, or service names mentioned of Brocade Communications Systems, Inc. are listed at www.brocade.com/en/legal/ brocade-Legal-intellectual-property/brocade-legal-trademarks.html. Other marks may belong to third parties. Notice: This document is for informational purposes only and does not set forth any warranty, expressed or implied, concerning any equipment, equipment feature, or service offered or to be offered by Brocade. Brocade reserves the right to make changes to this document at any time, without notice, and assumes no responsibility for its use. This informational document describes features that may not be currently available. Contact a Brocade sales office for information on feature and product availability. Export of technical data contained in this document may require an export license from the United States government. The authors and Brocade Communications Systems, Inc. assume no liability or responsibility to any person or entity with respect to the accuracy of this document or any loss, cost, liability, or damages arising from the information contained herein or the computer programs that accompany it. The product described by this document may contain open source software covered by the GNU General Public License or other open source license agreements. To find out which open source software is included in Brocade products, view the licensing terms applicable to the open source software, and obtain a copy of the programming source code, please visit http://www.brocade.com/support/oscd. Brocade Data Center Fabric Architectures 2 53-1004601-02
Contents
Preface 5
  Document History 5
  About the Author 5
  Overview 5
  Purpose of This Document 5
  About Brocade 6
Data Center Networking Architectures 7
  Throughput and Traffic Patterns 7
  Scale Requirements of Cloud Networks 9
  Traffic Isolation, Segmentation, and Application Continuity 9
Data Center Networks: Building Blocks 11
  Brocade VDX and SLX Platforms 11
    VDX 6740 11
    VDX 6940 12
    VDX 8770 13
    SLX 9850 14
  Networking Endpoints 15
  Single-Tier Topology 16
    Design Considerations 17
    Oversubscription Ratios 17
    Port Density and Speeds for Uplinks and Downlinks 17
    Scale and Future Growth 17
    Ports on Demand Licensing 18
  Leaf-Spine Topology (Two Tiers) 18
    Design Considerations 19
    Oversubscription Ratios 19
    Leaf and Spine Scale 19
    Port Speeds for Uplinks and Downlinks 20
    Scale and Future Growth 20
    Ports on Demand Licensing 20
    Deployment Model 20
    Data Center Points of Delivery 21
  Optimized 5-Stage Folded Clos Topology (Three Tiers) 21
    Design Considerations 22
    Oversubscription Ratios 23
    Deployment Model 23
  Edge Services and Border Switches Topology 23
    Design Considerations 24
    Oversubscription Ratios 24
    Data Center Core/WAN Edge Handoff 24
    Data Center Core and WAN Edge Routers 25
Building Data Center Sites with Brocade VCS Fabric Technology 27
  Data Center Site with Leaf-Spine Topology 28
    Scale 30
  Scaling the Data Center Site with a Multi-Fabric Topology Using VCS Fabrics 31
    Scale 34
Building Data Center Sites with Brocade IP Fabric 37
  Data Center Site with Leaf-Spine Topology 37
    Scale 39
  Scaling the Data Center Site with an Optimized 5-Stage Folded Clos 40
    Scale 41
Building Data Center Sites with Layer 2 and Layer 3 Fabrics 45
Scaling a Data Center Site with a Data Center Core 47
Control-Plane and Hardware-Scale Considerations 49
  Control-Plane Architectures 50
    Single-Tier Data Center Sites 50
    Brocade VCS Fabric 51
    Multi-Fabric Topology Using VCS Technology 54
    Brocade IP Fabric 56
Routing Protocol Architecture for Brocade IP Fabric and Multi-Fabric Topology Using VCS Technology 59
  eBGP-based Brocade IP Fabric and Multi-Fabric Topology 59
  iBGP-based Brocade IP Fabric and Multi-Fabric Topology 60
Choosing an Architecture for Your Data Center 63
  High-Level Comparison Table 63
  Deployment Scale Considerations 64
  Fabric Architecture 65
  Recommendations 65
Preface
• Document History 5
• About the Author 5
• Overview 5
• Purpose of This Document 5
• About Brocade 6

Document History

Date                 Part Number     Description
February 9, 2016                     Initial release with DC fabric architectures, network virtualization, Data Center Interconnect, and automation content.
September 13, 2016   53-1004601-01   Initial release of solution design guide for DC fabric architectures.
October 06, 2016     53-1004601-02   Replaced the figures for the Brocade VDX 6940-36Q and the Brocade VDX 6940-144S.

About the Author
Anuj Dewangan is the lead Technical Marketing Engineer (TME) for Brocade's data center switching products. He holds a CCIE in Routing and Switching and has several years of experience in the networking industry with roles in software development, solution validation, technical marketing, and product management. At Brocade, his focus is creating reference architectures, working with customers and account teams to address their challenges with data center networks, creating product and solution collateral, and helping define products and solutions. He regularly speaks at industry events and has authored several white papers and solution design guides on data center networking.

The author would like to acknowledge Jeni Lloyd and Patrick LaPorte for their in-depth review of this solution guide and for providing valuable insight, edits, and feedback.

Overview
Based on the principles of the New IP, Brocade is building on Brocade® VDX® and Brocade® SLX® platforms by delivering cloud-optimized network and network virtualization architectures and delivering new automation innovations to meet customer demand for higher levels of scale, agility, and operational efficiency. The scalable and highly automated Brocade data center fabric architectures described in this solution design guide make it easy for infrastructure planners to architect, automate, and integrate with current and future data center technologies while they transition to their own cloud-optimized data center on their own time and terms.

Purpose of This Document
This guide helps network architects, virtualization architects, and network engineers to make informed design, architecture, and deployment decisions that best meet their technical and business objectives. Network architecture and deployment options for scaling from tens to hundreds of thousands of servers are discussed in detail.
About Brocade
Brocade® (NASDAQ: BRCD) networking solutions help the world's leading organizations transition smoothly to a world where applications and information reside anywhere. This vision is designed to deliver key business benefits such as unmatched simplicity, non-stop networking, application optimization, and investment protection. Innovative Ethernet and storage networking solutions for data center, campus, and service provider networks help reduce complexity and cost while enabling virtualization and cloud computing to increase business agility. To help ensure a complete solution, Brocade partners with world-class IT companies and provides comprehensive education, support, and professional services offerings (www.brocade.com).
Data Center Networking Architectures
• Throughput and Traffic Patterns 7
• Scale Requirements of Cloud Networks 9
• Traffic Isolation, Segmentation, and Application Continuity 9

Data center networking architectures have evolved with the changing requirements of modern data center and cloud environments. This evolution has been triggered by a combination of industry technology trends, such as server virtualization, and architectural changes in the applications being deployed in data centers. These technological and architectural changes are affecting the way private and public cloud networks are designed. As these changes proliferate in traditional data centers, the need to adopt modern data center architectures has been growing.

Throughput and Traffic Patterns
Traditional data center network architectures were a derivative of the three-tier topology prevalent in enterprise campus environments. The tiers are defined as Access, Aggregation, and Core. Figure 1 shows an example of a data center network built using a traditional three-tier topology.

FIGURE 1 Three-Tier Data Center Architecture

The three-tier topology was architected with the requirements of an enterprise campus in mind. In a campus network, the basic requirement of the access layer is to provide connectivity to workstations. These workstations exchange traffic either with an enterprise data center for business application access or with the Internet. As a result, most traffic in this network traverses in and out through the tiers in the network. This traffic pattern is commonly referred to as north-south traffic.

The throughput requirements for traffic in a campus environment are lower than those of a data center network, where server virtualization has increased the application density and, subsequently, the data throughput to and from the servers. In addition, cloud applications are often multitiered and hosted at different endpoints connected to the network. The communication between these application tiers is a major contributor to the overall traffic in a data center. The multitiered nature of the applications deployed in a data center drives traffic patterns in a data center network to be more east-west than north-south. In fact, some of the very large data centers hosting multitiered applications report that more than 90 percent of their overall traffic occurs between the application tiers.

Because of high throughput requirements and the east-west traffic patterns, the networking access layer that connects directly to the servers exchanges a much higher proportion of traffic with the upper layers of the networking infrastructure, as compared to an enterprise campus network. These reasons have driven the evolution of data center network architecture toward scale-out architectures. Figure 2 illustrates a leaf-spine topology, which is an example of a scale-out architecture. These scale-out architectures are built to maximize the throughput of traffic exchanged between the leaf layer and the spine layer.

FIGURE 2 Scale-Out Architecture: Ideal for East-West Traffic Patterns Common with Web-Based or Cloud-Based Application Designs

Compared to a three-tier network, where the aggregation layer is restricted to two devices (typically because of technologies like Multi-Chassis Trunking (MCT), in which exactly two devices can participate in the creation of port channels facing the access-layer switches), the spine layer can have multiple devices and hence provide a higher port density to connect to the leaf-layer switches. This allows more interfaces from each leaf to connect into the spine layer, providing higher throughput from each leaf to the spine layer. The characteristics of a leaf-spine topology are discussed in more detail in subsequent sections.

The traditional three-tier data center architecture is still prevalent in environments where traffic throughput requirements between the networking layers can be satisfied through high-density platforms at the aggregation layer. For certain use cases like co-location data centers, where customer traffic is restricted to racks or managed areas within the data center, a three-tier architecture may be more suitable. Similarly, enterprises hosting nonvirtualized and single-tiered applications may find the three-tier data center architecture more suitable.
Scale Requirements of Cloud Networks
Another trend in recent years has been the consolidation of disaggregated infrastructures into larger central locations. With the changing economics and processes of application delivery, there has also been a shift of application workloads to public cloud provider networks. Enterprises have looked to consolidate and host private cloud services. Meanwhile, software cloud services, as well as infrastructure and platform service providers, have grown at a rapid pace. With this increasing shift of applications to the private and public cloud, the scale of the network deployment has increased drastically.

Advanced scale-out architectures allow networks to be deployed at many multiples of the scale of a leaf-spine topology. An example of Brocade's advanced scale-out architecture is shown in Figure 3.

FIGURE 3 Example of Brocade's Advanced Scale-Out Architecture (Optimized 5-Stage Clos)

Brocade's advanced scale-out architectures allow data centers to be built at very high scales of ports and racks. Advanced scale-out architectures using an optimized 5-stage Clos topology are described later in more detail.

A consequence of server virtualization enabling physical servers to host several virtual machines (VMs) is that the scale requirement for the control and data planes for networking parameters like MAC addresses, IP addresses, and Address Resolution Protocol (ARP) tables has multiplied. Also, these virtualized servers must support much higher throughput than in a traditional enterprise environment, leading to an evolution in Ethernet standards of 10 Gigabit Ethernet (10 GbE), 25 GbE, 40 GbE, 50 GbE, and 100 GbE.

Traffic Isolation, Segmentation, and Application Continuity
For multitenant cloud environments, providing traffic isolation between the network tenants is a priority. This isolation must be achieved at all networking layers. In addition, many environments must support overlapping IP addresses and VLAN numbering for the tenants of the network. Providing traffic segmentation through enforcement of security and traffic policies for each cloud tenant's application tiers is a requirement as well.

In order to support application continuity and infrastructure high availability, it is commonly required that the underlying networking infrastructure be extended within and across one or more data center sites. Extension of Layer 2 domains is a specific requirement in many cases. Examples of this include virtual machine mobility across the infrastructure for high availability; resource load balancing and fault tolerance needs; and creation of application-level clustering, which commonly relies on shared broadcast domains for clustering operations like cluster node discovery and many-to-many communication. The need to extend tenant Layer 2 and Layer 3 domains while still supporting a common infrastructure Layer 3 environment across the infrastructure and also across sites is creating new challenges for network architects and administrators.

The remainder of this solution design guide describes data center networking architectures that meet the requirements identified above for building cloud-optimized networks that address current and future needs for enterprises and service provider clouds. This guide focuses on the design considerations and choices for building a data center site using Brocade platforms and technologies. Refer to the Brocade Data Center Fabric Architectures for Network Virtualization Solution Design Guide for a discussion on multitenant infrastructures and overlay networking that builds on the architectural concepts defined here.
Data Center Networks: Building Blocks
• Brocade VDX and SLX Platforms 11
• Networking Endpoints 15
• Single-Tier Topology 16
• Leaf-Spine Topology (Two Tiers) 18
• Optimized 5-Stage Folded Clos Topology (Three Tiers) 21
• Edge Services and Border Switches Topology 23

This section discusses the building blocks that are used to build the network and network virtualization architectures for a data center site. These building blocks consist of the various elements that fit into an overall data center site deployment. The goal is to build fairly independent elements that can be assembled together, depending on the scale requirements of the networking infrastructure.

Brocade VDX and SLX Platforms
The first building block for the networking infrastructure is the Brocade networking platforms, which include Brocade VDX® switches and Brocade SLX® routers. This section provides a high-level summary of each of these two platform families.

Brocade VDX switches with IP fabrics and VCS fabrics provide automation, resiliency, and scalability. Industry-leading Brocade VDX switches are the foundation for high-performance connectivity in data center fabric, storage, and IP network environments. Available in fixed and modular forms, these highly reliable, scalable, and available switches are designed for a wide range of environments, enabling a low Total Cost of Ownership (TCO) and fast Return on Investment (ROI).

VDX 6740
The Brocade VDX 6740 series of switches provides the advanced feature set that data centers require while delivering the high performance and low latency that virtualized environments demand. Together with Brocade data center fabrics, these switches transform data center networks to support the New IP by enabling cloud-based architectures that deliver new levels of scale, agility, and operational efficiency. These highly automated, software-driven, and programmable data center fabric design solutions support a breadth of network virtualization options and scale for data center environments ranging from tens to thousands of servers. Moreover, they make it easy for organizations to architect, automate, and integrate current and future data center technologies while they transition to a cloud model that addresses their needs, on their timetable and on their terms.

The Brocade VDX 6740 Switch offers 48 10-Gigabit Ethernet (GbE) Small Form Factor Pluggable Plus (SFP+) ports and 4 40-GbE Quad SFP+ (QSFP+) ports in a 1U form factor. Each 40-GbE QSFP+ port can be broken out into four independent 10-GbE SFP+ ports, providing an additional 16 10-GbE SFP+ ports, which can be licensed with Ports on Demand (PoD).

FIGURE 4 VDX 6740
FIGURE 5 VDX 6740T

FIGURE 6 VDX 6740T-1G

VDX 6940
The Brocade VDX 6940-36Q is a fixed 40-Gigabit Ethernet (GbE)-optimized switch in a 1U form factor. It offers 36 40-GbE QSFP+ ports and can be deployed as a spine or leaf switch. Each 40-GbE port can be broken out into four independent 10-GbE SFP+ ports, providing a total of 144 10-GbE SFP+ ports. Deployed as a spine, it provides options to connect 40-GbE or 10-GbE uplinks from leaf switches. By deploying this high-density, compact switch, data center administrators can reduce their TCO through savings on power, space, and cooling. In a leaf deployment, 10-GbE and 40-GbE ports can be mixed, offering flexible design options to cost-effectively support demanding data center and service provider environments.

As with other Brocade VDX platforms, the Brocade VDX 6940-36Q offers a Ports on Demand (PoD) licensing model. The Brocade VDX 6940-36Q is available with 24 ports or 36 ports. The 24-port model offers a lower entry point for organizations that want to start small and grow their networks over time. By installing a software license, organizations can upgrade their 24-port switch to the maximum 36-port switch.

The Brocade VDX 6940-144S Switch is 10-GbE optimized with 40-GbE or 100-GbE uplinks in a 2U form factor. It offers 96 native 1/10-GbE SFP/SFP+ ports and 12 40-GbE QSFP+ ports, or 4 100-GbE QSFP28 ports.

FIGURE 7 VDX 6940-36Q
FIGURE 8 VDX 6940-144S

VDX 8770
The Brocade VDX 8770 switch is designed to scale and support complex environments with dense virtualization and dynamic traffic patterns, where more automation is required for operational scalability. The 100-GbE-ready Brocade VDX 8770 dramatically increases the scale that can be achieved in Brocade data center fabrics, with 10-GbE and 40-GbE wire-speed switching, numerous line card options, and the ability to connect over 8,000 server ports in a single switching domain. Available in 4-slot and 8-slot versions, the Brocade VDX 8770 is a highly scalable, low-latency modular switch that supports the most demanding data center networks.

FIGURE 9 VDX 8770-4
FIGURE 10 VDX 8770-8

SLX 9850
The Brocade® SLX™ 9850 Router is designed to deliver the cost-effective density, scale, and performance needed to address the ongoing explosion of network bandwidth, devices, and services today and in the future. This flexible platform, powered by Brocade SLX-OS, provides carrier-class advanced features that leverage proven Brocade routing technology used in the most demanding data center, service provider, and enterprise networks today, delivered on best-in-class forwarding hardware. The extensible architecture of the Brocade SLX 9850 is designed for investment protection to readily support future needs for greater bandwidth, scale, and forwarding capabilities. Additionally, the Brocade SLX 9850 helps address the increasing agility and analytics needs of digital businesses with network automation and network visibility innovation supported through Brocade Workflow Composer™ and the Brocade SLX Insight Architecture™.

FIGURE 11 Brocade SLX-9850-4
FIGURE 12 Brocade SLX-9850-8

Networking Endpoints
The next building blocks are the networking endpoints that connect to the networking infrastructure. These endpoints include compute servers and storage devices, as well as network service appliances such as firewalls and load balancers.

FIGURE 13 Networking Endpoints and Racks

Figure 13 shows the different types of racks used in a data center infrastructure:
• Infrastructure and management racks: These racks host the management infrastructure, which includes any management appliances or software used to manage the infrastructure. Examples are server virtualization management software like VMware vCenter or Microsoft SCVMM, orchestration software like OpenStack or VMware vRealize Automation, network controllers like the Brocade SDN Controller or VMware NSX, and network management and automation tools like Brocade Network Advisor. Infrastructure racks also host components such as physical or virtual IP storage appliances.
• Compute racks: Compute racks host the workloads for the data centers. These workloads can be physical servers, or they can be virtualized servers when the workload is made up of virtual machines (VMs). The compute endpoints can be single-homed or multihomed to the network.
• Edge racks: Network services like perimeter firewalls, load balancers, and NAT devices connected to the network are consolidated in edge racks. The role of the edge racks is to host the edge services, which can be physical appliances or virtual machines.

These definitions of infrastructure/management, compute, and edge racks are used throughout this solution design guide.

Single-Tier Topology
The second building block is a single-tier network topology that connects endpoints to the network. Because there is only one tier, all endpoints connect to this tier of the network. An example of a single-tier topology is shown in Figure 14. The single-tier switches are shown as a virtual Link Aggregation Group (vLAG) pair. However, the single-tier switches can also be part of a Multi-Chassis Trunking (MCT) pair. The Brocade VDX supports vLAG pairs, whereas the Brocade SLX 9850 supports MCT. The topology in Figure 14 shows the management/infrastructure, compute, and edge racks connected to a pair of switches participating in multiswitch port channeling. This pair of switches is called a vLAG pair.

FIGURE 14 Single Networking Tier

The single-tier topology scales the least among all the topologies described in this guide, but it is the best choice for smaller deployments, as it reduces the Capital Expenditure (CapEx) costs for the network in terms of the size of the infrastructure deployed. It also reduces the optics and cabling costs for the networking infrastructure.
Design Considerations
The design considerations for deploying a single-tier topology are summarized in this section.

Oversubscription Ratios
It is important for network architects to understand the expected traffic patterns in the network. To this end, the oversubscription ratios at the vLAG pair/MCT should be well understood and planned for. The north-south oversubscription at the vLAG pair/MCT is defined as the ratio of the aggregate bandwidth of all downlinks from the vLAG pair/MCT that are connected to the endpoints to the aggregate bandwidth of all uplinks that are connected to the data center core/WAN edge router (described in a later section). The north-south oversubscription dictates the proportion of traffic between the endpoints versus the traffic entering and exiting the single-tier topology. A brief worked example of this arithmetic appears later in this section.

It is also important to understand the bandwidth requirements for inter-rack traffic. This is especially true for all north-south communication through the services hosted in the edge racks. All such traffic flows through the vLAG pair/MCT to the edge racks, and if the traffic needs to exit, it flows back to the vLAG/MCT switches. Thus, the ratio of the aggregate bandwidth connecting the compute racks to the aggregate bandwidth connecting the edge racks is an important consideration.

Another consideration is the bandwidth of the link that interconnects the vLAG pair/MCT. With multihomed endpoints and no failures, this link should not be used for data-plane forwarding. However, if there are link failures in the network, this link may be used for data-plane forwarding. The bandwidth requirement for this link depends on the redundancy design for link failures. For example, a design that tolerates up to two 10-GbE link failures has a 20-GbE interconnection between the Top of Rack/End of Row (ToR/EoR) switches.

Port Density and Speeds for Uplinks and Downlinks
In a single-tier topology, the uplink and downlink port density of the vLAG pair/MCT determines the number of endpoints that can be connected to the network, as well as the north-south oversubscription ratios. Another key consideration for single-tier topologies is the choice of port speeds for the uplink and downlink interfaces. Brocade VDX and SLX Series platforms support 10-GbE, 40-GbE, and 100-GbE interfaces, which can be used for uplinks and downlinks (25-GbE interfaces will be supported in the future with the Brocade SLX 9850). The choice of the platform for the vLAG pair/MCT depends on the interface speed and density requirements.

Scale and Future Growth
A design consideration for single-tier topologies is the need to plan for more capacity in the existing infrastructure and more endpoints in the future. Adding more capacity between existing endpoints and vLAG switches can be done by adding new links between them. Any future expansion in the number of endpoints connected to the single-tier topology should be accounted for during the network design, as this requires additional ports in the vLAG switches. Other key considerations are whether to connect the vLAG/MCT pair to external networks through data center core/WAN edge routers and whether to add a networking tier for higher scale. These designs require additional ports at the ToR/EoR. Multitier designs are described in a later section of this guide.
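To make the north-south oversubscription arithmetic discussed above concrete, the short Python sketch below computes the ratio for a hypothetical vLAG pair; the port counts and speeds are illustrative assumptions, not a recommended design.

    # North-south oversubscription at a single-tier vLAG pair/MCT:
    # aggregate downlink bandwidth (to endpoints) divided by the
    # aggregate uplink bandwidth (to the data center core/WAN edge).
    downlink_gbps = 96 * 10    # assumed: 96 x 10-GbE ports toward endpoint racks
    uplink_gbps = 4 * 100      # assumed: 4 x 100-GbE ports toward the core/WAN edge
    print(f"North-south oversubscription: {downlink_gbps / uplink_gbps:.1f}:1")  # 2.4:1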
Ports on Demand Licensing
Ports on Demand licensing allows you to expand your capacity at your own pace: you can invest in a higher port density platform, yet license only a subset of the available ports, namely the ports that you are using for current needs. This allows for an extensible and future-proof network architecture without the additional upfront cost for unused ports on the switches.

Leaf-Spine Topology (Two Tiers)
The two-tier leaf-spine topology has become the de facto standard networking topology when building medium- to large-scale data center infrastructures. An example of a leaf-spine topology is shown in Figure 15.

FIGURE 15 Leaf-Spine Topology

The leaf-spine topology is adapted from traditional Clos telecommunications networks. This topology is also known as the "3-stage folded Clos," with the ingress and egress stages proposed in the original Clos architecture folding together to form the leafs. The role of the leaf is to provide connectivity to the endpoints in the network. These endpoints include compute servers and storage devices, as well as other networking devices like routers and switches, load balancers, firewalls, or any other networking endpoint, physical or virtual. Because all endpoints connect only to the leafs, policy enforcement, including security, traffic path selection, Quality of Service (QoS) markings, traffic scheduling, policing, shaping, and traffic redirection, is implemented at the leafs. The Brocade VDX 6740 and 6940 family of switches is used as leaf switches.

The role of the spine is to provide interconnectivity between the leafs. Network endpoints do not connect to the spines. As most policy implementation is performed at the leafs, the major role of the spine is to participate in the control-plane and data-plane operations for traffic forwarding between the leafs. Brocade VDX or SLX platform families are used as the spine switches, depending on the scale and feature requirements.

As a design principle, the following requirements apply to the leaf-spine topology:
• Each leaf connects to all spines in the network.
• The spines are not interconnected with each other.
• The leafs are not interconnected with each other for data-plane purposes. (The leafs may be interconnected for control-plane operations such as forming a server-facing vLAG.)

The following are some of the key benefits of a leaf-spine topology:
• Because each leaf is connected to every spine, there are multiple redundant paths available for traffic between any pair of leafs. Link failures cause other paths in the network to be used.
• Because of the existence of multiple paths, Equal-Cost Multipathing (ECMP) can be leveraged for flows traversing between pairs of leafs. With ECMP, each leaf has equal-cost routes to reach destinations in other leafs, equal to the number of spines in the network.
• The leaf-spine topology provides a basis for a scale-out architecture. New leafs can be added to the network without affecting the provisioned east-west capacity for the existing infrastructure.
• New spines and new uplink ports on the leafs can be provisioned to increase the capacity of the leaf-spine fabric.
• The role of each tier in the network is well defined (as discussed previously), providing modularity in the networking functions and reducing architectural and deployment complexities.
• The leaf-spine topology provides granular control over subscription ratios for traffic flowing within a rack, between racks, and outside the leaf-spine topology.

Design Considerations
There are several design considerations for deploying a leaf-spine topology. This section summarizes the key considerations.

Oversubscription Ratios
It is important for network architects to understand the expected traffic patterns in the network. To this end, the oversubscription ratios at each layer should be well understood and planned for.

For a leaf switch, the ports connecting to the endpoints are defined as downlink ports, and the ports connecting to the spines are defined as uplink ports. The north-south oversubscription ratio at the leafs is the ratio of the aggregate bandwidth of the downlink ports to the aggregate bandwidth of the uplink ports.

For a spine switch in a leaf-spine topology, the east-west oversubscription ratio is defined per pair of leaf switches connecting to the spine switch. For a given pair of leaf switches connecting to the spine switch, the east-west oversubscription ratio at the spine is the ratio of the aggregate bandwidth of the uplinks of the first switch to the aggregate bandwidth of the uplinks of the second switch. In a majority of deployments, this ratio is 1:1, making the east-west oversubscription ratio at the spine nonblocking. Exceptions to the nonblocking east-west oversubscription should be well understood and depend on the traffic patterns of the endpoints that are connected to the respective leafs.

The oversubscription ratios described here govern the ratio of the traffic bandwidth between endpoints connected to the same leaf switch and the traffic bandwidth between endpoints connected to different leaf switches. For example, if the north-south oversubscription ratio is 3:1 at the leafs and 1:1 at the spines, then the bandwidth of traffic between endpoints connected to the same leaf switch should be three times the bandwidth between endpoints connected to different leafs. From a network endpoint perspective, the network oversubscriptions should be planned so that the endpoints connected to the network have the required bandwidth for communications. Specifically, endpoints that are expected to use higher bandwidth should be localized to the same leaf switch (or the same leaf switch pair when endpoints are multihomed).

The ratio of the aggregate bandwidth of all spine downlinks connected to the leafs to the aggregate bandwidth of all downlinks connected to the border leafs (described in Edge Services and Border Switches Topology on page 23) defines the north-south oversubscription at the spine. The north-south oversubscription dictates the traffic destined to the services that are connected to the border leaf switches and that exit the data center site.
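As a rough illustration of these ratios (not a sizing recommendation), the following Python sketch derives the leaf north-south ratio and the spine east-west ratio for a pair of leafs, using assumed port counts:

    # Leaf north-south oversubscription: aggregate downlink bandwidth / aggregate uplink bandwidth.
    leaf_downlink_gbps = 48 * 10     # assumed: 48 x 10-GbE endpoint-facing ports
    leaf_uplink_gbps = 4 * 40        # assumed: 4 x 40-GbE spine-facing ports
    print(f"Leaf north-south: {leaf_downlink_gbps / leaf_uplink_gbps:.0f}:1")   # 3:1

    # Spine east-west oversubscription for a pair of leafs: uplink bandwidth of
    # leaf A into the spine layer versus uplink bandwidth of leaf B.
    leaf_a_uplink_gbps = 4 * 40
    leaf_b_uplink_gbps = 4 * 40
    print(f"Spine east-west: {leaf_a_uplink_gbps / leaf_b_uplink_gbps:.0f}:1")  # 1:1 (nonblocking)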
Leaf and Spine Scale
Because the endpoints in the network connect only to the leaf switches, the number of leaf switches in the network depends on the number of interfaces required to connect all the endpoints. The port count requirement should also account for multihomed endpoints. Because each leaf switch connects to all spines, the port density on the spine switch determines the maximum number of leaf switches in the topology. A higher oversubscription ratio at the leafs reduces the leaf scale requirements, as well.
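A back-of-the-envelope sketch of this cap, assuming one uplink from each leaf to each spine and port counts loosely modeled on the fixed-form platforms described earlier (illustrative assumptions only):

    spine_port_count = 36       # assumed: 40-GbE ports available per spine switch
    leaf_downlink_ports = 48    # assumed: 10-GbE endpoint-facing ports per leaf
    # With one uplink from every leaf to every spine, each leaf consumes exactly
    # one port on each spine, so the spine port count caps the number of leafs.
    max_leafs = spine_port_count
    endpoint_ports = max_leafs * leaf_downlink_ports
    print(f"Up to {max_leafs} leafs and {endpoint_ports} 10-GbE endpoint ports")  # 36 leafs, 1728 ports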
The number of spine switches in the network is governed by a combination of the throughput required between the leaf switches, the number of redundant/ECMP paths between the leafs, and the port density of the spine switches. Higher throughput in the uplinks from the leaf switches to the spine switches can be achieved by increasing the number of spine switches or by bundling the uplinks together in port-channel interfaces between the leafs and the spines.

Port Speeds for Uplinks and Downlinks
Another consideration for leaf-spine topologies is the choice of port speeds for the uplink and downlink interfaces. Brocade VDX switches support 10-GbE, 40-GbE, and 100-GbE interfaces, which can be used for uplinks and downlinks. The choice of platform for the leaf and spine depends on the interface speed and density requirements.

Scale and Future Growth
Another design consideration for leaf-spine topologies is the need to plan for more capacity in the existing infrastructure and to plan for more endpoints in the future. Adding more capacity between existing leaf and spine switches can be done by adding spine switches or adding new interfaces between existing leaf and spine switches. In either case, the port density requirements for the leaf and the spine switches should be accounted for during the network design process. If new leaf switches need to be added to accommodate new endpoints in the network, ports at the spine switches are required to connect the new leaf switches. In addition, you must decide whether to connect the leaf-spine topology to external networks through border leaf switches or whether to add an additional networking tier for higher scale. Such designs require additional ports at the spine. These designs are described in another section of this guide.

Ports on Demand Licensing
Remember that Ports on Demand licensing allows you to expand your capacity at your own pace: you can invest in a higher port density platform, yet license only the ports on the Brocade VDX switch that you are using for current needs. This allows for an extensible and future-proof network architecture without additional cost.

Deployment Model
The links between the leaf and spine can be either Layer 2 or Layer 3 links. If the links between the leaf and spine are Layer 2 links, the deployment is known as a Layer 2 (L2) leaf-spine deployment or a Layer 2 Clos deployment. You can deploy Brocade VDX switches in a Layer 2 deployment by using Brocade VCS® Fabric technology. With Brocade VCS Fabric technology, the switches in the leaf-spine topology cluster together and form a fabric that provides a single point of management, a distributed control plane, embedded automation, and multipathing capabilities from Layer 1 to Layer 3. The benefits of deploying a VCS fabric are described later in this design guide.

If the links between the leaf and spine are Layer 3 links, the deployment is known as a Layer 3 (L3) leaf-spine deployment or a Layer 3 Clos deployment. You can deploy Brocade VDX and SLX platforms in a Layer 3 deployment by using Brocade IP fabric technology. Brocade VDX switches can be deployed in the spine and leaf Places in the Network (PINs), whereas the Brocade SLX 9850 can be deployed in the spine PIN. Brocade IP fabrics provide a highly scalable, programmable, standards-based, and interoperable networking infrastructure. The benefits of Brocade IP fabrics are described later in this guide.
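For the Layer 3 deployment model, each leaf-to-spine connection is a routed point-to-point link. As a vendor-neutral illustration only (the address block, the /31-per-link scheme, and the device names are assumptions, not Brocade IP fabric requirements), the following Python sketch enumerates those links and assigns addresses:

    import ipaddress

    leafs = [f"leaf{i}" for i in range(1, 5)]      # assumed: 4 leafs
    spines = [f"spine{i}" for i in range(1, 3)]    # assumed: 2 spines
    p2p_subnets = ipaddress.ip_network("10.0.0.0/24").subnets(new_prefix=31)

    # Clos rule: every leaf connects to every spine over its own routed /31 link.
    for spine in spines:
        for leaf in leafs:
            link = next(p2p_subnets)
            spine_ip, leaf_ip = list(link)         # the two addresses of the /31
            print(f"{spine} {spine_ip}/31 <-> {leaf} {leaf_ip}/31")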
Data Center Points of Delivery
Figure 16 shows a building block for a data center site. This building block is called a data center point of delivery (PoD). The data center PoD consists of the networking infrastructure in a leaf-spine topology along with the endpoints grouped together in management/infrastructure and compute racks. The idea of a PoD is to create a simple, repeatable, and scalable unit for building a data center site at scale.

FIGURE 16 A Data Center PoD

Optimized 5-Stage Folded Clos Topology (Three Tiers)
Multiple leaf-spine topologies can be aggregated for higher scale in an optimized 5-stage folded Clos topology. This topology adds a new tier to the network known as the super-spine. The role of the super-spine is to provide connectivity between the spine switches across multiple data center PoDs. Figure 17 shows four super-spine switches connecting the spine switches across multiple data center PoDs.
FIGURE 17 An Optimized 5-Stage Folded Clos with Data Center PoDs

The connection between the spines and the super-spines follows the Clos principles:
• Each spine connects to all super-spines in the network.
• Neither the spines nor the super-spines are interconnected with each other.

Similarly, all the benefits of a leaf-spine topology (namely, multiple redundant paths, ECMP, scale-out architecture, and control over traffic patterns) are realized in the optimized 5-stage folded Clos topology as well. With an optimized 5-stage Clos topology, a PoD is a simple and replicable unit. Each PoD can be managed independently, including firmware versions and network configurations. This topology also allows the data center site capacity to scale up by adding new PoDs or to scale down by removing existing PoDs, without affecting the existing infrastructure, providing elasticity in scale and isolation of failure domains. Brocade VDX switches are used for the leaf PIN, whereas, depending on the scale and features being deployed, either Brocade VDX or SLX platforms can be deployed at the spine and super-spine PINs. This topology also provides a basis for interoperation of different deployment models of Brocade VCS fabrics and IP fabrics. This is described later in this guide.
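To illustrate how the optimized 5-stage folded Clos multiplies the scale of a single PoD, here is a rough Python sizing sketch; the port counts, the one-link-per-spine/super-spine-pair wiring, and the PoD profile are assumptions for illustration, not a validated design.

    superspine_port_count = 36   # assumed: ports per super-spine switch
    spines_per_pod = 4           # assumed: spine switches in each data center PoD
    leafs_per_pod = 32           # assumed: leafs per PoD (each spine also reserves 4 ports for super-spine uplinks)
    ports_per_leaf = 48          # assumed: 10-GbE endpoint-facing ports per leaf

    # Every spine connects to every super-spine with one link, so each PoD
    # consumes 'spines_per_pod' ports on every super-spine switch.
    max_pods = superspine_port_count // spines_per_pod
    endpoint_ports = max_pods * leafs_per_pod * ports_per_leaf
    print(f"PoDs: {max_pods}, 10-GbE endpoint ports: {endpoint_ports}")  # 9 PoDs, 13824 ports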
Design Considerations
The design considerations of oversubscription ratios, port speeds and density, spine and super-spine scale, planning for future growth, and Brocade Ports on Demand licensing, which were described for the leaf-spine topology, apply to the optimized 5-stage folded Clos topology as well. Some key considerations are highlighted below.

Oversubscription Ratios
Because the spine switches now have uplinks connecting to the super-spine switches, the north-south oversubscription ratios for the spine switches dictate the ratio of the aggregate bandwidth of traffic switched east-west within a data center PoD to the aggregate bandwidth of traffic exiting the data center PoD. This is a key consideration from the perspective of network infrastructure and services placement, application tiers, and (in the case of service providers) tenant placement. In cases of north-south oversubscription at the spines, endpoints should be placed to optimize traffic within a data center PoD.

At the super-spine switch, the east-west oversubscription defines the ratio of the bandwidth of the downlink connections for a pair of data center PoDs. In most cases, this ratio is 1:1. The ratio of the aggregate bandwidth of all super-spine downlinks connected to the spines to the aggregate bandwidth of all downlinks connected to the border leafs (described in Edge Services and Border Switches Topology on page 23) defines the north-south oversubscription at the super-spine. The north-south oversubscription dictates the traffic destined to the services connected to the border leaf switches and exiting the data center site.

Deployment Model
The Layer 3 gateways for the endpoints connecting to the networking infrastructure can be at the leaf, at the spine, or at the super-spine. With the Brocade IP fabric architecture (described later in this guide), the Layer 3 gateways are present at the leaf layer, so the links between the leafs, spines, and super-spines are Layer 3. With the Brocade multi-fabric topology using VCS fabric architecture (described later in this guide), there is a choice of placing the Layer 3 gateway at the spine layer or at the super-spine layer. In either case, the links between the leafs and spines are Layer 2 links. If the Layer 3 gateway is at the spine layer, the links between the spine and super-spine are Layer 3; otherwise, those links are Layer 2 as well. These Layer 2 links are IEEE 802.1Q VLAN-based, optionally over Link Aggregation Control Protocol (LACP) aggregated links. These architectures are described later in this guide.

Edge Services and Border Switches Topology
For two-tier and three-tier data center topologies, the role of the border switches in the network is to provide external connectivity to the data center site. In addition, as all traffic enters and exits the data center through the border leaf switches, they present the ideal location in the network to connect network services like firewalls, load balancers, and edge VPN routers. The topology for interconnecting the border switches depends on the number of network services that need to be attached and the oversubscription ratio at the border switches. Figure 18 shows a simple topology for border switches, where the service endpoints connect directly to the border switches. Border switches in this simple topology are referred to as "border leaf switches" because the service endpoints connect to them directly.
  • 24. FIGURE 18 Edge Services PoD If more services or higher bandwidth for exiting the data center site is needed, multiple sets of border leaf switches can be deployed. The border switches and the edge racks together form the edge services PoD. Brocade VDX switches are used for the border leaf PIN. The border leaf switches can also participate in a vLAG pair. This allows the edge service appliances and servers to dual-home into the border leaf switches for redundancy and higher throughput. Design Considerations The following sections describe the design considerations for border switches. Oversubscription Ratios The border leaf switches have uplink connections to spines in the leaf-spine topology and to super-spines in the 3-tier topology. They also have uplink connections to the data center core/WAN edge routers as described in the next section. The ratio of the aggregate bandwidth of the uplinks connecting to the spines/super-spines and the aggregate bandwidth of the uplink connecting to the core/edge routers determines the oversubscription ratio for traffic exiting the data center site. The north-south oversubscription ratios for the services connected to the border leafs are another consideration. Because many of the services connected to the border leafs may have public interfaces that face external entities like core/edge routers and internal interfaces that face the internal network, the north-south oversubscription for each of these connections is an important design consideration. Data Center Core/WAN Edge Handoff The uplinks to the data center core/WAN edge routers from the border leafs carry the traffic entering and exiting the data center site. The data center core/WAN edge handoff can be Layer 2 and/or Layer 3 in combination with overlay protocols. Edge Services and Border Switches Topology Brocade Data Center Fabric Architectures 24 53-1004601-02
  • 25. The handoff between the border leafs and the data center core/WAN edge may provide domain isolation for the control- and data-plane protocols running in the internal network and built using one-tier, two-tier, or three-tier topologies. This helps in providing independent administrative, fault-isolation, and control-plane domains for isolation, scale, and security between the different domains of a data center site. Data Center Core and WAN Edge Routers The border leaf switches connect to the data center core/WAN edge devices in the network to provide external connectivity to the data center site. Figure 19 shows an example of the connectivity between the vLAG/MCT pair from a single-tier topology, spine switches from a two-tier topology, border leafs, a collapsed data center core/WAN edge tier, and external networks for Internet and data center interconnection. FIGURE 19 Collapsed Data Center Core and WAN Edge Routers Connecting Internet and DCI Fabric to the Border Leaf in the Data Center Site Edge Services and Border Switches Topology Brocade Data Center Fabric Architectures 53-1004601-02 25
  • 27. Building Data Center Sites with Brocade VCS Fabric Technology • Data Center Site with Leaf-Spine Topology........................................................................................................................................... 28 • Scaling the Data Center Site with a Multi-Fabric Topology Using VCS Fabrics......................................................................31 Brocade VCS fabrics are Ethernet fabrics built for modern data center infrastructure needs. With Brocade VCS Fabric technology, up to 48 Brocade VDX switches can participate in a VCS fabric. The data plane of the VCS fabric is based on the Transparent Interconnection of Lots of Links (TRILL) standard, supported by Layer 2 routing protocols that propagate topology information within the fabrics. This ensures that there are no loops in the fabrics, and there is no need to run Spanning Tree Protocol (STP). Also, none of the links are blocked. Brocade VCS Fabric technology provides a compelling solution for deploying a Layer 2 Clos topology. Brocade VCS Fabric technology provides the following benefits: • TRILL-based Ethernet fabric—Brocade VCS Fabric technology, which is based on the TRILL standard, uses a Layer 2 routing protocol within the fabric. This ensures that all links are always utilized within the VCS fabric, and there is no need for loop-prevention protocols like Spanning Tree that block links and result in inefficient utilization of the networking infrastructure. • Active-Active vLAG—VCS fabrics allow for active-active port channels between networking endpoints and multiple VDX switches participating in a VCS fabric, enabling redundancy and increased throughput. • Single point of management—With all switches in a VCS fabric participating in a logical chassis, the entire topology can be managed as a single switch. This drastically reduces the configuration, validation, monitoring, and troubleshooting complexity of the fabric. • Distributed MAC address learning—With Brocade VCS Fabric technology, the MAC addresses that are learned at the edge ports of the fabric are distributed to all nodes participating within the fabric. This means that the MAC address learning within the fabric does not rely on flood-and-learn mechanisms, and flooding related to unknown unicast frames is avoided. • Multipathing from Layer 1 to Layer 3—Brocade VCS Fabric technology provides efficiency and resiliency through the use of multipathing from Layer 1 to Layer 3: – At Layer 1, Brocade Trunking (BTRUNK) enables frame-based load balancing between a pair of switches that are part of the VCS fabric. This provides near-identical link utilization for links participating in a BTRUNK. This ensures that thick (or “elephant”) flows do not congest an inter-switch link (ISL). – Because of the existence of a Layer 2 routing protocol, Layer 2 ECMP is performed between multiple next hops. This is critical in a Clos topology, where all spines are ECMP next hops for a leaf that sends traffic to an endpoint connected to another leaf. – Layer 3 ECMP using Layer 3 routing protocols ensures that traffic is load-balanced between Layer 3 next hops. • Distributed control plane—Control-plane and data-plane state information is shared across devices in the VCS fabric, which enables fabric-wide MAC address learning, multiswitch port channels (vLAG), Distributed Spanning Tree (DiST), and gateway redundancy protocols like Virtual Router Redundancy Protocol–Extended (VRRP-E) and Fabric Virtual Gateway (FVG), among others. 
These enable the VCS fabric to function like a single switch to interface with other entities in the infrastructure—thus appearing as a single control-plane entity to other devices in the network. • Embedded automation—Brocade VCS Fabric technology provides embedded turnkey automation built into Brocade Network OS. These automation features enable zero-touch provisioning of new switches into an existing fabric. Brocade VDX switches also provide multiple management methods, including the command-line interface (CLI), Simple Network Management Protocol (SNMP), REST, and Network Configuration Protocol (NETCONF) interfaces. • Multitenancy at Layers 2 and 3—With Brocade VCS Fabric technology, multitenancy features at Layers 2 and 3 enable traffic isolation and segmentation across the fabric. Brocade VCS Fabric technology allows an extended range of up to 8,000 Layer 2 Brocade Data Center Fabric Architectures 53-1004601-02 27
  • 28. domains within the fabric, while isolating overlapping IEEE-802.1Q-based tenant networks into separate Layer 2 domains. Layer 3 multitenancy using Virtual Routing and Forwarding (VRF) instances, multi-VRF routing protocols, and BGP-EVPN enables large-scale tenant separation at Layer 3. • Ecosystem integration and virtualization features—Brocade VCS Fabric technology integrates with leading industry solutions and products like OpenStack; VMware products like vSphere, NSX, and vRealize; common infrastructure programming tools like Python; and Brocade tools like Brocade Network Advisor. Brocade VCS Fabric technology is virtualization-aware and helps dramatically reduce administrative tasks and enable seamless VM migration with features like Automatic Migration of Port Profiles (AMPP), which automatically adjusts port-profile information as a VM moves from one server to another. • Advanced storage features—Brocade VDX switches provide rich storage protocols and features like Fibre Channel over Ethernet (FCoE), Data Center Bridging (DCB), Monitoring and Alerting Policy Suite (MAPS), and Auto-NAS (Network Attached Storage), among others, to enable advanced storage networking. The benefits and features listed above simplify the deployment of a Layer 2 Clos topology using Brocade VDX switches and Brocade VCS Fabric technology. The next section describes data center site designs that use a Layer 2 Clos built with Brocade VCS Fabric technology. Data Center Site with Leaf-Spine Topology Figure 20 shows a data center site built using a leaf-spine topology deployed with Brocade VCS Fabric technology. In this topology, the spines are connected directly to the data center core/WAN edge devices. The spine PIN in this topology is sometimes referred to as the "border spine" because it performs both the spine function of east-west traffic switching and the border function of providing an interface to the data center core/WAN edge. Data Center Site with Leaf-Spine Topology Brocade Data Center Fabric Architectures 28 53-1004601-02
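In the leaf-spine topologies described here, every spine is an equal-cost next hop for traffic between leafs, so each leaf spreads flows across all spines, whether the ECMP decision is made at Layer 2 in a VCS fabric or at Layer 3 in an IP fabric. The sketch below illustrates the idea of per-flow next-hop selection by hashing header fields; the field set and hash function are simplifying assumptions, and the actual hashing behavior of VDX and SLX platforms is implementation specific.

    import zlib

    def pick_spine(flow, spine_next_hops):
        """Pick one equal-cost next hop per flow by hashing its 5-tuple.
        Every packet of a flow hashes to the same spine, preserving packet
        order, while different flows spread across all available spines."""
        key = "|".join(str(flow[f]) for f in
                       ("src_ip", "dst_ip", "proto", "src_port", "dst_port"))
        index = zlib.crc32(key.encode()) % len(spine_next_hops)
        return spine_next_hops[index]

    spines = ["spine-1", "spine-2", "spine-3", "spine-4"]
    flow = {"src_ip": "10.1.1.10", "dst_ip": "10.2.2.20",
            "proto": 6, "src_port": 49152, "dst_port": 443}
    print(pick_spine(flow, spines))  # same spine for every packet of this flow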
  • 29. FIGURE 20 Data Center Site Built with a Leaf-Spine Topology and Brocade VCS Fabric Technology with Border Spine Switches Figure 21 shows a data center site built using a leaf-spine topology deployed using Brocade VCS Fabric technology. In this topology, border leaf switches are added along with the edge services PoD for external connectivity and hosting edge services. Data Center Site with Leaf-Spine Topology Brocade Data Center Fabric Architectures 53-1004601-02 29
  • 30. FIGURE 21 Data Center Site Built with a Leaf-Spine Topology and Brocade VCS Fabric Technology with Border Leaf Switches The border leafs in the edge services PoD are built using a separate VCS fabric. The border leafs are connected to the spine switches in the data center PoD and also to the data center core/WAN edge routers. These links can be either Layer 2 or Layer 3 links, depending on the requirements of the deployment and the handoff required to the data center core/WAN edge routers. There can be more than one edge services PoD in the network, depending on the service needs and the bandwidth requirement for connecting to the data center core/WAN edge routers. As an alternative to the topology shown in Figure 21, the border leaf switches in the edge services PoD and the data center PoD can be part of the same VCS fabric, to extend the fabric benefits to the entire data center site. This model is shown in Brocade VCS Fabric on page 51. The data center PoDs shown in Figure 20 and Figure 21 are built using Brocade VCS fabric technology. With Brocade VCS fabric technology, we recommend interconnecting the spines with each other (not shown in the figures) to ensure the best traffic path during failure scenarios. Scale Table 1 provides sample scale numbers for 10-GbE ports with key combinations of Brocade VDX platforms at the leaf and spine Places in the Network (PINs) in a Brocade VCS fabric. TABLE 1 Scale Numbers for a Data Center Site with a Leaf-Spine Topology Implemented with Brocade VCS Fabric Technology Leaf Switch Spine Switch Leaf Oversubscription Ratio Leaf Count Spine Count VCS Fabric Size (Number of Switches) 10-GbE Port Count 6740, 6740T, 6740T-1G 6940-36Q 3:1 36 4 40 1,728 6740, 6740T, 6740T-1G 8770-4 3:1 44 4 48 2,112 6940-144S 6940-36Q 2:1 36 12 48 3,456 6940-144S 8770-4 2:1 36 12 48 3,456 Data Center Site with Leaf-Spine Topology Brocade Data Center Fabric Architectures 30 53-1004601-02
  • 31. The following assumptions are made: • Links between the leafs and the spines are 40 GbE. • The Brocade VDX 6740 Switch platforms use 4 × 40-GbE uplinks. The Brocade VDX 6740 platform family includes the Brocade VDX 6740 Switch, the Brocade VDX 6740T Switch, and the Brocade VDX 6740T-1G Switch. (The Brocade VDX 6740T-1G requires a Capacity on Demand license to upgrade to 10GBASE-T ports.) • The Brocade VDX 6940-144S platforms use 12 × 40-GbE uplinks. • The Brocade VDX 8770-4 Switch uses 27 × 40-GbE line cards with 40-GbE interfaces. Scaling the Data Center Site with a Multi-Fabric Topology Using VCS Fabrics If multiple VCS fabrics are needed at a data center site, the optimized 5-stage Clos topology is used to increase scale by interconnecting the data center PoDs built using leaf-spine topology with Brocade VCS Fabric technology. This deployment architecture is referred to as a multi-fabric topology using VCS fabrics. In a multi-fabric topology using VCS fabrics, individual data center PoDs resemble a leaf-spine topology deployed using Brocade VCS Fabric technology. Note that we recommend that the spines be interconnected in a data center PoD built using Brocade VCS Fabric technology. A new super-spine tier is used to interconnect the spine switches in the data center PoD. In addition, the border leaf switches are also connected to the super-spine switches. There are two deployment options available to build multi-fabric topology using VCS fabrics. In the first deployment option, the links between the spine and super-spine are Layer 2. In order to achieve a loop-free environment and avoid loop-prevention protocols between the spine and super-spine tiers, the super-spine devices participate in a VCS fabric as well. The connections between the spine and the super-spines are bundled together in (dual-sided) vLAGs to create a loop-free topology. The standard VLAN range of 1 to 4094 can be extended between the DC PoDs using IEEE 802.1Q tags over the dual-sided vLAGs. This is illustrated in Figure 22. Scaling the Data Center Site with a Multi-Fabric Topology Using VCS Fabrics Brocade Data Center Fabric Architectures 53-1004601-02 31
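The scale figures in Table 1 follow directly from the assumptions listed above, and a quick cross-check is straightforward. The sketch below is illustrative only (not a Brocade tool) and reproduces the first row, assuming 48 × 10-GbE access ports per VDX 6740-class leaf and 36 × 40-GbE ports per VDX 6940-36Q spine.

    def leaf_spine_scale(leaf_access_ports, leaf_uplinks, spine_ports):
        """3-stage (leaf-spine) scale: every leaf connects to every spine."""
        spine_count = leaf_uplinks            # one leaf uplink per spine
        leaf_count = spine_ports              # one spine port per leaf
        fabric_size = leaf_count + spine_count
        ten_gbe_ports = leaf_count * leaf_access_ports
        return leaf_count, spine_count, fabric_size, ten_gbe_ports

    # VDX 6740-class leaf (48 x 10 GbE down, 4 x 40 GbE up) with VDX 6940-36Q
    # spines: 36 leafs, 4 spines, 40 switches, 1,728 x 10-GbE ports,
    # matching the first row of Table 1.
    print(leaf_spine_scale(48, 4, 36))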
  • 32. FIGURE 22 Multi-Fabric Topology with VCS Technology—With L2 Links Between Spine and Super-Spine and DC Core/WAN Edge Connected to Super-Spine In this topology, the super-spines connect directly into the data center core/WAN edge, which provides external connectivity to the network. Alternatively, Figure 23 shows the border leafs connecting directly to the data center core/WAN edge. In this topology, if the Layer 3 boundary is at the super-spine, the links between the super-spine and the border leafs carry Layer 3 traffic as well. Scaling the Data Center Site with a Multi-Fabric Topology Using VCS Fabrics Brocade Data Center Fabric Architectures 32 53-1004601-02
  • 33. FIGURE 23 Multi-Fabric Topology with VCS Technology—With L2 Links Between Spine and Super-Spine and DC Core/WAN Edge Connected to Border Leafs In the second deployment option, the links between the spine and super-spine are Layer 3. In cases where the Layer 3 gateways for the VLANs in the VCS fabrics are at the spine layer, this model provides routing between the data center PoDs. Because the links are Layer 3, a loop-free topology is achieved. Here the Brocade SLX 9850 is an option for the super-spine PIN. This is illustrated in Figure 24. FIGURE 24 Multi-Fabric Topology with VCS Technology—With L3 Links Between Spine and Super-Spine If Layer 2 extension is required between the DC PoDs, Virtual Fabric Extension (VF-Extension) technology can be used. With VF-Extension, the spine switches (VDX 6740 and VDX 6940 only) can be configured as VXLAN Tunnel Endpoints (VTEPs). Subsequently, Scaling the Data Center Site with a Multi-Fabric Topology Using VCS Fabrics Brocade Data Center Fabric Architectures 53-1004601-02 33
  • 34. the VXLAN protocol can be used to extend the Layer 2 VLANs as well as the virtual fabrics between the VCS fabrics of the DC PoDs. This is described in more detail in the Brocade Data Center Fabric Architectures for Network Virtualization Solution Design Guide. Figure 23 and Figure 24 show only one edge services PoD, but there can be multiple such PoDs depending on the edge service endpoint requirements, the oversubscription for traffic that is exchanged with the data center core/WAN edge, and the related handoff mechanisms. Scale Table 2 provides sample scale numbers for 10-GbE ports with key combinations of Brocade VDX and SLX platforms at the leaf, spine, and super-spine PINs for an optimized 5-stage Clos built with Brocade VCS fabrics. The following assumptions are made: • Links between the leafs and the spines are 40 GbE. Links between the spines and super-spines are also 40 GbE. • The Brocade VDX 6740 platforms use 4 × 40-GbE uplinks. The Brocade VDX 6740 platform family includes the Brocade VDX 6740, Brocade VDX 6740T, and Brocade VDX 6740T-1G. (The Brocade VDX 6740T-1G requires a Capacity on Demand license to upgrade to 10GBASE-T ports.) Four spines are used to connect the uplinks. • The Brocade 6940-144S platforms use 12 × 40-GbE uplinks. Twelve spines are used to connect the uplinks. • The north-south oversubscription ratio at the spines is 1:1. In other words, the bandwidth of uplink ports is equal to the bandwidth of downlink ports at the spines. A larger port scale can be realized with a higher oversubscription ratio at the spines. However, a 1:1 oversubscription ratio is used here and is also recommended. • One spine plane is used for the scale calculations. This means that all spine switches in each data center PoD connect to all super-spine switches in the topology. This topology is consistent with the optimized 5-stage Clos topology. • Brocade VDX 8770 platforms use 27 × 40-GbE line cards in performance mode (uses 18 × 40-GbE per line card) for connections between spines and super-spines. The Brocade VDX 8770-4 supports 72 × 40-GbE ports in performance mode. The Brocade VDX 8770-8 supports 144 × 40-GbE ports in performance mode. • The link between the spines and the super-spines is assumed to be Layer 3, and 32-way Layer 3 ECMP is utilized for spine to super-spine connections. This gives a maximum of 32 super-spines for the multi-fabric topology using Brocade VCS Fabric technology. Refer to the release notes for your platform to check the ECMP support scale. NOTE For a larger port scale for the multi-fabric topology using Brocade VCS Fabric technology, multiple spine planes are used. Architectures with multiple spine planes are described later. 
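As noted above, VF-Extension carries the extended VLANs and virtual fabrics between the DC PoDs inside VXLAN, which prepends an 8-byte header containing a 24-bit VXLAN Network Identifier (VNI) to each Ethernet frame (RFC 7348). The sketch below builds that header for an example mapping; the VLAN-to-VNI offset shown is a hypothetical illustration, not the mapping used by VF-Extension.

    import struct

    VXLAN_FLAG_VALID_VNI = 0x08  # "I" flag: the VNI field is valid (RFC 7348)

    def vxlan_header(vni: int) -> bytes:
        """8-byte VXLAN header: flags(1) + reserved(3) + VNI(3) + reserved(1)."""
        if not 0 <= vni < 2 ** 24:
            raise ValueError("VNI must fit in 24 bits")
        return struct.pack("!B3s3sB", VXLAN_FLAG_VALID_VNI, b"\x00" * 3,
                           vni.to_bytes(3, "big"), 0)

    # Example (assumed) mapping: VLAN/virtual-fabric ID 2500 in one PoD is
    # carried between PoDs as VNI 12500 over the Layer 3 spine/super-spine links.
    vlan_id = 2500
    vni = 10000 + vlan_id
    print(vxlan_header(vni).hex())  # 080000000030d400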
TABLE 2 Sample Scale Numbers for a Data Center Site Built as a Multi-Fabric Topology Using Brocade VCS Fabric Technology Leaf Switch Spine Switch Super-Spine Switch Leaf Over- subscription Ratio Leaf Count per Data Center PoD Spine Count per Data Center PoD Number of Super- Spines Number of Data Center PoDs 10-GbE Port Count VDX 6740, VDX 6740T, VDX 6740T-1G VDX 6940-36Q VDX 6940-36Q 3:1 18 4 18 9 7,776 VDX 6940-144S VDX 6940-36Q VDX 6940-36Q 2:1 18 12 18 3 5,184 VDX 6740, VDX 6740T, VDX 6740T-1G VDX 8770-4 VDX 6940-36Q 3:1 32 4 32 9 13,824 VDX 6940-144S VDX 8770-4 VDX 6940-36Q 2:1 32 12 32 3 9,216 VDX 6740, VDX 6740T, VDX 6740T-1G VDX 6940-36Q VDX 8770-4 3:1 18 4 18 18 15,552 VDX 6940-144S VDX 6940-36Q VDX 8770-4 2:1 18 12 18 6 10,368 Scaling the Data Center Site with a Multi-Fabric Topology Using VCS Fabrics Brocade Data Center Fabric Architectures 34 53-1004601-02
  • 35. TABLE 2 Sample Scale Numbers for a Data Center Site Built as a Multi-Fabric Topology Using Brocade VCS Fabric Technology (continued) Leaf Switch Spine Switch Super-Spine Switch Leaf Over- subscription Ratio Leaf Count per Data Center PoD Spine Count per Data Center PoD Number of Super- Spines Number of Data Center PoDs 10-GbE Port Count VDX 6740, VDX 6740T, VDX 6740T-1G VDX 8770-4 VDX 8770-4 3:1 32 4 32 18 27,648 VDX 6940-144S VDX 8770-4 VDX 8770-4 2:1 32 12 32 6 18,432 VDX 6740, VDX 6740T, VDX 6740T-1G VDX 6940-36Q VDX 8770-8 3:1 18 4 18 36 31,104 VDX 6940-144S VDX 6940-36Q VDX 8770-8 2:1 18 12 18 12 20,736 VDX 6740, VDX 6740T, VDX 6740T-1G VDX 8770-4 VDX 8770-8 3:1 32 4 32 36 55,296 VDX 6940-144S VDX 8770-4 VDX 8770-8 2:1 32 12 32 12 36,864 VDX 6740, VDX 6740T, VDX 6740T-1G VDX 6940-36Q SLX 9850-4 3:1 18 4 18 60 51,840 VDX 6940-144S VDX 6940-36Q SLX 9850-4 2:1 18 12 18 20 34,560 VDX 6740, VDX 6740T, VDX 6740T-1G VDX 8770-4 SLX 9850-4 3:1 32 4 32 60 92,160 VDX 6940-144S VDX 8770-4 SLX 9850-4 2:1 32 12 32 20 61,440 VDX 6740, VDX 6740T, VDX 6740T-1G VDX 6940-36Q SLX 9850-8 3:1 18 4 18 120 103,680 VDX 6940-144S VDX 6940-36Q SLX 9850-8 2:1 18 12 18 40 69,120 VDX 6740, VDX 6740T, VDX 6740T-1G VDX 8770-4 SLX 9850-8 3:1 32 4 32 120 184,320 VDX 6940-144S VDX 8770-4 SLX 9850-8 2:1 32 12 32 40 122,880 Scaling the Data Center Site with a Multi-Fabric Topology Using VCS Fabrics Brocade Data Center Fabric Architectures 53-1004601-02 35
  • 37. Building Data Center Sites with Brocade IP Fabric • Data Center Site with Leaf-Spine Topology........................................................................................................................................... 37 • Scaling the Data Center Site with an Optimized 5-Stage Folded Clos.......................................................................................40 Brocade IP fabric provides a Layer 3 Clos deployment architecture for data center sites. With Brocade IP fabric, all links in the Clos topology are Layer 3 links. The Brocade IP fabric includes the networking architecture, the protocols used to build the network, turnkey automation features used to provision, validate, remediate, troubleshoot, and monitor the networking infrastructure, and the hardware differentiation with Brocade VDX and SLX platforms. The following sections describe these aspects of building data center sites with Brocade IP fabrics. Because the infrastructure is built on IP, advantages like loop-free communication using industry-standard routing protocols, ECMP, very high solution scale, and standards-based interoperability are leveraged. The following are some of the key benefits of deploying a data center site with Brocade IP fabrics: • Highly scalable infrastructure—Because the Clos topology is built using IP protocols, the scale of the infrastructure is very high. These port and rack scales are documented with descriptions of the Brocade IP fabric deployment topologies. • Standards-based and interoperable protocols—Brocade IP fabric is built using industry-standard protocols like the Border Gateway Protocol (BGP) and Open Shortest Path First (OSPF). These protocols are well understood and provide a solid foundation for a highly scalable solution. In addition, industry-standard overlay control- and data-plane protocols like BGP-EVPN and Virtual Extensible Local Area Network (VXLAN) are used to extend the Layer 2 domain and extend tenancy domains by enabling Layer 2 communications and VM mobility. • Active-active vLAG pairs—With vLAG pairs on leaf switches, dual-homing of the networking endpoints is supported. This provides higher redundancy. Also, because the links are active-active, vLAG pairs provide higher throughput to the endpoints. vLAG pairs are supported for all 10-GbE, 40-GbE, and 100-GbE interface speeds, and up to 32 links can participate in a vLAG. • Support for unnumbered interfaces—Using Brocade Network OS support for IP unnumbered interfaces available in Brocade VDX switches, only one IP address per switch is required to configure the routing protocol peering. This significantly reduces the planning and use of IP addresses, and it simplifies operations. • Turnkey automation—Brocade automated provisioning dramatically reduces the deployment time of network devices and network virtualization. Prepackaged, server-based automation scripts provision Brocade IP fabric devices for service with minimal effort. • Programmable automation—Brocade server-based automation provides support for common industry automation tools such as Python, Ansible, Puppet, and YANG model-based REST and NETCONF APIs. The prepackaged PyNOS scripting library and editable automation scripts execute predefined provisioning tasks, while allowing customization for addressing unique requirements to meet technical or business objectives when the organization is ready. 
• Ecosystem integration—The Brocade IP fabric integrates with leading industry solutions and products like VMware vCenter, NSX, and vRealize. Cloud orchestration and control are provided through OpenStack and OpenDaylight-based Brocade SDN Controller support. Data Center Site with Leaf-Spine Topology A data center PoD built with IP fabrics supports dual-homing of network endpoints using multiswitch port channel interfaces formed between a pair of Brocade VDX switches participating in a vLAG. This pair of leaf switches is called a vLAG pair (see Figure 25). Brocade Data Center Fabric Architectures 53-1004601-02 37
  • 38. FIGURE 25 An IP Fabric Data Center PoD Built with Leaf-Spine Topology and vLAG Pairs for Dual-Homed Network Endpoint The Brocade VDX switches in a vLAG pair have a link between them for control-plane purposes to create and manage the multiswitch port-channel interfaces. When network virtualization with BGP EVPN is used, these links also carry switched traffic in case of downlink failures or single-homed endpoints. Oversubscription of the inter-switch link (ISL) is an important consideration for these scenarios. Figure 26 shows a data center site deployed using a leaf-spine topology and an edge services PoD. Here the network endpoints are illustrated as single-homed, but dual homing is enabled through vLAG pairs where required. FIGURE 26 Data Center Site Built with Leaf-Spine Topology and an Edge Services PoD Data Center Site with Leaf-Spine Topology Brocade Data Center Fabric Architectures 38 53-1004601-02
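A common way to build the routed underlay for an IP fabric PoD such as this one is eBGP, with each leaf (or vLAG pair) placed in its own private autonomous system, which is consistent with the BGP-based protocol choices listed earlier. The sketch below simply enumerates the resulting peering sessions for a small PoD; the ASN values, the shared spine ASN, and the device names are assumptions for illustration rather than a prescribed Brocade numbering plan.

    # Minimal sketch: enumerate eBGP sessions for one IP fabric PoD.
    # Assumptions (illustrative only): spines share AS 64512, each leaf or
    # vLAG pair gets its own private AS starting at 64601, and one routed
    # link (and therefore one session) exists between every leaf and spine.

    def ebgp_sessions(leaf_count, spine_count,
                      spine_as=64512, first_leaf_as=64601):
        sessions = []
        for leaf in range(1, leaf_count + 1):
            leaf_as = first_leaf_as + leaf - 1
            for spine in range(1, spine_count + 1):
                sessions.append((f"leaf-{leaf}", leaf_as,
                                 f"spine-{spine}", spine_as))
        return sessions

    for leaf, leaf_as, spine, spine_as in ebgp_sessions(leaf_count=4, spine_count=4):
        print(f"{leaf} (AS {leaf_as}) <-> {spine} (AS {spine_as})")
    # 4 leafs x 4 spines = 16 point-to-point eBGP sessions; with IP unnumbered
    # interfaces, each switch still needs only one IP address for peering.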
  • 39. The links between the leafs, spines, and border leafs are all Layer 3 links. The border leafs are connected to the spine switches in the data center PoD and also to the data center core/WAN edge routers. The uplinks from the border leaf to the data center core/WAN edge can be either Layer 2 or Layer 3, depending on the requirements of the deployment and the handoff required to the data center core/WAN edge routers. There can be more than one edge services PoD in the network, depending on service needs and the bandwidth requirement for connecting to the data center core/WAN edge routers. Scale Table 3 provides sample scale numbers for 10-GbE ports with key combinations of Brocade VDX and SLX platforms at the leaf and spine PINs in a Brocade IP fabric with 40-GbE links between leafs and spines. The following assumptions are made: • Links between the leafs and the spines are 40 GbE. • The Brocade VDX 6740 platforms use 4 × 40-GbE uplinks. The Brocade VDX 6740 platform family includes the Brocade VDX 6740, Brocade VDX 6740T, and Brocade VDX 6740T-1G. (The Brocade VDX 6740T-1G requires a Capacity on Demand license to upgrade to 10GBASE-T ports.) • The Brocade VDX 6940-144S platforms use 12 × 40-GbE uplinks. • The Brocade VDX 8770 platforms use 27 × 40-GbE line cards in performance mode (18 × 40-GbE per line card) for connections between leafs and spines. The Brocade VDX 8770-4 supports 72 × 40-GbE ports in performance mode. The Brocade VDX 8770-8 supports 144 × 40-GbE ports in performance mode. NOTE For a larger port scale in Brocade IP fabrics in a 3-stage folded Clos, the Brocade VDX 8770-4 or 8770-8 can be used as a leaf switch. TABLE 3 Scale Numbers for a Leaf-Spine Topology with Brocade IP Fabrics in a Data Center Site with 40-GbE Links Between Leafs and Spines Leaf Switch Spine Switch Leaf Oversubscription Ratio Leaf Count Spine Count IP Fabric Size (Number of Switches) 10-GbE Port Count VDX 6740, VDX 6740T, VDX 6740T-1G VDX 6940-36Q 3:1 36 4 40 1,728 VDX 6740, VDX 6740T, VDX 6740T-1G VDX 8770-4 3:1 72 4 76 3,456 VDX 6740, VDX 6740T, VDX 6740T-1G VDX 8770-8 3:1 144 4 148 6,912 VDX 6740, VDX 6740T, VDX 6740T-1G SLX 9850-4 3:1 240 4 244 11,520 VDX 6740, VDX 6740T, VDX 6740T-1G SLX 9850-8 3:1 480 4 484 23,040 VDX 6940-144S VDX 6940-36Q 2:1 36 12 48 3,456 VDX 6940-144S VDX 8770-4 2:1 72 12 84 6,912 VDX 6940-144S VDX 8770-8 2:1 144 12 156 13,824 VDX 6940-144S SLX 9850-4 2:1 240 12 252 23,040 Data Center Site with Leaf-Spine Topology Brocade Data Center Fabric Architectures 53-1004601-02 39
  • 40. TABLE 3 Scale Numbers for a Leaf-Spine Topology with Brocade IP Fabrics in a Data Center Site with 40-GbE Links Between Leafs and Spines (continued) Leaf Switch Spine Switch Leaf Oversubscription Ratio Leaf Count Spine Count IP Fabric Size (Number of Switches) 10-GbE Port Count VDX 6940-144S SLX 9850-8 2:1 480 12 492 46,080 Table 4 provides sample scale numbers for 10-GbE ports with key combinations of Brocade VDX and SLX platforms at the leaf and spine PINs in a Brocade IP fabric with 100-GbE links between leafs and spines. The following assumptions are made: • Links between the leafs and the spines are 100 GbE. • The Brocade VDX 6940-144S platforms use 4 × 100-GbE uplinks. TABLE 4 Scale Numbers for a Leaf-Spine Topology with Brocade IP Fabrics in a Data Center Site with 100-GbE Links Between Leafs and Spines Leaf Switch Spine Switch Leaf Oversubscription Ratio Leaf Count Spine Count IP Fabric Size (Number of Switches) 10-GbE Port Count VDX 6940-144S VDX 8770-4 2.4:1 24 12 36 2,304 VDX 6940-144S VDX 8770-8 2.4:1 48 12 60 4,608 VDX 6940-144S SLX 9850-4 2.4:1 144 12 156 13,824 VDX 6940-144S SLX 9850-8 2.4:1 288 12 300 27,648 Scaling the Data Center Site with an Optimized 5-Stage Folded Clos If a higher scale is required, the optimized 5-stage folded Clos topology is used to interconnect the data center PoDs built using a Layer 3 leaf-spine topology. An example topology is shown in Figure 27. FIGURE 27 Data Center Site Built with an Optimized 5-Stage Folded Clos Topology and IP Fabric PoDs Scaling the Data Center Site with an Optimized 5-Stage Folded Clos Brocade Data Center Fabric Architectures 40 53-1004601-02
  • 41. Figure 27 shows only one edge services PoD, but there can be multiple such PoDs, depending on the edge service endpoint requirements, the amount of oversubscription for traffic exchanged with the data center core/WAN edge, and the related handoff mechanisms. Scale Figure 28 shows a variation of the optimized 5-stage Clos. This variation includes multiple super-spine planes. Each spine in a data center PoD connects to a separate super-spine plane. FIGURE 28 Optimized 5-Stage Clos with Multiple Super-Spine Planes The number of super-spine planes is equal to the number of spines in the data center PoDs. The number of uplink ports on the spine switch is equal to the number of switches in a super-spine plane. Also, the number of data center PoDs is equal to the port density of the super-spine switches. Introducing super-spine planes to the optimized 5-stage Clos topology greatly increases the number of data center PoDs that can be supported. For the purposes of port scale calculations of the Brocade IP fabric in this section, the optimized 5-stage Clos with multiple super-spine plane topology is considered. Table 5 provides sample scale numbers for 10-GbE ports with key combinations of Brocade VDX and SLX platforms at the leaf, spine, and super-spine PINs for an optimized 5-stage Clos with multiple super-spine planes built with Brocade IP fabric with 40-GbE interfaces between leafs, spines, and super-spines. The following assumptions are made: • Links between the leafs and the spines are 40 GbE. Links between spines and super-spines are also 40 GbE. • The Brocade VDX 6740 platforms use 4 × 40-GbE uplinks. The Brocade VDX 6740 platform family includes the Brocade VDX 6740, the Brocade VDX 6740T, and the Brocade VDX 6740T-1G. (The Brocade VDX 6740T-1G requires a Capacity on Demand license to upgrade to 10GBASE-T ports.) Four spines are used for connecting the uplinks. • The Brocade VDX 6940-144S platforms use 12 × 40-GbE uplinks. Twelve spines are used for connecting the uplinks. • The north-south oversubscription ratio at the spines is 1:1. In other words, the bandwidth of uplink ports is equal to the bandwidth of downlink ports at the spines. The number of physical ports utilized from the spine toward the super-spine is equal to the number of ECMP paths supported. However, a 1:1 subscription ratio is used here and is also recommended. Scaling the Data Center Site with an Optimized 5-Stage Folded Clos Brocade Data Center Fabric Architectures 53-1004601-02 41
  • 42. • The Brocade VDX 8770 platforms use 27 × 40-GbE line cards in performance mode (18 × 40 GbE) for connections between spines and super-spines. The Brocade VDX 8770-4 supports 72 × 40-GbE ports in performance mode. The Brocade VDX 8770-8 supports 144 × 40-GbE ports in performance mode. • 32-way Layer 3 ECMP is utilized for spine-to-super-spine connections. This gives a maximum of 32 super-spines for the Brocade IP fabric. Refer to the platform release notes to check the ECMP support scale. TABLE 5 Scale Numbers for an Optimized 5-Stage Folded Clos Topology with Multiple Super-Spine Planes Built with Brocade IP Fabric with 40 GbE Between Leaf, Spine, and Super-Spine Leaf Switch Spine Switch Super-Spine Switch Leaf Over- subscription Ratio Leaf Count per Data Center PoD Spine Count per Data Center PoD Number of Super- Spine Planes Number of Super- Spines in Each Super- Spine Plane Number of Data Center PoDs 10-GbE Port Count VDX 6740, VDX 6740T, VDX 6740T-1G VDX 6940-36Q VDX 6940-36Q 3:1 18 4 4 18 36 31,104 VDX 6940-144S VDX 6940-36Q VDX 6940-36Q 2:1 18 12 12 18 36 62,208 VDX 6740, VDX 6740T, VDX 6740T-1G VDX 6940-36Q VDX 8770-4 3:1 18 4 4 18 72 62,208 VDX 6940-144S VDX 6940-36Q VDX 8770-4 2:1 18 12 12 18 72 124,416 VDX 6740, VDX 6740T, VDX 6740T-1G VDX 6940-36Q VDX 8770-8 3:1 18 4 4 18 144 124,416 VDX 6940-144S VDX 6940-36Q VDX 8770-8 2:1 18 12 12 18 144 248,832 VDX 6740, VDX 6740T, VDX 6740T-1G VDX 6940-36Q SLX 9850-4 3:1 18 4 4 18 240 207,360 VDX 6940-144S VDX 6940-36Q SLX 9850-4 2:1 18 12 12 18 240 414,720 VDX 6740, VDX 6740T, VDX 6740T-1G VDX 6940-36Q SLX 9850-8 3:1 18 4 4 18 480 414,720 VDX 6940-144S VDX 6940-36Q SLX 9850-8 2:1 18 12 12 18 480 829,440 VDX 6740, VDX 6740T, VDX 6740T-1G VDX 8770-4 VDX 8770-4 3:1 32 4 4 32 72 110,592 VDX 6940-144S VDX 8770-4 VDX 8770-4 2:1 32 12 12 32 72 221,184 VDX 6740, VDX 6740T, VDX 6740T-1G VDX 8770-4 VDX 8770-8 3:1 32 4 4 32 144 221,184 VDX 6940-144S VDX 8770-4 VDX 8770-8 2:1 32 12 12 32 144 442,368 VDX 6740, VDX 6740T, VDX 6740T-1G VDX 8770-8 VDX 8770-8 3:1 32 4 4 32 144 221,184 VDX 6940-144S VDX 8770-8 VDX 8770-8 2:1 32 12 12 32 144 442,368 VDX 6740, VDX 6740T, VDX 6740T-1G SLX 9850-4 SLX 9850-4 3:1 32 4 4 32 240 368,640 VDX 6940-144S SLX 9850-4 SLX 9850-4 2:1 32 12 12 32 240 737,280 Scaling the Data Center Site with an Optimized 5-Stage Folded Clos Brocade Data Center Fabric Architectures 42 53-1004601-02
  • 43. TABLE 5 Scale Numbers for an Optimized 5-Stage Folded Clos Topology with Multiple Super-Spine Planes Built with Brocade IP Fabric with 40 GbE Between Leaf, Spine, and Super-Spine (continued) Leaf Switch Spine Switch Super-Spine Switch Leaf Over- subscription Ratio Leaf Count per Data Center PoD Spine Count per Data Center PoD Number of Super- Spine Planes Number of Super- Spines in Each Super- Spine Plane Number of Data Center PoDs 10-GbE Port Count VDX 6740, VDX 6740T, VDX 6740T-1G SLX 9850-4 SLX 9850-8 3:1 32 4 4 32 480 737,280 VDX 6940-144S SLX 9850-4 SLX 9850-8 2:1 32 12 12 32 480 1,474,560 Table 6 provides sample scale numbers for 10-GbE ports with key combinations of Brocade VDX and SLX platforms at the leaf, spine, and super-spine PINs for an optimized 5-stage Clos with multiple super-spine planes built with Brocade IP fabric with 100-GbE interfaces between the leafs, spines, and super spines. The following assumptions are made: • Links between the leafs and the spines are 100 GbE. Links between spines and super-spines are also 100 GbE. • The Brocade VDX 6940-144S platforms use 4 × 100-GbE uplinks. Four spines are used for connecting the uplinks. • The north-south oversubscription ratio at the spines is 1:1. In other words, the bandwidth of uplink ports is equal to the bandwidth of downlink ports at the spines. The number of physical ports utilized from spine toward super-spine is equal to the number of ECMP paths supported. However, a 1:1 subscription ratio is used here and is also recommended. • 32-way Layer 3 ECMP is utilized for spine-to-super-spine connections. This gives a maximum of 32 super-spines for the Brocade IP fabric. Refer to the platform release notes to check the ECMP support scale. TABLE 6 Scale Numbers for an Optimized 5-Stage Folded Clos Topology with Multiple Super-Spine Planes Built with Brocade IP Fabric with 100 GbE Between Leaf, Spine, and Super-Spine Leaf Switch Spine- Switch Super-Spine Switch Leaf Over- subscription Ratio Leaf Count per Data Center PoD Spine Count per Data Center PoD Number of Super- Spine Planes Number of Super- Spines in Each Super- Spine Plane Number of Data Center PoDs 10-GbE Port Count VDX 6940-144S VDX 8770-4 VDX 8770-4 2.4:1 12 4 4 12 24 27,648 VDX 6940-144S VDX 8770-4 VDX 8770-8 2.4:1 12 4 4 12 48 55,296 VDX 6940-144S VDX 8770-8 VDX 8770-8 2.4:1 24 4 4 24 48 110,592 VDX 6940-144S SLX 9850-4 SLX 9850-4 2.4:1 32 4 4 32 144 442,368 VDX 6940-144S SLX 9850-4 SLX 9850-8 2.4:1 32 4 4 32 288 884,736 Further higher scale can be achieved by physically connecting all available ports on the switching platform and using BGP policies to enforce the maximum ECMP scale as limited by the platform. This provides higher port scale for the topology, while still ensuring that maximum ECMP scale is used. It should be noted that this arrangement provides a nonblocking 1:1 north-south subscription at the spine in most scenarios. In Table 7, the scale for a 5-stage folded Clos with 40-GbE interfaces between leaf, spine, and super-spine is shown assuming that BGP policies are used to enforce the ECMP maximum scale. Scaling the Data Center Site with an Optimized 5-Stage Folded Clos Brocade Data Center Fabric Architectures 53-1004601-02 43
  • 44. TABLE 7 Scale Numbers for an Optimized 5-Stage Folded Clos Topology with Multiple Super-Spine Planes, BGP Policy-Enforced ECMP Maximum, and 40 GbE Between Leafs, Spines, and Super-Spines Leaf Switch Spine Switch Super-Spine Switch Leaf Over- subscription Ratio Leaf Count per Data Center PoD Spine Count per Data Center PoD Number of Super- Spine Planes Number of Super- Spines in Each Super- Spine Plane Number of Data Center PoDs 10-GbE Port Count VDX 6740, VDX 6740T, VDX 6740T-1G VDX 8770-8 VDX 8770-8 3:1 72 4 4 72 144 497,664 VDX 6940-144S VDX 8770-8 VDX 8770-8 2:1 72 4 4 72 144 995,328 VDX 6740, VDX 6740T, VDX 6740T-1G SLX 9850-4 SLX 9850-4 3:1 120 4 4 120 240 1,382,400 VDX 6940-144S SLX 9850-4 SLX 9850-4 2:1 120 12 12 120 240 2,764,800 VDX 6740, VDX 6740T, VDX 6740T-1G SLX 9850-4 SLX 9850-8 3:1 120 4 4 120 480 2,764,800 VDX 6940-144S SLX 9850-4 SLX 9850-8 2:1 120 12 12 120 480 5,529,600 VDX 6740, VDX 6740T, VDX 6740T-1G SLX 9850-8 SLX 9850-8 3:1 240 4 4 240 480 5,529,600 VDX 6940-144S SLX 9850-8 SLX 9850-8 2:1 240 12 12 240 480 11,059,200 In Table 8, the scale for a 5-stage folded Clos with 100-GbE interfaces between leaf, spine, and super-spine is shown assuming that BGP policies are used to enforce the ECMP maximum scale. TABLE 8 Scale Numbers for an Optimized 5-Stage Folded Clos Topology with Multiple Super-Spine Planes, BGP Policy-Enforced ECMP Maximum, and 100 GbE Between Leafs, Spines, and Super-Spines Leaf Switch Spine Switch Super-Spine Switch Leaf Over- subscription Ratio Leaf Count per Data Center PoD Spine Count per Data Center PoD Number of Super- Spine Planes Number of Super- Spines in Each Super- Spine Plane Number of Data Center PoDs 10-GbE Port Count VDX 6940-144S SLX 9850-4 SLX 9850-4 2.4:1 72 4 4 72 144 995,328 VDX 6940-144S SLX 9850-4 SLX 9850-8 2.4:1 72 4 4 72 288 1,990,656 VDX 6940-144S SLX 9850-8 SLX 9850-8 2.4:1 144 4 4 144 288 3,981,312 Scaling the Data Center Site with an Optimized 5-Stage Folded Clos Brocade Data Center Fabric Architectures 44 53-1004601-02
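All of the multi-plane scale numbers in Tables 5 through 8 follow from the sizing rules stated earlier: the number of super-spine planes equals the number of spines per PoD, the number of super-spines in a plane equals the number of spine uplinks, and the number of PoDs is bounded by the super-spine port density (or by the ECMP limit when BGP policies are applied). The sketch below is an illustrative cross-check (not a Brocade tool) that reproduces the first row of Table 5 from those rules, using the same assumed platform port counts.

    def multi_plane_scale(leaf_access_ports, leaf_uplinks, spine_ports,
                          superspine_ports, spine_ns_oversub=1.0):
        """Optimized 5-stage Clos with one super-spine plane per spine."""
        spines_per_pod = leaf_uplinks                  # leaf connects to every spine
        planes = spines_per_pod                        # one plane per spine position
        spine_downlinks = int(spine_ports * spine_ns_oversub / (1 + spine_ns_oversub))
        spine_uplinks = spine_ports - spine_downlinks
        leafs_per_pod = spine_downlinks                # one spine downlink per leaf
        superspines_per_plane = spine_uplinks          # one spine uplink per super-spine
        pods = superspine_ports                        # one super-spine port per PoD
        ports = pods * leafs_per_pod * leaf_access_ports
        return planes, superspines_per_plane, leafs_per_pod, pods, ports

    # First row of Table 5: VDX 6740-class leafs (48 x 10 GbE, 4 x 40 GbE up),
    # 36-port spines and super-spines, 1:1 N-S oversubscription at the spine:
    # 4 planes, 18 super-spines per plane, 18 leafs per PoD, 36 PoDs, 31,104 ports.
    print(multi_plane_scale(48, 4, 36, 36))

The same arithmetic, with the PoD count capped by the supported ECMP scale or relaxed as in Tables 7 and 8, can be adapted to explore other platform combinations.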