BROCADE VALIDATED DESIGN
Brocade VCS Fabric with IP Storage
53-1004936-01
21 December 2016
© 2016, Brocade Communications Systems, Inc. All Rights Reserved.
Brocade, the B-wing symbol, and MyBrocade are registered trademarks of Brocade Communications Systems, Inc., in the United States and in other
countries. Other brands, product names, or service names mentioned of Brocade Communications Systems, Inc. are listed at www.brocade.com/en/legal/
brocade-Legal-intellectual-property/brocade-legal-trademarks.html. Other marks may belong to third parties.
Notice: This document is for informational purposes only and does not set forth any warranty, expressed or implied, concerning any equipment,
equipment feature, or service offered or to be offered by Brocade. Brocade reserves the right to make changes to this document at any time, without
notice, and assumes no responsibility for its use. This informational document describes features that may not be currently available. Contact a Brocade
sales office for information on feature and product availability. Export of technical data contained in this document may require an export license from the
United States government.
The authors and Brocade Communications Systems, Inc. assume no liability or responsibility to any person or entity with respect to the accuracy of this
document or any loss, cost, liability, or damages arising from the information contained herein or the computer programs that accompany it.
The product described by this document may contain open source software covered by the GNU General Public License or other open source license
agreements. To find out which open source software is included in Brocade products, view the licensing terms applicable to the open source software, and
obtain a copy of the programming source code, please visit http://www.brocade.com/support/oscd.
Contents
List of Figures...................................................................................................................................................................................................................... 5
Preface...................................................................................................................................................................................................................................7
Brocade Validated Designs....................................................................................................................................................................................................................7
Purpose of This Document....................................................................................................................................................................................................................7
Target Audience..........................................................................................................................................................................................................................................7
About the Author........................................................................................................................................................................................................................................8
Document History......................................................................................................................................................................................................................................8
About Brocade............................................................................................................................................................................................................................................ 8
Introduction.......................................................................................................................................................................................................................... 9
Technology Overview...................................................................................................................................................................................................... 11
Benefits....................................................................................................................................................................................................................................................... 11
Terminology...............................................................................................................................................................................................................................................12
Virtual Cluster Switching...................................................................................................................................................................................................................... 13
VCS in Brief......................................................................................................................................................................................................................................13
VCS Fabric........................................................................................................................................................................................................................................15
Data Frames Inside a VCS Fabric...........................................................................................................................................................................................20
VCS Traffic Forwarding................................................................................................................................................................................................................21
VCS Services...................................................................................................................................................................................................................................27
VCS Deployment Models....................................................................................................................................................................................................................33
Single-POD VCS Fabric.............................................................................................................................................................................................................34
Multi-VCS Fabric............................................................................................................................................................................................................................35
IP Storage ................................................................................................................................................................................................................................................. 39
Data Center Bridging....................................................................................................................................................................................................................40
Auto-NAS......................................................................................................................................................................................................................................... 41
CEE-Map.......................................................................................................................................................................................................................................... 41
Dynamic Packet Buffering.........................................................................................................................................................................................................42
iSCSI Initiators and Targets........................................................................................................................................................................................................42
IP Storage Deployment Models........................................................................................................................................................................................................43
Dedicated IP Storage...................................................................................................................................................................................................................43
Hybrid IP Storage..........................................................................................................................................................................................................................45
Shared IP Storage..........................................................................................................................................................................................................................46
VCS Validated Designs ...................................................................................................................................................................................................51
Hardware and Software Matrix...........................................................................................................................................................................................................51
VCS Fabric Configuration....................................................................................................................................................................................................................52
Fabric Bringup.................................................................................................................................................................................................................................52
Edge Ports........................................................................................................................................................................................................................................58
First Hop Redundancy.................................................................................................................................................................................................................62
Multitenancy.....................................................................................................................................................................................................................................66
IP Storage Configuration......................................................................................................................................................................................................................66
Dynamic Shared Buffer...............................................................................................................................................................................................................66
DCBX Configuration for iSCSI................................................................................................................................................................................................. 68
Priority Flow Control and ETS..................................................................................................................................................................................................70
Auto-NAS......................................................................................................................................................................................................................................... 73
CEE-MAP Configuration............................................................................................................................................................................................................77
Routed vs Layer 2 Switching for Storage Traffic.............................................................................................................................................................. 77
Jumbo MTU.....................................................................................................................................................................................................................................79
Storage Initiator/Target Configuration...................................................................................................................................................................................80
Edge Services Configuration..............................................................................................................................................................................................................85
Deployment 1: Single-POD VCS Fabric With Attached Shared IP Storage..................................................................................................................89
Fabric Wide Configuration..........................................................................................................................................................................................................90
Leaf Configuration.........................................................................................................................................................................................................................91
Spine Configuration...................................................................................................................................................................................................................... 93
Deployment 2: Single-POD VCS Fabric with Dedicated IP Storage VCS.....................................................................................................................97
Storage VCS.................................................................................................................................................................................................................................... 98
Server VCS....................................................................................................................................................................................................................................102
Deployment 3: Multi-VCS Fabric with Shared IP Storage VCS.......................................................................................................................................106
Storage VCS.................................................................................................................................................................................................................................108
Multi-VCS Converged Fabric.................................................................................................................................................................................................112
Illustration Examples.....................................................................................................................................................................................................123
Example 1: FVG in a 3-Stage Clos Fabric................................................................................................................................................................................123
Configuration................................................................................................................................................................................................................................124
Verification.....................................................................................................................................................................................................................................125
Example 2: Virtual Fabric Across Disjoint VLANs on Two ToRs in a 3-Stage Clos................................................................................................127
Configuration................................................................................................................................................................................................................................128
Verification.....................................................................................................................................................................................................................................130
Example 3: Virtual Fabric per Interface VLAN-Scope on ToRs in a 3-Stage Clos..................................................................................................131
Configuration................................................................................................................................................................................................................................132
Verification.....................................................................................................................................................................................................................................134
Example 4: VM-Aware Network Automation...........................................................................................................................................................................135
Configuration and Verification............................................................................................................................................................................................... 136
Virtual Machine Move...............................................................................................................................................................................................................138
Example 5: AMPP...............................................................................................................................................................................................................................139
Configuration and Verification for VLAN..........................................................................................................................................................................139
Virtual Machine Moves.............................................................................................................................................................................................................141
Configuration and Verification for Virtual Fabric VLAN..............................................................................................................................................142
Example 6: Virtual Fabric Extension............................................................................................................................................................................................143
Configuration................................................................................................................................................................................................................................144
Verification.....................................................................................................................................................................................................................................145
Example 7: Auto-Fabric....................................................................................................................................................................................................................147
Configuration................................................................................................................................................................................................................................148
Verification.....................................................................................................................................................................................................................................148
Design Considerations .................................................................................................................................................................................................151
References.......................................................................................................................................................................................................................153
List of Figures
Figure 1 on page 14—Components of a VCS Fabric
Figure 2 on page 18—vLAG Connectivity Options
Figure 3 on page 20—TRILL Data Format
Figure 4 on page 22—Layer 2 Unicast Forwarding
Figure 5 on page 24—Layer 3 Forwarding with VRRP-E
Figure 6 on page 26—Layer 3 Intersubnet Forwarding on VCS Fabric
Figure 7 on page 27—Multicast Tree for BUM Traffic
Figure 8 on page 28—VEB on Virtualized Server Environment
Figure 9 on page 30—SVF in a Cloud Service-Provider Environment
Figure 10 on page 31—SVF in a Highly Virtualized Environment
Figure 11 on page 32—VxLAN Packet Format
Figure 12 on page 33—Packet Forwarding over a VxLAN-Based DCI Implemented with VF-Extension
Figure 13 on page 34—Single-POD VCS Fabric
Figure 14 on page 36—Multi-VCS Fabric Interconnected Through Layer 2 vLAG
Figure 15 on page 38—Multi-VCS Fabric Interconnected Through VxLAN over L3 Links
Figure 16 on page 42—PFC and ETS in Action over a DCBX-Capable Edge Port on a VDX Switch
Figure 17 on page 44—Dedicated Storage Design with VCS
Figure 18 on page 45—Hybrid IP Storage with Single-Storage vLAG from ToR
Figure 19 on page 46—Hybrid IP Storage with Multiple-Storage vLAG from ToR
Figure 20 on page 47—Single-POD VCS with Attached IP Storage Device
Figure 21 on page 48—Single POD with Shared IP Storage VCS
Figure 22 on page 49—Multi-VCS Using vLAG with Shared IP Storage
Figure 23 on page 50—Multi-VCS Using VxLAN with Shared IP Storage
Figure 24 on page 54—VCS Fabric
Figure 26 on page 63—VRRP-E in 3-Stage Clos VCS Fabric
Figure 27 on page 89—Single-POD DC with Attached Storage
Figure 28 on page 98—Data Center with Dedicated Storage VCS
Figure 29 on page 107—Multi-VCS Fabric with Shared Storage VCS
Figure 30 on page 124—FVG Topology
Figure 31 on page 128—VF Across Disjoint VLANs
Figure 32 on page 132—VF Per-Interface VLAN Scope
Figure 33 on page 143—Virtual Fabric Extension
Preface
• Brocade Validated Designs.............................................................................................................................................................................. 7
• Purpose of This Document.............................................................................................................................................................................. 7
• Target Audience.....................................................................................................................................................................................................7
• About the Author...................................................................................................................................................................................................8
• Document History................................................................................................................................................................................................8
• About Brocade.......................................................................................................................................................................................................8
Brocade Validated Designs
Helping customers consider, select, and deploy network solutions for current and planned needs is our mission. Brocade Validated
Designs offer a fast track to success by accelerating that process.
Validated designs are repeatable reference network architectures that have been engineered and tested to address specific use cases and
deployment scenarios. They document systematic steps and best practices that help administrators, architects, and engineers plan,
design, and deploy physical and virtual network technologies. Leveraging these validated network architectures accelerates deployment
speed, increases reliability and predictability, and reduces risk.
Brocade Validated Designs incorporate network and security principles and technologies across the ecosystem of service provider, data
center, campus, and wireless networks. Each Brocade Validated Design provides a standardized network architecture for a specific use
case, incorporating technologies and feature sets across Brocade products and partner offerings.
All Brocade Validated Designs follow best-practice recommendations and allow for customer-specific network architecture variations that
deliver additional benefits. The variations are documented and supported to provide ongoing value, and all Brocade Validated Designs
are continuously maintained to ensure that every design remains supported as new products and software versions are introduced.
By accelerating time-to-value, reducing risk, and offering the freedom to incorporate creative, supported variations, these validated
network architectures provide a tremendous value-add for building and growing a flexible network infrastructure.
Purpose of This Document
This Brocade validated design provides guidance for designing and implementing Brocade VCS fabric with IP storage in a data center
network using Brocade hardware and software. It details the Brocade reference architecture for deploying VCS-based data centers with
IP storage and VxLAN interconnectivity.
It should be noted that not all features, such as automation practices, zero-touch provisioning, and monitoring of the Brocade VCS fabric, are covered in this document. Future versions of this document are planned to include these aspects of the Brocade VCS fabric solution.
The design practices documented here follow the best-practice recommendations, but there are variations to the design that are
supported as well.
Target Audience
This document is written for Brocade systems engineers, partners, and customers who design, implement, and support data center
networks. This document is intended for experienced data center architects and engineers. It assumes that the reader has a good
understanding of data center switching and routing features.
About the Author
Eldho Jacob is a Technical Leader in the Solutions Architecture and Validation team at Brocade. He has extensive experience across data
center and service provider technologies. At Brocade, he is focused on developing and validating solution architectures that customers
can use in deployments.
The author would like to acknowledge the following Brocadians for their technical guidance in developing this validated design:
• Abdul Khader: Technical Director
• Krish Padmanabhan: Principal Engineer
• Anuj Dewangan: Technical Marketing Engineer
• Dan DeBacker: Principal Systems Engineer
• Kamini Santhanagopalan: Product Manager
• Mike Molinaro: Support Account Manager
• Sadashiv Kudlamath: Technical Marketing Engineer
• Steve Como: Onsite Engineer
• Syed Hasan Raza Naqvi: Technical Leader
• Ted Trautman: Director, Service Delivery
• Vasanthi Adusumalli: Staff Engineer
Document History
Date Part Number Description
December 21, 2016 53-1004936-01 Initial version.
About Brocade
Brocade® (NASDAQ: BRCD) networking solutions help the world's leading organizations transition smoothly to a world where
applications and information reside anywhere. This vision is designed to deliver key business benefits such as unmatched simplicity,
non-stop networking, application optimization, and investment protection.
Innovative Ethernet and storage networking solutions for data center, campus, and service provider networks help reduce complexity and
cost while enabling virtualization and cloud computing to increase business agility.
To help ensure a complete solution, Brocade partners with world-class IT companies and provides comprehensive education, support,
and professional services offerings (www.brocade.com).
Introduction
Brocade is the industry leader in storage SAN fabrics, and through VCS technology it brings the fabric architecture into the Ethernet world.
This document describes converged data center network designs for storage and Ethernet networks integrating Brocade VCS
technology with various IP storage solutions. The configurations and design practices documented here are fully validated and conform
to the Brocade data center fabric reference architectures. The intention of this Brocade validated design document is to provide reference
configurations and document best practices for building converged data center networks with VCS fabric and IP storage using Brocade
VDX switches.
This document describes the following architectures:
• Single POD VCS data center with dedicated and shared IP storage designs
• Multi-POD VCS data center with shared IP storage
Apart from the various converged data center architectures, the paper also covers innovative features that VCS brings to the data
center fabric.
We recommend that you review the data center fabric architectures described in the Brocade Data Center Fabric Architectures[1] white
paper for a detailed discussion on data center architectures for building data center sites.
Technology Overview
• Benefits..................................................................................................................................................................................................................11
• Terminology......................................................................................................................................................................................................... 12
• Virtual Cluster Switching.................................................................................................................................................................................13
• VCS Deployment Models.............................................................................................................................................................................. 33
• IP Storage ............................................................................................................................................................................................................39
• IP Storage Deployment Models..................................................................................................................................................................43
Data center networks evolved from a traditional three-tier architecture to the flat spine-leaf/Clos architecture to address the requirements
of newer applications and fluid workloads.
In addition to scalability, high availability, and greater bandwidth, the other prime architectural requirements in a data center network are workload mobility, multitenancy, network automation, and CapEx/OpEx reduction. Using traditional Layer 2 or Layer 3 technologies to meet these requirements involves significant compromises in network architecture and higher OpEx to manage these networks. What is needed is a fabric that is as easy to provision as a traditional Layer 2 network and as non-blocking as a Layer 3 network.
Brocade VCS technology merges the best of both Layer 2 and Layer 3 networks along with a host of other Ethernet fabric innovations
and IP storage features to provide a purpose-built fabric for a converged data and storage network.
This white paper covers VCS and IP storage technologies (iSCSI and NAS) and provides various design options for a converged Brocade
data center fabric architecture.
In addition to VCS and IP storage deployment models, this document covers the following VCS features:
• Virtual Fabric for building a scalable multitenant data center
• VxLAN based DCI connectivity using Virtual Fabric extension
• VM-aware network automation
• Various FHRP and fabric bring-up options
Benefits
Some of the key benefits of using Brocade VCS technology in the data center:
Bridge Aware Routing—VCS technology conforms to TRILL principles and brings the benefits of TRILL bridge-aware routing to the fabric: non-blocking links, TTL-based loop avoidance, faster failover, workload mobility, and scalability.
Topology agnostic—The fabric can be provisioned as a mesh, star, Clos, or any other topology, as the network requires. This document covers the Clos architecture because it is the most prevalent in data centers.
Self-forming—VCS fabrics are self-forming; no user intervention is needed to set up the fabric other than physically cabling the switches.
Zero-Touch provisioning—VCS fabrics are ZTP capable, enabling users to bring up the fabric right out of the box through the DHCP Automatic Deployment (DAD) and auto-fabric features.
Plug and play model—VCS enables a fluid, scalable, true plug-and-play fabric for attaching servers and workloads or provisioning new nodes to the fabric through ISL and vLAG technologies.
Unified fabric management—The VCS fabric can be provisioned to give the user visibility to configure, manage, and control the entire data center fabric from a single node.
Virtual Machine aware network automation—The VCS fabric enables hypervisor-agnostic auto-provisioning of server-connected ports through the AMPP feature.
Efficient Load balancing—Load balancing in the VCS fabric is available from Layer 1 through Layer 3. Per-packet load balancing is available at Layer 1, and VCS link-state routing provides unequal-cost load balancing for efficient use of all links in the fabric.
Scalable multitenant fabric—Supports Layer 2 multitenancy beyond the traditional 12-bit VLAN space with the Virtual Fabric feature, based on the TRILL Fine-Grained Labeling standard.
Storage Support—AutoQoS, buffering, and DCBX support for IP storage technologies, along with end-to-end FCoE, enable a converged network for storage and server data traffic on the VCS fabric.
Seamless Integration with VxLAN—The VCS fabric seamlessly integrates with VxLAN technology for both inter-DC and intra-DC connections using the VF extension feature.
Terminology
Term Description
ACL Access Control List.
AMPP Automatic Migration of Port Profiles.
ARP Address Resolution Protocol.
BGP Border Gateway Protocol.
BLDP Brocade Link Discovery Protocol.
BPDU Bridge Protocol Data Unit.
BUM Broadcast, Unknown unicast, and Multicast.
CNA Converged Network Adapter.
CLI Command-Line Interface.
CoS Class of Service for Layer 2.
DCI Data Center Interconnect.
ELD Edge Loop Detection protocol.
ECMP Equal Cost Multi-Path.
EVPN Ethernet Virtual Private Network.
IP Internet Protocol.
ISL Inter-Switch Link.
MAC Media Access Control.
MPLS Multi-Protocol Label Switching.
ND Neighbor Discovery.
NLRI Network Layer Reachability Information.
PoD Point of Delivery.
RBridge Routing Bridge.
STP Spanning Tree Protocol.
ToR Top of Rack switch.
UDP User Datagram Protocol.
vLAG Virtual Link Aggregation Group.
VLAN Virtual Local Area Network.
VM Virtual Machine.
VNI VXLAN Network Identifier.
VPN Virtual Private Network.
VRF VPN Routing and Forwarding instance. An instance of the routing/forwarding table with a set of networks and hosts in a
router.
VTEP VXLAN Tunnel End Point.
VXLAN Virtual Extensible Local Area Network.
Virtual Cluster Switching
Layer 2 switched networks are popular for their minimal configuration and seamless mobility, but they suffer from blocked links, longer network failover times, and inefficient bandwidth utilization due to STP, resulting in networks that do not scale well. Networks built on the popular Layer 3 routing protocols address many of these concerns but are operationally intensive and not suited for Layer 2 multitenant networks.
TRILL (RFC 5556) addresses these concerns by combining Layer 2 and Layer 3 features into a bridge system capable of network-style routing.
VCS is a TRILL-compliant Ethernet fabric formed between Brocade switches. In the data plane, VCS uses TRILL framing; in the control plane, it uses proven Fibre Channel fabric protocols to form the Ethernet fabric and maintain link-state routing between the nodes. In addition to the TRILL benefits, VCS provides a host of other innovative features that make it a next-generation data center fabric.
VCS in Brief
A VCS fabric is formed between Brocade switches, and the switches in the fabric are denoted as RBridges. The links connecting the RBridges in the fabric are called Inter-Switch Links (ISLs). A VCS fabric connects other devices such as servers, storage arrays, and non-VCS switches or routers through Layer 2 or Layer 3 links. Based on the kind of device attached to an RBridge's physical ports, the ports are classified as edge ports or fabric ports: edge ports connect external devices to the VCS fabric, and fabric ports connect RBridges over ISLs. The VCS fabric provides link aggregation for ISLs through ISL trunks and, at the edge ports, a multi-switch link aggregation called vLAG. These are explored in more detail later in this section; meanwhile, a typical VCS fabric and its components are shown below.
FIGURE 1 Components of a VCS Fabric
In a typical Layer 2 network, STP is used to form a loop-free topology, and traffic is forwarded at each node with a Layer 2 (MAC) lookup. In a TRILL or VCS fabric, a loop-free topology is instead formed using a link-state routing protocol. The link-state routing protocol is used to exchange RBridge information in the fabric, and this information is used to efficiently forward packets between RBridges at Layer 2.
The use of a link-state routing protocol is the primary reason the fabric scales better than a classic Ethernet network based on STP, and it enables a loop-free topology without blocking any paths, unlike STP. The TRILL standard recommends IS-IS, while the Brocade VCS fabric uses FSPF (Fabric Shortest Path First), a well-known link-state routing protocol from storage networks.
Switches (RBridges) in a VCS fabric have two types of physical interfaces: edge ports and fabric ports. Fabric ports connect two switches in the same VCS fabric and forward TRILL frames, while edge ports are Layer 2 or Layer 3 ports that receive and transmit regular Ethernet frames.
In a VCS fabric, a Classical Ethernet (CE) frame enters at the edge port of a source RBridge and undergoes a Layer 2 hardware lookup. The Layer 2 lookup provides the information for TRILL encapsulation of the CE frame.
The encapsulated CE frame is then forwarded out of the fabric ports of the source RBridge based on the forwarding information provided by FSPF. The TRILL frame is forwarded hop by hop through the VCS fabric to the destination RBridge, where it is decapsulated and sent as a regular Ethernet frame out of an edge port after a Layer 2 or Layer 3 hardware lookup.
This briefly covers VCS operation and the components of the fabric; the next few sections discuss VCS fabric formation using FLDP (Fabric Link Discovery Protocol), RBridge routing using FSPF, the TRILL frame, and other VCS innovations in detail.
VCS Fabric
The very first step after wiring up a network is to configure the switches to form a fabric. The biggest strength of Layer 2 networks is simple fabric formation through the use of switch ports, with STP used to form a loop-free topology.
VCS brings this same simplicity to fabric formation, but without STP's drawbacks of blocked links and lack of multipathing. In VCS, fabric formation happens automatically and is as simple as connecting two switches and adding a single line of configuration to identify each switch as part of a VCS fabric. This section describes how VCS fabric formation happens.
VCS-capable switches from Brocade can operate in two modes:
• VCS disabled mode, in which the switch operates in traditional STP mode.
• VCS enabled mode, which is the mode of operation discussed in this white paper.
With VCS enabled, switches form a VCS fabric automatically across point-to-point links with minimal configuration. In a nutshell, the requirements for automatic fabric formation are:
• Each VCS fabric is identified by a VCS ID configuration. The VCS ID is the same across all switches in a fabric.
• A switch in a VCS fabric is identified as an RBridge and has a unique RBridge ID configuration.
• An RBridge is a switch responsible for negotiating VCS fabric connections and forwarding both TRILL and classical Ethernet traffic.
• Ports in a switch are identified either as fabric ports or edge ports by the Brocade Link Discovery Protocol, alternatively called the Fabric Link Discovery Protocol.
• The switches discover their neighbors automatically across fabric ports if the VCS IDs are the same and the RBridge IDs are unique.
• Fabric ports are responsible for TRILL forwarding and for fabric neighbor discovery.
• During fabric neighbor discovery across fabric ports, RBridges in a VCS fabric form ISLs (inter-switch links) and trunks (groups of ISLs that are part of the same hardware/ASIC port group).
• Edge ports connect external devices to the VCS fabric; in essence, they provide Layer 2 or Layer 3 connectivity for servers or routers to the VCS fabric.
• Edge ports can be regular Layer 2 switch ports or part of a multi-chassis LAG (vLAG) with a non-VCS device (vSwitch, server, or regular STP switch).
• Once fabric formation happens, FSPF builds the distributed fabric topology on each switch/RBridge.
VCS Identifiers
The VCS ID identifies the fabric as a whole, and all switches/RBridges that are part of a VCS fabric must have the same VCS ID.
The RBridge ID is a unique ID configured to identify each RBridge that is part of a VCS fabric.
Apart from the VCS ID and RBridge ID, the VCS mode configuration is needed for automatic fabric formation.
VCS Fabric Mode of Operation
Logical Chassis Mode is the flexible and popular VCS mode of operation. In this mode, all switches in the VCS fabric can be managed as if they were a single logical chassis. (The earlier Fabric Cluster Mode, now deprecated in the latest releases, also provides automatic fabric discovery and formation, but each switch must be managed individually; it does not provide unified configuration management.)
• Provides unified control of the fabric from a single principal switch/RBridge in the fabric.
• The user configures the fabric from the principal RBridge.
• This is a distributed configuration mode in which the fabric configuration information is present on all nodes, providing higher availability in the fabric.
• Fabric-wide configuration management is performed from the principal RBridge, and changes are immediately updated on the other RBridges.
• Adding, rejoining, removing, and replacing switches in the VCS fabric is simplified, with the principal switch taking care of configuration management without user intervention.
• Operational simplicity is provided by a unified view of the fabric from every switch.
• The fabric can be accessed through a single virtual IP address bound to the principal switch, which can also be used for fabric firmware upgrades.
Logical Chassis Mode is the recommended VCS fabric mode, and the deployment models in this document use this mode.
Once the VCS ID, RBridge ID, and VCS mode are configured, the automatic fabric formation process begins. As part of it:
• All interfaces in the switch are brought up as edge ports.
• The Brocade Link Discovery Protocol (BLDP) runs between physical interfaces to identify ports as edge or fabric ports.
• Inter-switch links and trunks are formed between VCS switches over the fabric ports.
• After ISL and trunk formation, FSPF (Fabric Shortest Path First), a link-state routing protocol, runs over the ISLs to identify the shortest path to each switch in the VCS fabric.
Brocade Link Discovery Protocol
The Brocade Link Discovery Protocol (BLDP) attempts to discover whether a Brocade VCS Fabric-capable switch is connected to any of the edge ports. BLDP is alternatively called FLDP (Fabric Link Discovery Protocol).
Ports on a VCS-capable switch first come up as edge ports with BLDP enabled. Through BLDP PDU exchange between the ports, neighbors in the VCS fabric are discovered across the inter-switch links. Ports that discover neighbors in the same VCS cluster transition to fabric ports, while the others remain edge ports.
Based on the BLDP PDU exchange, a switch classifies a port as:
• An edge port if the neighboring switch is not a Brocade switch.
• An edge port if the neighbor is not running VCS mode.
• An edge port if the VCS ID is not the same on both switches.
• A fabric port if the neighboring switch runs VCS and the VCS ID matches; an ISL (Inter-Switch Link) is then established.
BLDP is invoked only when a port comes online and is not sent periodically. Once the link type is determined, the protocol execution stops.
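As an illustration of the port-classification rules above, the following Python sketch applies the same checks to hypothetical neighbor data. The data structures and field names are assumptions for illustration only; this is not Brocade's implementation.

```python
# Sketch of the BLDP-based port classification described above.
# Field names (is_brocade, vcs_enabled, vcs_id) are illustrative assumptions.

def classify_port(local, neighbor):
    """Return 'fabric' if an ISL should form toward this neighbor, else 'edge'."""
    if neighbor is None or not neighbor.get("is_brocade"):
        return "edge"                 # non-Brocade neighbor stays an edge port
    if not neighbor.get("vcs_enabled"):
        return "edge"                 # neighbor is not running VCS mode
    if neighbor.get("vcs_id") != local["vcs_id"]:
        return "edge"                 # different VCS fabric
    return "fabric"                   # same VCS ID: the link becomes an ISL

local = {"is_brocade": True, "vcs_enabled": True, "vcs_id": 10}
print(classify_port(local, {"is_brocade": True, "vcs_enabled": True, "vcs_id": 10}))  # fabric
print(classify_port(local, {"is_brocade": True, "vcs_enabled": True, "vcs_id": 20}))  # edge
```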
ISL Trunk
ISLs (Inter-Switch Links) are formed automatically across fabric ports between two VCS-enabled switches if the VCS IDs match. ISLs forward TRILL frames in the VCS fabric and by default carry all VLANs.
When there are multiple ISLs between two switches and they belong to the same ASIC port group on both switches, these ISLs are grouped together to form a Brocade ISL trunk.
• An ISL trunk is comparable to a traditional LACP LAG in that it provides link aggregation.
• Brocade ISL trunks do not run LACP but use a proprietary protocol to maintain trunk membership.
• ISL trunks are formed within the same ASIC port group, so the maximum number of ISLs in a trunk group is 8.
• There can be multiple ISL trunks between two switches.
• The ISL trunk is self-forming like the ISL itself and needs no user configuration, unlike LACP.
• The ISL trunk provides true per-packet load balancing across all member links.
Compared to a traditional LAG, the ISL trunk provides per-packet load balancing of traffic across the links. This gives very high link utilization and an even distribution of traffic across the ISL trunk, compared to a traditional LAG, which uses frame-header hashing to distribute traffic.
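The grouping rule above can be summarized with a small sketch that uses hypothetical link records: ISLs to the same neighbor that share the same ASIC port group on both ends are collected into one trunk, capped at eight members.

```python
# Minimal sketch of ISL trunk grouping; the data structures are illustrative assumptions.
from collections import defaultdict

MAX_TRUNK_MEMBERS = 8  # ISLs in a trunk share an ASIC port group, so at most 8 members

def form_trunks(isls):
    groups = defaultdict(list)
    for isl in isls:
        key = (isl["neighbor"], isl["local_port_group"], isl["remote_port_group"])
        groups[key].append(isl["port"])
    trunks = []
    for members in groups.values():
        for i in range(0, len(members), MAX_TRUNK_MEMBERS):
            trunks.append(members[i:i + MAX_TRUNK_MEMBERS])
    return trunks

isls = [{"neighbor": "RB2", "local_port_group": 0, "remote_port_group": 0, "port": p}
        for p in (1, 2, 3)]
print(form_trunks(isls))  # [[1, 2, 3]] -> one trunk containing three ISLs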
Principal RBridge
After ISL formation, the VCS fabric elects a principal RBridge. The RBridge with the lowest configured principal priority, or with the lowest World Wide Name (WWN) in the fabric, is elected as the principal RBridge. A WWN is a unique identifier used in storage technologies; Brocade VCS-capable switches ship with a factory-programmed WWN.
The principal RBridge, alternatively called the coordinator switch or fabric coordinator, performs the following functions in the fabric:
• Decides whether a newly joining RBridge has a unique RBridge ID; in case of a conflict, the new RBridge is segregated from the VCS fabric until the configuration is fixed.
• In Logical Chassis Mode, all fabric-wide configuration is done from the principal switch.
• In the AMPP feature, discussed later, the principal RBridge also talks to vCenter to distribute port profiles.
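A minimal sketch of the election rule described above follows, assuming each RBridge record carries its configured priority and factory WWN; the exact tie-breaking behavior in the fabric may differ.

```python
# Illustrative principal RBridge election: lowest priority wins, lowest WWN breaks ties.

def elect_principal(rbridges):
    return min(rbridges, key=lambda rb: (rb["priority"], rb["wwn"]))

rbridges = [
    {"rbridge_id": 1, "priority": 128, "wwn": "10:00:00:27:f8:aa:00:01"},
    {"rbridge_id": 2, "priority": 1,   "wwn": "10:00:00:27:f8:aa:00:02"},
    {"rbridge_id": 3, "priority": 128, "wwn": "10:00:00:27:f8:aa:00:03"},
]
print(elect_principal(rbridges)["rbridge_id"])  # 2 (lowest configured priority)
```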
Fabric Shortest Path First (FSPF)
FSPF is the routing protocol used in VCS to create the fabric route topology for TRILL forwarding. FSPF is a well-known link-state routing protocol used in FC storage area network (SAN) fabrics. Because the VCS fabric predates the standardization of the IETF TRILL fabric, it uses FSPF instead of IS-IS, which the TRILL standard specifies.
The use of a link-state routing protocol in VCS enables a highly scalable fabric, avoids the blocked links found in STP-based Layer 2 networks, and enables equal-cost multipath (ECMP) forwarding to a destination.
After ISL and ISL trunk formation during VCS fabric bring-up, FSPF runs to create the fabric topology. FSPF is a link-state routing protocol like OSPF or IS-IS and has the following salient features:
• Neighborship is formed and maintained by FSPF hello packets.
• Maintains one neighborship per ISL.
• The cost to reach a given RBridge is the cumulative cost of all the links to reach that RBridge.
• Supports only point-to-point networks.
• Can have only one area.
• No stub areas or summarization.
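To make the shortest-path behavior concrete, the following conceptual sketch computes least-cost routes through a small two-leaf, two-spine fabric, using the fixed link cost of 500 described later in the Traffic Load Balancing section for interfaces of 10 Gbps or more. It is a generic shortest-path computation for illustration, not the FSPF implementation.

```python
# Conceptual shortest-path computation over the fabric topology (not FSPF itself).
import heapq

LINK_COST = 500  # fixed cost assumed for every ISL of 10 Gbps or more

def shortest_costs(adjacency, source):
    """adjacency: {rbridge: [neighbor, ...]}; returns the cost to reach each RBridge."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        cost, node = heapq.heappop(heap)
        if cost > dist.get(node, float("inf")):
            continue
        for nbr in adjacency[node]:
            new_cost = cost + LINK_COST
            if new_cost < dist.get(nbr, float("inf")):
                dist[nbr] = new_cost
                heapq.heappush(heap, (new_cost, nbr))
    return dist

# Two-leaf, two-spine Clos: leaf1 reaches leaf2 over either spine at equal cost (ECMP).
adjacency = {
    "leaf1": ["spine1", "spine2"], "leaf2": ["spine1", "spine2"],
    "spine1": ["leaf1", "leaf2"], "spine2": ["leaf1", "leaf2"],
}
print(shortest_costs(adjacency, "leaf1"))
# {'leaf1': 0, 'spine1': 500, 'spine2': 500, 'leaf2': 1000}
```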
Edge Ports
Edge ports attach switches, servers, or routers to the VCS fabric over standard IEEE 802.1Q Layer 2 and Layer 3 ports.
Edge ports support industry-standard Link Aggregation Groups (LAGs) via the Link Aggregation Control Protocol (LACP). Multi-Chassis Trunking (MCT) is the industry-accepted solution for providing redundancy and avoiding spanning-tree blocked links when connecting servers to multiple upstream switches. LAG-based MCT is a special case of the LAG covered in IEEE 802.3ad, in which one end of a LAG can terminate on two separate switches.
Virtual LAG or vLAG
Virtual LAG (vLAG) is the MCT solution included in Brocade VCS Fabric technology; it extends the concept of a LAG to include edge ports on multiple VCS fabric switches.
vLAGs can be formed in three different scenarios; the prerequisite is that the LAG control group must be the same on all the RBridges in the VCS fabric:
• A server multi-homed to multiple RBridges in a VCS fabric.
• A classical Ethernet switch multi-homed to RBridges in a VCS fabric.
• Two VCS fabrics connected to each other; because each VCS fabric behaves like a single switching domain, vLAGs are formed across the LAGs.
FIGURE 2 vLAG Connectivity Options
Using a vLAG, a single server, classical Ethernet switch, or another VCS fabric connects to multiple RBridges in a VCS fabric, and the fabric acts as a single node toward the server, CE switch, or other device.
• When a LAG spans multiple switches in a VCS fabric, it is automatically detected and becomes a vLAG.
• The port-channel numbers need to be the same across the switches for the vLAG to form.
• LACP is not required but is recommended.
• When LACP is used, the LACP PDUs use a virtual RBridge MAC so that the fabric appears as a single node to the other end.
• vLAG is comparable to Cisco's vPC technology but does not need a peer link or keep-alive mechanism, as vPC does, for active-active forwarding.
• vLAGs can span up to 8 VCS nodes and 64 links, providing high node and link redundancy.
• Only ports of the same speed are aggregated.
• Edge ports in a vLAG support both classic Ethernet and DCB extensions. Therefore, any edge port can forward IP, IP storage, and FCoE traffic over a vLAG.
vLAG Operation
LACP System-ID: For a vLAG with LACP to be active across multiple RBridges, a common LACP system ID is used in the LACP PDUs. Each VCS fabric has a common VCS bridge MAC address starting with 01:E0:52, with the VCS ID appended to make the MAC unique. This VCS bridge MAC is used as the LACP system ID in the PDUs.
Virtual RBridge-ID: When transmitting packets received over a vLAG, the source RBridge ID in the TRILL frames is set to a virtual RBridge ID. The virtual RBridge ID is constructed by adding the vLAG's port-channel ID to a base of 0x400, so a vLAG for port-channel 101 has a virtual RBridge ID of 0x465. By using the virtual RBridge ID in the TRILL frames, the member RBridges of a vLAG can efficiently perform source-port checks for loop detection and MAC moves based on the port-channel ID embedded in the virtual RBridge ID.
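The two identifiers above lend themselves to a short numeric sketch. The virtual RBridge ID arithmetic (base 0x400 plus the port-channel ID) follows the example in the text; the exact byte layout of the VCS bridge MAC is an assumption for illustration.

```python
# Illustrative identifier construction for vLAG operation.

def lacp_system_id(vcs_id):
    # VCS bridge MAC: 01:E0:52 prefix with the VCS ID appended (assumed byte layout).
    return "01:e0:52:00:{:02x}:{:02x}".format((vcs_id >> 8) & 0xFF, vcs_id & 0xFF)

def virtual_rbridge_id(port_channel_id):
    return 0x400 + port_channel_id

print(lacp_system_id(10))            # 01:e0:52:00:00:0a
print(hex(virtual_rbridge_id(101)))  # 0x465, matching the port-channel 101 example
```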
Primary Link: Another vital component of vLAG operation is determining the primary link, which is the only vLAG link through which BUM traffic is transmitted. BUM traffic is transmitted out of an edge port only if it is a normal non-vLAG port or the primary link of a vLAG; without this check, BUM traffic, being multi-destination traffic, would result in duplicate packets at the receiver. The state machine for determining the primary link is Brocade specific. This protocol also ensures that only one of the links on an RBridge becomes the BUM transmitter, and it is responsible for electing a new primary link on link failures, RBridge failures, or other failure events.
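A small sketch of the rule above, with assumed data structures: a BUM frame is copied out of an edge port only if the port is a normal non-vLAG port or the primary link of its vLAG, so each device behind a vLAG receives exactly one copy.

```python
# Illustrative BUM egress filtering on edge ports (data structures are assumptions).

def bum_egress_ports(edge_ports):
    out = []
    for port in edge_ports:
        if port.get("vlag") is None or port.get("is_primary_link"):
            out.append(port["name"])
    return out

edge_ports = [
    {"name": "Te 1/0/10", "vlag": None},                          # plain edge port
    {"name": "Te 1/0/11", "vlag": 101, "is_primary_link": True},  # vLAG 101 primary link
    {"name": "Te 2/0/11", "vlag": 101, "is_primary_link": False}, # vLAG 101 member
]
print(bum_egress_ports(edge_ports))  # ['Te 1/0/10', 'Te 1/0/11']
```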
Master RBridge: The node responsible for the primary link is elected as the master RBridge. The master RBridge is also responsible for MAC address age-out. MAC addresses learned in the VCS fabric are distributed through the Ethernet Name Service (eNS).
Traffic Load Balancing
Traffic in a VCS fabric is load-balanced at multiple levels in the network: ISL trunks provide per-packet load balancing, TRILL-based load balancing applies at Layer 2, and regular IP route-based load balancing happens over IP ECMP paths at Layer 3.
ISL Trunk: When packets traverse a Brocade ISL trunk, proprietary protocols ensure that no hashing is used and that traffic is evenly distributed, per packet, across all the links in the trunk. This provides very high link utilization and an even distribution of traffic across the ISL trunk, compared to a traditional LAG, which uses frame-header-based hashing to distribute traffic.
Layer 2: VCS builds a Layer 2 routed topology using the link-state routing protocol FSPF, and with load-balancing support in FSPF, load sharing for Layer 2 traffic is achieved in the VCS fabric.
When doing TRILL forwarding, if a neighbor switch is reachable via several interfaces with different bandwidths, all of them are treated as
equal-cost paths. Any interface with a bandwidth equal to or greater than 10 Gbps has a predetermined link cost of 500. Thus, a 10
Gbps interface has the same link cost as a 40 Gbps interface.
Simplicity is a key value of Brocade VCS Fabric technology, so an implementation was chosen that does not consider the bandwidth of the interface when selecting equal-cost paths. The distributed control plane is, however, aware of the bandwidth of each interface (ISL or Brocade ISL trunk). Given an ECMP route to a destination RBridge, it can load-balance the traffic across the next-hop ECMP interfaces according to the individual interface bandwidths, avoiding overloading lower-bandwidth interfaces. So, effectively, equal-cost paths for TRILL forwarding between RBridges are determined based on hop count, and traffic is distributed between the paths based on link bandwidths.
This maximizes the utilization of available links in the network. In the traditional approach, a 40-Gbps interface, which has a lower cost than the 10-Gbps paths, is used as the only route to reach the destination; in effect, the lower-speed 10 GbE interfaces are not utilized, resulting in lower overall bandwidth. With VCS Fabric technology, lower-bandwidth interfaces can also be used, improving network utilization and efficiency.
While traffic is proportionately distributed among ECMP paths, VCS forwarding still uses the regular hash algorithms to select a link for a given flow.
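The bandwidth-proportional distribution can be pictured with a simple sketch, using assumed interface names and a flow count: equal-hop-count next hops toward a destination RBridge carry traffic in proportion to their bandwidth.

```python
# Illustrative bandwidth-weighted distribution across equal-cost next-hop interfaces.

def distribute_flows(interfaces, total_flows):
    """interfaces: {name: bandwidth_gbps}; returns the approximate flows per interface."""
    total_bw = sum(interfaces.values())
    return {name: round(total_flows * bw / total_bw) for name, bw in interfaces.items()}

ecmp_links = {"40G ISL": 40, "10G ISL a": 10, "10G ISL b": 10}
print(distribute_flows(ecmp_links, 600))
# {'40G ISL': 400, '10G ISL a': 100, '10G ISL b': 100}
```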
Layer 3: Layer 3 traffic in the VCS fabric is routed over VE interfaces as on a regular router, and traditional IP-based ECMP hashing applies to this traffic. The regular IP ECMP techniques of the BGP and IGP routing protocols are available.
Data Frames Inside a VCS Fabric
An RBridge in a VCS fabric receives a classical Ethernet frame on an edge port and encapsulates it into a TRILL frame based on the destination MAC lookup. The encapsulated TRILL frame is forwarded out of a fabric port into the VCS fabric.
At each RBridge in the VCS fabric, the traffic undergoes hop-by-hop forwarding based on the egress DMAC and RBridge ID. At the destination RBridge, the TRILL headers are stripped, and the frame undergoes a traditional Layer 2 or Layer 3 lookup before being forwarded out of an edge port.
FIGURE 3 TRILL Data Format
CE Frame
The CE frame in its entirety, without modification, becomes the payload of the TRILL frame. An exception is the Virtual Fabric (fine-grained TRILL) scenario, where the inner 802.1Q tag in the CE frame may be modified; this is discussed in later sections.
Outer Ethernet Header
• Outer Destination MAC Address - Specifies the next hop destination RBridge.
• Outer Source MAC Address - Specifies the transmitting RBridge.
• Outer 802.1Q VLAN Tag - Depends on the core port configuration.
• EtherType - 0x22F3 assigned by IEEE for TRILL.
TRILL Header Fields
• Version - If the version is not recognized, the packet is silently dropped at the ingress of the fabric.
• Reserved - Not currently in use; reserved for future expansion. Must be set to 0.
• Multi-Destination
– Set to 1 for BUM frames (Broadcast, Unknown-Unicast, Multicast).
– The frame is to be delivered to multiple destinations via a distribution tree.
– The egress RBridge nickname field specifies the distribution tree to use.
• Options Length - Currently not used
• Hop Count
– TTL to avoid infinite packet loop.
– TTL decremented at every hop.
– If TTL=0, an RBridge will drop the frame.
– For unicast frames, the ingress RBridge should set the TTL to a value at least equal to the number of hops it expects to traverse to reach the egress RBridge.
– For multi-destination frames, the ingress RBridge should set the TTL to at least the number of hops to reach the most
distant RBridge. Multi-destination frames are most susceptible to loops and hence have strict RPF checks.
• Egress RB Nickname (RB ID)
– If the multi-destination bit is set to 0, the egress RB nickname is the egress RBridge ID.
– If the multi-destination bit is set to 1, the egress RB nickname is the RBridge ID of the root of the distribution tree.
• Ingress RB Nickname (RB ID) - Set to the nickname/ID of the ingress RBridge of the fabric.
• Options - Present if Op-Length is non-zero (see the field-layout sketch after this list).
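As a conceptual aid, the sketch below packs and unpacks the fixed 6-byte TRILL header using the standard RFC 6325 bit layout (2-bit version, 2 reserved bits, the multi-destination bit, 5-bit options length, 6-bit hop count, then the 16-bit egress and ingress nicknames). It is a minimal Python illustration, not an implementation used by Network OS.

import struct

def pack_trill_header(version, multi_dest, op_len, hop_count, egress_nick, ingress_nick):
    """Build the 6-byte fixed TRILL header (options omitted)."""
    first16 = (version & 0x3) << 14        # 2-bit version
    # 2 reserved bits (must be 0) occupy the next two bit positions
    first16 |= (multi_dest & 0x1) << 11    # multi-destination bit
    first16 |= (op_len & 0x1F) << 6        # options length (in 4-byte units)
    first16 |= hop_count & 0x3F            # 6-bit hop count (TTL)
    return struct.pack("!HHH", first16, egress_nick, ingress_nick)

def unpack_trill_header(data):
    first16, egress_nick, ingress_nick = struct.unpack("!HHH", data[:6])
    return {
        "version": first16 >> 14,
        "multi_dest": (first16 >> 11) & 0x1,
        "op_len": (first16 >> 6) & 0x1F,
        "hop_count": first16 & 0x3F,
        "egress_nickname": egress_nick,
        "ingress_nickname": ingress_nick,
    }

hdr = pack_trill_header(0, 0, 0, 12, egress_nick=101, ingress_nick=103)
print(unpack_trill_header(hdr))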
VCS Traffic Forwarding
On the physical wire, the frame format is TRILL when forwarding between VCS nodes. After fabric formation and edge-port definition, traffic forwarding in a VCS fabric involves MAC learning, handling of unicast and multi-destination traffic, first-hop redundancy services, and ECMP handling.
MAC Learning
A Brocade VCS node performs hardware source-MAC address learning at the edge ports, similar to any standard IEEE 802.1Q bridge. An edge RBridge learns a MAC address, its VLAN, and the interface on which the MAC was seen. This learned MAC information is distributed throughout the VCS fabric, so each node in the fabric knows to which RBridge a frame for a particular MAC should be forwarded. The frame is forwarded into the fabric on a fabric port with TRILL encapsulation, either as known unicast or as unknown (multi-destination) traffic, depending on whether the destination address in the frame is known.
eNS
The VCS distributed control plane synchronizes aging and learning states across all fabric switches via the Ethernet Name Service (eNS), a MAC distribution service. By distributing learned MAC information, eNS avoids flooding in the fabric. eNS also synchronizes MACs across MCT/vLAG pairs and distributes multicast information learned at the edge ports through IGMP snooping.
eNS is also responsible for MAC aging in the VCS fabric: when a MAC ages out on the RBridge where it was learned, eNS ages out that MAC from the other RBridges in the fabric.
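The following minimal Python sketch models the eNS behavior described above: a MAC learned on one RBridge's edge port is distributed to every other RBridge, and aging on the owning RBridge withdraws the entry fabric-wide. The RBridge IDs, MAC addresses, and table layout are hypothetical.

# Minimal model of eNS behavior in a VCS fabric.
class Fabric:
    def __init__(self, rbridge_ids):
        # Per-RBridge Layer 2 table: (mac, vlan) -> owning RBridge / local port
        self.l2_tables = {rb: {} for rb in rbridge_ids}

    def learn(self, rbridge, mac, vlan, edge_port):
        self.l2_tables[rbridge][(mac, vlan)] = ("local", edge_port)
        # eNS distributes the learned MAC so every node knows the owning RBridge.
        for rb, table in self.l2_tables.items():
            if rb != rbridge:
                table[(mac, vlan)] = ("remote", rbridge)

    def age_out(self, rbridge, mac, vlan):
        # Aging happens on the RBridge that learned the MAC; eNS then removes
        # the entry from all other RBridges as well.
        for table in self.l2_tables.values():
            table.pop((mac, vlan), None)

fabric = Fabric([101, 102, 103, 201])
fabric.learn(103, "0050.56aa.0001", 101, "Te 103/0/1")
print(fabric.l2_tables[201][("0050.56aa.0001", 101)])  # ('remote', 103)
fabric.age_out(103, "0050.56aa.0001", 101)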
Layer 2 Unicast Traffic Forwarding
The source MAC learned on an edge RBridge is distributed by eNS to every node in the fabric. When a remote RBridge needs to forward traffic to this MAC, it looks up its Layer 2 forwarding table to determine which RBridge the frame should be sent to, and based on this information the frame is TRILL-forwarded to the destination RBridge.
Figure 4 shows Layer 2 forwarding within a VLAN between two hosts across the VCS fabric.
• Traffic is destined to 10.0.1.1 from 10.0.1.2 over VLAN 101; this is Layer 2 forwarding within a VLAN.
• It is assumed that ARP is resolved and MAC learning has already happened for the hosts.
• On RBridge 103, traffic received on the edge port undergoes a MAC lookup.
• The MAC points to a remote RBridge port in the VCS fabric, so the traffic undergoes TRILL encapsulation.
• The TRILL-encapsulated packet follows the link-state routed fabric topology created by FSPF (a hop-by-hop forwarding sketch follows Figure 4).
FIGURE 4 Layer 2 Unicast Forwarding
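The hop-by-hop handling inside the fabric can be summarized with the short sketch below: a transit RBridge decrements the hop count and forwards toward the egress nickname, while the egress RBridge decapsulates and forwards the CE frame out of its edge port. This is a conceptual illustration with hypothetical field and function names, not Brocade code.

# Conceptual hop-by-hop handling of a TRILL frame at an RBridge.
def process_at_rbridge(frame, my_rbridge_id, next_hop_lookup):
    if frame["hop_count"] == 0:
        return ("drop", None)                                    # TTL exhausted
    if frame["egress_nickname"] == my_rbridge_id:
        return ("decap_and_edge_forward", frame["ce_payload"])   # egress RBridge
    # Transit: decrement TTL and forward toward the egress RBridge.
    frame["hop_count"] -= 1
    return ("forward", next_hop_lookup(frame["egress_nickname"]))

frame = {"egress_nickname": 101, "hop_count": 12, "ce_payload": b"..."}
print(process_at_rbridge(frame, 201, lambda nick: "ISL toward RB-101"))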
Layer 3 Unicast Forwarding
Brocade Network OS has Layer 3 routing enabled by default. Apart from enabling IP on Ethernet interfaces for routing, Brocade provides routing capability for VLAN networks through VE interfaces.
VCS fabrics support VRF-lite and routing protocols such as OSPF and BGP on VE and physical interfaces. Routing and IP configuration in VCS fabrics is done in RBridge configuration mode, so to enable IP configuration such as VRF-lite, IP addresses, and routing on an RBridge, the user enters the RBridge mode for that node and configures it there. In logical-chassis mode, all of this configuration is done from the primary RBridge.
VE Interfaces
Brocade uses VE interfaces to provide routed functionality across VLANs or out of the VCS fabric. A VE interface is similar to the switched VLAN interface (SVI) from Cisco. A VE interface at Layer 3 maps to the corresponding VLAN at Layer 2; for example, VE interface 500 maps to
VLAN 500. Much like an SVI, a VE interface is a Layer 3 interface on which an IP address is configured, and it can be enabled with FHRPs and routing protocols to provide the L2/L3 boundary and other Layer 3 router functionality.
First Hop Redundancy Protocols
First-hop redundancy protocols protect the default gateway of a subnet by allowing multiple routers to respond on behalf of the default-gateway IP. Brocade supports the following FHRPs:
• VRRP
• VRRP-E
• FVG
VRRP is standards-based, while VRRP-E and FVG are supported only across Brocade devices.
VRRP Extended (VRRP-E)
IETF-standard VRRP eliminates a single point of failure in a static-route environment. It is an election protocol that dynamically assigns the responsibilities of a virtual router to one of the VRRP-enabled routers on the LAN. VRRP thus provides a higher-availability default path without requiring configuration of dynamic routing or router discovery protocols on every end host.
VRRP-E (VRRP Extended) is the Brocade-proprietary extension to the standard VRRP protocol; it does not interoperate with VRRP. VRRP-E configuration and operation are very similar to VRRP: a master and standby election takes place, and the master is responsible for ARP replies and propagation of control packets.
From a forwarding perspective, VRRP-E provides active-active forwarding through the "short-path forwarding" feature, whereas with VRRP the master is the only node responsible for routing.
VRRP-E also supports up to 8 active/active Layer 3 gateways and, in conjunction with short-path forwarding (VRRP-E SPF), yields higher redundancy, scalability, and better bandwidth utilization in the fabric. VRRP-E is supported in both the VRRPv2 and VRRPv3 protocol specifications and supports IPv4 and IPv6. VRRP-E can be configured only on VE interfaces.
FIGURE 5 Layer-3 Forwarding with VRRP-E
Figure 5 shows the efficient forwarding behavior of VRRP-E in a Clos design.
• Spines are configured with VE interfaces and VRRP-E virtual IP 10.1.1.1 for subnet 10.1.1.0/24.
• Each virtual IP has an associated virtual MAC (02e0.5200.00xx).
• Virtual MACs are distributed to every RBridge in the fabric by VCS.
• VMACs are installed with special MAC programming to load-balance traffic to all VRRP-E routers.
• As shown, all VRRP-E nodes configured for the virtual IP route traffic for subnet 10.1.1.0/24.
• VRRP-E performs active/active forwarding across master and standby routers, while in standard VRRP only the master node routes traffic.
• VRRP-E thus provides efficient load balancing and bandwidth utilization in the fabric (see the sketch after this list).
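The sketch below illustrates the short-path-forwarding idea: the virtual MAC of a VRRP-E group is treated as reachable through every group member, so traffic to the gateway can be spread across all members instead of being pinned to the master. The VMAC derivation simply follows the 02e0.5200.00xx pattern shown in Figure 5 and, like the RBridge IDs, should be read as illustrative rather than an exact specification.

# Sketch: the fabric treats a VRRP-E virtual MAC as reachable via every
# group member, so any member routes traffic (short-path forwarding).
def vrrp_e_vmac(vrid):
    return "02e0.5200.00%02x" % vrid

vrrp_e_members = {vrrp_e_vmac(1): [201, 202, 203, 204]}  # spines in the group

def next_hop_for_gateway(vmac, flow_hash):
    members = vrrp_e_members[vmac]
    # With short-path forwarding, traffic is spread across all members;
    # with standard VRRP, only the master would be returned here.
    return members[flow_hash % len(members)]

print(vrrp_e_vmac(1), next_hop_for_gateway(vrrp_e_vmac(1), flow_hash=7))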
Fabric Virtual Gateway
Fabric Virtual Gateway (FVG) is a Brocade-proprietary implementation of a router redundancy protocol and works only in a VCS fabric. FVG is a highly scalable FHRP solution compared to VRRP or VRRP-E and has no control-plane PDU exchange between the nodes. Instead, it leverages the Brocade-proprietary VCS Fabric services to exchange Fabric-Virtual-Gateway group information among the participating nodes.
The Fabric-Virtual-Gateway feature allows multiple RBridges in a VCS Fabric to form a group of gateway routers and share the same
gateway IP address for a given subnet like VRRP or VRRP-E.
• FVG is configured under the global VE interface mode.
• Configuration primarily involves configuring a Gateway-IP under the VE interface and the participating RBridges.
• A gateway MAC (02e0.5200.01ff) is allocated by default per VLAN/VE to reach the gateway IP.
• VCS services distribute the gateway MAC information in the fabric.
• Nodes not participating in FVG install the gateway MAC, and the special gateway MAC programming allows load balancing of traffic to FVG members.
• There is no need for individual IP addresses under the RBridge VE interfaces, as is required with VRRP.
• FVG does not have the concept of master and standby nodes.
• However, one of the nodes is elected as the ARP responder, comparable to the master node in VRRP-E.
• The ARP responder answers ARP requests for the gateway IP.
• All FVG member RBridges perform active/active forwarding, like VRRP-E.
• Short-path forwarding is enabled by default.
• When SPF is disabled, the ARP responder is responsible for traffic forwarding.
• Forwarding behavior for FVG is similar to that shown in the VRRP-E diagram.
Preventing Layer 2 flooding for Router MAC Address
Every single router MAC address associated with a Layer 3 interface is synced throughout the VCS fabric. The reason for this syncing is
to ensure that every single router MAC address is treated as a known MAC address within the fabric. This ensures that when any packet
enters the VCS fabric destined towards a router, it is never flooded and is always unicast to its correct destination.
Similarly, when routing is disabled, the router sends a message to withdraw that particular router MAC address from the VCS fabric. This behavior prevents the periodic Layer 2 flooding caused by the router MAC address aging out, and the administrator is no longer required to ensure that the Layer 2 aging time is greater than the ARP aging interval.
Layer 3 Inter-VLAN Packet Forwarding
Based on the VCS fabric constructs discussed so far, this section walks through an inter-subnet routing scenario. Figure 6 shows Layer 3 forwarding in a VCS fabric.
• Traffic is routed between subnets from host IP 20.0.1.100 to 10.0.1.100, that is, from VLAN 201 to VLAN 101.
• VE interfaces 101 and 201 are configured on the spines with VRRP-E gateway IPs 10.0.1.1 and 20.0.1.1, respectively.
• It is assumed that the hosts have not learned the remote MACs initially.
• When host 20.0.1.100 wants to talk to 10.0.1.100, the host ARPs for the gateway 20.0.1.1.
• Host 20.0.1.100 resolves the gateway MAC for 20.0.1.1 via ARP and forwards the traffic in VLAN 201.
• When VLAN 201 traffic is received on the spine, and since the spine has VEs for 101 and 201, it ARPs for 10.0.1.100.
• The spine originates the ARP broadcast (BUM) for 10.0.1.100, which is received on all nodes, but only host 10.0.1.100 replies, and thus the spine learns the DMAC of 10.0.1.100.
• BUM traffic forwarding is explained in the next section, "Multi-Destination Traffic."
• Once the spine learns the DMAC of 10.0.1.100, VLAN 201 traffic received on the spine is routed to 10.0.1.100 on VLAN 101.
• Traffic between leaf and spine nodes is TRILL-forwarded, while at the edge it is classical Ethernet.
• Figure 6 illustrates this behavior: RBridge 103 receives a packet with SIP 20.0.1.100, DIP 10.0.1.100, and a DMAC equal to the VRRP-E MAC of VLAN 201. This CE frame is TRILL-encapsulated on RBridge 103 and forwarded based on the Layer 2 table information of VLAN 201.
• The packet is TRILL-forwarded to one of the spines, which in the diagram is RBridge 201. At RBridge 201, the traffic has to be routed from VLAN 201 to the VLAN 101 subnet, so ARP is resolved for DIP 10.0.1.100 at the spine.
• The traffic is then routed on the spine, TRILL-encapsulated for VLAN 101, and forwarded to RBridge 101.
• At RBridge 101, the frame is TRILL-decapsulated and forwarded out as a regular CE frame to destination 10.0.1.100.
FIGURE 6 Layer-3 Inter-subnet Forwarding on VCS Fabric
Multi-Destination Traffic
Broadcast, unknown-unicast, and multicast (BUM) traffic is multi-destination traffic that is flooded to all nodes in the fabric. To avoid duplicate traffic and loops for BUM traffic, a multi-destination tree is formed, rooted at the multicast root RBridge. The multi-destination tree includes all RBridges in the VCS.
VCS uses FSPF to calculate a loop-free multicast tree rooted at the multicast root RBridge. The multicast root RBridge election is based on the highest configured multicast root priority among RBridges, with the RBridge ID used as a tiebreaker. An alternate multicast root is also preselected to account for primary root failure.
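A minimal sketch of that election, assuming hypothetical priorities and using the RBridge ID only as a tiebreaker (here the lower ID wins the tie, purely as an illustration), is shown below.

# Sketch of the multicast root election: highest configured multicast root
# priority wins; the RBridge ID breaks ties. Values are hypothetical.
rbridges = [
    {"rbridge_id": 101, "mcast_priority": 1},
    {"rbridge_id": 202, "mcast_priority": 10},
    {"rbridge_id": 201, "mcast_priority": 10},
]

def elect_roots(nodes):
    ordered = sorted(nodes, key=lambda n: (-n["mcast_priority"], n["rbridge_id"]))
    # The first entry is the primary root; the second is preselected as the
    # alternate root in case the primary fails.
    return ordered[0]["rbridge_id"], ordered[1]["rbridge_id"]

print(elect_roots(rbridges))   # (201, 202)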
Figure 7 shows BUM traffic flow over the multi-destination tree rooted at the multicast root RBridge.
• Traffic from the source is received on the edge port of RBridge 102.
• The traffic is forwarded to the multicast root RBridge 201 on one of the links.
• When multiple links exist, only one of the links is selected to forward toward the root.
• RBridges 202, 203, and 204 do not have a direct connection to the root.
• Hence FSPF enables links on RBridge 101 to forward multicast traffic to RBridges 202, 203, and 204.
• When BUM traffic is received on vLAG pairs 103 and 104 or 105 and 106, only the primary link of the vLAG forwards BUM traffic.
FIGURE 7 Multicast Tree for BUM Traffic
IGMP Snooping
Layer 2 networks implement IGMP snooping to avoid multicast flooding. In a VCS fabric, IGMP snooping happens at the edge port, so the RBridge knows about the interested multicast receivers behind its edge ports. When multicast traffic is received from the VCS fabric on fabric ports, the RBridge prunes traffic out of the edge ports based on the snooping database. However, when multicast traffic is received on an edge port, it is flooded to all other RBridges in the VCS fabric. If IGMP snooping is disabled, multicast traffic from the VCS fabric is flooded on edge ports as well.
eNS is used to distribute the IGMP snooping database to all RBridges. This helps in the vLAG scenario, where an IGMP snooping entry could be learned on any of the vLAG member RBridges but multicast traffic flows out of only the primary link.
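The pruning decision can be summarized with the following sketch, in which the snooping database, group addresses, and port names are hypothetical.

# Sketch of edge-port pruning based on an IGMP snooping database.
snooping_db = {
    ("239.1.1.1", 101): {"Te 103/0/10"},   # (group, VLAN) -> interested edge ports
}
edge_ports = {"Te 103/0/10", "Te 103/0/11", "Te 103/0/12"}

def egress_edge_ports(group, vlan, snooping_enabled=True):
    if not snooping_enabled:
        return edge_ports                      # flood to every edge port
    # Forward only to edge ports with interested receivers for this group.
    return snooping_db.get((group, vlan), set())

print(egress_edge_ports("239.1.1.1", 101))     # pruned set
print(egress_edge_ports("239.2.2.2", 101))     # no receivers -> nothing sent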
VCS Services
This section covers some of the VCS services: AMPP, VM-aware network automation, Virtual Fabric, Virtual Fabric extension, and Auto-fabric.
Automatic Migration of Port Profile
In a virtualized server environment like VMware, virtual machines (VMs) are provided switching connectivity through a Virtual Ethernet Bridge (VEB), which in the VMware context is called a vSwitch. A VEB provides Layer 2 switch functionality and inter-VM communication, albeit in software.
A VEB port has a set of functions defined through a port profile, such as:
• The types of frames that are allowed on a port (whether all frames, only VLAN-tagged frames, or untagged frames)
• The VLAN identifiers that are allowed to be used on egress
• Rate-limiting attributes (such as port or access control-based rate limiting)
FIGURE 8 VEB on Virtualized Server Environment
Brocade Ethernet switches, through the port-profile feature, emulate the VEB port profile of the hypervisor. In addition, Brocade switches have much more advanced policy controls that can be applied through port profiles.
A port profile defines the VLAN, QoS, and security configuration that can be applied to multiple physical ports on the switch. Port profiles on Brocade VDX switches provide:
• Port profile for VLAN and quality of service (QoS) policy profiles.
• Port profile for FCoE policy profile.
• Port profile with FCoE, VLAN, and QoS policy profiles.
• In addition, any of the above combinations can be mixed with a security policy profile.
A port profile does not contain some of the interface configuration attributes, including LLDP, SPAN, or LAG. These are associated only
with the physical interface.
When workloads or virtual machines move, the hypervisor ensures that the associated VEB port profile moves with them. On Brocade switches, this port-profile move driven by VM movement is achieved using the Automatic Migration of Port Profile (AMPP) feature. In short, AMPP provides fabric-wide configuration of Ethernet policies and enables network-level features to support VM mobility.
In brief, AMPP configuration and operation involve the following:
• Create a port profile and define the VLAN, QoS, and security configuration.
• Activate the port profile.
• Associate the VM MACs with the port profile.
• Enable port-profile mode on the VM-connected ports.
• After the profiles are configured, when a VM MAC is detected on a port-profile-enabled port, the corresponding port profile is downloaded onto that port.
• MAC detection is essentially source-MAC learning on the port.
• When a VM move or MAC move happens, the port profile migrates.
Since VCS in logical-chassis mode is a distributed fabric, AMPP configuration is done on the principal switch, and the AMPP profile is activated on all RBridges in the fabric.
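A minimal sketch of the AMPP behavior described above is shown below: when a MAC associated with a port profile is learned on a profile-enabled port, that profile is applied to the port, and the same association follows the MAC when it appears elsewhere after a VM move. Profile contents, MACs, and port names are hypothetical.

# Sketch of AMPP: apply a port profile when one of its associated MACs is
# detected on a profile-enabled port.
port_profiles = {
    "vm-profile-1": {"vlans": [101, 201], "qos": "cos-3", "macs": {"0050.56aa.0001"}},
}
applied = {}  # port -> set of active profiles

def on_mac_detected(port, mac, profile_mode_enabled=True):
    if not profile_mode_enabled:
        return
    for name, profile in port_profiles.items():
        if mac in profile["macs"]:
            applied.setdefault(port, set()).add(name)   # download profile to port

# VM MAC appears on a leaf edge port; after a VM move it appears elsewhere.
on_mac_detected("Te 103/0/5", "0050.56aa.0001")
on_mac_detected("Te 105/0/7", "0050.56aa.0001")   # profile follows the VM
print(applied)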
AMPP can operate in two scenarios:
• Manual configuration, as described above. This is hypervisor-agnostic, and AMPP activates the profiles on the corresponding interfaces based on MAC detection.
• AMPP integrated with vCenter, that is, VM-aware network automation.
VM-Aware Network Automation
With the VM-aware network automation feature, a Brocade VDX switch can dynamically discover virtual assets and provision physical ports based on this discovery.
Configuration and operation of this feature involves the following:
• The switch is preconfigured with the relevant vCenter that exists in its environment.
• The discovery process entails making appropriate queries to the vCenter.
• After discovery, the switch/Brocade Network OS enters the port profile creation phase.
• It creates port profiles on the switch based on discovered standard or distributed vSwitch port groups.
• The operation creates port profiles and associated VLANs in the running configuration of the switch.
• MAC-address associations for each port profile are also configured based on vCenter information.
• Ports, LAGs, and vLAGs are put into port-profile mode automatically based on the ESX connectivity.
• When a virtual machine MAC is detected behind an edge port, the corresponding port profile is activated.
Virtual Fabric
Virtual Fabric is Brocade's implementation of the TRILL fine-grained labeling standard in a VCS fabric. Traditional TRILL supports 802.1Q VLAN IDs, which in today's virtualized and highly scalable data centers pose operational issues such as scaling beyond 4K VLANs, providing L2 adjacency between different server-side VLANs, or reusing the same server-side VLAN for different subnets.
The Virtual Fabric feature solves these problems by mapping the customer-tag or server-side VLANs, which carry a 12-bit VLAN value, to a 24-bit VLAN value. The 24-bit VLAN space provided by Virtual Fabric theoretically allows 16 million broadcast domains, although the current Virtual Fabric implementation allows only 8K in total.
In essence, Virtual Fabric can provide per-port VLAN-scope behavior. With VF classification of a customer tag to a VF VLAN, the port becomes part of the VF-VLAN broadcast domain or subnet in the VCS fabric.
From a forwarding perspective, the VF or TRILL fine-grained label is achieved by inserting the 24-bit VLAN into the inner payload.
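The per-port classification can be illustrated with the short sketch below, in which the edge ports, customer tags, and VF VLAN values are hypothetical.

# Sketch of per-port classification of a 12-bit customer tag (802.1Q VLAN)
# to a 24-bit Virtual Fabric VLAN.
vf_classification = {
    ("Te 103/0/1", 10): 5010,   # (edge port, customer tag) -> VF VLAN
    ("Te 104/0/1", 10): 6010,   # same customer tag, different tenant/VF VLAN
}

def classify(port, ctag):
    # A hit rewrites the frame's broadcast domain to the VF VLAN carried in
    # the fine-grained label; a miss leaves the ordinary 802.1Q VLAN.
    return vf_classification.get((port, ctag), ctag)

print(classify("Te 103/0/1", 10))   # 5010
print(classify("Te 104/0/1", 10))   # 6010 -- isolated from the first tenant
print(classify("Te 105/0/1", 10))   # 10   -- unclassified, regular VLAN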
Brocade provides two types of Virtual Fabrics.
• Service Virtual Fabric (SVF)
• Transport Virtual Fabric (TVF)
Service Virtual Fabric
• SVF provides a one-to-one mapping of a customer tag to a VF VLAN (VFID).
• The VE associated with a VF VLAN can be configured with L2/L3 features.
• The VF VLAN supports all L2/L3 features like a regular 802.1Q VLAN.
• Through SVF, multitenancy on the VCS fabric can be extended beyond the 4K VLAN space.
Transport Virtual Fabric
• TVF is a VLAN aggregation feature, wherein multiple customer VLANs are mapped to a single VF-VLAN.
• The VF-VLAN provides L2 functionality and cannot be enabled for L3.
• VEs cannot be defined for TVF VLANs.
SVF functionality in service-provider and virtualized environments is depicted below.
When a cloud service provider provisions the virtual DC by replicating server rack PODs across server ports, different tenant domains
exist with overlapping 802.1Q VLANs at the server ports. Achieving isolation of tenant domains in this scenario is not possible with the
regular 802.1Q VLANs, and virtual-fabric technology can easily solve this issue.
The tenant domain isolation is achieved by mapping the 802.1Q VLAN at each VDX switch interface to a different VF VLAN. This
capability allows the VCS fabric to support more than the 4K VLANs permitted by the conventional 802.1Q address space.
FIGURE 9 SVF in a Cloud Service Provider Environment
Another use case for SVF is in virtualized environments; the diagram below illustrates the Service VF deployment model for multitenancy and overlapping VLANs in such an environment. The data center has three PODs. All three PODs (ESXi 1-3) have an identical pre-installed configuration, and each POD supports two tenants. Tenant 1 and Tenant 3 each have two applications running on VLAN 10 and VLAN 20. Tenant 2 and Tenant 4 have one application each, running on VLAN 30. Tenant 1 and Tenant 2 currently run on ESXi1 and ESXi2, while the Tenant 3 and Tenant 4 applications run on ESXi3. With Service VF, the same VLANs (VLAN 10 and 20) can be used for Tenant 1 and Tenant 3, yet their traffic is logically isolated into separate Service VFs (5010 and 5020 for Tenant 1, and 6010 and 6020 for Tenant 3). Similarly, the same VLAN for Tenant 2 and Tenant 4 is isolated into separate Service VFs (5030 for Tenant 2 and 6030 for Tenant 4).
FIGURE 10 SVF in a Highly Virtualized Environment
Virtual Fabric Extension
Virtual Fabric extension provides connectivity for Layer 2 domains across multiple VCS fabrics. VF extension achieves this by building a VxLAN-based overlay network to connect these disjoint Layer 2 domains.
VxLAN is a Layer 2 overlay scheme over a Layer 3 network; in other words, VxLAN is a MAC-in-IP encapsulation technology that stretches Layer 2 domains over a Layer 3 infrastructure.
Extension of Layer 2 domains (VLANs) is achieved by encapsulating the Ethernet frame within a VxLAN UDP packet; the VLAN of the Ethernet frame is mapped to a VNI in the VxLAN header.
The VxLAN Network Identifier (VNI) identifies a VxLAN segment and is comparable to a VLAN in the Ethernet world. Network elements on a VxLAN segment can talk only to each other, as in the VLAN/Ethernet world. The VNI is a 24-bit value, and thus VxLAN extends the total number of Layer 2 domains to 16 million, compared to the 4K Layer 2 networks provided by the 12-bit VLAN ID.
A typical VxLAN packet is shown below.
FIGURE 11 VxLAN Packet Format
To extend Layer 2 domains using the VF-extension feature, point-to-point VxLAN tunnels are set up between the VCS fabrics, and VLANs are then extended over these VxLAN tunnels by mapping the VLANs to VNIs in the VxLAN header.
In the VF-extension case, the VxLAN tunnel is set up through configuration, instead of automatic VxLAN tunnel setup methods such as "flood and learn" or EVPN.
Both 802.1Q VLANs in the range 2–4096 and the virtual-fabric VLANs in the range 4097–8192 can be extended using the VF-
extension feature. A simple VF-extension packet flow is shown below; details on the configuration are provided in the illustration section.
In the diagram below, the Virtual Fabric gateway configuration is done on the spine. On the spine, as part of the Virtual Fabric gateway configuration, the VxLAN tunnel endpoint (VTEP) is defined. The VTEP is responsible for VxLAN encapsulation and decapsulation and requires a unique IP per VCS fabric.
On the VF gateway, the remote data-center site's VTEP IP must be configured manually. Based on this, VxLAN tunnels are set up between the data centers if there is IP reachability to the remote VTEP IP. The VxLAN tunnel does not have any control-plane messaging and depends only on the IP reachability of the VTEP IPs.
Along with the VxLAN tunnel configuration, the user also indicates which VLANs are to be extended and the VLAN-to-VNI mapping.
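The sketch below illustrates the VLAN-to-VNI mapping and the VxLAN header that carries the 24-bit VNI, following the RFC 7348 header layout. The mapping values are hypothetical, and the outer Ethernet/IP/UDP headers that a real VTEP adds are omitted.

import struct

# Hypothetical VLAN-to-VNI mapping configured along with the VF-extension tunnel.
vlan_to_vni = {101: 10101, 201: 10201, 5010: 15010}

def vxlan_header(vni):
    """Build the 8-byte VxLAN header (RFC 7348): I flag set, 24-bit VNI."""
    flags = 0x08 << 24                 # 'I' bit set, remaining flag bits 0
    return struct.pack("!II", flags, (vni & 0xFFFFFF) << 8)

def encapsulate(ce_frame, vlan):
    vni = vlan_to_vni[vlan]
    # Real encapsulation adds outer Ethernet/IP/UDP (destination port 4789)
    # headers with the local and remote VTEP IPs; only the VxLAN header is
    # shown here.
    return vxlan_header(vni) + ce_frame

print(encapsulate(b"\x00" * 64, 101)[:8].hex())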
The following packet flow sequence describes Layer 2 forwarding from a server in Datacenter-1 to Datacenter-2. DC-1 and DC-2 are two different VCS fabrics, with Layer 3 data-center connectivity through the edge router in each data center.
1. An Ethernet frame from the server on DC-1 will be received on a VLAN at the ToR/RBridge. MAC look-up at the ToR will
indicate that the frame has to be TRILL-forwarded to the spine.
2. The Ethernet frame gets encapsulated into TRILL and TRILL-forwarded to the spine of DC-1.
3. At the spines of DC-1, the VTEP is configured and hence on MAC lookup the TRILL frame will be decapped and the Ethernet
frame from the server will be VxLAN-encapped. VxLAN encapsulation will result in an IP packet with the source IP of the packet
being the VTEP IP of DC-1 and the destination IP being the VTEP IP of DC-2.
4. The packet will traverse from the spine to the edge router as an IP packet, and from the edge router on DC-1, the VxLAN IP
packet will undergo regular IP forwarding until it reaches the destination IP, which is the VTEP end on DC-2.
5. The VTEP end on DC-2 will be the spine RBridges on that fabric. At the spine, the VxLAN packet will be decapped and the
Ethernet frame will undergo Layer 2 lookup.
6. The Layer 2 lookup shows that the packet has to be TRILL-encapsulated and forwarded to one of the ToRs/RBridges in the DC-2 VCS fabric.
7. The TRILL frame will be forwarded to the destination ToR where it will be decapped and the Ethernet frame will be forwarded to
the server on Datacenter-2.
FIGURE 12 Packet Forwarding over a VxLAN-Based DCI Implemented with VF-Extension
Auto-fabric
The Auto-fabric feature allows a true plug-and-play model for the VDX platform in a VCS fabric.
When a new switch comes up, it has the "bare-metal" flag enabled, meaning it is not part of any VCS and has the defaults "VCS ID 1" and "rbridge-id 1".
If the switch is connected to a VCS fabric, and the VCS fabric has been pre-provisioned with a mapping of the new switch's WWN to an RBridge ID, the new switch reboots and automatically adds itself to the VCS fabric with the pre-provisioned RBridge ID. The workings of this feature are shown in the illustration section.
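A minimal sketch of the pre-provisioning lookup is shown below; the WWNs and RBridge IDs are hypothetical.

# Sketch of the Auto-fabric idea: a bare-metal switch joining the fabric is
# matched by WWN against a pre-provisioned table and takes on the mapped
# RBridge ID.
preprovisioned = {
    "10:00:00:27:f8:aa:bb:01": 105,
    "10:00:00:27:f8:aa:bb:02": 106,
}

def join_fabric(new_switch_wwn, bare_metal=True):
    if not bare_metal:
        return "already part of a VCS fabric; no auto-join"
    rbridge_id = preprovisioned.get(new_switch_wwn)
    if rbridge_id is None:
        return "no pre-provisioned entry; switch stays at defaults (VCS ID 1, rbridge-id 1)"
    # The real switch reboots and rejoins with this identity.
    return f"reboot and join the fabric as rbridge-id {rbridge_id}"

print(join_fabric("10:00:00:27:f8:aa:bb:01"))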
VCS Deployment Models
This section presents various deployment models of a VCS fabric. The network architectures are based on the Clos or leaf-spine model, since it provides predictable latency and better load distribution in data centers, and VCS fabrics are inherently suited to a Clos design.
VCS deployments can be categorized, primarily based on the scale requirements of the network, into:
• 3-stage Clos or single-POD VCS fabric
• 5-stage Clos or multi-VCS fabric
The decision to go with a single-POD or a multi-VCS fabric is made primarily based on the scale requirements of the fabric.
Single-POD VCS Fabric
A typical data-center site will have a server VCS fabric and a DC edge services block as shown below.
FIGURE 13 Single-POD VCS Fabric
The server VCS fabric follows a two-tier leaf-spine architecture, with spine and leaf tiers interconnecting the server racks, and provides a Layer 2 interconnect for the server racks. The VCS fabric, as described earlier, is TRILL-compliant, providing network-style routing in a bridged environment with nonblocking, multipath-enabled, and load-balanced interconnection between RBridges.
Leaf Tier
The leaf tier provides connectivity from the compute or storage racks to the VCS fabric. A pair of switches, referred to as a vLAG leaf pair, acts as dual or redundant top-of-rack (ToR) switches in each rack. These dual ToRs are interconnected with two ISL links.
The compute or storage resources are connected to the dual ToRs using vLAG port-channels for redundancy. The ISL links between leaf pairs provide a backup path if one of the RBridges loses all connectivity to the spines.
Each RBridge in a vLAG pair connects to 4 spines through 40 Gbps links, providing 320 Gbps of bandwidth per rack.
Spine Tier
The spine tier provides the L2-L3 boundary for the VCS fabric. Spines also attach to the edge routers or the edge-services block to route traffic to the Internet or other data centers.
Spine and leaf RBridges are interconnected by ISLs, and traffic forwarding on ISLs uses the FSPF-based routed topology to multipath the traffic. The spine tier has FHRP protocols enabled to provide routing within the fabric and to the edge routers. VCS technology allows the FHRP gateways to be advertised to the leaf tier for efficient load balancing of server traffic to the spines.
Since each spine provides active-active forwarding using the FHRP short-path forwarding feature, it is also recommended to run Layer 3 routing between the spines. This provides a backup routing path on a spine if it were to lose all uplinks.
It is recommended to interconnect the spine switches in a ring to avoid using a leaf switch as transit in backup routing scenarios.
Edge Services
Edge services provide WAN, Internet, and DCI connectivity for the data center through the edge routers. This is also where firewalls, load balancers, and VPN services are placed. To provide redundancy, two edge routers are recommended to connect from the spines.
Traffic Flow
VCS ensures a loop-free topology, ECMP, and efficient load balancing across the multiple paths in the leaf-spine fabric. Leaf nodes act as typical Layer 2 nodes, performing MAC learning and Layer 2 switching within a VLAN, and in a VCS fabric they are responsible for TRILL encapsulation and decapsulation. Layer 3 routing in this fabric is performed at the spine layer, where Layer 2 termination of VLANs on VE interfaces happens.
Leaf-to-server connections are classical Ethernet links or vLAGs receiving and sending CE frames while the spine-to-leaf or spine-to-
spine links forward TRILL frames over ISLs or ISL trunks.
Within the same subnet/VLAN, leaf RBridges perform a Layer 2 lookup and, based on the MAC table at the leaf RBridge, encapsulate classical Ethernet frames into TRILL frames and forward them to the destination leaf pair through the spine. The spine does TRILL switching to the leaf, where the TRILL frame is decapsulated and sent out as a CE frame from the destination leaf's vLAG.
For inter-VLAN traffic within the data center, the leaf sends a TRILL frame destined to the gateway on the spine, and the spine routes across the subnets, rewriting the inner CE payload's destination MAC and VLAN. After routing at the spine, the packet is forwarded as a TRILL frame to the destination leaf RBridge, which decapsulates the TRILL frame and sends the CE frame to the server.
For inter-DC or Internet traffic, frames flow from leaf to spine as TRILL, are routed at the spine, and are sent out as CE packets to the DC edge services, or vice versa.
Multi-VCS Fabric
For a highly scalable fabric, a multi-VCS design is recommended, in which multiple VCS fabric PODs are interconnected by a super-spine layer. A single-POD VCS fabric can scale only up to 40 nodes; for a highly scalable data center, several of these PODs must be interconnected, and a super-spine layer is recommended for this. Brocade provides two interconnection designs:
• Multi-VCS fabric using vLAG
• Multi-VCS fabric using VxLAN
Multi-VCS Fabric Using vLAG
In this design, VCS PODs are interconnected through a vLAG to the super-spine as shown in Figure 14.
FIGURE 14 Multi-VCS Fabric Interconnected Through Layer 2 vLAG
This is a 5-stage Clos design with three tiers: leaf, spine, and super-spine. It is a simpler network design, with Layer 2 extended from the leaf through the spine and the L2/L3 boundary for the data center at the super-spines.
The leaf and spine tiers of a POD are in the same VCS fabric, and in this architecture there are multiple such leaf-spine VCS fabrics in a data center. These leaf-spine PODs are interconnected at the super-spine through vLAGs from the spines. All nodes in the super-spine tier are in the same VCS fabric. The super-spine provides Internet, WAN, and DCI connectivity through the edge router in the edge-services block.
Leaf Tier
Leaf pairs form the edge of the network connecting compute or storage racks through vLAGs to the leaf-spine VCS fabric. CE frames
received from the server racks are TRILL-forwarded from leaf to spine and vice-versa.
Spine Tier
Compared to the single-POD design, the spine tier in this design does not provide L2-L3 gateway functionality. The leaf-spine nodes
provide a Clos design with 4 spines supporting up to 40 leafs or 20 racks, assuming that each rack is serviced by a pair of leafs/ToR
RBridges.
Intra-VLAN traffic within a POD is TRILL-bridged within the POD by the spine. When intra-VLAN traffic must cross PODs, it is sent to the super-spine tier to be switched. All routed traffic is also sent to the super-spine, including inter-VLAN traffic within the same POD, inter-VLAN traffic across PODs, and traffic to the edge.
The spine connects to the super-spine over vLAGs, so from each leaf-spine tier, one vLAG is formed between the spines to the super-
spine.
Super-Spine Tier
In the 5-stage Clos with vLAG, the super-spine is configured as the L2/L3 boundary for all VLANs in the data center. Super-spines are configured with VE interfaces, and VRRP-E is enabled as the FHRP. When traffic must cross PODs, be routed across a subnet/VLAN, or leave the data center, it is forwarded to the super-spine tier.
All RBridges in the super-spine tier are in the same VCS fabric. Spines are connected to the super-spine through vLAGs; CE frames are forwarded across the vLAGs, and forwarding between super-spine RBridges is TRILL.
Spine-to-super-spine links are 40 Gbps, and 4 super-spine RBridges are recommended for the best performance and better oversubscription ratios.
We recommend that you interconnect the super-spine switches in a ring and run Layer 3 routing between the spines. This will provide a
backup routing path on the super-spine if it were to lose all uplinks.
The super-spines connect to the edge services through the edge router to provide Internet, WAN, and DCI connectivity and services such as firewalls, load balancers, and VPN.
Pros and Cons
A multi-VCS fabric provides a more scalable data-center architecture, and it is a simple design in terms of configuration. Seamless Layer 2 extension between the PODs is present in this architecture without the need to configure other features. However, routed traffic trombones between the spine and super-spine tiers, since the L3 gateway is placed at the super-spine.
Multi-VCS Fabric Using VxLAN
A multi-VCS fabric using VxLAN interconnectivity is another 5-Stage Clos design that is possible with a VCS fabric to build a highly
scalable data-center fabric. Brocade supports VxLAN-based DC interconnectivity through Virtual Fabric Extension technology. Figure
15 shows the topology for multi-VCS using VxLAN.
FIGURE 15 Multi-VCS Fabric Interconnected Through VxLAN over L3 Links
Multi-VCS using VxLAN leverages the Virtual Fabric extension technology to interconnect multiple PODs to build a scalable data center. This 5-stage Clos fabric consists of three tiers: leaf, spine, and super-spine.
Each POD consists of leaf and spine RBridges that are part of a unique VCS fabric. Every POD is connected to the super-spine by L3 links.
The Virtual Fabric extension feature provides a VxLAN tunnel between two endpoints over an L3 infrastructure. To interconnect multiple PODs, every POD is configured with a VTEP, and static VxLAN tunnels are set up between the PODs. The L3 links that connect the PODs to the super-spine form the underlay network interconnecting the PODs.
Through the VF extension feature, Layer 2 extension is provided between each individual POD.
Leaf Tier
The leaf tier connects servers to the leaf-spine VCS fabric through vLAGs. A pair of leaf/top-of-rack RBridges services each server rack. The leaf RBridges are connected to the spine over ISLs, and leaf-spine traffic forwarding uses TRILL. The leaf tier essentially does Layer 2 forwarding of CE frames to TRILL and vice versa.
Spine Tier
In multi-VCS using a VxLAN design, the spine tier acts as the L2-L3 boundary and the VxLAN VTEP endpoint for the VF extension
feature.
Spines are configured with VE interfaces and FHRPs to provide L3 gateway functionality for the server VLANs/subnets. The spines in every POD localize the routing for the subnets under them, so ARP scale and routing scale are limited to each POD. Spines have L3 connectivity to the super-spine tier, over which routing to the Internet/WAN and VF extension take place. For routing to the Internet/WAN, each POD receives a default route from the super-spine.
When Layer 2 domains must be extended across PODs, the VF extension feature is used. For VF extension, static VxLAN tunnels are set up through configuration between the spines of each POD. Every spine is a VTEP endpoint providing VxLAN encapsulation and decapsulation, and to support BUM traffic, one of the VTEP spines in each POD is selected as the BUM forwarder to avoid duplicate traffic.
VLANs that must be extended across PODs are enabled under the VTEPs.
Apart from 802.1Q VLANs, the VF extension feature also provides seamless extension for the virtual fabric VLANs. The virtual fabric
feature is Brocade's fine-grain TRILL implementation to extend VLAN ranges beyond the traditional 4K VLAN space and allows
reusability of VLANs by providing a per-interface VLAN scope.
With VF extension, virtual-fabric VLANs are seamlessly extended between TRILL and VxLAN, providing higher multitenancy. The VLAN carried in the TRILL frame is converted to a VxLAN VNI, and hence the seamless integration of both features is achieved.
The spine tier is connected to the super-spine tier over L3 links, and BGP is recommended as the routing protocol. The spine tier of each POD receives a default route for Internet/WAN traffic and also exchanges the directly connected physical networks and the VTEP endpoint IPs.
Super-Spine Tier
The super-spine tier in this architecture provides interconnectivity through L3 links, with VxLAN traffic using them as the underlay network. The super-spine tier also connects the multi-VCS fabric to the edge routers. A routing protocol must be run between the super-spine nodes to exchange L3 networks and provide connectivity.
Edge services provide WAN, Internet, and DCI connectivity and services like firewall, load balancer, and VPN for the data center.
Pros and Cons
• This design provides L2/L3 gateways per POD, so routed traffic does not have to cross PODs.
• Hence the architecture is much more scalable and provides efficient usage of link bandwidth.
• Broadcast storms are limited to each POD in this multi-VCS design.
• It provides a highly multitenant architecture through the use of the VF extension and VF features.
• At the same time, this design involves much more configuration than the multi-VCS design using vLAG.
IP Storage
Over the past few years, server virtualization has become a de facto standard, and lately data centers are moving to containerized work environments; IP storage networks have been gaining much more mind-share from data-center professionals.
Recent market studies have shown greater adoption of IP-based network-attached storage (NAS) for file-based storage and iSCSI for block-based storage. The use of IP-based NAS or iSCSI generates a new set of interesting challenges for network and storage administrators in terms of performance and SLA guarantees for storage traffic across an Ethernet network that is not inherently lossless.
Traditional FC storage uses a dedicated SAN network that is purpose-built for storage, and storage traffic is the only workload that runs
over the infrastructure. IP storage traffic protocols, such as NAS and iSCSI, are often deployed over the general-purpose LAN
infrastructure, sharing bandwidth with non-storage traffic. This helps to drive efficiencies by leveraging the existing IP network
infrastructure to carry storage traffic, but it also creates challenges such as the inability to guarantee the stringent SLAs that mission-
critical workloads require.
Brocade is the industry leader in FC-based SAN networks, and with VCS fabric technology, the same simplicity and performance can be achieved for an IP storage network. VCS brings to the Ethernet world an automated and simplified fabric bring-up, along with load-balanced multipathing of traffic and nonblocking, efficient use of fabric bandwidth. The VCS fabric also supports the Data Center Bridging features and other enhancements to support the storage network.
iSCSI................................................................................................................................................................................................. 68 Priority Flow Control and ETS..................................................................................................................................................................................................70 Auto-NAS......................................................................................................................................................................................................................................... 73 CEE-MAP Configuration............................................................................................................................................................................................................77 Brocade VCS Fabric with IP Storage 53-1004936-01 3
  • 4. Routed vs Layer 2 Switching for Storage Traffic.............................................................................................................................................................. 77 Jumbo MTU.....................................................................................................................................................................................................................................79 Storage Initiator/Target Configuration...................................................................................................................................................................................80 Edge Services Configuration..............................................................................................................................................................................................................85 Deployment 1: Single-POD VCS Fabric With Attached Shared IP Storage..................................................................................................................89 Fabric Wide Configuration..........................................................................................................................................................................................................90 Leaf Configuration.........................................................................................................................................................................................................................91 Spine Configuration...................................................................................................................................................................................................................... 93 Deployment 2: Single-POD VCS Fabric with Dedicated IP Storage VCS.....................................................................................................................97 Storage VCS.................................................................................................................................................................................................................................... 
98 Server VCS....................................................................................................................................................................................................................................102 Deployment 3: Multi-VCS Fabric with Shared IP Storage VCS.......................................................................................................................................106 Storage VCS.................................................................................................................................................................................................................................108 Multi-VCS Converged Fabric.................................................................................................................................................................................................112 Illustration Examples.....................................................................................................................................................................................................123 Example 1: FVG in a 3-Stage Clos Fabric................................................................................................................................................................................123 Configuration................................................................................................................................................................................................................................124 Verification.....................................................................................................................................................................................................................................125 Example 2: Virtual Fabric Across Disjoint VLANs on Two ToRs in a 3-Stage Clos................................................................................................127 Configuration................................................................................................................................................................................................................................128 Verification.....................................................................................................................................................................................................................................130 Example 3: Virtual Fabric per Interface VLAN-Scope on ToRs in a 3-Stage Clos..................................................................................................131 Configuration................................................................................................................................................................................................................................132 Verification.....................................................................................................................................................................................................................................134 Example 4: VM-Aware Network Automation...........................................................................................................................................................................135 Configuration and 
Verification............................................................................................................................................................................................... 136 Virtual Machine Move...............................................................................................................................................................................................................138 Example 5: AMPP...............................................................................................................................................................................................................................139 Configuration and Verification for VLAN..........................................................................................................................................................................139 Virtual Machine Moves.............................................................................................................................................................................................................141 Configuration and Verification for Virtual Fabric VLAN..............................................................................................................................................142 Example 6: Virtual Fabric Extension............................................................................................................................................................................................143 Configuration................................................................................................................................................................................................................................144 Verification.....................................................................................................................................................................................................................................145 Example 7: Auto-Fabric....................................................................................................................................................................................................................147 Configuration................................................................................................................................................................................................................................148 Verification.....................................................................................................................................................................................................................................148 Design Considerations .................................................................................................................................................................................................151 References.......................................................................................................................................................................................................................153 Brocade VCS Fabric with IP Storage 4 53-1004936-01
  • 5. List of Figures Figure 1 on page 14—Components of a VCS Fabric Figure 2 on page 18—vLAG Connectivity Options Figure 3 on page 20—TRILL Data Format Figure 4 on page 22—Layer 2 Unicast Forwarding Figure 5 on page 24—Layer 3 Forwarding with VRRP-E Figure 6 on page 26—Layer 3 Intersubnet Forwarding on VCS Fabric Figure 7 on page 27—Multicast Tree for BUM Traffic Figure 8 on page 28—VEB on Virtualized Server Environment Figure 9 on page 30—SVF in a Cloud Service-Provider Environment Figure 10 on page 31—SVF in a Highly Virtualized Environment Figure 11 on page 32—VxLAN Packet Format Figure 12 on page 33—Packet Forwarding over a VxLAN-Based DCI Implemented with VF-Extension Figure 13 on page 34—Single-POD VCS Fabric Figure 14 on page 36—Multi-VCS Fabric Interconnected Through Layer 2 vLAG Figure 15 on page 38—Multi-VCS Fabric Interconnected Through VxLAN over L3 Links Figure 16 on page 42—PFC and ETS in Action over a DCBX-Capable Edge Port on a VDX Switch Figure 17 on page 44—Dedicated Storage Design with VCS Figure 18 on page 45—Hybrid IP Storage with Single-Storage vLAG from ToR Figure 19 on page 46—Hybrid IP Storage with Multiple-Storage vLAG from ToR Figure 20 on page 47—Single-POD VCS with Attached IP Storage Device Figure 21 on page 48—Single POD with Shared IP Storage VCS Figure 22 on page 49—Multi-VCS Using vLAG with Shared IP Storage Figure 23 on page 50—Multi-VCS Using VxLAN with Shared IP Storage Figure 24 on page 54—VCS Fabric Figure 26 on page 63—VRRP-E in 3-Stage Clos VCS Fabric Figure 27 on page 89—Single-POD DC with Attached Storage Figure 28 on page 98—Data Center with Dedicated Storage VCS Figure 29 on page 107—Multi-VCS Fabric with Shared Storage VCS Figure 30 on page 124—FVG Topology Figure 31 on page 128—VF Across Disjoint VLANs Figure 32 on page 132—VF Per-Interface VLAN Scope Figure 33 on page 143—Virtual Fabric Extension Brocade VCS Fabric with IP Storage 53-1004936-01 5
  • 7. Preface • Brocade Validated Designs.............................................................................................................................................................................. 7 • Purpose of This Document.............................................................................................................................................................................. 7 • Target Audience.....................................................................................................................................................................................................7 • About the Author...................................................................................................................................................................................................8 • Document History................................................................................................................................................................................................8 • About Brocade.......................................................................................................................................................................................................8 Brocade Validated Designs Helping customers consider, select, and deploy network solutions for current and planned needs is our mission. Brocade Validated Designs offer a fast track to success by accelerating that process. Validated designs are repeatable reference network architectures that have been engineered and tested to address specific use cases and deployment scenarios. They document systematic steps and best practices that help administrators, architects, and engineers plan, design, and deploy physical and virtual network technologies. Leveraging these validated network architectures accelerates deployment speed, increases reliability and predictability, and reduces risk. Brocade Validated Designs incorporate network and security principles and technologies across the ecosystem of service provider, data center, campus, and wireless networks. Each Brocade Validated Design provides a standardized network architecture for a specific use case, incorporating technologies and feature sets across Brocade products and partner offerings. All Brocade Validated Designs follow best-practice recommendations and allow for customer-specific network architecture variations that deliver additional benefits. The variations are documented and supported to provide ongoing value, and all Brocade Validated Designs are continuously maintained to ensure that every design remains supported as new products and software versions are introduced. By accelerating time-to-value, reducing risk, and offering the freedom to incorporate creative, supported variations, these validated network architectures provide a tremendous value-add for building and growing a flexible network infrastructure. Purpose of This Document This Brocade validated design provides guidance for designing and implementing Brocade VCS fabric with IP storage in a data center network using Brocade hardware and software. It details the Brocade reference architecture for deploying VCS-based data centers with IP storage and VxLAN interconnectivity. It should be noted that not all features such as automation practices, zero-touch provisioning, and monitoring of the Brocade VCS fabric are included in this document. 
Future versions of this document are planned to include these aspects of the Brocade VCS fabric solution. The design practices documented here follow the best-practice recommendations, but there are variations to the design that are supported as well. Target Audience This document is written for Brocade systems engineers, partners, and customers who design, implement, and support data center networks. This document is intended for experienced data center architects and engineers. It assumes that the reader has a good understanding of data center switching and routing features. Brocade VCS Fabric with IP Storage 53-1004936-01 7
  • 8. About the Author Eldho Jacob is a Technical Leader in the Solutions Architecture and Validation team at Brocade. He has extensive experience across data center and service provider technologies. At Brocade, he is focused on developing and validating solution architectures that customers can use in deployments. The author would like to acknowledge the following Brocadians for their technical guidance in developing this validated design: • Abdul Khader: Technical Director • Krish Padmanabhan: Principal Engineer • Anuj Dewangan: Technical Marketing Engineer • Dan DeBacker: Principal Systems Engineer • Kamini Santhanagopalan: Product Manager • Mike Molinaro: Support Account Manager • Sadashiv Kudlamath: Technical Marketing Engineer • Steve Como: Onsite Engineer • Syed Hasan Raza Naqvi: Technical Leader • Ted Trautman: Director, Service Delivery • Vasanthi Adusumalli: Staff Engineer Document History Date Part Number Description December 21, 2016 53-1004936-01 Initial version. About Brocade Brocade® (NASDAQ: BRCD) networking solutions help the world's leading organizations transition smoothly to a world where applications and information reside anywhere. This vision is designed to deliver key business benefits such as unmatched simplicity, non-stop networking, application optimization, and investment protection. Innovative Ethernet and storage networking solutions for data center, campus, and service provider networks help reduce complexity and cost while enabling virtualization and cloud computing to increase business agility. To help ensure a complete solution, Brocade partners with world-class IT companies and provides comprehensive education, support, and professional services offerings (www.brocade.com). About the Author Brocade VCS Fabric with IP Storage 8 53-1004936-01
  • 9. Introduction
Brocade is the industry leader in SAN fabrics, and through VCS technology it brings fabric architecture into the Ethernet world. This document describes converged data center network designs for storage and Ethernet networks, integrating Brocade VCS technology with various IP storage solutions. The configurations and design practices documented here are fully validated and conform to the Brocade data center fabric reference architectures. The intention of this Brocade validated design document is to provide reference configurations and document best practices for building converged data center networks with VCS fabric and IP storage using Brocade VDX switches. This document describes the following architectures:
• Single-POD VCS data center with dedicated and shared IP storage designs
• Multi-POD VCS data center with shared IP storage
Apart from the converged data center architectures, the paper also covers the innovative features that VCS brings to the data center fabric. We recommend that you review the data center fabric architectures described in the Brocade Data Center Fabric Architectures[1] white paper for a detailed discussion on architectures for building data center sites.
Brocade VCS Fabric with IP Storage 53-1004936-01 9
  • 11. Technology Overview
• Benefits .......... 11
• Terminology .......... 12
• Virtual Cluster Switching .......... 13
• VCS Deployment Models .......... 33
• IP Storage .......... 39
• IP Storage Deployment Models .......... 43
Data center networks evolved from a traditional three-tier architecture to the flat spine-leaf/Clos architecture to address the requirements of newer applications and fluid workloads. In addition to scalability, high availability, and greater bandwidth, other prime architectural requirements in a data center network are workload mobility, multitenancy, network automation, and CapEx/OpEx reduction. Using traditional Layer 2 or Layer 3 technologies to meet these requirements involves significant compromises in network architecture and a higher OpEx to manage these networks. What is needed is a fabric that is as easy to provision as a traditional Layer 2 network and as non-blocking as a Layer 3 network. Brocade VCS technology merges the best of both Layer 2 and Layer 3 networks, along with a host of other Ethernet fabric innovations and IP storage features, to provide a purpose-built fabric for a converged data and storage network. This white paper covers VCS and IP storage technologies (iSCSI and NAS) and provides various design options for a converged Brocade data center fabric architecture. In addition to VCS and IP storage deployment models, this document covers the following VCS features:
• Virtual Fabric for building a scalable multitenant data center
• VxLAN-based DCI connectivity using Virtual Fabric extension
• VM-aware network automation
• Various FHRP and fabric bring-up options
Benefits
Some of the key benefits of using Brocade VCS technology in the data center:
Bridge Aware Routing—VCS technology conforms to TRILL principles and brings the benefits of TRILL bridge-aware routing to the fabric: non-blocking links, TTL for loop avoidance, faster failover, workload mobility, and scalability.
Topology agnostic—The fabric can be provisioned as mesh, star, Clos, or any other desired topology per the network requirement. This document focuses on the Clos architecture, since it is the most prevalent in data centers.
Self-forming—VCS fabrics are self-forming, with no user intervention needed to set up the fabric beyond physically cabling the switches.
Zero-Touch provisioning—VCS fabrics are ZTP-capable, enabling users to bring up the fabric right out of the box.
The DHCP Automatic Deployment (DAD) and auto-fabric features enable this.
Plug and play model—VCS enables a fluid, scalable, true plug-and-play fabric for attaching servers/workloads or provisioning new nodes into the fabric through ISL and vLAG technologies.
Unified fabric management—The VCS fabric can be provisioned to give the user the visibility to configure, manage, and control the entire data-center fabric from a single node.
Virtual Machine aware network automation—The VCS fabric enables hypervisor-agnostic auto-provisioning of server-connected ports through the AMPP feature.
Brocade VCS Fabric with IP Storage 53-1004936-01 11
  • 12. Efficient Load balancing—Load balancing in the VCS fabric is available at Layer 1 through Layer 3. Per-packet load balancing is available at Layer 1, and VCS link-state routing provides unequal-cost load balancing for efficient usage of all links in the fabric.
Scalable multitenant fabric—Supports L2 multitenancy beyond the traditional 12-bit VLAN space with the Virtual Fabric feature, based on the TRILL Fine-Grained Labeling standard.
Storage Support—AutoQoS, buffering, and DCBX support for IP storage technologies, along with end-to-end FCoE, enable a converged network for storage and server data traffic on the VCS fabric.
Seamless Integration with VxLAN—The VCS fabric seamlessly integrates with VxLAN technology for both inter-DC and intra-DC connections using the VF extension feature.
Terminology
ACL: Access Control List.
AMPP: Automatic Migration of Port Profiles.
ARP: Address Resolution Protocol.
BGP: Border Gateway Protocol.
BLDP: Brocade Link Discovery Protocol.
BPDU: Bridge Protocol Data Unit.
BUM: Broadcast, Unknown unicast, and Multicast.
CNA: Converged Network Adapter.
CLI: Command-Line Interface.
CoS: Class of Service for Layer 2.
DCI: Data Center Interconnect.
ELD: Edge Loop Detection protocol.
ECMP: Equal Cost Multi-Path.
EVPN: Ethernet Virtual Private Network.
IP: Internet Protocol.
ISL: Inter-Switch Link.
MAC: Media Access Control.
MPLS: Multi-Protocol Label Switching.
ND: Neighbor Discovery.
NLRI: Network Layer Reachability Information.
PoD: Point of Delivery.
RBridge: Routing Bridge.
STP: Spanning Tree Protocol.
ToR: Top of Rack switch.
UDP: User Datagram Protocol.
vLAG: Virtual Link Aggregation Group.
VLAN: Virtual Local Area Network.
VM: Virtual Machine.
VNI: VXLAN Network Identifier.
VPN: Virtual Private Network.
Terminology Brocade VCS Fabric with IP Storage 12 53-1004936-01
  • 13. VRF: VPN Routing and Forwarding instance. An instance of the routing/forwarding table with a set of networks and hosts in a router.
VTEP: VXLAN Tunnel End Point.
VXLAN: Virtual Extensible Local Area Network.
Virtual Cluster Switching
Layer 2 switched networks are popular for their minimal configuration and seamless mobility, but they suffer from blocked links, higher network failover times, and inefficient bandwidth utilization due to STP, resulting in networks that do not scale well. Networks built on popular Layer 3 routing protocols address many of these concerns but are operationally intensive and not suited for Layer 2 multitenant networks. TRILL (RFC 5556) tries to address these concerns by combining Layer 2 and Layer 3 features into a bridge system capable of network-style routing. VCS is a TRILL-compliant Ethernet fabric formed between Brocade switches. In the data plane VCS uses TRILL framing, and in the control plane it uses proven Fibre Channel fabric protocols to form the Ethernet fabric and maintain link-state routing for the nodes. In addition to the TRILL benefits, VCS adds a host of other innovative features to provide a next-generation data center fabric.
VCS in Brief
A VCS fabric is formed between Brocade switches, and the switches in the fabric are denoted as RBridges. Links connecting the RBridges in the fabric are called Inter-Switch Links, or ISLs. The VCS fabric connects other devices such as servers, storage arrays, and non-VCS switches or routers through L2 or L3 links. Based on the kind of device attached to a physical port of an RBridge, the port is classified as an Edge port or a Fabric port: Edge ports connect external devices to the VCS fabric, and Fabric ports connect RBridges over ISLs. The VCS fabric provides link aggregation for ISLs through ISL trunks, and at the Edge port a multi-switch link aggregation called vLAG. More details on these are explored later in this section; meanwhile, a typical VCS fabric showing the various components is shown below.
Virtual Cluster Switching Brocade VCS Fabric with IP Storage 53-1004936-01 13
  • 14. FIGURE 1 Components of a VCS Fabric
In a typical Layer 2 network, STP is used to form a loop-free topology, and traffic is forwarded at each node with a Layer 2 (MAC) lookup. In a TRILL or VCS fabric, a loop-free topology is instead formed using a link-state routing protocol. The link-state routing protocol is used to exchange RBridge information in the fabric, and this information is used to efficiently forward packets between RBridges at Layer 2. The use of a link-state routing protocol is the primary reason the fabric scales better than a classic Ethernet network based on STP, and it enables a loop-free topology without blocking any paths, unlike STP. The TRILL standard recommends IS-IS, while the Brocade VCS fabric uses the well-known storage network link-state routing protocol FSPF (Fabric Shortest Path First).
Switches/RBridges in a VCS fabric have two types of physical interfaces: Edge ports and Fabric ports. Fabric ports connect two switches in the same VCS fabric and forward TRILL frames. Edge ports are L2 or L3 ports that receive and transmit regular Ethernet frames. In a VCS fabric, a Classical Ethernet (CE) frame enters at the Edge port of a source RBridge and then undergoes a Layer 2 hardware lookup. The Layer 2 lookup provides the information for TRILL encapsulation of the CE frame. The encapsulated CE frame is then forwarded out of the Fabric ports of the source RBridge based on the forwarding information provided by FSPF. The TRILL frame is forwarded hop by hop through the VCS fabric to the destination RBridge, where it is decapsulated and sent as a regular Ethernet frame out of an Edge port after a Layer 2 or Layer 3 hardware lookup.
While this briefly covers VCS operation and the components of the fabric, the next few sections discuss in detail VCS fabric formation using FLDP (Fabric Link Discovery Protocol), RBridge routing using FSPF, the TRILL frame, and other VCS innovations.
Virtual Cluster Switching Brocade VCS Fabric with IP Storage 14 53-1004936-01
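To ground the overview above, the following is a minimal sketch of how a switch is placed into a VCS fabric in logical chassis mode and how fabric formation can be verified. The VCS ID and RBridge IDs shown are arbitrary example values, and exact command syntax, defaults, and reboot behavior should be confirmed against the Network OS release in use.

    ! Run from the switch CLI on each node; a reboot is typically required
    ! Switch 1
    vcs vcsid 10 rbridge-id 1 logical-chassis enable
    ! Switch 2 (same VCS ID, unique RBridge ID)
    vcs vcsid 10 rbridge-id 2 logical-chassis enable

    ! After the switches come back up and the ISLs form, verify from any member
    show vcs
    show fabric all
    show fabric isl
    show fabric trunk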
  • 15. VCS Fabric
The very first step after wiring up a network is to configure the switches to form a fabric. The biggest strength of Layer 2 networks is simple fabric formation through the use of switch ports, with STP providing the loop-free topology. VCS brings this same simplicity to fabric formation, but without STP's drawbacks of blocked links and lack of multipathing. In VCS, fabric formation happens automatically and is as simple as connecting two switches and adding a single line of configuration to identify each switch as part of a VCS fabric. This section goes over how VCS fabric formation happens.
VCS-capable switches from Brocade can operate in two modes:
• VCS disabled mode, in which the switch operates in traditional STP mode.
• VCS enabled mode, which is the mode of operation discussed in this white paper.
With VCS enabled, switches form a VCS fabric automatically across point-to-point links with minimal configuration. In a nutshell, the requirements for automatic fabric formation are:
• Each VCS fabric is identified by a VCS ID configuration. The VCS ID is the same across all switches in a fabric.
• A switch in a VCS fabric is identified as an RBridge and has a unique RBridge ID configuration.
• An RBridge is a switch responsible for negotiating VCS fabric connections and forwarding both TRILL and classical Ethernet traffic.
• Ports on a switch are identified either as Fabric ports or Edge ports by the Brocade Link Discovery Protocol, alternatively called the Fabric Link Discovery Protocol.
• The switches discover neighbors automatically across fabric ports if the VCS IDs are the same and the RBridge IDs are unique.
• Fabric ports are responsible for TRILL forwarding and for fabric neighbor discovery.
• During fabric neighbor discovery across Fabric ports, RBridges in a VCS fabric form ISLs (inter-switch links) and trunks (groups of ISLs that are part of the same hardware/ASIC port group).
• Edge ports connect external devices to the VCS fabric; in essence, they provide L2 or L3 connectivity for servers or routers to the VCS fabric.
• Edge ports can be regular L2 switch ports or be part of a multi-chassis LAG (vLAG) with a non-VCS switch (vSwitch, server, or regular STP switch).
• Once fabric formation happens, FSPF builds the distributed fabric topology on each switch/RBridge.
VCS Identifiers
The VCS ID identifies the fabric as a whole, and all switches/RBridges that are part of a VCS fabric should have the same VCS ID. The RBridge ID is a unique ID configured to identify each RBridge in a VCS fabric. Apart from the VCS ID and RBridge ID configuration, the VCS mode configuration is needed for automatic fabric formation.
VCS Fabric Mode of Operation
Logical Chassis Mode is the flexible and popular VCS mode of operation1. In this mode, all switches in the VCS fabric can be managed as if they were a single logical chassis.
• Provides unified control of the fabric from a single principal switch/RBridge in the fabric.
1 Fabric Cluster Mode - This is another VCS mode of operation, which is deprecated in the latest releases. In this mode, VCS fabric discovery and formation are automatic, but the user has to manage each switch individually. This was one of the earlier modes of operation in the VCS technology evolution and does not provide unified configuration management capability.
Virtual Cluster Switching Brocade VCS Fabric with IP Storage 53-1004936-01 15
  • 16. • The user configures the fabric from the principal RBridge.
• This is a distributed configuration mode in which the fabric configuration information is present on all nodes, providing higher availability in the fabric.
• Fabric-wide configuration management is performed from the principal RBridge, and changes are immediately updated on the other RBridges.
• Adding, rejoining, removing, and replacing switches in the VCS fabric is simplified, with the principal switch taking care of configuration management without user intervention.
• Operational simplicity is provided by a unified view of the fabric from every switch.
• The fabric can be accessed through a single virtual IP bound to the principal switch, which can also be used for fabric firmware upgrades.
Logical Chassis Mode is the recommended VCS fabric mode, and the deployment models in this document use this mode. Once the VCS ID, RBridge ID, and VCS mode are known, the automatic fabric formation process begins. As part of it:
• All interfaces on the switch are brought up as Edge ports.
• The Brocade Link Discovery Protocol (BLDP) is run between physical interfaces to identify ports as Edge or Fabric.
• Inter-Switch Links and trunks are formed between VCS switches over fabric ports.
• After ISL and trunk formation, FSPF (Fabric Shortest Path First), a link-state routing protocol, is run over the ISLs to identify the shortest path to each switch in the VCS fabric.
Brocade Link Discovery Protocol
The Brocade Link Discovery Protocol (BLDP) attempts to discover whether a Brocade VCS Fabric-capable switch is connected to any of the edge ports. BLDP is alternatively called FLDP (Fabric Link Discovery Protocol). Ports on a VCS-capable switch first come up as Edge ports with BLDP enabled. Through BLDP PDU exchange between the ports, neighbors in the VCS fabric are formed across the inter-switch links. Ports that discover neighbors in the same VCS cluster transition to fabric ports, while the others remain Edge ports. Based on the BLDP PDU exchange, a switch classifies a port as:
• An Edge port if the neighboring switch is not a Brocade switch.
• An Edge port if the neighbor is not running VCS mode.
• An Edge port if the VCS IDs are not the same between the switches.
• A Fabric port if the neighboring switch runs VCS and the VCS ID matches; an ISL, or Inter-Switch Link, is then established.
BLDP is invoked only when a port comes online and is not sent periodically. Once the link type is determined, the protocol execution stops.
ISL Trunk
ISLs, or Inter-Switch Links, are formed automatically across fabric ports between two VCS-enabled switches if the VCS ID matches. ISLs forward TRILL frames in the VCS fabric and by default trunk all VLANs. When there are multiple ISLs between two switches and they are part of the same ASIC port group on both switches, these ISLs are grouped together to form a Brocade ISL trunk.
• The ISL trunk is comparable to a traditional LACP LAG in that it provides link aggregation.
• Brocade ISL trunks do not run LACP but use a proprietary protocol to maintain trunk membership.
• ISL trunks are formed across the same ASIC port group, so the maximum number of ISLs in a trunk group is 8.
• There can be multiple ISL trunks between two switches.
Virtual Cluster Switching Brocade VCS Fabric with IP Storage 16 53-1004936-01
  • 17. • Like ISL formation, the ISL trunk is self-forming and, unlike LACP, needs no user configuration.
• The ISL trunk provides true per-packet load balancing across all member links.
Compared to a traditional LAG, which uses frame-header hashing to distribute traffic, the ISL trunk provides per-packet load balancing across the links, yielding very high link utilization and even distribution of traffic across the trunk.
Principal RBridge
After ISL formation, the VCS fabric elects a principal RBridge. The RBridge with the lowest configured principal priority, or with the lowest World Wide Name (WWN) in the fabric, is elected as the principal RBridge. A WWN is a unique identifier used in storage technologies, and Brocade VCS-capable switches are shipped with a factory-programmed WWN. The principal RBridge is alternatively called the coordinator switch or fabric coordinator and performs the following functions in the fabric:
• Decides whether a newly joining RBridge has a unique RBridge ID; in case of a conflict, the new RBridge is segregated from the VCS fabric until the configuration is fixed.
• In logical chassis mode, all fabric-wide configuration is done from the principal switch.
• In addition, for the AMPP feature, which is discussed later, the principal RBridge talks to vCenter to distribute port-profiles.
Fabric Shortest Path First (FSPF)
FSPF is the routing protocol used in VCS to create the fabric route topology for TRILL forwarding. FSPF is a well-known link-state routing protocol used in FC storage area network (SAN) fabrics. Since VCS was released before the IETF's TRILL fabric was standardized, the VCS fabric uses FSPF instead of the IS-IS protocol specified for TRILL. The use of a link-state routing protocol in VCS enables a highly scalable fabric, avoids blocked links as found in STP Layer 2 networks, and enables equal-cost multipath (ECMP) to a destination. After ISL and ISL trunk formation during VCS fabric bring-up, FSPF is run to create the fabric topology. FSPF is a link-state routing protocol like OSPF or IS-IS and has the following salient features:
• Neighborship is formed and maintained by FSPF hello packets.
• Maintains one neighborship per ISL.
• The cost to reach a given RBridge is the cumulative cost of all the links to reach that RBridge.
• Supports only point-to-point networks.
• Can have only one area.
• No stub areas and no summarization.
Edge Ports
Edge ports attach switches, servers, or routers to the VCS fabric over standard IEEE 802.1Q Layer 2 and Layer 3 ports. Edge ports support industry-standard Link Aggregation Groups (LAGs) via the Link Aggregation Control Protocol (LACP). Multi-Chassis Trunking (MCT) is the industry-accepted solution to provide redundancy and avoid links blocked by spanning tree when connecting servers to multiple upstream switches. LAG-based MCT is a special case of the IEEE 802.3ad LAG in which one end of the LAG can terminate on two separate switches.
Virtual LAG, or vLAG
Virtual LAG (vLAG) is the MCT solution included in Brocade VCS Fabric technology; it extends the concept of a LAG to include edge ports on multiple VCS Fabric switches.
Virtual Cluster Switching Brocade VCS Fabric with IP Storage 53-1004936-01 17
  • 18. vLAGs can be formed in three different scenarios, the prerequisite being that the LAG control group is the same on all the RBridges in the VCS fabric:
• A server multi-homed to multiple RBridges in a VCS fabric.
• A classical Ethernet switch multi-homed to RBridges in a VCS fabric.
• When connecting two VCS fabrics; since each VCS fabric behaves like a single switching domain, vLAGs are formed across the LAGs.
FIGURE 2 vLAG Connectivity Options
Using a vLAG, a single server, classical Ethernet switch, or another VCS fabric connects to multiple RBridges in a VCS fabric, and the fabric acts as a single node toward the server, CE switch, or other fabric.
• When a LAG spans multiple switches in a VCS fabric, it is automatically detected and becomes a vLAG.
• The port-channel number needs to be the same across the switches for the vLAG to be formed.
• LACP is not required but is recommended.
• When LACP is used, the LACP PDUs use a virtual RBridge MAC so the fabric appears as a single node to the other end.
• vLAG is comparable to Cisco's vPC technology but does not need a peer link or keepalive mechanism, as vPC does, for active-active forwarding.
• vLAGs can span up to 8 VCS nodes and 64 links, providing higher node and link redundancy.
• Only ports of the same speed are aggregated.
• Edge ports in a vLAG support both classic Ethernet and DCB extensions. Therefore, any edge port can forward IP, IP storage, and FCoE traffic over a vLAG.
vLAG Operation
Virtual Cluster Switching Brocade VCS Fabric with IP Storage 18 53-1004936-01
  • 19. LACP System-ID: For a vLAG with LACP to be active across multiple RBridges, a common LACP system ID is used in the LACP PDUs. Each VCS fabric has a common VCS bridge MAC address starting with 01:E0:52, with the VCS ID appended to make the MAC unique. This VCS bridge MAC is used as the LACP system ID in the PDUs.
Virtual RBridge-ID: When transmitting packets received over a vLAG, the source RBridge ID in the TRILL frames is set to a virtual RBridge ID. The virtual RBridge ID is constructed by adding the vLAG's port-channel ID to 0x400, so a vLAG for port-channel 101 has a virtual RBridge ID of 0x465. By using the virtual RBridge ID in the TRILL frames, member RBridges of a vLAG can efficiently perform the source-port check for loop detection and MAC moves, based on the port-channel ID embedded in the virtual RBridge ID.
Primary Link: Another vital component of vLAG operation is determining the primary link. The primary link is the only link through which BUM traffic is transmitted. BUM traffic is transmitted out of an edge port only if it is a normal non-vLAG port or the primary link of a vLAG; without this check, BUM traffic, being multi-destination, would result in duplicate packets at the receiver. The actual state machine for determining the primary link is Brocade-specific. This protocol also ensures that only one of the links becomes the BUM transmitter, and it is responsible for electing a new primary link on link failure, RBridge failure, or other failure events.
Master RBridge: The node responsible for the primary link is elected as the Master RBridge. The Master RBridge is also responsible for MAC address age-out. MAC addresses learned in the VCS fabric are distributed through eNS (Ethernet Name Service).
Traffic Load Balancing
Traffic in a VCS fabric gets load-balanced at multiple levels in the network: ISL trunks provide per-packet load balancing, at Layer 2 TRILL-based load balancing kicks in, and at Layer 3 regular IP route-based load balancing happens over IP ECMP paths.
ISL Trunk: When packets go over a Brocade ISL trunk, proprietary protocols ensure that no hashing is used and that an even, per-packet distribution happens across all the links in the ISL trunk. This provides very high link utilization and even distribution of traffic across the ISL trunk, compared to a traditional LAG, which uses frame-header hashing to distribute traffic.
Layer 2: VCS builds a Layer 2 routed topology using the link-state routing protocol FSPF, and with load-balancing support in FSPF, load sharing for Layer 2 traffic is achieved in the VCS fabric. When doing TRILL forwarding, if a neighbor switch is reachable via several interfaces with different bandwidths, all of them are treated as equal-cost paths. Any interface with a bandwidth equal to or greater than 10 Gbps has a predetermined link cost of 500; thus, a 10 Gbps interface has the same link cost as a 40 Gbps interface. Simplicity is a key value of Brocade VCS Fabric technology, so an implementation was chosen that does not consider the bandwidth of the interface when selecting equal-cost paths. The distributed control plane is, however, aware of the bandwidth of each interface (ISL or Brocade ISL trunk). Given an ECMP route to a destination RBridge, it can load-balance the traffic across the next-hop ECMP interfaces according to the individual interface bandwidth and avoid overloading lower-bandwidth interfaces.
Effectively, equal-cost paths for TRILL forwarding between RBridges are determined by hop count, and traffic is distributed among those paths in proportion to the link bandwidths. This maximizes the utilization of the available links in the network. In the traditional approach, a 40 Gbps interface, which has a lower cost than the 10 Gbps paths, would be used as the only route to reach the destination; the lower-speed 10 GbE interfaces would not be utilized, resulting in lower overall bandwidth. With VCS Fabric technology, lower-bandwidth interfaces can be used to improve network utilization and efficiency. While traffic is proportionately distributed among ECMP paths, VCS forwarding uses the regular hash algorithms to select a link for a given flow.
Layer 3: Layer 3 traffic in a VCS fabric is routed over VE interfaces as on a regular router, and traditional IP-based ECMP hashing applies to this traffic. The regular BGP and IGP routing-protocol IP ECMP techniques are also available.
Virtual Cluster Switching Brocade VCS Fabric with IP Storage 53-1004936-01 19
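Tying the vLAG discussion above to configuration, the sketch below shows a server dual-homed to RBridges 1 and 2 through port-channel 101. The interface names, port-channel number, and VLAN are illustrative assumptions; because the fabric is managed as a single logical chassis, both member links are configured from the principal RBridge, and exact syntax should be confirmed against the Network OS release in use.

    ! Member links on RBridge 1 and RBridge 2 (same port-channel number on both)
    interface TenGigabitEthernet 1/0/10
     channel-group 101 mode active type standard
     no shutdown
    interface TenGigabitEthernet 2/0/10
     channel-group 101 mode active type standard
     no shutdown
    ! A port-channel spanning multiple RBridges is automatically treated as a vLAG
    interface Port-channel 101
     vlag ignore-split
     switchport
     switchport mode trunk
     switchport trunk allowed vlan add 101
     no shutdown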
  • 20. Data Frames Inside a VCS Fabric
A switch/RBridge in a VCS fabric receives a classical Ethernet frame on an Edge port and encapsulates it into a TRILL frame based on the destination MAC lookup. The encapsulated TRILL frame is forwarded out of a Fabric port into the VCS fabric. At each RBridge in the VCS fabric, the traffic undergoes hop-by-hop forwarding based on the egress DMAC and RBridge ID. At the destination RBridge, the TRILL headers are stripped, and the frame undergoes a traditional Layer 2 or Layer 3 lookup to be forwarded out of an Edge port.
FIGURE 3 TRILL Data Format
CE Frame
The CE frame in its entirety, without modification, becomes the payload of the TRILL frame. An exception is the Virtual Fabric, or fine-grained TRILL, scenario, where the inner dot1q tag in the CE frame can be modified; this is discussed in later sections.
Outer Ethernet Header
• Outer Destination MAC Address - Specifies the next-hop destination RBridge.
• Outer Source MAC Address - Specifies the transmitting RBridge.
• Outer 802.1Q VLAN Tag - Depends on the core port configuration.
• EtherType - 0x22F3, assigned by the IEEE for TRILL.
TRILL Header Fields
• Version - If not recognized, the packet is silently dropped at the ingress of the fabric.
• Reserved - Not currently in use; reserved for future expansion. Must be set to 0 at the moment.
Virtual Cluster Switching Brocade VCS Fabric with IP Storage 20 53-1004936-01
  • 21. • Multi-Destination
– Set to 1 for BUM frames (Broadcast, Unknown unicast, Multicast).
– The frame is to be delivered to multiple destinations via a distribution tree.
– The egress RBridge nickname field specifies the distribution tree to use.
• Options Length - Currently not used.
• Hop Count
– A TTL to avoid infinite packet loops.
– The TTL is decremented at every hop.
– If TTL=0, an RBridge drops the frame.
– For unicast frames, the ingress RBridge should set the TTL to a value in excess of the number of hops it expects to use to reach the egress RBridge.
– For multi-destination frames, the ingress RBridge should set the TTL to at least the number of hops to reach the most distant RBridge. Multi-destination frames are most susceptible to loops and hence have strict RPF checks.
• Egress RB Nickname (RB ID)
– If the multi-destination bit is set to 0, the egress RB nickname is the egress RBridge ID.
– If the multi-destination bit is set to 1, the egress RB nickname is the RBridge ID of the root of that distribution tree.
• Ingress RB Nickname (RB ID) - Set to the nickname/ID of the ingress RBridge of the fabric.
• Options - Present if the options length is non-zero.
VCS Traffic Forwarding
On the physical wire, the frame format is TRILL when forwarding between VCS nodes. After fabric formation and definition of the edge ports, traffic forwarding in the VCS fabric involves MAC learning, handling of unicast and multi-destination traffic, first-hop redundancy services, and ECMP handling.
MAC Learning
A Brocade VCS node performs hardware source MAC address learning at the Edge ports, similar to any standard IEEE 802.1Q bridge. An edge RBridge learns a MAC, its VLAN, and the interface on which the MAC was seen. This learned MAC information is distributed in the VCS fabric, so each node in the fabric knows which RBridge to forward a frame for a particular MAC to. The frame is forwarded into the fabric on a fabric port with TRILL encapsulation, based on whether the destination address in the frame is known (unicast) or unknown (multi-destination).
eNS
The VCS distributed control plane synchronizes aging and learning state across all fabric switches via the Ethernet Name Service (eNS), which is a MAC distribution service. By distributing learned MAC information, eNS avoids flooding in the fabric. eNS synchronizes MACs across MCT/vLAG pairs and distributes multicast information learned at the edge ports through IGMP snooping. eNS is also responsible for MAC aging in the VCS fabric: when a MAC ages out on the RBridge where it was learned, eNS takes care of aging out the MAC from the other RBridges in the fabric.
Layer 2 Unicast Traffic Forwarding
The source MAC learned on the edge RBridge is distributed by eNS to every node in the fabric. When a remote RBridge needs to forward traffic to this MAC, it looks up its Layer 2 forwarding table to determine which RBridge to send the frame to, and based on this information TRILL forwarding happens toward the destination RBridge.
Virtual Cluster Switching Brocade VCS Fabric with IP Storage 53-1004936-01 21
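As a quick way to observe this behavior (a hedged sketch; output format varies by Network OS release), the MAC address table can be inspected from any RBridge in the logical chassis, since eNS distributes learned entries fabric-wide. Each entry shows the VLAN, the MAC address, and the RBridge/interface on which it was learned.

    show mac-address-table
    show mac-address-table count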
  • 22. The diagram below shows Layer 2 forwarding within a VLAN between two hosts across the VCS fabric.
• Traffic is destined to 10.0.1.1 from 10.0.1.2 over VLAN 101, so it involves Layer 2 forwarding within a VLAN.
• The assumption is that ARP is resolved and MAC learning has already happened for the hosts.
• On RBridge 103, the traffic received on the edge port undergoes a MAC lookup.
• The MAC points to a remote RBridge port in the VCS fabric, so the traffic undergoes TRILL encapsulation.
• The TRILL-encapsulated packet follows the link-state routed fabric topology created by FSPF.
FIGURE 4 Layer 2 Unicast Forwarding
Layer 3 Unicast Forwarding
Brocade Network OS has Layer 3 routing enabled by default. Apart from enabling IP on Ethernet interfaces for routing, Brocade provides routing capability for VLAN networks through VE interfaces. VCS fabrics support VRF-lite and routing protocols such as OSPF and BGP on the VE and physical interfaces. Routing and IP configuration in a VCS fabric is done in RBridge configuration mode, so to enable IP configuration such as VRF-lite, IP addresses, and routing on an RBridge, the user enters the RBridge mode for that node. In logical chassis mode, all of this configuration is done from the principal RBridge.
VE Interfaces
Brocade uses the VE interface to provide routed functionality across VLANs or out of the VCS fabric. The VE interface is similar to the switched VLAN interface (SVI) from Cisco. A VE interface at Layer 3 maps to the corresponding VLAN at Layer 2; for example, VE interface 500 maps to VLAN 500.
Virtual Cluster Switching Brocade VCS Fabric with IP Storage 22 53-1004936-01
Very much like an SVI, a VE interface is a Layer 3 interface on which an IP address is configured; it can be enabled with FHRPs and routing protocols to provide the L2/L3 boundary and other Layer 3 router functionality.

First Hop Redundancy Protocols
First-hop redundancy protocols protect the default gateway of a subnet by allowing multiple routers to respond on behalf of the default-gateway IP. Brocade supports the following FHRPs:
• VRRP
• VRRP-E
• FVG
VRRP is standards-based, while VRRP-E and FVG are supported only across Brocade devices.

VRRP Extended (VRRP-E)
IETF standards-based VRRP eliminates a single point of failure in a static-route environment. It is an election protocol that dynamically assigns the responsibilities of a virtual router to one of the VRRP-enabled routers on the LAN. VRRP thus provides a highly available default path without requiring configuration of dynamic routing or router-discovery protocols on every end host.
VRRP-E (VRRP Extended) is the Brocade proprietary extension to the standard VRRP protocol; it does not interoperate with VRRP. VRRP-E configuration and operation are very similar to VRRP: a Master and Standby election takes place, and the Master is responsible for ARP replies and propagation of control packets. From a forwarding perspective, however, VRRP-E provides active-active forwarding through its short-path forwarding feature, whereas with VRRP the Master is the only node responsible for routing. VRRP-E supports up to 8 active/active Layer 3 gateways and, in conjunction with short-path forwarding (VRRP-E SPF), yields higher redundancy, better scalability, and better bandwidth utilization in the fabric. VRRP-E is supported in both the VRRPv2 and VRRPv3 protocol specifications and supports IPv4 and IPv6. VRRP-E can be configured only on VE interfaces.
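As a purely conceptual illustration, and not the Network OS or hardware implementation, the sketch below models two ideas used here and in the figure discussion that follows: the 02e0.5200.00xx virtual-MAC pattern associated with a VRRP-E group (the last octet is assumed here to come from the group ID), and short-path forwarding, under which any active gateway may route a given flow, modeled as a simple hash across the member RBridges.

```python
import hashlib

def vrrp_e_virtual_mac(group_id: int) -> str:
    """Assumed mapping of a VRRP-E group ID onto the 02e0.5200.00xx
    virtual-MAC pattern quoted in this document."""
    return "02e0.5200.00{:02x}".format(group_id & 0xFF)

def pick_gateway(flow: tuple, gateways: list) -> str:
    """Conceptual model of short-path forwarding: every active VRRP-E
    member can route, so a flow hash selects one of them."""
    digest = hashlib.md5(repr(flow).encode()).digest()
    return gateways[digest[0] % len(gateways)]

spines = ["RBridge 201", "RBridge 202", "RBridge 203", "RBridge 204"]
flow = ("10.1.1.50", "20.1.1.80", 6, 34001, 443)   # src, dst, proto, ports
print(vrrp_e_virtual_mac(1), "->", pick_gateway(flow, spines))
```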
FIGURE 5 Layer-3 Forwarding with VRRP-E

Figure 5 shows the efficient forwarding behavior of VRRP-E in a Clos design.
• The spines are configured with VE interfaces and VRRP-E virtual IP 10.1.1.1 for subnet 10.1.1.0/24.
• Each virtual IP has an associated virtual MAC (02e0.5200.00xx).
• The virtual MACs are distributed to every RBridge in the fabric by VCS.
• The vMACs are installed with special MAC programming to load-balance traffic across all VRRP-E nodes.
• As shown, all VRRP-E nodes configured for the virtual IP route the traffic for subnet 10.1.1.0/24.
• VRRP-E does active/active forwarding across the Master and Standby routers, while in standard VRRP only the Master node routes traffic.
• VRRP-E thus provides efficient load balancing and bandwidth utilization in the fabric.

Fabric Virtual Gateway
Fabric Virtual Gateway (FVG) is a Brocade proprietary implementation of a router redundancy protocol and works only in a VCS fabric. FVG is a highly scalable FHRP solution compared to VRRP or VRRP-E and does not exchange any control-plane PDUs between the nodes. Instead, it leverages the Brocade proprietary VCS fabric services to exchange Fabric Virtual Gateway group information among the participating nodes. The Fabric Virtual Gateway feature allows multiple RBridges in a VCS fabric to form a group of gateway routers and share the same gateway IP address for a given subnet, like VRRP or VRRP-E.
• FVG is configured under the global VE interface mode.
• Configuration primarily involves configuring a gateway IP under the VE interface and specifying the participating RBridges.
• A gateway MAC (02e0.5200.01ff) is allocated by default per VLAN/VE to reach the gateway IP.
• VCS services take care of distributing the gateway MAC information in the fabric.
• Nodes not participating in FVG also install the gateway MAC, and the special gateway MAC programming allows load balancing of traffic to the FVG members.
• There is no need for individual IP addresses under the RBridge VE interfaces, as there is with VRRP.
• FVG does not have the concept of Master and Standby nodes.
• However, one of the nodes is elected as the ARP Responder, comparable to the Master node in VRRP-E.
• The ARP Responder answers ARP requests for the gateway IP.
• All FVG member RBridges forward in active/active fashion, like VRRP-E.
• Short-path forwarding is enabled by default.
• When SPF is disabled, the ARP Responder is responsible for traffic forwarding.
• The forwarding behavior of FVG is similar to that shown in the VRRP-E diagram.

Preventing Layer 2 Flooding for Router MAC Addresses
Every router MAC address associated with a Layer 3 interface is synchronized throughout the VCS fabric. This syncing ensures that every router MAC address is treated as a known MAC address within the fabric, so that any packet entering the VCS fabric destined to a router is never flooded and is always unicast to its correct destination. Similarly, when routing is disabled, the router sends a message to withdraw that router MAC address from the VCS fabric. This behavior prevents the periodic Layer 2 flooding caused by the router MAC address aging out, and the administrator no longer needs to ensure that the Layer 2 aging time is greater than the ARP aging interval.

Layer 3 Inter-VLAN Packet Forwarding
Building on the VCS fabric constructs discussed so far, this section walks through an inter-subnet routing scenario. Figure 6 shows Layer 3 forwarding in a VCS fabric.
• Traffic is routed between subnets from host IP 20.0.1.100 to 10.0.1.100, that is, from VLAN 201 to VLAN 101.
• VE interfaces 101 and 201 are configured on the spines with VRRP-E gateway IPs 10.0.1.1 and 20.0.1.1, respectively.
• It is assumed that the hosts have not learned the remote MACs initially.
• When host 20.0.1.100 wants to talk to 10.0.1.100, it sends an ARP request for its gateway, 20.0.1.1.
• Host 20.0.1.100 resolves the gateway MAC for 20.0.1.1 and forwards the traffic in VLAN 201.
• When the VLAN 201 traffic is received on a spine, the spine (which has VEs for both 101 and 201) ARPs for 10.0.1.100.
• The spine originates the BUM (ARP request) for 10.0.1.100, which is received on all nodes, but only host 10.0.1.100 replies; the spine thus learns the destination MAC of 10.0.1.100.
• BUM traffic forwarding is explained in the next section, "Multi-Destination Traffic."
• Once the spine learns the destination MAC of 10.0.1.100, VLAN 201 traffic received on the spine is routed to 10.0.1.100 on VLAN 101.
• Traffic between the leaf and spine nodes is TRILL-forwarded, while at the edge it is classical Ethernet.
• The diagram below shows this behavior: RBridge 103 receives a packet with source IP 20.0.1.100, destination IP 10.0.1.100, and a destination MAC equal to the VRRP-E MAC of VLAN 201. This CE frame is TRILL-encapsulated on RBridge 103 and forwarded based on the Layer 2 table for VLAN 201.
• The packet is TRILL-forwarded to one of the spines, which in the diagram is RBridge 201. At RBridge 201 the traffic has to be routed from VLAN 201 to the VLAN 101 subnet, so ARP is resolved for destination IP 10.0.1.100 at the spine.
• The traffic is then routed on the spine, TRILL-encapsulated for VLAN 101, and forwarded to RBridge 101.
• At RBridge 101 the frame is TRILL-decapsulated and forwarded out as a regular CE frame to destination 10.0.1.100.

FIGURE 6 Layer-3 Inter-subnet Forwarding on VCS Fabric

Multi-Destination Traffic
Broadcast, unknown-unicast, and multicast (BUM) traffic is multi-destination traffic that is flooded to all nodes in the fabric. To avoid duplicate traffic and loops for BUM traffic, a multi-destination tree is formed, rooted at the multicast root RBridge. The multi-destination tree includes all RBridges in the VCS. VCS uses FSPF to calculate a loop-free multicast tree rooted at the multicast root RBridge. The multicast root RBridge is elected based on the higher configured multicast root priority of the RBridges, or, failing that, on the RBridge ID; a small sketch of this election and of the tree computation follows the list below. An alternate multicast root is also preselected to account for a primary root failure.

Figure 7 shows BUM traffic flow over the multi-destination tree rooted at the multicast root RBridge.
• Traffic from the source is received on the edge port of RBridge 102.
• The traffic is forwarded to the multicast root RBridge 201 on one of the links.
• When multiple links exist, only one of them is selected to forward towards the root.
• RBridges 202, 203, and 204 do not have a direct connection from the root.
• Hence, FSPF enables links on RBridge 101 to forward multicast traffic to RBridges 202, 203, and 204.
• When BUM traffic is received on vLAG pairs 103 and 104, or 105 and 106, only the primary link of the vLAG forwards the BUM traffic.
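The following is a minimal sketch, under an assumed topology and assumed priority values, of the two mechanisms just described: electing the multicast root (higher configured priority wins, with the RBridge ID assumed here to break ties) and building a loop-free distribution tree of shortest paths toward that root. A plain breadth-first search stands in for FSPF's shortest-path computation on an unweighted topology; it is not the FSPF implementation.

```python
from collections import deque

# Hypothetical fabric topology: RBridge ID -> ISL neighbors.
links = {
    101: [201, 202, 203, 204],
    102: [201, 202, 203, 204],
    201: [101, 102], 202: [101, 102],
    203: [101, 102], 204: [101, 102],
}
# Assumed multicast root priorities (higher wins).
priority = {rb: 1 for rb in links}
priority[201] = 10

def elect_root(priority):
    """Highest priority wins; the RBridge ID is assumed to break ties."""
    return max(priority, key=lambda rb: (priority[rb], rb))

def multicast_tree(root, links):
    """Shortest-path distribution tree: maps each RBridge to its parent."""
    parent, seen, queue = {}, {root}, deque([root])
    while queue:
        node = queue.popleft()
        for neighbor in links[node]:
            if neighbor not in seen:
                seen.add(neighbor)
                parent[neighbor] = node
                queue.append(neighbor)
    return parent

root = elect_root(priority)
print("multicast root:", root)
print("distribution tree (child -> parent):", multicast_tree(root, links))
```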
FIGURE 7 Multicast Tree for BUM Traffic

IGMP Snooping
Layer 2 networks implement IGMP snooping to avoid multicast flooding. In a VCS fabric, IGMP snooping happens at the edge ports, so an RBridge knows which interested multicast receivers sit behind its edge ports. When multicast traffic is received from the VCS fabric on fabric ports, the RBridge prunes the traffic on its edge ports based on the snooping database. However, when multicast traffic is received on an edge port, it is flooded to all other RBridges in the VCS fabric. If IGMP snooping is disabled, multicast traffic from the VCS fabric is flooded on the edge ports as well. eNS is used to distribute the IGMP snooping database to all RBridges. This helps in the vLAG scenario, where an IGMP snooping entry can be learned on any of the vLAG member RBridges but multicast traffic flows out of only the primary link.

VCS Services
This section covers VCS services such as AMPP, VM-aware network automation, Virtual Fabric, Virtual Fabric extension, and auto-fabric.

Automatic Migration of Port Profile
In a virtualized server environment such as VMware, virtual machines (VMs) are provided switching connectivity through a Virtual Ethernet Bridge (VEB), which in the VMware context is called a vSwitch. A VEB provides Layer 2 switch functionality and inter-VM communication, albeit in software.
A VEB port has a set of functions defined through a port profile, such as:
• The types of frames that are allowed on a port (all frames, only VLAN-tagged frames, or untagged frames)
• The VLAN identifiers that are allowed to be used on egress
• Rate-limiting attributes (such as port-based or access-control-based rate limiting)

FIGURE 8 VEB on Virtualized Server Environment

Brocade Ethernet switches, through the port-profile feature, emulate the VEB port profile of the hypervisor, and the Brocade switches offer much more advanced policy controls that can be applied through port profiles. A port profile defines the VLAN, QoS, and security configuration that can be applied to multiple physical ports on the switch. Port profiles on Brocade VDX switches provide:
• Port profiles with VLAN and quality of service (QoS) policy profiles.
• Port profiles with an FCoE policy profile.
• Port profiles with FCoE, VLAN, and QoS policy profiles.
• In addition, any of the above combinations can be mixed with a security policy profile.
A port profile does not contain some of the interface configuration attributes, such as LLDP, SPAN, or LAG; these are associated only with the physical interface.
When workloads or virtual machines move, the hypervisor ensures that the associated VEB port profile moves with them. On Brocade switches, this port-profile move triggered by a VM move is achieved using the Automatic Migration of Port Profile (AMPP) feature. In short, AMPP provides fabric-wide configuration of Ethernet policies and enables network-level features to support VM mobility; a small conceptual sketch follows below.
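The following is a minimal, purely illustrative model of the AMPP behavior just described; the profile contents, MAC address, and interface names are hypothetical, and this sketches the concept rather than the Network OS implementation.

```python
# Hypothetical port profile: policy settings keyed by profile name.
profiles = {
    "web-profile": {"vlans": [101], "qos": "af21", "security": "permit-web"},
}
# MAC addresses associated with each profile, as AMPP association does.
mac_to_profile = {"0050.5600.0a01": "web-profile"}
# Policies currently applied on port-profile-enabled edge ports.
port_state = {}

def mac_detected(port: str, mac: str):
    """On source-MAC learning at a profiled edge port, apply the associated
    profile; when the VM (and hence its MAC) moves, the profile follows."""
    profile = mac_to_profile.get(mac)
    if profile:
        port_state[port] = profiles[profile]

mac_detected("Te 103/0/1", "0050.5600.0a01")   # VM powers up behind leaf 103
mac_detected("Te 105/0/7", "0050.5600.0a01")   # VM migrates; profile follows
print(port_state)
```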
AMPP configuration and operation involve the following, in brief:
• Create a port profile and define the VLAN, QoS, and security configuration.
• Activate the port profile.
• Associate the VM MAC addresses with the port profile.
• Enable port-profile mode on the ports connected to virtual machines.
• After the profiles are configured, when a VM MAC is detected at a port-profile-enabled port, the corresponding port profile is downloaded to that port.
• MAC detection is essentially source-MAC learning on the port.
• When a VM move or MAC move happens, the port profile migrates with it.
Since VCS in logical chassis mode is a distributed fabric, AMPP configuration is done on the principal switch, and the AMPP profile is activated on all RBridges in the fabric. AMPP can operate in two ways:
• The manual configuration method described above. This is hypervisor-agnostic, and AMPP activates the profiles on the corresponding interfaces based on MAC detection.
• AMPP integrated with vCenter, that is, VM-aware network automation.

VM-Aware Network Automation
With the VM-aware network automation feature, a Brocade VDX switch can dynamically discover virtual assets and provision the physical ports based on this discovery. Configuration and operation of this feature involve the following:
• The switch is preconfigured with the relevant vCenter that exists in its environment.
• The discovery process entails making the appropriate queries to the vCenter.
• After discovery, the switch (Brocade Network OS) enters the port-profile creation phase.
• It creates port profiles on the switch based on the discovered standard or distributed vSwitch port groups.
• The operation creates port profiles and associated VLANs in the running configuration of the switch.
• MAC-address associations for each port profile are also configured based on the vCenter information.
• Ports, LAGs, and vLAGs are put into port-profile mode automatically based on the ESX connectivity.
• When a virtual machine MAC is detected behind an edge port, the corresponding port profile is activated.

Virtual Fabric
Virtual Fabric is Brocade's implementation of the TRILL fine-grained labeling standard in a VCS fabric. Traditional TRILL supports 802.1Q VLAN IDs, which in today's virtualized and highly scalable data centers pose operational issues such as scaling beyond 4K VLANs, providing Layer 2 adjacency between different server-side VLANs, or reusing the same server-side VLAN for different subnets. The Virtual Fabric feature solves these problems by mapping the 12-bit customer-tag or server-side VLAN value to a 24-bit VLAN value. The 24-bit VLAN space provided by Virtual Fabric theoretically allows 16 million broadcast domains, although the current Virtual Fabric implementation allows only 8K in total. In essence, Virtual Fabric provides per-port VLAN-scope behavior: with VF classification of a customer tag to a VF VLAN, the port becomes part of that VF VLAN's broadcast domain or subnet in the VCS fabric. From a forwarding perspective, the VF or TRILL fine-grained label is achieved by inserting the 24-bit VLAN in the inner payload.
Brocade provides two types of Virtual Fabric:
• Service Virtual Fabric (SVF)
• Transport Virtual Fabric (TVF)

Service Virtual Fabric
• SVF provides a one-to-one mapping of a customer tag to a VF VLAN (VFID).
• The VE associated with a VF VLAN can be configured with L2/L3 features.
• A VF VLAN supports all L2/L3 features, like a regular 802.1Q VLAN.
• Through SVF, multitenancy on the VCS fabric can be extended beyond the 4K VLAN space.

Transport Virtual Fabric
• TVF is a VLAN aggregation feature in which multiple customer VLANs are mapped to a single VF VLAN.
• The VF VLAN provides L2 functionality and cannot be enabled for L3.
• VEs cannot be defined for TVF VLANs.

SVF functionality in service-provider and virtualized environments is depicted below; a small classification sketch follows the second example. When a cloud service provider provisions a virtual data center by replicating server-rack PODs across server ports, different tenant domains exist with overlapping 802.1Q VLANs at the server ports. Achieving isolation of the tenant domains in this scenario is not possible with regular 802.1Q VLANs, and Virtual Fabric technology easily solves this issue. Tenant-domain isolation is achieved by mapping the 802.1Q VLAN at each VDX switch interface to a different VF VLAN. This capability allows the VCS fabric to support more than the 4K VLANs permitted by the conventional 802.1Q address space.

FIGURE 9 SVF in a Cloud Service Provider Environment

Another use case for SVF is in virtualized environments; the diagram below illustrates the Service VF deployment model for multitenancy and overlapping VLANs in such an environment. The data center has three PODs, and all three PODs (ESXi 1-3) have an identical pre-installed configuration. Each POD supports two tenants. Tenant 1 and Tenant 3 each have two applications running on VLAN 10 and VLAN 20. Tenant 2 and Tenant 4 have one application each, running on VLAN 30. Tenant 1 and Tenant 2 currently run on ESXi1 and ESXi2, while the Tenant 3 and Tenant 4 applications run on ESXi3. With Service VF, the same VLANs (VLAN 10 and VLAN 20) can be used for Tenants 1 and 3, yet their traffic is logically isolated into separate Service VFs (5010 and 5020 for Tenant 1, and 6010 and 6020 for Tenant 3). Similarly, the same VLAN used by Tenant 2 and Tenant 4 is isolated into separate Service VFs (5030 for Tenant 2 and 6030 for Tenant 4).
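A minimal sketch, using made-up interface names and VF IDs, of the per-port classification idea behind SVF: the same customer tag arriving on different interfaces is mapped to different fabric-wide VF VLANs, which is what keeps overlapping tenant VLANs isolated.

```python
# Hypothetical classification: (interface, customer 802.1Q tag) -> VF VLAN.
svf_classification = {
    ("Te 101/0/1", 10): 5010,  # Tenant 1, VLAN 10
    ("Te 101/0/1", 20): 5020,  # Tenant 1, VLAN 20
    ("Te 102/0/1", 10): 6010,  # Tenant 3, VLAN 10 (same tag, other tenant)
    ("Te 102/0/1", 20): 6020,  # Tenant 3, VLAN 20
}

def classify(interface: str, ctag: int) -> int:
    """Return the fabric-wide VF VLAN for a frame with this customer tag;
    unmatched frames keep their original 802.1Q VLAN."""
    return svf_classification.get((interface, ctag), ctag)

# The same customer VLAN 10 lands in two isolated broadcast domains.
print(classify("Te 101/0/1", 10), classify("Te 102/0/1", 10))
```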
FIGURE 10 SVF in a Highly Virtualized Environment

Virtual Fabric Extension
Virtual Fabric extension provides connectivity for Layer 2 domains across multiple VCS fabrics. VF extension achieves this by building a VxLAN-based overlay network to connect these disjoint Layer 2 domains. VxLAN is a Layer 2 overlay scheme over a Layer 3 network; in other words, VxLAN is a MAC-in-IP encapsulation technology that stretches Layer 2 domains over a Layer 3 infrastructure. Extension of Layer 2 domains (VLANs) is achieved by encapsulating the Ethernet frame within a VxLAN UDP packet, with the VLAN of the Ethernet frame mapped to a VNI in the VxLAN header. The VxLAN Network Identifier (VNI) identifies a VxLAN segment and is comparable to a VLAN in the Ethernet world: network elements on a VxLAN segment can talk only to each other, just as in a VLAN. The VNI is a 24-bit value, so VxLAN extends the total number of Layer 2 domains to 16 million, compared to the 4K Layer 2 networks provided by the 12-bit VLAN ID. A small encapsulation sketch and a typical VxLAN packet format (Figure 11) are shown below.
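Ahead of the packet-format figure, here is a minimal Python sketch of the VxLAN encapsulation step: only the 8-byte VxLAN header of RFC 7348 is built, the outer UDP/IP/Ethernet headers that a VTEP adds are omitted, and the VLAN-to-VNI values are hypothetical.

```python
import struct

VXLAN_UDP_PORT = 4789  # IANA-assigned destination port for VxLAN

def vxlan_encap(inner_frame: bytes, vni: int) -> bytes:
    """Prepend the 8-byte VxLAN header (RFC 7348) to an Ethernet frame.
    Flags byte 0x08 marks the 24-bit VNI field as valid."""
    header = struct.pack("!B3xI", 0x08, (vni & 0xFFFFFF) << 8)
    return header + inner_frame

# Hypothetical mapping used when extending VLANs between fabrics.
vlan_to_vni = {101: 10101, 201: 10201}
inner = bytes(64)                        # placeholder Ethernet frame
packet = vxlan_encap(inner, vlan_to_vni[101])
print(len(packet), packet[:8].hex())
```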
FIGURE 11 VxLAN Packet Format

To extend Layer 2 domains using the VF extension feature, point-to-point VxLAN tunnels are set up between the VCS fabrics, and VLANs are then extended over these VxLAN tunnels by mapping the VLANs to VNIs in the VxLAN header. In the VF extension case, the VxLAN tunnel is set up through configuration rather than through automatic VxLAN tunnel setup methods such as "flood and learn" or EVPN. Both 802.1Q VLANs in the range 2-4096 and Virtual Fabric VLANs in the range 4097-8192 can be extended using the VF extension feature.
A simple VF extension packet flow is shown below; details on the configuration are provided in the illustration section. In the diagram below, the Virtual Fabric Gateway configuration is done on the spines. On the spines, as part of the Virtual Fabric Gateway configuration, the VxLAN tunnel endpoint (VTEP) is defined. The VTEP is responsible for VxLAN encapsulation and decapsulation and requires a unique IP per VCS fabric. On the VF gateway, the remote data-center site's VTEP IP must be configured manually. Based on this, VxLAN tunnels are set up between the data centers if there is IP reachability to the remote VTEP IP. The VxLAN tunnel does not have any control-plane messaging and depends only on the IP reachability of the VTEP IPs. Along with the VxLAN tunnel configuration, the user also indicates which VLANs are to be extended and the VLAN-to-VNI mapping.
The packet flow sequence below describes Layer 2 forwarding from a server in Datacenter-1 to Datacenter-2. DC-1 and DC-2 are two different VCS fabrics, with Layer 3 data-center connectivity through the edge router in each data center.
1. An Ethernet frame from the server in DC-1 is received on a VLAN at the ToR/RBridge. A MAC lookup at the ToR indicates that the frame has to be TRILL-forwarded to the spine.
2. The Ethernet frame is encapsulated into TRILL and TRILL-forwarded to the spine of DC-1.
3. The VTEP is configured at the spines of DC-1, so on MAC lookup the TRILL frame is decapsulated and the Ethernet frame from the server is VxLAN-encapsulated. VxLAN encapsulation results in an IP packet whose source IP is the VTEP IP of DC-1 and whose destination IP is the VTEP IP of DC-2.
4. The packet traverses from the spine to the edge router as an IP packet, and from the edge router of DC-1 the VxLAN IP packet undergoes regular IP forwarding until it reaches the destination IP, which is the VTEP end in DC-2.
5. The VTEP end in DC-2 is on the spine RBridges of that fabric. At the spine, the VxLAN packet is decapsulated and the Ethernet frame undergoes a Layer 2 lookup.
6. The Layer 2 lookup shows that the packet has to be TRILL-encapsulated and forwarded to one of the ToRs/RBridges in the DC-2 VCS fabric.
7. The TRILL frame is forwarded to the destination ToR, where it is decapsulated and the Ethernet frame is forwarded to the server in Datacenter-2.

FIGURE 12 Packet Forwarding over a VxLAN-Based DCI Implemented with VF-Extension

Auto-Fabric
The auto-fabric feature allows a true plug-and-play model for VDX platforms in a VCS fabric. When a new switch comes up, it has the "bare metal" flag enabled, meaning it is not part of any VCS and has the default VCS ID 1 and RBridge ID 1. If the switch is connected to a VCS fabric, and the VCS fabric has been pre-provisioned with a mapping of the new switch's WWN to an RBridge ID, the new switch reboots and automatically adds itself to the VCS fabric with the pre-provisioned RBridge ID. The workings of this feature are shown in the illustration section.

VCS Deployment Models
This section describes various deployment models for a VCS fabric. The network architectures are based on the Clos or leaf-spine model, since it provides predictable latency and better load distribution in data centers, and VCS fabrics are inherently suited to a Clos design. VCS deployments can be categorized primarily by the scale requirements of the network into:
• 3-stage Clos or single-POD VCS fabric
• 5-stage Clos or multi-VCS fabric
The decision to go with a single-POD or a multi-VCS fabric is driven primarily by the scale requirements of the fabric.

Single-POD VCS Fabric
A typical data-center site has a server VCS fabric and a DC edge services block, as shown below.

FIGURE 13 Single-POD VCS Fabric

The server VCS fabric follows a two-tier leaf-spine architecture, with the spine and leaf tiers interconnecting the server racks. The server VCS fabric provides a Layer 2 interconnect for the server racks. The VCS fabric, as described earlier, is TRILL compliant, providing network-style routing in a bridged environment with nonblocking, multipath-enabled, and load-balanced interconnection between RBridges.

Leaf Tier
The leaf tier provides connectivity from the compute or storage racks to the VCS fabric. A pair of switches, referred to as a vLAG leaf pair, acts as dual or redundant top-of-rack (ToR) switches in each rack. These dual ToRs are interconnected with two ISL links. The compute or storage resources are connected to the dual ToRs using vLAG port-channels for redundancy. The ISL links between the leaf pairs provide a backup path if one of the RBridges loses all connectivity to the spines. Each RBridge in a vLAG pair connects to 4 spines through 40-Gig links, providing 320 Gbps of uplink bandwidth per rack (2 leaf RBridges x 4 uplinks x 40 Gbps).

Spine Tier
The spine tier provides the L2-L3 boundary for the VCS fabric. The spines also attach to the edge routers or the edge services block to route traffic to the Internet or to other data centers.
Spine and leaf RBridges are interconnected by ISLs, and traffic forwarding on the ISLs uses the FSPF-based routed topology to multipath the traffic. The spine tier has FHRP protocols enabled to provide routing within the fabric and towards the edge routers. VCS technology allows the FHRP gateways to be advertised to the leaf tier for efficient load balancing of server traffic to the spines. Since each spine provides active-active forwarding using the FHRP short-path forwarding feature, it is also recommended to run Layer 3 routing between the spines. This provides a backup routing path on a spine if it loses all of its uplinks. It is recommended to interconnect the spine switches in a ring, to avoid using a leaf switch as a transit node in backup routing scenarios.

Edge Services
Edge services provide WAN, Internet, and DCI connectivity for the data center through the edge routers. This is also where firewall, load-balancer, and VPN services are placed. To provide redundancy, two edge routers are recommended to connect from the spines.

Traffic Flow
VCS ensures a loop-free topology, ECMP, and efficient load balancing across the multiple paths in the leaf-spine fabric. Leaf nodes act as typical Layer 2 nodes, performing MAC learning and Layer 2 switching within a VLAN, and in a VCS fabric they are responsible for TRILL encapsulation and decapsulation. Layer 3 routing in this fabric is performed at the spine layer, where the Layer 2 termination of VLANs on VE interfaces happens.
Leaf-to-server connections are classical Ethernet links or vLAGs receiving and sending CE frames, while the spine-to-leaf and spine-to-spine links forward TRILL frames over ISLs or ISL trunks. Within the same subnet/VLAN, a leaf RBridge performs a Layer 2 lookup and, based on its MAC table, encapsulates the classical Ethernet frame into a TRILL frame and forwards it to the destination leaf pair through the spine. The spine does TRILL switching to the leaf, where the TRILL frame is decapsulated and sent out as a CE frame from the destination leaf's vLAG.
For inter-VLAN traffic within the data center, the leaf sends a TRILL frame destined to the gateway on the spine, and the spine routes across the subnets, rewriting the inner CE payload's destination MAC and VLAN. After routing at the spine, the packet is forwarded as a TRILL frame to the destination leaf RBridge, which decapsulates the TRILL frame and sends a CE frame to the server. For inter-DC or Internet traffic, traffic flows from leaf to spine as TRILL, is routed at the spine, and is sent out as a CE packet to the DC edge services block, or vice versa.

Multi-VCS Fabric
For a highly scalable fabric, a multi-VCS design is recommended, in which multiple VCS fabric PODs are interconnected by a super-spine layer. A single-POD VCS fabric can scale only up to 40 nodes; for a highly scalable data center, several of these PODs must be interconnected, and a super-spine layer is recommended for this. Brocade provides two interconnection designs:
• Multi-VCS fabric using vLAG
• Multi-VCS fabric using VxLAN

Multi-VCS Fabric Using vLAG
In this design, VCS PODs are interconnected through a vLAG to the super-spine, as shown in Figure 14.
FIGURE 14 Multi-VCS Fabric Interconnected Through Layer 2 vLAG

This is a 5-stage Clos design with three tiers: leaf, spine, and super-spine. It is a simpler network design, with Layer 2 extended from the leaf through the spine and the L2/L3 boundary for the data center at the super-spines. The leaf and spine tiers of a POD are in the same VCS fabric, and in this architecture there are multiple such leaf-spine VCS fabrics in a data center. These leaf-spine PODs are interconnected at the super-spine through vLAGs from the spines. All nodes in the super-spine tier are in the same VCS fabric. The super-spine provides Internet, WAN, and DCI connectivity through the edge router in the edge services block.

Leaf Tier
Leaf pairs form the edge of the network, connecting compute or storage racks through vLAGs to the leaf-spine VCS fabric. CE frames received from the server racks are TRILL-forwarded from leaf to spine and vice versa.

Spine Tier
Compared to the single-POD design, the spine tier in this design does not provide L2-L3 gateway functionality. The leaf and spine nodes form a Clos design with 4 spines supporting up to 40 leafs, or 20 racks, assuming that each rack is serviced by a pair of leaf/ToR RBridges.
Intra-VLAN traffic within a POD is TRILL-bridged within the POD by the spine. When intra-VLAN traffic has to go across PODs, it is sent to the super-spine tier to be switched. All routed traffic is also sent to the super-spine; this includes inter-VLAN traffic within the same POD, inter-VLAN traffic across PODs, and traffic routed to the edge. The spines connect to the super-spine over vLAGs, so from each leaf-spine tier one vLAG is formed from the spines to the super-spine.

Super-Spine Tier
In the 5-stage Clos with vLAG, the super-spine is configured as the L2/L3 boundary for all VLANs in the data center. The super-spines are configured with VE interfaces, and VRRP-E is enabled as the FHRP. When traffic must go across PODs, be routed across subnets/VLANs, or leave the data center, it is forwarded to the super-spine tier. All RBridges in the super-spine tier are in the same VCS fabric. The spines are connected to the super-spine through vLAGs, so CE frames are forwarded across the vLAGs, while forwarding between super-spine RBridges is TRILL. Spine to super-spine connections are 40-Gig links, and 4 super-spine RBridges are recommended for the best performance and better oversubscription ratios.
We recommend interconnecting the super-spine switches in a ring and running Layer 3 routing between them. This provides a backup routing path on a super-spine if it loses all of its uplinks. The super-spines connect to the edge services block through the edge router to provide Internet, WAN, and DCI connectivity, along with services such as firewalls, load balancers, and VPN.

Pros and Cons
A multi-VCS fabric provides a more scalable data-center architecture, and the design is simple in terms of configuration. Seamless Layer 2 extension between the PODs is available in this architecture without configuring additional features. However, routed traffic trombones between the spine and super-spine tiers, since the L3 gateway is placed at the super-spine.

Multi-VCS Fabric Using VxLAN
A multi-VCS fabric using VxLAN interconnectivity is another 5-stage Clos design that is possible with a VCS fabric to build a highly scalable data-center fabric. Brocade supports VxLAN-based DC interconnectivity through the Virtual Fabric extension technology. Figure 15 shows the topology for multi-VCS using VxLAN.
FIGURE 15 Multi-VCS Fabric Interconnected Through VxLAN over L3 Links

Multi-VCS using VxLAN leverages the Virtual Fabric extension technology to interconnect multiple PODs and build a scalable data center. This 5-stage Clos fabric consists of three tiers: leaf, spine, and super-spine. Each POD consists of leaf and spine RBridges that are part of a unique VCS. Every POD is connected to the super-spine by L3 links. The Virtual Fabric extension feature provides a VxLAN tunnel between two endpoints over an L3 infrastructure. For interconnecting multiple PODs, every POD is configured with a VTEP, and static VxLAN tunnels are set up between the PODs. The L3 links that connect the PODs to the super-spine form the underlay network interconnecting the PODs. Through the VF extension feature, Layer 2 extension is provided between the individual PODs.

Leaf Tier
The leaf tier connects servers to the leaf-spine VCS fabric through vLAGs. A pair of leaf/top-of-rack RBridges services each server rack. The leaf RBridges are connected to the spines over ISLs, and leaf-spine traffic forwarding happens over TRILL. The leaf tier essentially does Layer 2 forwarding of CE frames to TRILL and vice versa.

Spine Tier
In the multi-VCS using VxLAN design, the spine tier acts as the L2-L3 boundary and as the VxLAN VTEP endpoint for the VF extension feature. The spines are configured with VE interfaces and FHRPs to provide L3 gateway functionality for the server VLANs/subnets. The spines in each POD localize the routing for the subnets under them; thus, the ARP scale and routing scale are limited to each POD. The spines have L3 connectivity to the super-spine tier, over which routing to the Internet/WAN and VF extension take place. For routing to the Internet/WAN, each POD receives a default route from the super-spine.
When Layer 2 domains must be extended across PODs, the VF extension feature is used. For VF extension, static VxLAN tunnels are set up through configuration between the spines of each POD. For this purpose, every spine is a VTEP endpoint that provides VxLAN encapsulation and decapsulation. To support BUM traffic, one of the VTEPs/spines in each POD is selected as the BUM forwarder to avoid duplicate traffic. VLANs that must be extended across PODs are enabled under the VTEPs.
Apart from 802.1Q VLANs, the VF extension feature also provides seamless extension for Virtual Fabric VLANs. The Virtual Fabric feature is Brocade's fine-grained TRILL implementation, which extends the VLAN range beyond the traditional 4K VLAN space and allows VLANs to be reused by providing a per-interface VLAN scope. With VF extension, Virtual Fabric VLANs are seamlessly extended between TRILL and VxLAN, providing higher multitenancy: the VLAN carried in the TRILL frame is converted to a VxLAN VNI, and hence the seamless integration of these two features is achieved.
The spine tier is connected to the super-spine tier over L3 links, and BGP is recommended as the routing protocol. The spine tier in each POD receives a default route for Internet/WAN traffic and also exchanges the directly connected physical networks and the VTEP endpoint IPs.

Super-Spine Tier
The super-spine tier in this architecture provides interconnectivity through L3 links, with the VxLAN traffic using it as the underlay network. The super-spine tier also connects the multi-VCS fabric to the edge routers. A routing protocol must be run between the super-spine nodes to exchange L3 networks and provide connectivity. Edge services provide WAN, Internet, and DCI connectivity and services such as firewalls, load balancers, and VPN for the data center.

Pros and Cons
• This design provides L2/L3 gateways per POD, so routed traffic does not have to cross PODs.
• The architecture is therefore more scalable and makes efficient use of link bandwidth.
• Broadcast storms are limited to each POD in this multi-VCS design.
• It provides a higher-multitenancy architecture through use of the VF extension and Virtual Fabric features.
• At the same time, this design involves more configuration than the multi-VCS design using vLAG.

IP Storage
Over the past few years, server virtualization has become a de facto standard, data centers are increasingly moving to containerized work environments, and IP storage networks have been gaining much more mind-share among data-center professionals. Recent market studies show greater adoption of IP-based network attached storage (NAS) for file-based storage and iSCSI for block-based storage. The use of IP-based NAS or iSCSI creates a new set of interesting challenges for network and storage administrators in terms of performance and SLA guarantees for storage traffic across an Ethernet network that does not natively guarantee lossless delivery.
Traditional FC storage uses a dedicated SAN network that is purpose-built for storage, and storage traffic is the only workload that runs over that infrastructure. IP storage protocols, such as NAS and iSCSI, are often deployed over the general-purpose LAN infrastructure, sharing bandwidth with non-storage traffic. This helps drive efficiencies by leveraging the existing IP network infrastructure to carry storage traffic, but it also creates challenges, such as the inability to guarantee the stringent SLAs that mission-critical workloads require.
Brocade is the industry leader in FC-based SAN networks, and with VCS-based fabric technology the same simplicity and performance can be achieved for an IP storage network. VCS brings to the Ethernet world an automated and simplified fabric bring-up, along with load-balanced multipathing of traffic and nonblocking, efficient use of fabric bandwidth. The VCS fabric also supports the Data Center Bridging features and other enhancements that support the storage network.