Simplifying the Next Generation Data Centre for Virtualization and Convergence

This seminar reviews key data centre network challenges, including server virtualisation, and how Brocade® Virtual Cluster Switching (VCS™) technology addresses them. VCS is designed to meet these challenges by enabling next-generation virtual data centre and private cloud computing initiatives.

Slide notes:
  • As experts in the data center, we lead the industry in building mission-critical networks. Foundry was a networking company with a breadth of advanced, high-performance solutions throughout the cloud. With the acquisition of Foundry, Brocade laid the foundation for delivering solutions that provide data center quality from anywhere your applications run to anywhere your information is consumed. Having converged an Ethernet company and a Fibre Channel company, we have an intimate understanding of where each technology excels and where it has limitations. During this time we have integrated these perspectives into the most comprehensive yet precise vision and architecture to enable this new world…
  • This deck contains all the markets and technology areas; presenters should hide non-relevant market slides to shorten the deck as appropriate. Networking today, from the service provider core to enterprise LANs to mission-critical data center networks, is undergoing dramatic, dynamic change and architectural reconsideration. With our acquisition of Foundry, Brocade has become a true end-to-end network provider, offering the most demanding customers a next-generation network today. Complemented with value-added transport services, intelligent management and professional services, Brocade offers the full solution that is required today and in the future. Both companies share the same requirements for investment protection, power and cooling, reliability and high performance. Brocade is the alternative for end-to-end networking; we are in a unique and compelling position, able to provide combined leadership in industry-leading networks on both sides of the server, from local area networks (LANs) to storage area networks (SANs). The performance and capacity requirements of your network continue to expand well beyond the legacy architectures that have been adopted for years. Both companies have strong and complementary roadmaps; combined, we are well positioned to assist you in architecting the next generation of network topologies.
  • Key Points: When the fabric extension service is added to the Ethernet fabric in two data centers, a tunnel can be created through the public network. This allows fabric information and data to be sent between the two fabrics as if they were one.
  • Key Points: When the native Fibre Channel service is added to the Ethernet fabric, native FC storage can be directly connected to the fabric. Also, a native FC connection can be established between the Ethernet fabric and the FC SAN fabric, providing an optimal storage connection between the LAN and SAN.
  • Key Points: VCS can be deployed like existing ToR switches, providing key advantages while preserving the existing architecture. This is for the customer that would like to ease into deploying VCS technology.
  • Key Points: Since the two ToR switches are one VCS, the servers see a single switch, allowing for active/active connections, end-to-end.
  • Key Points: The layout of this use case is identical to existing ToR architectures.
  • Key Points: Similar to Use Case #1, but with blade servers. Blade modules can be a switch or passthrough.
  • Key Points: In the future, Brocade will have embedded switches with VCS functionality, allowing the blade switches and ToR switches to act as a single logical chassis. This drastically reduces management of the network.
  • Key Points: The layout of this use case is identical to existing ToR architectures.
  • Key Points: As the Ethernet fabrics scale, the networks can flatten, since fabrics are self-aggregating. In this example, VCS is used in the LAN and separate FC connections are made to the SAN. Note that the core can be from any vendor; to take full advantage of VCS, the core must appear as one switch by leveraging an MCT-like technology.
  • Key Points: For this use case, we are showing two ways the fabric can be configured. In this diagram, we are using a ToR mesh architecture. The benefit is a true flat network edge, where each switch is connected to its peers.
  • Key Points: In this example, the network is logically and physically flattened.
  • Key Points: For this use case, we are showing two ways the fabric can be configured. In this diagram, we are using a Clos fabric architecture. The benefit is a simplified design and maximum performance/availability. In this design, there are switches used to create the fabric that will not have edge ports, but the fabric is still managed as one logical chassis, flattening the network.
Transcript:

    1. Brocade One: Virtual Cluster Switching
    2. Agenda
       • Brocade Evolution
       • Next Generation Data Centre Challenges
       • Brocade One
       • Brocade's vision for the Data Centre
    3. Acquired Foundry 2008
       • Price/performance leader in IP networks
       • Powering 90% of Internet Exchange Points
       • 15,000+ customers worldwide
       • Data center networking experts
       • Storage networking pioneer and leader
       • 70% SAN market share
    4. Brocade Networks: End-to-End Networking (December 2008)
       • Brocade technology vision, mission and markets: service providers, enterprise networks and data center networks
       • Provisioning, operations and management across applications, servers, TCP/IP and FC SAN storage
       • Transport services: file management, load balancers, NAT, SSL acceleration, firewalls, VPN, extension, migration, FC encryption, replication
       • Global services: consulting, integration, logistics, maintenance
    5. Brocade One: The Challenges
    6. Brocade One: The Challenges - Architecture
       • Classic Data Centre model
       • Designed for North-to-South traffic
       • Client-to-server traffic model
       • Designed for transport, not the application
       • Standard enterprise solution
       • Enterprise technologies: stacking
       • Enterprise topologies: STP, MSTP
       • Enterprise limitations: STP, stacking
       • Minimize Layer 2 fault domains
       • Increased management footprint
       • Multi-layered, multi-protocol architectures for scalability
       • Diagram: roughly 75% North-to-South vs 25% West-to-East traffic, split across four separate Layer 2 domains
    7. Brocade One: The Challenges - Architecture
       • Increased West-to-East traffic
       • Next-generation apps (SOA, SaaS, Web 2.0)
       • Server virtualisation (VMs): server-to-server traffic
       • Convergence (FCoE): server-to-storage traffic
       • Drive for application awareness
       • Applications are the business enabler
       • DC designed around the application
       • Network needs to be aware of the apps
       • The new DC needs to be flat: a single scalable Layer 2 domain
       • Diagram: roughly 30% North-to-South vs 70% West-to-East traffic, with SOA, FCoE and VM traffic in a single Layer 2 domain
    8. Brocade One: The Challenges - Virtual Machine Mobility
       • VM migration can break network and application access
       • Port profile information must be identical at the destination: QoS, VLAN, security, etc.
       • Mapping the profile to every port eases mobility, but conflicts with network and security best practices
       • The network needs to be aware of VMotion dynamically
    9. Brocade One: The Challenges - Operational Complexity
       • Too many network layers: core (Layer 3: BGP, EIGRP, OSPF, PIM), aggregation (Layer 2/3: IS-IS, OSPF, PIM, RIP) and access, fixed and bladed (Layer 2/3: STP, OSPF, PLD, UDLD)
       • Multiple standard and proprietary protocols
       • Too many management points: multiple small-form-factor edge switches, each an individual management point, restricting deployment schedules
       • Too many management tools: separate tools for LAN, SAN and HBAs/NICs; management silos for SAN, LAN, blade switch, NIC and HBA management
    10. Brocade One: The Challenges - Flexibility for Open Systems
       • To provide business differentiation and investment protection, the Data Centre needs to be open, flexible and agile across the hypervisor (e.g. Hyper-V), server, network and storage layers
    11. Brocade One: Virtual Cluster Switching
    12. Brocade One: Virtual Cluster Switching
       • Brocade's vision for the next-generation Data Centre
       • Evolutionary technology: built on the principles of Brocade's SAN fabric technology, merged with Foundry's IP knowledge
       • Open technology: integration and interoperability with storage and server partners
       • VCS is evolution, not revolution: SAN fabric heritage, Foundry IP knowledge, FC/FCoE knowledge, and storage and server OEM partnerships feed into Virtual Cluster Switching
    13. Brocade's Virtual Cluster Switching (VCS)
       • Ethernet Fabric: lossless Ethernet fabric for scalable, converged Layer 2 domains
       • Distributed Intelligence: distributed intelligence within the fabric for seamless server mobility
       • Logical Chassis: logical chassis behaviour for simplified management and collapsing layers
       • Dynamic Service Insertion: dynamic service insertion within the fabric for agility and zero downtime
    14. Virtual Cluster Switching: Ethernet Fabric
       • First data center Ethernet fabric
       • No Spanning Tree Protocol
       • Active-active Layer 2 topology
       • Multi-path, fully deterministic
       • Auto-healing, non-disruptive
       • Arbitrary topology: star, mesh, hub-and-spoke, Clos, etc.
       • Built for convergence: lossless, low latency (VM, NAS, iSCSI, FCoE traffic)
    15. Virtual Cluster Switching: Ethernet Fabric & TRILL
       • The VCS Ethernet Fabric capabilities are achieved using TRILL (Transparent Interconnection of Lots of Links), a proposed data center Layer 2 protocol being developed by an Internet Engineering Task Force (IETF) workgroup
       • Introduces Layer 3 control plane concepts to Layer 2, providing scalability, control and manageability for Layer 2 domains
       • Mission: "The TRILL WG will design a solution for shortest-path frame routing in multi-hop IEEE 802.1-compliant Ethernet networks with arbitrary topologies, using an existing link-state routing protocol technology." (source: IETF)
       • Scope: "TRILL solutions are intended to address the problems of …, inability to multipath, … within a single Ethernet link subnet" (source: IETF)
    16. Virtual Cluster Switching: Ethernet Fabric & TRILL
       • To achieve Layer 2 scalability, multi-pathing and stability, TRILL introduces Layer 3 concepts
       • Link-state protocol for the control plane: announces RBridges, not end MAC addresses; each RBridge has the full topology of the network
       • Hop-by-hop forwarding to the destination, allowing traffic engineering
       • No transient loops: a TTL within TRILL is decremented at each hop, avoiding transient loops and broadcast storms and enabling traceroute
       • No traffic flooding: unknown unicast and multicast are sent down a multicast tree, with reverse-path forwarding on each link of the tree
       • Diagram: the MAC table maps MAC-B to RBridge-3, and RBridge-3 is reached via RBridge-2; the TRILL frame carries ingress and egress RBridge nicknames, destination and source RBridge, outer and inner VLAN, destination and source MAC, and a TTL
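The hop-by-hop RBridge forwarding described above can be illustrated with a minimal sketch. This is an illustrative model only, not Brocade's implementation: the class names (TrillFrame, RBridge), field names and nickname values are hypothetical. An ingress RBridge looks up which RBridge announced the destination MAC, encapsulates the frame with a TRILL header carrying a TTL, and every transit RBridge decrements the TTL, dropping the frame at zero so transient loops cannot turn into broadcast storms.

```python
from dataclasses import dataclass

@dataclass
class TrillFrame:
    # Fields mirror the header layout sketched on the slide (hypothetical names).
    egress_nickname: str    # RBridge that announced the destination MAC
    ingress_nickname: str   # RBridge that encapsulated the frame
    ttl: int                # decremented at every RBridge hop
    inner_dst_mac: str
    inner_src_mac: str

class RBridge:
    def __init__(self, nickname, mac_table, next_hop):
        self.nickname = nickname
        self.mac_table = mac_table   # end-station MAC -> egress RBridge nickname
        self.next_hop = next_hop     # egress nickname -> next-hop RBridge (from link state)

    def ingress(self, dst_mac, src_mac):
        """Encapsulate a native frame toward the RBridge that announced dst_mac."""
        egress = self.mac_table[dst_mac]          # e.g. "MAC-B" -> "RBridge-3"
        return TrillFrame(egress, self.nickname, ttl=8,
                          inner_dst_mac=dst_mac, inner_src_mac=src_mac)

    def forward(self, frame):
        """Transit behaviour: drop on TTL expiry, deliver locally at the egress,
        otherwise pass the frame toward the next hop."""
        frame.ttl -= 1
        if frame.ttl <= 0:
            return None                            # loop protection
        if frame.egress_nickname == self.nickname:
            return frame.inner_dst_mac             # decapsulate and deliver
        return self.next_hop[frame.egress_nickname]
```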
    17. Virtual Cluster Switching: Ethernet Fabric & TRILL
       • Intelligent bandwidth utilisation within each fabric path
       • Active-active Ethernet fabric, achieved through TRILL's multi-pathing capability
       • A path is built from 10 GbE LAGs, allowing bandwidth on demand and path bandwidth intelligence
       • Traffic load balancing: flow-based hashing achieves roughly 65-70% utilisation; hardware-based byte spreading (packet spraying) achieves roughly 90-95% utilisation within each fabric path
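A rough sketch of why flow-based hashing leaves headroom on a multi-link trunk, compared with per-frame spreading: hashing the flow tuple pins each flow to one member link, so a handful of large flows can leave other links idle, whereas spraying frames fills all links more evenly (at the cost of having to keep frames of a flow in order). The field names and hash choice below are assumptions for illustration, not the switch's actual hash.

```python
import hashlib

def pick_member_link(src_mac, dst_mac, src_ip, dst_ip, n_links):
    """Flow-based LAG/ECMP hashing: every frame of a flow takes the same link,
    preserving order but typically yielding ~65-70% trunk utilisation."""
    key = f"{src_mac}{dst_mac}{src_ip}{dst_ip}".encode()
    return int(hashlib.md5(key).hexdigest(), 16) % n_links

def spray(frame_index, n_links):
    """Per-frame spreading ("packet spraying"): frames round-robin across all
    member links, approaching 90-95% utilisation, relying on the fabric to
    keep each flow's frames in order."""
    return frame_index % n_links
```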
    18. Virtual Cluster Switching: Ethernet Fabric - lossless QoS behaviour
       • Convergence ready: the VCS Ethernet Fabric is lossless
       • 802.1Qbb, Priority-based Flow Control (PFC): allows identification and prioritization of traffic
       • 802.1Qaz, Enhanced Transmission Selection (ETS) and Data Center Bridging Exchange (DCBX): ETS allows grouping of different priorities and allocation of bandwidth to PFC groups (a sketch of this allocation follows below); DCBX is the discovery and initialization protocol used to discover resources connected to a DCB-enabled network
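As a rough illustration of how ETS divides link bandwidth among priority groups, the sketch below allocates a 10 Gbps link by configured percentages and lets bandwidth left unused by an idle group be borrowed by groups that still have demand. The group names and percentages are hypothetical examples, not a recommended profile.

```python
def ets_allocation(link_gbps, groups, offered):
    """groups: {name: percent of link}; offered: {name: offered load in Gbps}.
    Returns the bandwidth each priority group gets: its guaranteed share first,
    then any spare capacity redistributed to groups with remaining demand."""
    granted = {g: min(offered.get(g, 0.0), link_gbps * pct / 100.0)
               for g, pct in groups.items()}
    spare = link_gbps - sum(granted.values())
    for g in groups:
        want = offered.get(g, 0.0) - granted[g]
        take = min(want, spare)
        granted[g] += take
        spare -= take
    return granted

# Hypothetical profile: FCoE storage guaranteed 40%, LAN 50%, management 10%.
print(ets_allocation(10, {"fcoe": 40, "lan": 50, "mgmt": 10},
                     {"fcoe": 3.0, "lan": 6.5, "mgmt": 0.2}))
# -> {'fcoe': 3.0, 'lan': 6.5, 'mgmt': 0.2}  (LAN borrows the bandwidth FCoE isn't using)
```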
    19. Virtual Cluster Switching: Lossless Ethernet Fabric
       • Top-of-rack configuration converging LAN, SAN A and SAN B traffic
       • Fewer cables, fewer adapters, fewer switches
    20. Virtual Cluster Switching: Logical Chassis
       • Fabric managed as a single switch
       • Logically collapses network layers: single management for the edge and aggregation layers
       • Auto-configuration for new devices
       • Centralized or distributed management
       • Reduces the number of managed elements
    21. Virtual Cluster Switching: Logical Chassis
       • VCS is a standard Ethernet switch; VCS members behave like blades in a modular chassis
       • Standard protocols to communicate outside the fabric: RSTP, LACP, 802.1x, sFlow, etc.
       • No need to rip and replace: leverage existing infrastructure; evolutionary migration, not revolutionary
       • Collapses access and aggregation into a single layer with a single point of management, in front of the existing core (Layer 3: BGP, EIGRP, OSPF, PIM) and aggregation/distribution layer (Layer 2/3: IS-IS, OSPF, PIM, RIP)
    22. Virtual Cluster Switching: Distributed Intelligence
       • Fully distributed control plane: configuration database replicated on each switch
       • Master-less control, no re-convergence
       • Network-wide knowledge of all members, devices and VMs
       • Arbitrary topology, self-forming
       • Automatic Migration of Port Profiles (AMPP)
    23. Virtual Cluster Switching: Distributed Intelligence
       • Allows a VM to move, with the network automatically reconfiguring
       • Port Profiles (Port Profile ID; QoS, ACLs, policies; VLAN ID; storage zoning) are created and managed in the fabric, and distributed
       • Profiles are discovered by Brocade Network Advisor (BNA) and pushed to orchestration tools
       • The server admin binds a VM MAC address to a Port Profile ID; the MAC address/Port Profile ID association is pulled by BNA and sent to the fabric
       • Intra- and inter-host switching and profile enforcement are offloaded from the physical servers
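A minimal data-model sketch of the AMPP flow described above, with hypothetical names (PortProfile, bindings, profile_for) that are not Brocade's actual objects: the fabric holds the profiles, the server administrator binds a VM's MAC address to a profile ID, and whichever fabric port the MAC later appears on can look up and apply that profile, so policy follows the VM as it migrates.

```python
from dataclasses import dataclass, field

@dataclass
class PortProfile:
    profile_id: str
    vlan_id: int
    qos_class: str
    acls: list = field(default_factory=list)
    storage_zone: str = ""

# Profiles are created once and replicated throughout the fabric (hypothetical example).
profiles = {"web-tier": PortProfile("web-tier", vlan_id=20, qos_class="gold",
                                    acls=["permit tcp any any eq 443"])}

# The server admin binds each VM MAC to a profile ID (the association the slide
# describes as being pulled by BNA and sent to the fabric).
bindings = {"00:50:56:aa:bb:01": "web-tier"}

def profile_for(mac):
    """When a MAC appears on any fabric port (e.g. after a VM migration), look up
    its binding so the receiving port can apply the VLAN/QoS/ACL/zoning policy."""
    return profiles.get(bindings.get(mac, ""))

print(profile_for("00:50:56:aa:bb:01"))   # -> the "web-tier" profile follows the VM
```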
    24. Virtual Cluster Switching: Distributed Intelligence
       • Today, access to the network lives in the virtual hypervisor switch: it consumes valuable host resources, lacks traffic visibility for security, and has no clear management control
       • VCS offloads switching to the physical switch: the software switch is eliminated using Virtual Ethernet Port Aggregator (VEPA) technology, and virtual NICs are offloaded to the physical NIC using Virtual Ethernet Bridging (VEB) technology
       • Host resources are freed up for applications (5-20% of host resources back to applications) and VMs have direct I/O with the network
    25. Virtual Cluster Switching: Dynamic Service Insertion
       • Reconfigure the network via software, with hardware-based flow redirection
       • Incorporation of partner services: service modules in a chassis, available to the entire VCS fabric (network services such as encryption, Layer 4-7, extension, security)
       • Non-stop service insertion minimizes cost and physical moves
    26. Virtual Cluster Switching: Dynamic Service Insertion - VCS Fabric Extension
       • Delivers high-performance accelerated connectivity with full line-rate compression
       • Secures data in flight with full line-rate encryption
       • Load balances throughput and provides full failover across multiple connections
       • A dynamic service to connect data centers: extends the Layer 2 domain over distance across a public routed network (encryption, compression, multicasting)
       • Maintains fabric separation while extending VCS services to the secondary site (e.g. discovery, distributed configuration, AMPP)
    27. Virtual Cluster Switching: Native Fibre Channel Connectivity
       • VCS native Fibre Channel capabilities add Brocade's Fibre Channel functionality to the VCS fabric: 8 Gbps and 16 Gbps FC, frame-level ISL Trunking, Virtual Channels with QoS, etc.
       • Provides the VCS Ethernet fabric with native connectivity to FC storage: connect FC storage locally, or leverage new or existing Fibre Channel SAN resources (e.g. a Brocade DCX-based FC SAN)
    28. Virtual Cluster Switching: Power of an Open Solution
       • The Brocade One architecture is open across the hypervisor (e.g. Hyper-V), server, network and storage layers, supporting iSCSI, NAS, FC and FCoE storage
    29. Virtual Cluster Switching: Simplified End-to-End Management
       • Single data center-wide platform: Brocade Network Advisor element management for Ethernet, Fibre Channel, and Data Center Bridging (DCB), across LAN, converged and SAN networks
       • Open northbound APIs, with integration into leading orchestration tools
       • VMware and Microsoft hypervisor plug-ins
    30. Brocade One: Virtual Cluster Switching - The Fabric
       • Network, 3 to one: one converged Ethernet fabric instead of separate IP, SAN and HPC networks
       • Complexity, 3 to one: one flat network layer
       • Management, 20 to one: one management point for the fabric
       • Hypervisor operation, 3 to one: one virtual access layer (distributed intelligence, vSwitch/VEB/VEPA)
    31. Questions?
    32. Brocade One: Deployment Scenarios
    33. VCS Deployment Scenario 1: 1/10 Gbps Top-of-Rack Access - Architecture
       • Preserves the existing architecture: leverages the existing core/aggregation and co-exists with existing ToR switches
       • Supports 1 and 10 Gbps server connectivity
       • Active-active network: load splits across connections, no single point of failure
       • Self-healing, fast link reconvergence (< 250 milliseconds)
       • High-density access with flexible subscription ratios: supports up to 36 servers per rack with 4:1 subscription
       • Diagram: a 2-switch VCS at the ToR, connected by LAG to the existing aggregation (Brocade MLX with MCT, Cisco with vPC/VSS, or other), core and WAN, alongside existing 1 Gbps access switches, serving 1 Gbps and 10 Gbps servers
    34. VCS Deployment Scenario 1: 1/10 Gbps Top-of-Rack Access - Topology
       • Active/active server connections: servers only see one ToR switch (logical chassis), with half the server connections
       • Reduced switch management: half the number of logical switches to manage
       • Unified uplinks: one LAG per VCS
       • Diagram: compared with a classic 10 GbE top-of-rack design (20 Gbps per server, active/passive), the 2-switch VCS per rack provides 20 Gbps per server active/active, a 4:1 10 Gbps subscription ratio to aggregation (vLAG towards MLX with MCT, Cisco with vPC/VSS, or other aggregation), and up to 36 servers per rack
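The subscription ratios quoted in these scenarios are simply server-facing bandwidth divided by uplink bandwidth out of the rack. The helper below is a hypothetical back-of-the-envelope check, not part of the deck; the example port counts are chosen only to reproduce a 4:1 ratio, since the exact uplink count depends on how the LAG is sized.

```python
def oversubscription(server_ports, server_port_gbps, uplink_ports, uplink_port_gbps):
    """Edge-to-uplink subscription ratio: server-facing bandwidth / uplink bandwidth."""
    edge = server_ports * server_port_gbps
    uplink = uplink_ports * uplink_port_gbps
    return edge / uplink

# Hypothetical example: 36 servers with one active 10 GbE link each, behind
# 9 x 10 GbE of uplink out of the rack, gives the 4:1 ratio quoted above.
print(oversubscription(36, 10, 9, 10))   # -> 4.0
```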
    35. VCS Deployment Scenario 1: 1/10 Gbps Top-of-Rack Access - Layout
       • Preserves the existing network architecture; leverage VCS technology in stages
       • 2-switch VCS in each server rack, managed as a single switch
       • 1 Gbps and 10 Gbps connectivity; highly available, active/active
       • High-performance connectivity to end-of-row aggregation: one LAG to the core for simplified management and rapid failover
       • Layout: aggregation switches at the end of each row, a 2-switch VCS at the top of each rack, and servers with 1 Gbps or 10 Gbps connectivity
    36. VCS Deployment Scenario 2: 10 Gbps Top-of-Rack Access for Blade Servers - Architecture
       • Preserves the existing architecture: leverages the existing core/aggregation and co-exists with existing ToR switches
       • Provides low-cost, first-stage aggregation: high-density blade servers without stress on the existing aggregation, and reduced cabling out of the rack
       • Active-active network: load splits across connections, no single point of failure
       • Self-healing, fast link reconvergence (< 250 milliseconds)
       • High-density ToR aggregation with flexible subscription ratios: supports up to 4 blade chassis per rack with 2:1 subscription
       • Diagram: a 2-switch VCS at the ToR, connected by LAG to the existing aggregation (MLX with MCT, Cisco with vPC/VSS, or other), core and WAN, serving blade servers with 1 Gbps switches or 10 Gbps switches/passthrough modules
    37. VCS Deployment Scenario 2: 10 Gbps Top-of-Rack Access for Blade Servers - Topology
       • First-stage network aggregation: an Ethernet fabric at the ToR aggregates 4 blade server chassis per rack (8 access switches) with high-performance 2:1 subscription through the VCS
       • Reduced switch management: half the number of logical ToR switches to manage; unified uplinks, one LAG per VCS
       • Future: blade switches become members of the VCS fabric, a drastic reduction in switch management
       • Diagram: dual 10 Gbps switch modules per chassis (any vendor), 8 links per blade switch into the 2-switch VCS per rack, up to 4 blade chassis per rack (64 servers), with a vLAG towards the aggregation (MLX with MCT, Cisco with vPC/VSS, or other) and a 4:1 10 Gbps subscription ratio through the first-stage aggregation
    38. VCS Deployment Scenario 2: 10 Gbps Top-of-Rack Access for Blade Servers - Layout
       • Preserves the existing network architecture; leverage VCS technology in stages
       • 2-switch VCS in each server rack, managed as a single switch, providing first-stage aggregation of 10 Gbps blade switches
       • High-performance connectivity to end-of-row aggregation: one LAG to the core for simplified management and rapid failover
       • Layout: switches at the end of each row (second-stage aggregation), a 2-switch VCS at the top of each rack (first-stage aggregation), and blade servers with 10 Gbps connectivity
    39. VCS Deployment Scenario 3: 1/10 Gbps Access, Collapsed Network - Architecture
       • Flatter, simpler network design: logical two-tier architecture with Ethernet fabrics at the edge
       • Greater Layer 2 scalability and flexibility: increased sphere of VM mobility, seamless network expansion
       • Optimized multi-path network: all paths are active, no single point of failure, STP not necessary
       • Diagram: VCS edge fabrics connected by LAG to the core (MLX with MCT, Cisco with vPC/VSS, or other) and WAN, with Fibre Channel connections from the edge to the SAN, serving 1/10 Gbps and 10 Gbps servers
    40. VCS Deployment Scenario 3: 1/10 Gbps Access, Collapsed Network - Topology (ToR mesh)
       • Scale-out VCS edge fabric: self-aggregating, flattens the network
       • Flexible subscription ratios: 312 usable ports per 10-switch VCS, supporting 144 servers in 4 racks, all with 10 Gbps connections
       • Drastic reduction in management: each VCS managed as a single logical chassis
       • Enables network convergence: DCB and TRILL capabilities for multi-hop FCoE and enhanced iSCSI
       • Diagram: a 10-switch VCS fabric with 200 usable ports; per switch, 4 links to the other switch in the rack and 9 links to adjacent switches; 1 link per VCS member to the core router (20 total), with L3 ECMP towards the core (MLX with MCT, Cisco with vPC/VSS, or other); up to 36 servers per rack, 5 racks per VCS, with 1 Gbps, 10 Gbps and DCB connectivity
    41. VCS Deployment Scenario 4: 1/10 Gbps Access, Collapsed Network - Layout (ToR mesh)
       • 2 VCS fabric members in each rack, with dual connectivity into the fabric for each server/storage array and low-cost Twinax cabling within the rack
       • Second-stage VCS fabric members in a middle-of-row rack, with low-cost Laserwire cabling from the top-of-rack switches
       • 1 VCS fabric per 4 racks of servers (assuming 36 servers per rack); fiber-optic cabling is only used for connectivity from the edge VCS to the core
       • Single vLAG per fabric: reduced management and maximum resiliency
       • Layout: horizontal stacking using a ToR mesh architecture, 5 racks per fabric, 2 fabric members per rack, servers and storage with 1 Gbps, 10 Gbps and DCB connectivity
    42. VCS Deployment Scenario 4: 1/10 Gbps Access, Collapsed Network - Topology (Clos fabric)
       • Scale-out VCS edge fabric: self-aggregating, flattens the network
       • Clos fabric topology for flexible subscription ratios: 312 usable ports per 10-switch VCS, supporting 144 servers in 4 racks, all with 10 Gbps connections
       • Drastic reduction in management: each VCS managed as a single logical chassis
       • Enables network convergence: DCB and TRILL capabilities for multi-hop FCoE and enhanced iSCSI
       • Diagram: a 10-switch fabric with 312 usable ports; 6 links per trunk (24 total); a 6:1 subscription ratio to the core (MLX with MCT, Cisco with vPC/VSS, or other, via L3 ECMP); 48 ports available for FC SAN connectivity or VCS expansion; up to 36 servers per rack, 4 racks per VCS, with 1 Gbps, 10 Gbps and DCB connectivity
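In a Clos design like this, the usable-port and spare-port figures fall out of simple port arithmetic: leaf ports not consumed by fabric trunks become edge ports, and spine ports not consumed by trunks are left over for SAN connectivity or expansion. The helper below is a hypothetical sketch; the switch sizes in the example are chosen only to show the arithmetic and will not reproduce the deck's exact 312-port and 48-port figures, which depend on the actual switch models and trunk sizes in the design.

```python
def clos_port_budget(leaves, spines, leaf_ports, spine_ports, uplinks_per_leaf):
    """Rough port budget for a two-stage (leaf/spine) Clos built from fixed switches
    managed as one logical chassis."""
    edge_ports = leaves * (leaf_ports - uplinks_per_leaf)   # usable server/storage ports
    fabric_links = leaves * uplinks_per_leaf                 # leaf-to-spine trunk links
    spare_spine_ports = spines * spine_ports - fabric_links  # left for SAN or expansion
    return {"usable_edge_ports": edge_ports,
            "fabric_links": fabric_links,
            "spare_spine_ports": spare_spine_ports}

# Hypothetical example: 8 leaves and 2 spines, 48-port leaves with 12 uplinks each,
# 60-port spines.
print(clos_port_budget(leaves=8, spines=2, leaf_ports=48, spine_ports=60,
                       uplinks_per_leaf=12))
# -> {'usable_edge_ports': 288, 'fabric_links': 96, 'spare_spine_ports': 24}
```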
    43. Questions?
