New Data Center Fabrics Have Arrived



Scaling the Cloud: Fulcrum Microsystems and BLADE Network Technologies Solution Brief

In the new generation of virtual data centers, the cloud network must scale efficiently. BLADE Network Technologies, together with Fulcrum Microsystems, provides the set of features that allows customers to "Scale the Cloud."

New Data Center Fabrics Have Arrived

The new-generation data center includes modern architectural elements such as multi-core servers, server and storage virtualization, fabric convergence and large-scale clustering. This places unprecedented demands on the data center interconnect infrastructure. In the past, networking gear originally designed for interconnecting desktops in the enterprise was also used for connectivity in the data center. Today, that is no longer practical, and new fabric solutions have been developed specifically with the demands of the new data center, and the highly scalable cloud, in mind. The key virtues that have become requirements in the cloud network and enable its massive scale-out include:

• Clos Architecture
• Low Latency
• High Throughput
• Lossless Fabric Qualities
• Power Efficiency
• Superior Price/Performance

BLADE's RackSwitch with Fulcrum's FocalPoint 10GbE switch chips embodies all of these key virtues, enabling fabric solution providers to deliver innovative platforms that are the foundation for some of the largest, highest-performance data center fabrics in existence. This is in contrast to some networking equipment manufacturers that are re-purposing switch fabrics designed for enterprise or telecom applications in an attempt to meet the scaling requirements of the cloud data center, leading to a much less efficient solution in terms of power, area and cost, as detailed in the Scaling the Cloud White Paper.

Recently, BLADE Network Technologies commissioned The Tolly Group to compare the BLADE RackSwitch G8100 and G8124 to the Cisco Catalyst 4900M.
BLADE's RackSwitch is a true data center product, delivering the virtues that data center managers demand, whereas the Cisco switch was originally designed for the enterprise. The Tolly report is meaningful in that it illustrates the stark contrast between platforms that have been optimized for the data center and those that have not. The full report: BLADE RackSwitch G8100 and G8124 Competitive Evaluation versus Cisco Catalyst 4900M Switch. Below are examples of networking products that deliver the virtues required of data center fabrics:
RackSwitch G8100

BLADE's RackSwitch G8100 and G8124 extend virtualization by mirroring the benefits of server virtualization within the network at the rack level. This saves energy and removes complexity through simplified management and fabric convergence.

RackSwitch G8124

Clos Architectures Offer Superior Performance and Scale

A data center network model being promoted by the incumbent enterprise switch providers consists of a central monolithic router, which comes with high cost, limited performance and significant complexity. This solution may also include switch fabric access nodes (often in the form of top-of-rack switches) with similar performance and complexity. Overall data center network scalability and performance is limited by the central router, and the complexity is aimed at locking in customers through the large up-front investment and training required.

The Clos architecture, originally implemented using proprietary fabrics, was first introduced to Ethernet with Fulcrum's FocalPoint 10GbE switch chips, offering Ethernet's lower cost along with standards-compliant scalability and undiminished performance. By interconnecting full-bandwidth multi-tier switches in a non-blocking fashion, a dense, fully connected fabric can be created. The switch elements are simple and efficient, and connect together in a uniform fashion to create a large, scalable fabric. Fulcrum's white paper on Clos architectures provides in-depth analysis of this approach. The Clos architecture leads to greater scale, lower cost and lower power while maintaining low end-to-end latency when using FocalPoint switches.
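The scaling benefit of the Clos approach can be sketched with simple port arithmetic. The following is an illustrative calculation, not a vendor specification: it assumes a strictly non-blocking two-tier (leaf/spine) Clos built from identical fixed-radix switch elements, with half of each leaf's ports facing hosts and half facing spines.

```python
# Hypothetical sketch: host capacity of a two-tier, non-blocking Clos
# fabric built from identical radix-k switch elements. All figures
# are illustrative, not taken from any product datasheet.

def clos_two_tier_hosts(radix: int) -> int:
    """Non-blocking host ports for a two-tier Clos of radix-`radix` switches."""
    uplinks = radix // 2          # leaf ports toward the spine tier
    downlinks = radix - uplinks   # leaf ports toward hosts
    spines = uplinks              # one uplink from each leaf to each spine
    leaves = radix                # each spine port connects to one leaf
    return leaves * downlinks     # total non-blocking host ports

print(clos_two_tier_hosts(24))   # 24 leaves x 12 host ports = 288
print(clos_two_tier_hosts(64))   # 64 leaves x 32 host ports = 2048
```

Because host capacity grows with the square of the element radix, modest building blocks compose into a large fabric without a central monolithic router.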
The Clos architecture, implemented with simple, low-latency fabric building blocks, offers high scale and non-blocking throughput. Alternative architectures based on complex and costly central routers are limited in scale, throughput and latency performance.

The Case for Low Latency

Although some debate remains about the relative importance of low latency in the data center fabric, there is general agreement that lower latency leads to higher application performance. In the past, it was argued that fabric latency was dwarfed by latency elsewhere in the system (application, NIC, disk access, etc.). With each generation, however, latency in the rest of the system continues to improve. Additionally, as data center clusters continue to grow, tiers of switching are required to achieve new levels of density, and each tier adds latency. Because of this, latency in the fabric matters: the lower the latency of each switch, the more tiers of switching can be introduced without impacting the overall performance of the fabric. Low latency leads to greater scale.

The new-generation data center fabrics leverage commercial silicon technologies, such as Fulcrum's FocalPoint 10GbE switch chips; the BLADE RackSwitch G8100 and G8124 are examples of such systems. The FocalPoint devices are especially compelling in terms of latency, which is achieved using a cut-through architecture and a latency-optimized internal data path. This leads to extremely low latency that is independent of packet size.
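The packet-size independence of cut-through switching can be illustrated with back-of-the-envelope arithmetic. A store-and-forward switch must receive the entire frame before forwarding, so its per-hop latency grows with frame size; a cut-through switch begins forwarding after the header. The 300 ns fixed component below is borrowed from the G8100 figure quoted in this brief purely for illustration; the comparison itself is generic.

```python
# Illustrative sketch (not vendor measurements): per-hop latency of
# store-and-forward vs. cut-through switching on a 10G Ethernet link.

LINE_RATE_BPS = 10e9  # 10 Gb/s

def store_and_forward_ns(packet_bytes: int, fixed_ns: float = 300.0) -> float:
    """Fixed switch latency plus full-frame serialization at ingress."""
    serialization_ns = packet_bytes * 8 / LINE_RATE_BPS * 1e9
    return fixed_ns + serialization_ns

def cut_through_ns(packet_bytes: int, fixed_ns: float = 300.0) -> float:
    """Cut-through forwards after the header: latency is size-independent."""
    return fixed_ns

for size in (64, 1500, 9000):
    print(size, store_and_forward_ns(size), cut_through_ns(size))
# At 9000 B, store-and-forward adds 7200 ns of serialization per hop,
# while cut-through stays at the fixed 300 ns.
```

Multiplied across the several hops of a multi-tier fabric, that per-hop difference is why cut-through designs tolerate deeper switching hierarchies.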
Data center fabrics (here represented by the BLADE RackSwitch G8100) deliver, on average, 12x lower latency than enterprise switches (here represented by the Cisco Catalyst 4900M). Further, because of the cut-through switching architecture, typical data center fabrics offer fixed low latency, regardless of packet size. In this case, packets from 64 Bytes to 9K Bytes have the same latency through the fabric: 300ns (G8100) to 680ns (G8124).

Throughput Matters

Just as low latency leads to higher performance and scale in a data center, so does high throughput, though for slightly different reasons. The new-generation data centers have embraced the notion of clustering multi-socket and multi-core computing elements for improved application performance. With as many as 32 threads or more running (each at GHz frequencies), it has become quite reasonable to expect that a single computer will be constrained by its I/O bandwidth, even when using 10G Ethernet. A 10G Ethernet pipe with only 50% utilization used to be compelling (especially compared to the traditional approach of aggregating two or four 1G Ethernet ports in a link-aggregation group). Today, however, the additional bandwidth is needed. Without it, the compute systems are again I/O constrained and additional congestion occurs in the fabric, limiting performance and scale compared to a non-blocking environment. High throughput leads to greater scale.
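The effect of under-provisioned switching can be sketched as a fair-share calculation. The host counts and uplink capacities below are invented for illustration: when a rack's uplink capacity is less than the sum of its host links (oversubscription), each host's worst-case share under load drops below its NIC rate.

```python
# Hedged sketch: worst-case per-host bandwidth when `hosts` links of
# `link_gbps` each contend for `uplink_gbps` of aggregate uplink.
# All figures are hypothetical, chosen only to show the arithmetic.

def per_host_gbps(link_gbps: float, hosts: int, uplink_gbps: float) -> float:
    """Fair share of uplink capacity, capped at the host's own link rate."""
    return min(link_gbps, uplink_gbps / hosts)

# Non-blocking rack: 40 hosts at 10G sharing 400G of uplink.
print(per_host_gbps(10, 40, 400))   # 10.0 -> full line rate preserved
# 2:1 oversubscribed rack: same hosts, only 200G of uplink.
print(per_host_gbps(10, 40, 200))   # 5.0 -> hosts are I/O constrained
```

This is the arithmetic behind the brief's point: an oversubscribed fabric re-creates the I/O bottleneck that 10G Ethernet was supposed to remove.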
Many enterprise switches are not fully provisioned and thus introduce congestion, which limits performance and scale. Data center fabrics, on the other hand, are generally architected to support fully non-blocking throughput, regardless of packet size and traffic patterns.

Lossless Qualities Enable Convergence

Whether you're talking about Data Center Bridging (DCB), Fibre Channel over Ethernet (FCoE), iSCSI or High Performance Computing (HPC), the notion of combining compute, storage and network traffic onto a single unified fabric has tremendous appeal in the data center. A well-implemented unified fabric can lead to reduced cost, greater efficiency and greater simplicity, without compromising performance or scale. Several new IEEE initiatives have been introduced to enable Ethernet to simultaneously support various traffic types, each with unique characteristics. These initiatives include:

• Priority Flow Control (PFC), which supports pause-based flow control on a per-priority basis, providing the ability to protect the flow of certain traffic types over others on the same link.
• Enhanced Transmission Selection (ETS), which controls the allocation of bandwidth among traffic classes and provides bandwidth guarantees to certain traffic types.
• Quantized Congestion Notification (QCN), which addresses the problem of sustained congestion by having congestion points generate congestion notification messages that drive corrective action at the ingress of the fabric.
• Data Center Bridging Exchange (DCBX) protocol, which allows neighboring switches to exchange information on their level of support for PFC, ETS and QCN.

Additional discussion can be found in Fulcrum's Data Center Bridging white paper or on BLADE's IP SAN Solutions pages.
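The bandwidth-sharing idea behind ETS can be sketched in a few lines. This is a simplified model, not the 802.1Qaz scheduler: each class receives a weighted guarantee of the link, and (in a single redistribution round, where real schedulers arbitrate frame by frame) guarantee left unused by idle classes is offered to classes that want more. The class names, weights and demands are invented for illustration.

```python
# Hedged sketch of ETS-style weighted bandwidth sharing on one link.
# Simplification: one redistribution round; a real 802.1Qaz scheduler
# enforces shares continuously per frame.

def ets_allocate(link_gbps, weights, demand):
    """Allocate link bandwidth among classes by weight, capped at demand."""
    total_w = sum(weights.values())
    guarantee = {c: link_gbps * w / total_w for c, w in weights.items()}
    alloc = {c: min(guarantee[c], demand[c]) for c in weights}
    spare = link_gbps - sum(alloc.values())          # unused guarantees
    hungry = {c: w for c, w in weights.items() if demand[c] > alloc[c]}
    if hungry and spare > 0:                         # share spare by weight
        hw = sum(hungry.values())
        for c in hungry:
            alloc[c] = min(demand[c], alloc[c] + spare * weights[c] / hw)
    return alloc

# Storage is guaranteed 50% of a 10G link but only offers 3G,
# so LAN and HPC traffic absorb the slack in proportion to weight.
print(ets_allocate(10, {"storage": 50, "lan": 30, "hpc": 20},
                       {"storage": 3, "lan": 6, "hpc": 4}))
```

The key property shown is the one the bullet above describes: storage never loses its guarantee, yet the link stays fully utilized when storage is idle.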
Power Efficiency is a Critical Scaling Factor

The green data center has recently emerged as a popular topic of discussion, with incumbent enterprise switch providers often downplaying the importance of power efficiency within the interconnect infrastructure. There is no denying, though, that the new-generation data center fabrics are dramatically more power efficient than traditional enterprise switches. Every data center has a power and thermal budget that must be adhered to. Higher-powered switches eat into the budget available for compute and storage resources, which limits density, scale and overall performance. Power efficiency leads to greater scale: the more power efficient the fabric, the more of the power and thermal budget can be dedicated to critical compute and storage resources. This improves density and scale, along with the overall performance achievable from a given power and thermal profile in a data center.
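The budget argument above reduces to simple subtraction, sketched below with entirely hypothetical wattages (no figure here comes from any product datasheet): every watt the top-of-rack switch consumes is a watt unavailable to servers in the same rack.

```python
# Back-of-the-envelope sketch: servers that fit in a fixed rack power
# budget after the top-of-rack switch takes its share. All wattages
# are invented for illustration.

def servers_per_rack(rack_budget_w: float, switch_w: float,
                     server_w: float) -> int:
    """Whole servers that fit in the budget remaining after the switch."""
    return int((rack_budget_w - switch_w) / server_w)

print(servers_per_rack(10_000, 300, 350))    # efficient fabric switch -> 27
print(servers_per_rack(10_000, 1_500, 350))  # power-hungry switch    -> 24
```

In this toy example the leaner switch frees enough of the budget for three additional servers per rack, which is the density-and-scale effect the section describes.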
Data Center Fabrics Offer Superior Price/Performance

The new-generation data center fabrics, such as the BLADE RackSwitch G8100 and G8124, leverage the high level of integration available in Fulcrum's FocalPoint 10GbE switch chips. This allows system designers to deliver solutions with unparalleled price/performance characteristics by offering full line-rate performance at a fraction of the price of comparable enterprise switches. The BLADE RackSwitch G8100 offers 6.5x better price/performance than the Cisco 4900M, delivering greater than 2x the throughput at less than one-third the price. Fulcrum and BLADE are delivering the key virtues of the data center fabric today.

To learn more:

• BLADE RackSwitch series
• Cloud Solutions
• 10Gb Ethernet Solutions

©2009 Fulcrum Microsystems and BLADE Network Technologies, Inc. All rights reserved. Information in this document is subject to change without notice. BLADE Network Technologies assumes no responsibility for any errors that may appear in this document. All statements regarding BLADE's future direction and intent are subject to change or withdrawal without notice, at BLADE's sole discretion. MKT090817