Storage Performance Takes Off

This IT Brand Pulse analyst report describes the emergence of two new classes of network adapter: CNA/HBAs and SSD/HBAs.

1. Storage Performance Takes Off: With Next Gen Server Adapters

Where IT perceptions are reality. An Industry Brief featuring the QLogic 2600 Series 16Gb Fibre Channel HBAs and the QLogic 10000 Series SSD Fibre Channel Adapters.

Document # INDUSTRY2013001 v7, January 2013. Copyright © 2012 IT Brand Pulse. All rights reserved.

2. Storage Performance Takes Off: The Network Adapter Industry Responds

Since 8Gb Fibre Channel HBAs and 10Gb Converged Network Adapters (CNAs) were introduced in 2008, storage performance has taken off behind the power of new Romley-based servers and SSD storage. The networking industry responded in 2012 with a new generation of PCIe CNA/HBAs that support either 16Gb native Fibre Channel or 10Gb Ethernet NAS, iSCSI and FCoE storage traffic on each port. The industry also introduced an innovative new class of adapter that merges the functions of a non-HA PCIe SSD and a Fibre Channel HBA into one high-availability SSD/HBA.

[Figure: Network Adapter Industry Road Map. In 2012, two new types of network adapters were introduced: CNA/HBAs and SSD/HBAs. By 2014, a future generation of Super CNAs will support native Fibre Channel and Ethernet storage protocols, and provide high-availability, shared SSD cache and SSD storage.]

Romley: Intel codename for the powerful Xeon E5 server platform, with two more cores, 8MB more cache, six more DIMMs of faster DDR3-1600 memory, and twice as much I/O bandwidth with PCIe 3.0.

3. Best Storage Network Performance: CNA/HBA, a New Class of Network Adapter

Introduced in 2008, CNAs support TCP/IP LAN and NAS storage traffic, as well as iSCSI and FCoE SAN storage traffic, over 10Gb Ethernet. With a negligible performance difference between 8Gb Fibre Channel and 10Gb FCoE, the IT community has maintained a strong preference for native Fibre Channel, effectively deciding to maintain separate Ethernet and Fibre Channel networks. In 2012, the emergence of 16Gb Fibre Channel raised the bar for best storage network performance, and the introduction of server adapters that function as either a 16Gb Fibre Channel HBA or a 10Gb CNA gives data center managers the best of both worlds: complete network adapter hardware convergence, plus separate Fibre Channel and Ethernet networks.

[Figure: Anatomy of a PCIe CNA/HBA]
• Virtualized: with support for N_Port ID Virtualization (NPIV), the CNA/HBA can be configured as multiple virtual adapters, each with a different protocol, QoS and security policy (see the sketch following this slide).
• 10Gb CNA: when a port is configured for Ethernet, it supports 10GbE-based LAN, NAS and (iSCSI and FCoE) SAN traffic simultaneously.
• Flexible: backwards compatible with legacy 1GbE, 4Gb FC and 8Gb FC networks. When migrating to a converged network, ports are easily changed from Ethernet to Fibre Channel, or from Fibre Channel to Ethernet.
• 16Gb Fibre Channel HBA: when a port is configured for Fibre Channel, it is backwards compatible with 4Gb and 8Gb FC SANs, as well as new high-performance 16Gb SANs.
• Dual-core ASIC: one chip provides both native Fibre Channel protocol processing and Ethernet protocol processing.
• PCIe 3.0 x4: provides 4GB/s (32Gb/s) of bandwidth to the processor, effectively double the bandwidth of a PCIe 2.0 bus of the same width and twice the bandwidth of a 16Gb Fibre Channel link.

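On Linux hosts, the NPIV capability described above is typically exposed through the standard fc_host sysfs interface. The sketch below is illustrative only: it assumes a Linux server with an NPIV-capable HBA visible as host0, and the WWPN/WWNN values are made up. Exact paths and identifier formats should be verified against the installed driver and vendor tools.

```python
# Illustrative sketch: creating an NPIV virtual port via the Linux
# fc_host sysfs interface. Assumes an NPIV-capable HBA at host0;
# the WWPN/WWNN identifiers below are hypothetical.
from pathlib import Path

FC_HOST = Path("/sys/class/fc_host/host0")

def npiv_supported(host: Path) -> bool:
    """NPIV-capable fc_hosts expose a max_npiv_vports attribute."""
    return (host / "max_npiv_vports").exists()

def create_vport(host: Path, wwpn: str, wwnn: str) -> None:
    """Create a virtual N_Port; the kernel expects 'wwpn:wwnn' in hex."""
    (host / "vport_create").write_text(f"{wwpn}:{wwnn}")

if npiv_supported(FC_HOST):
    # One virtual adapter, e.g. dedicated to a single VM's SAN traffic.
    create_vport(FC_HOST, "2101001b32a90001", "2001001b32a90001")
```

Each virtual port created this way logs into the fabric with its own identity, which is what lets zoning, QoS and security policy be applied per virtual adapter rather than per physical card.
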
4. Best SSD Performance: SSD/HBA, the First Enterprise-Class PCIe SSD

By eliminating the need to deploy two types of adapters for Ethernet and Fibre Channel connectivity, the new class of 10GbE CNA/16Gb FC HBAs represents an evolutionary and very useful change for server administrators. However, the new class of PCIe SSD/FC HBAs is a revolutionary development for adapter technology. Using the Fibre Channel network to share SSD SAN metadata, PCIe SSD cache in different servers can now be shared and replicated for high availability. PCIe SSD, already the highest-bandwidth and lowest-latency storage possible, is now suited for enterprise-class applications.

[Figure: Anatomy of a PCIe SSD/HBA]
• Virtualized: virtual CNA/HBAs can be configured with different protocol, QoS and security policies, each with its own SSD cache.
• Fibre Channel HBA: a port can be configured as a Fibre Channel HBA to connect a server to a SAN.
• Shared cache: PCIe SSD/HBAs can be configured to cache frequently accessed data on SAN disk arrays. The adapters can be installed in separate servers and the cache pooled into one cache area network.
• PCIe SSD: a port can be connected to the SAN so the PCIe SSD can serve as a cache to external disk, and so that shared cache or shared SSD storage metadata can be maintained on multiple servers.
• PCIe 3.0 x16: 16GB/s (128Gb/s) of bandwidth from the SSD directly to the server processor, eight times the bandwidth of a 16Gb Fibre Channel link.
• High availability: SSD cache and storage can be mirrored, allowing the PCIe SSD/HBA to be deployed in pairs for enterprise-class redundancy and high availability.

IOPS: a 15,000 RPM HDD delivers approximately 200 I/Os per second (IOPS); a single PCIe SSD delivers about 100,000 IOPS, a gap quantified below.

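The IOPS sidebar makes the scale of the gap easy to check. A minimal back-of-the-envelope calculation using the report's approximate figures:

```python
# Back-of-the-envelope IOPS comparison using the report's figures.
HDD_15K_IOPS = 200        # one 15,000 RPM HDD, approx.
PCIE_SSD_IOPS = 100_000   # one PCIe SSD, approx.

# IOPS across HDDs in a LUN are roughly additive, so matching a single
# PCIe SSD takes on the order of:
hdds_needed = PCIE_SSD_IOPS // HDD_15K_IOPS
print(f"~{hdds_needed} x 15K RPM HDDs to match one PCIe SSD")  # ~500
```
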
5. The Next Wave of Convergence: It's Here

Vastly different LAN and SAN products from Ethernet and Fibre Channel adapter vendors started merging into one with the introduction of converged network adapters. The emergence of CNA/HBAs and SSD/HBAs has created an enterprise storage adapter market with vastly different products again. Today, only QLogic is offering a full suite of next-gen enterprise storage adapters.

Server Adapter     | Function
10GbE CNA          | Connects servers to 10GbE LANs, NAS, iSCSI SANs and FCoE SANs
10Gb/16Gb CNA/HBA  | Connects servers to 10GbE LANs, NAS, iSCSI SANs and FCoE SANs, plus 16Gb FC SANs
Non-HA SSD         | Single-card, non-shared SSD cache for HDDs, or SSD storage
HA SSD/HBA         | Redundant, shared SSD cache for HDDs, or SSD storage, plus FC SANs

Vendors compared: QLogic, Broadcom, Brocade, Cisco, Emulex, Fusion-io, LSI, Micron. QLogic is the only enterprise storage adapter vendor with a Fibre Channel HBA, Ethernet CNA and PCIe SSD technology.

6. Where New Adapters Fit: SSD/HBAs Are in a Class by Themselves

Until the advent of SSD/HBA technology, the performance of a storage system was defined by the number of HDDs and the speed of the HDDs and network connection. The pecking order in this scheme ranges from 16Gb Fibre Channel down to 1Gb Ethernet. Now that SSD/HBA products are available, data center architects will want to factor levels of SSD performance and availability into their application requirements. Architects who need gigabytes to terabytes of capacity with the lowest latency for clustered applications will configure PCIe SSD, which is closest to the server processor. Architects who need traditional mass storage capacity with the highest bandwidth will configure SSD SAN arrays with the highest-bandwidth (16Gb Fibre Channel) links to the server.

[Figure: Enterprise Storage Adapter Quadrant, plotting application requirements (less to more IO-intensive workloads) against infrastructure capacity (less to more network bandwidth)]
• Takes traffic off the network and onto low-latency Tier-0 SSD for the most IO-intensive workloads; interoperable with FC infrastructure.
• Supports the highest-bandwidth Tier-1 16Gb FC SAN storage.
• Allows LAN/SAN convergence at 10GbE.
• Allows LAN/SAN convergence with cost-effective iSCSI storage at 1GbE and 10GbE.
• Supports LAN and NAS at 1GbE and 10GbE.
(A simple decision-rule reading of this quadrant is sketched after this slide.)

Tier 1: a tier where data is online but, unlike Tier-0 SSD storage, is stored on slower, less expensive HDDs.

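One way to read the quadrant is as a decision rule that maps a workload profile to an adapter class. The sketch below is only an illustration of that reading, with hypothetical inputs; it is not a sizing tool and the mapping is this report's interpretation, not a vendor rule.

```python
# Illustrative decision rule for the adapter quadrant (hypothetical).
def recommend_adapter(io_intensive: bool, needs_san_bandwidth: bool) -> str:
    if io_intensive and not needs_san_bandwidth:
        return "SSD/HBA: move hot data onto low-latency Tier-0 PCIe SSD"
    if io_intensive and needs_san_bandwidth:
        return "16Gb FC HBA: highest-bandwidth Tier-1 SAN links"
    if needs_san_bandwidth:
        return "10GbE CNA: LAN/SAN convergence with FCoE or iSCSI"
    return "1GbE/10GbE NIC: LAN, NAS and cost-effective iSCSI"

# Example: a clustered database with a modest working set.
print(recommend_adapter(io_intensive=True, needs_san_bandwidth=False))
```
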
7. The Ultimate in Performance: QLogic 10000 Series SSD Fibre Channel Adapters

On January 8, 2013, QLogic introduced the QLE10000, signaling its intention to step into the SSD market. The QLE10000 is a blend of SSD technology and Fibre Channel HBA technology, forming the industry's first SSD/HBA. The QLE10000 is also the first PCIe SSD product to offer high-availability shared SAN cache.

Shared cache is the ability of a server to carve up its PCIe SSD into virtual caches or storage LUNs, and to provision the cache or LUNs to other servers as needed (a toy model follows this slide). Shared cache and storage is inherent in SAN SSD systems but, until now, non-existent for PCIe adapters spread across multiple servers.

In the old days, storage consisted of non-shared direct-attached storage (DAS) inside a server, and utilization averaged around 30%. The invention of shared NAS and SAN storage drove the utilization of storage to 80% and beyond as virtual disk drives were tailored for each server. The same principle applies to server virtualization: before VMware, average non-shared server CPU utilization hovered around 30%; now IT pros are loading virtual machines onto servers until CPU resources are fully utilized.

QLogic is leading the industry from non-shared, direct-attached cache to a high-availability, shared SAN cache architecture. The value of this capability is intuitive to IT professionals and CFOs, because sharing IT resources to consolidate infrastructure is a basic best practice and generates a powerful return on investment.

[Figure: Cache Captive to Server. With direct-attached cache, the cache is accessed by a single server; the expensive Flash memory cannot be provisioned to other servers if needed.]

[Figure: Shared, High-Availability SAN Cache. With shared cache in a SAN, the utilization of expensive SSD is maximized; the quantity of cache, server access to the cache, and storage access to the cache are tailored exactly to the needs of servers on the SAN.]

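To make the "carve up and provision" idea concrete, here is a toy model of partitioning one PCIe SSD into virtual cache LUNs assigned to servers on the SAN. The capacity and LUN names are hypothetical, and real provisioning is done through the vendor's management tools, not code like this; the point is only the utilization arithmetic behind the sharing argument.

```python
# Toy model of shared SAN cache: carve one PCIe SSD into virtual cache
# LUNs provisioned to servers. Sizes and names are hypothetical.
ssd_capacity_gb = 800
cache_luns = {"oracle-node1": 300, "oracle-node2": 300, "web-farm": 100}

allocated = sum(cache_luns.values())
assert allocated <= ssd_capacity_gb, "over-provisioned SSD"

# Shared provisioning pushes utilization well past the ~30% typical of
# captive direct-attached cache.
print(f"utilization: {allocated / ssd_capacity_gb:.0%}")  # 88%
```
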
8. Killer App for SSD/HBA: Breathing Life into Existing Storage

The availability of affordable SSD is allowing data center managers to expand their use of the technology beyond the most demanding I/O-intensive applications, which can justify a much higher cost. One pervasive example is retrofitting SSDs into storage environments with older, slower HDDs. A typical data center has groups of HDDs ganged together in LUNs to harness the aggregate IOPS performance of the HDDs. With an older 7,200 RPM HDD delivering approximately 100 IOPS, it takes 20 HDDs to form a LUN providing 2,000 IOPS (the arithmetic is worked below). Today, when user response time lags because the HDD LUN does not have enough IOPS, IT organizations are installing PCIe SSDs to cache frequently accessed data on the HDD LUNs. The result: users experience the dramatic improvement in response time that comes with the 300,000 IOPS performance of an SSD, and expensive HDD upgrades are deferred.

[Figure: Performance of SSD vs. Multi-HDD LUNs, from older 7,200 RPM HDDs to newer 15,000 RPM HDDs. Caching frequently accessed HDD data improves user response time.]

Tier 0: the storage tier with the fastest access time for frequently accessed data, such as a database index. DRAM and Flash SSD are storage media used for Tier-0 storage.

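The retrofit math above can be checked directly with the report's figures:

```python
# The retrofit arithmetic: HDDs needed per LUN for a target IOPS level,
# versus caching the hot data on one PCIe SSD.
HDD_7200_IOPS = 100
TARGET_LUN_IOPS = 2_000
SSD_IOPS = 300_000

hdds_in_lun = TARGET_LUN_IOPS // HDD_7200_IOPS   # 20 HDDs, as in the text
speedup = SSD_IOPS / TARGET_LUN_IOPS             # 150x more IOPS on cache hits
print(hdds_in_lun, f"{speedup:.0f}x")
```
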
9. Killer App for SSD/HBA: Enterprise-Class Cluster Applications

Until now, enterprise-class cluster applications and server-based SSD were mutually exclusive, because the failure of a non-redundant PCIe SSD would cause the cluster to slow, and because it was impossible to maintain cache coherency between SSDs accessing the same HDD LUNs on the SAN.

SSD/HBAs now allow SAN architects to deploy the fastest SSD solution possible without sacrificing high availability or the flexibility of provisioning SAN resources. After installing an SSD/HBA in the cluster nodes, every SSD cache LUN is accessible to every HDD LUN. In addition, cache coherency is maintained if an SSD/HBA fails and if cache LUNs are accessing the same HDD LUNs (a conceptual sketch of a mirrored write path follows this slide).

[Figure: 4-Node Cluster. For business-critical applications running on database platforms such as Oracle RAC, frequently accessed Tempdb, index and log files are cached on high-performance, high-availability SAN SSD. Less frequently accessed data is stored on high-availability SAN disk.]

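The high-availability behavior described above, where cache contents survive an adapter failure, generally requires that a write be replicated to a mirror partner before it is acknowledged. The sketch below is a conceptual model of that write path under stated assumptions; it is not QLogic's actual implementation.

```python
# Conceptual model of a mirrored write-back cache for cluster HA: a dirty
# entry is replicated to the partner before the host write is acknowledged,
# so a failed SSD/HBA never strands the only copy of cached data.
class MirroredCache:
    def __init__(self, local: dict, partner: dict):
        self.local, self.partner = local, partner

    def write(self, lun_block: str, data: bytes) -> None:
        self.local[lun_block] = data
        self.partner[lun_block] = data   # replicate before acking the host

    def fail_over(self) -> dict:
        return self.partner              # partner copy keeps the cache coherent

cache = MirroredCache(local={}, partner={})
cache.write("hdd-lun7:block42", b"index page")
```
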
10. Storage Performance Takes Off: The Bottom Line

Most data center storage architectures include a design for providing the fastest I/O for the hottest data. The most common solution is multiple high-RPM HDDs configured in one LUN. Storage architects responsible for this design welcome the prospect of one lightweight, quiet, low-power and reliable SSD replacing racks of heavy, noisy, power-hungry, crash-prone HDDs. As a result, SSD is fast displacing HDDs for Tier-0 storage of frequently accessed data. The new class of PCIe SSD/HBA adapters will accelerate that momentum by integrating SSD functions into familiar Fibre Channel HBAs, and by transforming PCIe SSDs into shared, high-availability, enterprise-class storage.

For straightforward server connectivity to both Ethernet LANs, NAS and SANs plus native Fibre Channel SANs, the new class of CNA/HBAs does both. I don't know why an informed IT professional would use anything else.

Related Links

To learn more about the companies, technologies and products mentioned in this report, visit the following web pages:
• QLogic Corporation
• Mt. Rainier Press Release
• QLogic 2600 Series 16Gb Fibre Channel HBAs
• SSD Buyer Behavior Survey
• Shared PCIe SSD
• CDs and HDDs Once Rocked

About the Author

Frank Berry is founder and senior analyst for IT Brand Pulse, a trusted source of data and analysis about IT infrastructure, including servers, storage and networking. As former vice president of product marketing and corporate marketing for QLogic, and vice president of worldwide marketing for the automated tape library (ATL) division of Quantum, Mr. Berry has over 30 years of experience in the development and marketing of IT infrastructure. If you have any questions or comments about this report, contact frank.berry@itbrandpulse.com.
