2. Storage Performance Takes Off
The Network Adapter Industry Responds
Since 8Gb Fibre Channel HBAs and 10Gb Converged Network Adapters (CNAs) were introduced in 2008,
storage performance has taken off behind the power of new Romley-based servers and SSD storage. The
networking industry responded in 2012 with a new generation of PCIe CNA/HBAs which support either 16Gb
native Fibre Channel or 10Gb Ethernet NAS, iSCSI and FCoE storage traffic on each port. The industry also
introduced an innovative new class of adapter which merges the functions of a non-HA PCIe SSD and a Fibre
Channel HBA into one high-availability SSD/HBA.
Network Adapter Industry Road Map
In 2012 two new types of network adapters were introduced: CNA/HBAs and SSD/HBAs. By 2014 a future generation of Super CNAs
will support native Fibre Channel and Ethernet storage protocols, and provide high-availability, shared SSD cache and SSD storage.
Romley—Intel codename for the powerful Xeon E5 server platform, with two more cores, 8MB more cache,
six more DIMMs of faster DDR3-1600 memory, and twice as much I/O bandwidth with PCIe 3.0.
3. Best Storage Network Performance
CNA/HBA—A New Class of Network Adapter
Introduced in 2008, CNAs support TCP/IP LAN and NAS storage traffic as well as iSCSI and FCoE SAN storage
traffic over 10Gb Ethernet. With a negligible performance difference between 8Gb Fibre Channel and 10Gb
FCoE, the IT community has maintained a strong preference for native Fibre Channel—effectively deciding to
maintain separate Ethernet and Fibre Channel networks. In 2012, the emergence of 16Gb Fibre Channel
raised the bar for best storage network performance, and the introduction of server adapters functioning as
either a 16Gb Fibre Channel HBA or a 10Gb CNA gives data center managers the best of both worlds—
complete network adapter hardware convergence, and separate Fibre Channel and Ethernet networks.
Anatomy of a PCIe CNA/HBA
Virtualized—With support for N_Port ID Virtualization (NPIV), the CNA/HBA can be configured as
multiple virtual adapters, each with a different protocol, QoS and security policy.
10Gb CNA—When a port is configured for Ethernet, it will support 10GbE-based LAN, NAS and (iSCSI and
FCoE) SAN traffic simultaneously.
Flexible—Backwards compatible with legacy 1GbE, 4Gb FC and 8Gb FC networks. When migrating to a
converged network, ports are easily changed from Ethernet to Fibre Channel, or from Fibre Channel to
Ethernet.
16Gb Fibre Channel HBA—When a port is configured for Fibre Channel, it is backwards compatible with
4Gb and 8Gb FC SANs, as well as new high-performance 16Gb SANs.
Dual-Core ASIC—One chip provides native Fibre Channel protocol processing and Ethernet protocol
processing.
PCIe 3.0 x4—For 4GB/s (32Gb/s) bandwidth to the processor.
PCIe 3.0—At x16, 16GB/s (128Gb/s) is effectively double the bandwidth of a PCIe 2.0 bus and eight
times the bandwidth of a 16Gb Fibre Channel network.
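The multipliers in that sidebar are easy to verify with nominal per-lane rates. A quick back-of-the-envelope sketch in Python (nominal figures, ignoring encoding and protocol overhead on real links):

```python
# Back-of-the-envelope check of the sidebar's bandwidth claims, using the
# nominal per-lane rates (real links lose a little to encoding overhead).

PCIE3_GBPS_PER_LANE = 8   # PCIe 3.0: 8 GT/s, ~1 GB/s per lane
PCIE2_GBPS_PER_LANE = 4   # PCIe 2.0: 5 GT/s with 8b/10b, ~0.5 GB/s per lane
FC16_GBPS = 16            # nominal 16Gb Fibre Channel link rate

pcie3_x4 = 4 * PCIE3_GBPS_PER_LANE     # 32 Gb/s (~4 GB/s), the CNA/HBA slot
pcie3_x16 = 16 * PCIE3_GBPS_PER_LANE   # 128 Gb/s (~16 GB/s), the SSD/HBA slot
pcie2_x16 = 16 * PCIE2_GBPS_PER_LANE   # 64 Gb/s (~8 GB/s)

print(f"PCIe 3.0 x4:  {pcie3_x4} Gb/s to the processor")
print(f"PCIe 3.0 x16: {pcie3_x16} Gb/s = "
      f"{pcie3_x16 // pcie2_x16}x PCIe 2.0 x16, "
      f"{pcie3_x16 // FC16_GBPS}x a 16Gb FC link")
```

The script prints the 2x and 8x figures cited above.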
4. Best SSD Performance
SSD/HBA—The First Enterprise-Class PCIe SSDs
By eliminating the need to deploy two types of adapters for Ethernet and Fibre Channel connectivity, the
new class of 10GbE CNA/16Gb FC HBAs represents an evolutionary and very useful change for server
administrators. The new class of PCIe SSD/FC HBAs, however, is a revolutionary development in adapter
technology. Using the Fibre Channel network to share SSD SAN metadata, PCIe SSD cache in different servers
can now be shared and replicated for high availability. PCIe SSD, already the highest-bandwidth and lowest-
latency storage available, is now suited for enterprise-class applications.
Anatomy of a PCIe SSD/HBA
Virtualized—Virtual CNA/HBAs can be configured with different protocol, QoS and security policies, each
with its own SSD cache (see the NPIV sketch after this list).
Fibre Channel HBA—A port can be configured as a Fibre Channel HBA to connect a server to a SAN.
Shared Cache—PCIe SSD/HBAs can be configured to cache frequently accessed data on SAN disk arrays.
The adapters can be installed in separate servers and the cache pooled into one cache area network.
PCIe SSD—A port can be connected to the SAN so the PCIe SSD can serve as a cache to external disk, and
so that shared cache or shared SSD storage metadata can be maintained on multiple servers.
PCIe 3.0 x16—For 16GB/s (128Gb/s) bandwidth from the SSD directly to the server processor.
High Availability—SSD cache and storage can be mirrored, allowing the PCIe SSD/HBA to be deployed in
pairs for enterprise-class redundancy and high availability.
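Both anatomy figures lean on NPIV to carve one physical adapter into several virtual ones. As a rough illustration of how an administrator might exercise this on a Linux host whose FC driver supports NPIV, the sketch below writes to the standard fc_host sysfs node; the host name and WWPN/WWNN values are invented for the example.

```python
# Minimal sketch of creating an NPIV virtual port on Linux via the standard
# fc_host sysfs interface (requires root and an NPIV-capable FC driver).
# The host name and WWPN/WWNN values are invented for illustration.

from pathlib import Path

def create_vport(host: str, wwpn: str, wwnn: str) -> None:
    """Ask the FC driver to instantiate a virtual N_Port on `host`."""
    node = Path(f"/sys/class/fc_host/{host}/vport_create")
    node.write_text(f"{wwpn}:{wwnn}")

# Each virtual adapter presents its own WWPN to the fabric, so it can be
# zoned, and given QoS and security policy, independently of the physical port.
create_vport("host5", wwpn="2101001b32a90002", wwnn="2001001b32a90002")
```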
IOPS—A 15,000 RPM HDD will deliver approximately 200 I/Os per second (IOPS) of performance. A single
PCIe SSD will deliver about 100,000 IOPS.
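Put another way, those figures imply a striking spindle-count equivalence. A trivial sketch:

```python
# Spindle-count equivalence implied by the IOPS figures above.

HDD_15K_IOPS = 200        # one 15,000 RPM HDD
PCIE_SSD_IOPS = 100_000   # one PCIe SSD

print(f"One PCIe SSD ~= {PCIE_SSD_IOPS // HDD_15K_IOPS} x 15K RPM HDDs (IOPS)")
```

That is roughly 500 spindles of random-I/O capability on one card.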
5. The Next Wave of Convergence
It's Here
Vastly different LAN and SAN products from Ethernet and Fibre Channel adapter vendors began merging into
one with the introduction of converged network adapters. The emergence of CNA/HBAs and SSD/HBAs has
once again created an enterprise storage adapter market with vastly different products. Today, only QLogic
offers a full suite of next-generation enterprise storage adapters.
Server Adapter Types
10GbE CNA—Connects servers to 10GbE LANs, NAS, iSCSI SANs and FCoE SANs.
10Gb/16Gb CNA/HBA—Connects servers to 10GbE LANs, NAS, iSCSI SANs and FCoE SANs, plus 16Gb FC SANs.
Non-HA SSD—Single card, non-shared SSD cache for HDDs, or SSD storage.
HA SSD/HBA—Redundant, shared SSD cache for HDDs, or SSD storage, plus an FC HBA.
Vendors compared: QLogic, Broadcom, Brocade, Cisco, Emulex, Fusion-io, LSI, Micron.
QLogic—The only enterprise storage adapter vendor with a Fibre Channel HBA, Ethernet CNA and PCIe SSD
technology.
6. Where New Adapters Fit
SSD/HBAs Are in Class by Themselves
Until the advent of SSD/HBA technology, the performance of a storage system was defined by the number
of HDDs, the speed of those HDDs, and the speed of the network connection. The pecking order in this
scheme ranges from 16Gb Fibre Channel down to 1Gb Ethernet. Now that SSD/HBA products are available,
data center architects will want to factor levels of SSD performance and availability into their application
requirements. Architects who need gigabytes to terabytes of capacity with the lowest latency for clustered
applications will configure PCIe SSD, which is closest to the server processor. Architects who need
traditional mass-storage capacity with the highest bandwidth will configure SSD SAN arrays with the
highest-bandwidth (16Gb Fibre Channel) links to the server.
Enterprise Storage Adapter Quadrant
Application Requirements vs. Infrastructure Capacity
More IO-intensive workloads, less network bandwidth—Takes traffic off the network and onto low-latency
Tier-0 SSD for the most IO-intensive workloads.
More IO-intensive workloads, more network bandwidth—Supports highest-bandwidth Tier-1 16Gb FC SAN
storage; interoperable with FC infrastructure.
Less IO-intensive workloads, less network bandwidth—Supports LAN and NAS at 1GbE and 10GbE.
Less IO-intensive workloads, more network bandwidth—Allows LAN/SAN convergence at 10GbE, and with
cost-effective iSCSI storage at 1GbE and 10GbE.
Tier 1—A tier where data is on-line but, unlike Tier-0 SSD storage, the data is stored on slower but less
expensive HDDs.
7. The Ultimate in Performance
QLogic 10000 Series SSD Fibre Channel Adapters
On January 8th, 2013, QLogic introduced the QLE10000, signaling its intention to step into the SSD
market. The QLE10000 is a blend of SSD technology and Fibre Channel HBA technology, forming the
industry's first SSD/HBA.
Shared PCIe SSD
The QLE10000 is also the first PCIe SSD product to offer high-availability shared SAN cache. Shared
cache is the ability of a server to carve up its PCIe SSD into virtual caches or storage LUNs, and
provision the cache or LUNs to other servers as needed. Shared cache and storage is inherent in SAN SSD
systems but, until now, non-existent for PCIe adapters spread across multiple servers.
Cache Captive to Server—With direct-attached cache, the cache is accessed by a single server. The
expensive Flash memory cannot be provisioned to other servers if needed.
In the old days, storage consisted of non-shared direct-attached storage (DAS) inside a server, and
utilization averaged around 30%. The invention of shared NAS and SAN storage drove the utilization of
storage to 80% and beyond as virtual disk drives were tailored for each server. The same principle
applies to server virtualization: before VMware, average non-shared server CPU utilization hovered
around 30%; now IT pros are loading virtual machines onto servers until CPU resources are fully
utilized.
QLogic is leading the industry from non-shared direct-attached cache to a high-availability, shared SAN
cache architecture. The value of this capability is intuitive to IT professionals and CFOs, because
sharing IT resources to consolidate infrastructure is a basic best practice and generates a powerful
return on investment.
Shared, High-Availability, SAN Cache—With shared cache in a SAN, the utilization of expensive SSD is
maximized. The quantity of cache, server access to the cache, and storage access to the cache are
tailored exactly to the needs of servers on the SAN.
Industry First—Shared cache and shared storage is inherent in SAN SSD systems but, until now,
non-existent for PCIe adapters spread across multiple servers.
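The mechanics of shared cache are easiest to see in a toy model. The sketch below is purely conceptual; the class and method names, capacities and provisioning calls are invented for illustration and are not QLogic's actual interface. It pools the PCIe SSD capacity contributed by each adapter and carves virtual cache LUNs out of the pool for whichever server needs them.

```python
# Toy model of a "cache area network". All class and method names here are
# invented for illustration; this is not QLogic's actual interface.

class SsdHba:
    """One PCIe SSD/HBA contributing its Flash capacity to a shared pool."""
    def __init__(self, server: str, ssd_gb: int):
        self.server, self.ssd_gb = server, ssd_gb

class CachePool:
    """Pools SSD capacity from many adapters; provisions virtual cache LUNs."""
    def __init__(self, adapters):
        self.free_gb = sum(a.ssd_gb for a in adapters)
        self.luns = {}  # LUN name -> (consuming server, size in GB)

    def provision(self, lun: str, consumer: str, gb: int) -> None:
        if gb > self.free_gb:
            raise ValueError("cache pool exhausted")
        self.free_gb -= gb
        self.luns[lun] = (consumer, gb)

# Four servers contribute 400GB each. Cache is then carved out wherever the
# hot data lives, not where the Flash happens to be physically installed.
pool = CachePool([SsdHba(f"server{i}", 400) for i in range(1, 5)])
pool.provision("oltp-cache", consumer="server1", gb=900)  # bigger than any one card
pool.provision("index-cache", consumer="server3", gb=300)
print(f"{pool.free_gb}GB free, LUNs: {pool.luns}")
```

The point of the model is the first provisioning call: a single server can be given more cache than any one card holds, which is impossible with captive direct-attached Flash.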
8. Killer App for SSD/HBA
Breathing Life into Existing Storage
The availability of affordable SSD is allowing data center managers to expand their use of the technology
beyond the most demanding I/O-intensive applications which can justify a much higher cost. One pervasive
example is retrofitting SSDs into storage environments with older, slower HDDs. A typical data center has
groups of HDDs ganged together in LUNs to harness the aggregate IOPS performance of the HDDs. With an
older 7,200 RPM HDD delivering approximately 100 IOPS, it takes 20 HDDs to form a LUN providing 2,000
IOPS. Today, when user response time lags because the HDD LUN does not have enough IOPS, IT organizations
are installing PCIe SSDs to cache frequently accessed data on the HDD LUNs. The result: users experience
the dramatic improvement in response time that comes with the 300,000 IOPS performance of an SSD, and
expensive HDD upgrades are deferred.
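The arithmetic behind that improvement is worth making explicit. A short sketch, assuming a 90% cache hit rate (an illustrative value, not a figure from this report), blends SSD hits with HDD misses:

```python
# Effective IOPS of a 20-spindle LUN fronted by a PCIe SSD cache.
# The 90% hit rate is an assumed value for illustration only.

HDD_IOPS, SPINDLES = 100, 20   # older 7,200 RPM drives ganged into one LUN
SSD_IOPS = 300_000
HIT_RATE = 0.90

lun_iops = HDD_IOPS * SPINDLES  # 2,000 IOPS without cache
# Average service time blends fast SSD hits with slow HDD misses.
avg_latency = HIT_RATE / SSD_IOPS + (1 - HIT_RATE) / lun_iops
effective_iops = 1 / avg_latency
print(f"Uncached LUN: {lun_iops} IOPS; cached: ~{effective_iops:,.0f} IOPS")
```

Even at this assumed hit rate, the cached LUN delivers roughly nine times the IOPS of the bare 20-spindle LUN, without replacing a single drive.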
Performance of SSD vs. Multi-HDD LUNs—From older 7,200 RPM HDDs to newer 15,000 RPM HDDs, improve user
response time by caching frequently accessed data stored on HDDs.
Tier 0—The storage tier with the fastest access time for frequently accessed data, such as a database
index. DRAM and Flash SSD are storage media used for Tier-0 storage.
9. Killer App for SSD/HBA
Enterprise-Class Cluster Applications
Until now, enterprise-class cluster applications and server-based SSD were mutually exclusive, because
the failure of non-redundant PCIe SSD would cause the cluster to slow, and because it was impossible to
maintain cache coherency between SSDs accessing the same HDD LUNs on the SAN.
The arrival of SSD/HBAs now allows SAN architects to deploy the fastest SSD solution possible without
sacrificing high availability or the flexibility of provisioning SAN resources. After installing an
SSD/HBA in the cluster nodes, every SSD cache LUN is accessible to every HDD LUN. In addition, cache
coherency is maintained if an SSD/HBA fails, and when multiple cache LUNs are accessing the same HDD
LUNs.
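One way to picture the high-availability piece is write-through mirroring between paired adapters: a cached write is copied to a partner before it counts as done, so a failed card loses nothing. The sketch below is a conceptual model only; the class and its methods are invented for illustration.

```python
# Conceptual sketch of HA cache mirroring: each cached write is applied to a
# partner adapter before completing, so a failed SSD/HBA loses no cached data.
# The class and its methods are invented for illustration.

class MirroredCache:
    def __init__(self):
        self.data = {}        # block number -> payload on the local PCIe SSD
        self.partner = None   # the paired SSD/HBA in another server

    def write(self, block: int, payload: bytes) -> None:
        self.data[block] = payload                # local copy
        if self.partner is not None:
            self.partner.data[block] = payload    # synchronous mirror copy

primary, standby = MirroredCache(), MirroredCache()
primary.partner = standby

primary.write(42, b"hot database index page")
assert standby.data[42] == primary.data[42]  # intact if 'primary' fails
```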
4-Node Cluster—For business-critical applications running on database platforms such as Oracle RAC,
frequently accessed Tempdb, Index and Log files are cached on high-performance, high-availability SAN
SSD. Less frequently accessed data is stored on high-availability SAN disk.
10. Storage Performance Takes Off
The Bottom Line
Most data center storage architectures include a design for providing the fastest I/O for the hottest data.
The most common solution is multiple high-RPM HDDs configured in one LUN. Storage architects responsible
for this design welcome the prospect of one light-weight, quiet, low-power and reliable SSD replacing racks
of heavy, noisy, power-hungry, disk-crashing HDDs. As a result, SSD is fast displacing HDDs for Tier-0 storage
of frequently accessed data. The new class of PCIe SSD/HBA adapters will accelerate that momentum by
integrating SSD functions into familiar Fibre Channel HBAs, and by transforming PCIe SSDs into shared, high-
availability, enterprise-class storage.
For straightforward server connectivity to Ethernet LANs, NAS and SANs, plus native Fibre Channel SANs,
the new class of CNA/HBAs does it all. I don't know why an informed IT professional would use anything
else.
Related Links
To learn more about the companies, technologies, and products mentioned in this report, visit the following
web pages:
QLogic Corporation
Mt. Rainier Press Release
QLogic 2600 Series 16Gb Fibre Channel HBAs
SSD Buyer Behavior Survey: Shared PCIe SSD
CDs and HDDs Once Rocked
About the Author
Frank Berry is founder and senior analyst for IT Brand Pulse, a trusted source of data
and analysis about IT infrastructure, including servers, storage and networking. As
former vice president of product marketing and corporate marketing for QLogic, and
vice president of worldwide marketing for the automated tape library (ATL) division of
Quantum, Mr. Berry has over 30 years of experience in the development and marketing
of IT infrastructure. If you have any questions or comments about this report, contact
frank.berry@itbrandpulse.com.