What Network Administrators Need to Know about Storage Management
Abstract
Data center administrators face a major networking challenge from the
combination of high bandwidth requirements, increasing network sprawl and
the need for a more adaptive networking infrastructure. Most data centers today
have:
•	 Multiple network fabrics, each dedicated to a specific type of traffic
•	 High numbers of adapters and switch port deployments
•	 Complex cabling infrastructure
•	 Complex management of switch and adapter firmware and associated
service contracts
Data centers are implementing a new consolidated network technology for data
and storage, called “converged networking.” Converged networking combines
existing Local Area Networks (LANs) and Storage Area Networks (SANs) into a
single, high-performance 10Gb/s Ethernet (10GbE) framework that intelligently
connects every server, network and storage device within the data center,
thereby enabling unified I/O.
Converged networking results in an overlap of network and storage
administrators’ responsibilities. This guide explains networking and storage
basics to help each administrator better understand the changes resulting from
converged networking and how it will impact their role in the data center. The
following sections are provided in this guide:
•	 Introduction with Key Terminology: General SAN/LAN Technology Overview
•	 High Availability/Fault Tolerance
•	 Performance
•	 Security
•	 Management
•	 Emulex Components
•	 Conclusion
Table of Contents

Abstract
Chapter 1: Evolution of the Data Center
   Drivers for Network Convergence
   The Data Center Networking Challenge
Chapter 2: 10 Gigabit Ethernet, the Enabling Technology for Convergence
Chapter 3: Technology Overview
   Fibre Channel over Ethernet
   Fibre Channel Characteristics Preserved
   iSCSI
Chapter 4: Storage Area Networks
   SAN
   Logical Unit Number
   Fibre Channel Protocol
   Layers of Fibre Channel Protocol
   Internet FCP (iFCP)
   OSI Model vs. FC/FCoE
   World Wide Name
   Converged Networking
      Data Center Bridging (DCB)
         Priority Flow Control (PFC)
         Enhanced Transmission Selection (ETS)
      How FCoE Ties FC Protocol with Network Protocol
      Requirements to Deploy Loss-less Ethernet
      Non Fibre Channel Based Storage Protocols
Chapter 5: SAN Availability
   Key Terminology
      SAN Trunking
      Failover and Load Balancing
   Configuring Failover in a SAN
   Effect of Converged Network
      QoS
      Data Center Bridging eXchange (DCBX)
      Failover
Chapter 6: Performance
   SAN performance and capacity management
   Effect of Converged Network
   Industry Benchmarks
      Storage Performance Council (SPC)
      Transaction Processing Performance Council (TPC)
   Benchmarking Software
      Iometer
      IOzone
      Ixia IxChariot
   Key Terminology
      CPU Efficiency
   Performance Tuning
      Driver Parameters
      Queue depth setting
      Interrupt coalescing
   Key Metrics
      IOPS
      Latency
Chapter 7: Security
   Security in Converged Networking Environments
   Security Breaches
   Methods of Protecting a SAN
      Zoning
      Virtual SAN
      LUN Masking
      Security Protocols
         Encapsulating Security Payload over Fibre Channel
         Securing iSCSI, iFCP and FCIP over IP Networks
   Effect of Converged Network
      Native FCoE Storage
      Zoning
      LUN Masking
      Compliance
Chapter 8: Management: Configuration and Diagnostics
   SAN provisioning
   Adapter Management
      Installation
      Configuration
      Management
      Diagnostics
   Key Terminology
      HBA and CNA configuration
      Port Configuration
      Boot from SAN
      vPorts
      SMI-S
      CIM
   Effect of Converged Network
      Fibre Channel Initialization Protocol (FIP)
      Port Configuration
Chapter 9: Emulex Solutions
Chapter 10: Conclusion
Chapter 1:
Evolution of the Data Center
Drivers for Network Convergence
The combination of high bandwidth demand, increasing network sprawl and the
need for more adaptive networking infrastructure is posing a major challenge for
data center managers. Pain points in today’s data center networks include:
•	 Multiple network fabrics, each dedicated to a specific type of traffic
(see Figure 1)
•	 High numbers of adapters and switch port deployments
•	 Complex cabling infrastructure
•	 Long storage network provisioning times as a result of static configurations
•	 Complexity of managing switch and adapter firmware and associated
service contracts
Figure 1: Dedicated networks for SAN and LAN (servers with NICs connected through Ethernet switches to the core Ethernet network, HBAs through FC switches to the FC SAN, and HCAs through InfiniBand switches to the IB network)
The Data Center Networking Challenge
Data center managers are clearly in need of networking solutions that contain
the sprawl of network infrastructure and enable an adaptive next-generation
network. The solution for optimizing the data center network must be capable
of addressing the following high-level requirements:
1.	Consolidate: The network solution must be capable of consolidating
multiple low-bandwidth links into a faster high-bandwidth infrastructure
and significantly reducing the number of switch and adapter ports and
cables.
2.	Converge: The network solution must be capable of converging or
unifying networking and storage traffic to a single network, eliminating the
need for dedicated networks for each traffic type. This functionality will
further contribute toward reduction in network ports and cables, while
simplifying deployment and management.
3.	Virtualize: The network solution must be capable of virtualizing the
underlying physical network infrastructure and providing service level
guarantees for each type of traffic. In addition, the solution must
be capable of responding to dynamic changes in network services
depending on the business demands of the data center applications.
Chapter 2:
10 Gigabit Ethernet, the Enabling
Technology for Convergence
The 10GbE networking standard, ratified in 2002, enables multiple traffic types
over a single link, as shown in Figure 2. In order to facilitate network convergence
and carry Fibre Channel traffic over 10GbE, Ethernet technology had to support
a “no-drop” behavior because SAN traffic requires a loss-less transmission.
To alleviate the “lossy” nature of traditional Ethernet environments, 10Gb Data
Center Bridging (DCB) was developed to provide a loss-less connection, making
it ideal for storage networking applications.
10GbE can operate both as a “loss-less” and “lossy” network. Ports can be
configured to carry various protocols:
•	 TCP/IP
•	 Internet Small Computer System Interface (iSCSI)
•	 Fibre Channel over Ethernet (FCoE)
Figure 2: 10GbE enables multiple traffic types over a single link
The DCB Task Group of IEEE 802.1 Working Group (LANs) provides the
necessary framework for enabling 10GbE converged networking within a data
center. The recent innovations of this task group that support the loss-less
characteristic in 10GbE are summarized below:
10GbE Innovations
•	 Enhanced physical media
	 o	 10Gb/s connectivity over UTP cabling
	 o	 10Gb/s connectivity over Direct Attach Twin-ax Copper cabling
•	 Optimizations in 10Gb/s transceiver technology (SFP+ form factor)
•	 Support for loss-less Ethernet infrastructure
•	 New physical network designs such as top-of-rack switch architectures
•	 Isolate and prioritize different traffic types using Priority Flow Control
(PFC)
•	 Maintain bandwidth guarantees for multiple traffic types
•	 Assure that end-points and switches know about each other’s
capabilities through an enhanced management protocol using DCB
These innovations rely on the following four key protocols:
Priority Flow Control (PFC), P802.1Qbb
   Key functionality: Management of a bursty, single traffic source on a multi-protocol link
   Business value: Enables storage traffic over a 10GbE link with "no-drop" in the network

Enhanced Transmission Selection (ETS), P802.1Qaz
   Key functionality: Bandwidth management between traffic types for multi-protocol links
   Business value: Enables bandwidth assignments per traffic type; bandwidth is configurable on demand

Data Center Bridging Capabilities Exchange Protocol (DCBCXP), 802.1Qaz
   Key functionality: Auto-exchange of Ethernet parameters between peers (switch to NIC, switch to switch)
   Business value: Facilitates interoperability by exchanging the capabilities supported across the nodes

Congestion Management (CM), P802.1Qau
   Key functionality: Addresses the problem of sustained congestion, driving corrective action to the edge
   Business value: Facilitates larger end-to-end deployment of network convergence

Table 1: Protocol standards are enabling convergence
In addition to lowering costs, 10GbE enables much-needed scalability
by providing additional network bandwidth. 10GbE also simplifies management
by reducing the number of ports and facilitating flexible bandwidth assignments
for individual traffic types.
Chapter 3:
Technology Overview
Fibre Channel over Ethernet
In parallel with the emergence of loss-less 10GbE, newer standards, such as the FCoE standard, are accelerating the adoption of Ethernet as the medium of network convergence. FCoE is a standard developed by INCITS
T11 that fully leverages the enhanced features of 10GbE for I/O consolidation in
the data center.
10GbE networks address the requirements of consolidation, convergence
and virtualization. FCoE expands Fibre Channel into the Ethernet environment,
combining two leading technologies, Fibre Channel and Ethernet, to provide
more options to end users for SAN connectivity and networking. Network
convergence, enabled by FCoE, helps address the network infrastructure
sprawl, while fully complementing server consolidation efforts and improving
the efficiency of the enterprise data center.
FCoE is a new protocol that encapsulates Fibre Channel frames within an
Ethernet frame traveling on a 10GbE DCB network. FCoE leverages 10Gb DCB
connections. Although FCoE traffic shares the physical Ethernet link with other
types of data traffic, FCoE data delivery is ensured, as it is given a loss-less
priority status, matching the loss-less behavior guaranteed in Fibre Channel.
FCoE is one of the technologies that makes I/O convergence possible, enabling
a single network to support storage and traditional network traffic.
Figure 3: Ability of technology to meet needs of network segments
Fibre Channel Characteristics Preserved
The FCoE protocol specification maps a complete Fibre Channel frame
(including checksum, framing bits) directly onto the Ethernet payload and avoids
the overhead of any intermediate protocols.
Figure 4: FCoE encapsulation in Ethernet
This light-weight encapsulation ensures that FCoE-capable Ethernet switches
are less compute-intensive, thus providing the high performance and low
latencies of a typical Fibre Channel network. By retaining Fibre Channel as
the upper layer protocol, the technology fully leverages existing Fibre Channel
constructs such as fabric login, zoning and logical unit number (LUN) masking,
and ensures secure access to the networked storage.
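To make the encapsulation concrete, the Python sketch below builds an Ethernet frame that carries a complete Fibre Channel frame as its payload, as described above. It is a simplified illustration, not a production encoder: the EtherType value 0x8906 is the one assigned to FCoE, but the FCoE header and trailer fields are reduced to placeholders, and the MAC addresses and frame contents are made up.

# Illustrative sketch only: the FC frame (header, payload and checksum) is carried
# intact in the Ethernet payload with no intermediate protocol, per the FCoE model.
import struct

FCOE_ETHERTYPE = 0x8906   # EtherType assigned to FCoE

def encapsulate_fcoe(fc_frame: bytes, src_mac: bytes, dst_mac: bytes) -> bytes:
    """Wrap a complete Fibre Channel frame in an Ethernet frame, leaving it untouched."""
    assert len(src_mac) == 6 and len(dst_mac) == 6
    eth_header = dst_mac + src_mac + struct.pack("!H", FCOE_ETHERTYPE)
    fcoe_header = bytes(14)      # simplified placeholder: version, reserved bits, SOF
    fcoe_trailer = bytes(4)      # simplified placeholder: EOF, reserved bits
    return eth_header + fcoe_header + fc_frame + fcoe_trailer

# A dummy 36-byte "FC frame" stands in for a real frame with its CRC intact.
frame = encapsulate_fcoe(b"\x00" * 36,
                         b"\x02\x00\x00\x00\x00\x01",
                         b"\x02\x00\x00\x00\x00\x02")
print(len(frame), "bytes on the wire (before the Ethernet FCS)")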
Data center managers are looking for solutions to transition to a more dynamically
provisioned network that is highly responsive and addresses the quality and
service level requirements of business applications.
iSCSI
The iSCSI protocol ratified by Internet Engineering Task Force (IETF) in 2003
brought SANs within the reach of small and mid-sized businesses. The protocol
encapsulates native SCSI commands using TCP/IP and transmits the packets
over the Ethernet network infrastructure. The emergence of 10GbE addressed
the IT manager’s concerns regarding the bandwidth and latency issues of 1
Gb Ethernet and laid the foundation for more widespread adoption of network
convergence in data centers.
iSCSI-enabled convergence offers several advantages:
•	 Highly suitable for convergence in small and medium businesses,
remote offices and department-level data centers where customers are
transitioning from Direct Attach Storage (DAS) to SANs.
•	 Reduces labor and management costs while increasing reach.
•	 The ubiquitous nature of Ethernet means that IP networks can be
deployed quickly and easily in organizations of all sizes. Ethernet is
also readily understood, so IT personnel can deploy and maintain an IP
environment without specialized Fibre Channel training.
•	 Major operating systems include an iSCSI driver in their distribution.
•	 iSCSI performance can be improved by deploying adapters that support
iSCSI offload or TCP/IP offload to reduce the CPU demands for packet
processing.
Although optimal for small and medium businesses, iSCSI-enabled convergence
does have limitations:
•	 Because the underlying Ethernet network is prone to packet losses with
network congestion, network designers typically recommend the use
of separate Ethernet networks for storage and data networking. This
reduces some of the cost advantages of convergence.
•	 Large enterprise data centers have a sizable deployment of Fibre
Channel SANs and use Fibre Channel-specific tools to effectively manage
storage assets. From the perspective of these customers, iSCSI is a
different storage technology that requires an incremental investment in
hardware, software and training.
The decision to deploy iSCSI or FCoE is largely based on current deployments.
Enterprise data centers with Fibre Channel SANs already in place typically
choose FCoE, while smaller data centers with no Fibre Channel typically choose
iSCSI.
Chapter 4:
Storage Area Networks
Understanding SAN technology requires familiarity with the terms and
components described in this section.
SAN
A SAN is an architecture that attaches remote computer storage devices (such
as disk arrays, tape libraries and optical jukeboxes) to servers in a manner where
the devices appear as locally attached to the operating system (OS). A SAN
is generally its own network of storage devices that typically are not accessible through the LAN by other devices.
Historically, by virtue of their design, data centers first created “islands” of SCSI
disk arrays as DAS, each dedicated to an application, and visible as a number of
“virtual hard drives” (i.e., LUNs, defined below). Essentially, a SAN consolidates
such storage islands together using a high-speed network (see Figure 5).
Figure 5: Storage Area Network
Common uses of a SAN include providing transactionally accessed data that requires high-speed, block-level access to the hard drives, such as for e-mail servers, databases and high-usage file servers. Storage sharing typically simplifies
storage administration and adds flexibility, since cables and storage devices do
not have to be physically moved to shift storage from one server to another.
Other benefits include the ability to allow servers to boot from the SAN itself.
This allows for a quick and easy replacement of faulty servers since the SAN can
be reconfigured so that a replacement server can use the boot LUN of the faulty
server. This process can take as little as half an hour and is a relatively new idea
being pioneered in newer data centers. SANs also tend to enable more effective
and robust disaster recovery capabilities. A SAN can also span distant locations, enabling more effective data replication implemented by disk array controllers, by server software or by specialized SAN devices. Since IP-based Wide Area Networks (WANs) are often the least costly method of long-distance transport, the Fibre Channel over IP (FCIP) and iSCSI protocols have been developed to allow physical extension of a SAN over IP, overcoming the distance limitations of the physical SCSI layer and ensuring business continuance in a disaster.
The economic consolidation of disk arrays has accelerated the advancement
of several features, including I/O caching, snapshotting and volume cloning
(Business Continuance Volumes, or BCVs).
Logical Unit Number
In computer storage, a LUN is the identifier of a SCSI logical unit, and by extension,
of a Fibre Channel or iSCSI logical unit. A logical unit is a SCSI protocol entity
that performs classic storage operations such as read and write. Each SCSI
target provides one or more logical units. A logical unit typically corresponds to
a storage volume and is represented within an OS as a device.
In current SCSI, a LUN is a 64-bit identifier. Note that despite the name "Logical Unit Number," it is not a simple number: it is divided into four 16-bit pieces that reflect a multilevel addressing scheme, and it is unusual to see any but the first of these used.
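As a small illustration of this addressing structure, the Python sketch below splits a 64-bit LUN value into its four 16-bit levels. The example value is hypothetical and uses only the first level, which, as noted above, is the common case.

def lun_levels(lun: int) -> tuple:
    """Split a 64-bit LUN identifier into its four 16-bit addressing levels."""
    assert 0 <= lun < 2**64
    return tuple((lun >> shift) & 0xFFFF for shift in (48, 32, 16, 0))

# A simple "flat" LUN such as LUN 5 occupies only the first 16-bit level.
print(lun_levels(0x0005000000000000))   # -> (5, 0, 0, 0)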
To provide a practical example, a typical disk array has multiple physical SCSI ports, each with one SCSI target address assigned. The disk array is then formatted as a Redundant Array of Independent Disks (RAID), also known as a redundant array of inexpensive disks, and this RAID set is partitioned into several separate storage volumes. To represent each volume, a SCSI
target is configured to provide a logical unit. Each SCSI target may provide
multiple logical units and thus represent multiple volumes, but this does not
mean that those volumes are concatenated. The computer that accesses a
volume on the disk array identifies which volume to read or write with the LUN
of the associated logical unit.
Another example is a single disk drive with one physical SCSI port. It usually
provides just a single target, which, in turn, usually provides just a single logical
unit whose LUN is zero. This logical unit represents the entire storage of the
disk drive.
Fibre Channel Protocol
Fibre Channel is a high-speed network technology primarily used for storage
networking. It uses the Fibre Channel Protocol (FCP), a transport protocol that carries SCSI commands over Fibre Channel networks. The following provides a
summary of the differences between Fibre Channel and Ethernet:
•	 Fibre Channel, like FCoE, passes block data to target devices, whereas Ethernet passes files/packets. Block data transfers are much larger and must be moved in a loss-less manner; Ethernet packets are smaller, "lossy" and can arrive out of order.
•	 Fibre Channel talks to target devices (storage devices), whereas Ethernet typically talks to other hosts (servers). In the storage world, the distinction between the "target" and the "initiator" is important; Ethernet treats these as one and the same.
•	 With storage connectivity, there is a finite number of end points, whereas in LANs there is a virtually unlimited number of end points that need to
•	 In a LAN, the bandwidth requirement to any particular endpoint is
generally much smaller than the bandwidth requirement for storage
networks. The significance of this fact is that in a SAN, you have better
predictability of traffic patterns and requirements, and you would likely
create traffic zones between the finite number of host connections and
storage connections.
Layers of Fibre Channel Protocol
Fibre Channel protocol consists of five layers. Given that Fibre Channel is also a
type of “networking” protocol, there are some similarities to the Open Systems
Interconnect (OSI) model used in networks. The Fibre Channel layers are noted
below:
FC0: The physical layer, which covers cables, transceivers, connectors, pin-outs, etc.
FC1: The data link layer, which encodes and decodes signals.
FC2: The network layer, which forms the core of Fibre Channel and defines the main protocols.
FC3: The common services layer, a thin layer that could, in the future, support functions such as encryption or RAID.
FC4: The protocol mapping layer, which encapsulates other protocols such as SCSI into an information unit for delivery to the network (FC2) layer.
Internet FCP (iFCP)
The iFCP protocol enables the implementation of Fibre Channel functionality
over an IP network, within which the Fibre Channel switching and routing
infrastructure is replaced by IP components and technology. Congestion control,
error detection and recovery are provided through the use of TCP (Transmission
Control Protocol). The primary objective of iFCP is to allow existing Fibre Channel
devices to be networked and interconnected over an IP-based network at wire
speeds.
OSI Model vs. FC/FCoE
The OSI Layered Model is an architectural abstraction that helps to describe the
operation of protocols. Unfortunately, the Fibre Channel protocol layers cannot
be mapped to OSI layers in a straightforward manner. FCoE, which leverages
the Fibre Channel protocol, has an inherent awkwardness when applied to
Ethernet networks, whereas the iSCSI protocol originated from a traditional
Ethernet and IP environment. Figure 6 shows the mapping of Fibre Channel
layers to OSI layers.
World Wide Name
A World Wide Name (WWN) is a 64-bit address used in Fibre Channel networks
to uniquely identify each element in a Fibre Channel network. The use of WWNs
for security purposes is inherently insecure, because the WWN of a device is a
user-configurable parameter.
Figure 6: Storage protocols mapped to the OSI model
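For illustration, the short Python helper below renders a 64-bit WWN in the colon-separated hexadecimal notation commonly seen in switch and adapter tools; the example value is made up.

def format_wwn(wwn: int) -> str:
    """Format a 64-bit World Wide Name as colon-separated hex bytes."""
    assert 0 <= wwn < 2**64
    return ":".join(f"{(wwn >> shift) & 0xFF:02x}" for shift in range(56, -8, -8))

print(format_wwn(0x10000000C9123456))   # -> 10:00:00:00:c9:12:34:56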
Converged Networking
Data Center Bridging (DCB)
The DCB Task Group is a part of the IEEE 802.1 Working Group. DCB is based
on a collection of open-standard Ethernet extensions. It is designed to improve
and expand Ethernet networking and management capabilities within the data
center. DCB helps to ensure data delivery over loss-less fabrics, consolidate
I/O over a unified fabric and improve bandwidth through multipathing at Layer
2 (the Datalink Layer).
With DCB, Ethernet will provide solutions for consolidating I/O and carrying
multiple protocols, such as IP and FCoE on the same network fabric, as opposed
to separate networks. The ability to consolidate traffic is now available with the
deployment of 10GbE networks due to the following components of DCB:
1.	Priority-based Flow Control (PFC) – Enables management of bursty,
single traffic source on a multiprotocol link
2.	Enhanced Transmission Selection (ETS) – Enables management of
bandwidth by traffic category for multi-protocol links
3.	Data Center Bridging Exchange (DCBX) protocol – Allows auto-exchange
of Ethernet parameters between switches and endpoints
4.	Congestion notification – Resolves sustained congestion by moving
corrective action to the network edge
5.	Layer 2 Multipathing – Uses the full bisection bandwidth of Layer 2 topologies
6.	Loss-less Service – Helps ensure guaranteed delivery service for
applications that require it
With DCB, a 10GbE connection can support multiple traffic types simultaneously,
while preserving the respective traffic treatments. The same 10GbE link can
also support Fibre Channel storage traffic by offering a “no data drop” capability
via FCoE.
Priority Flow Control (PFC)
PFC is an enhancement to the existing pause mechanism in Ethernet. The
current Ethernet pause option stops all traffic on a link; essentially, it is a link
pause for the entire link. Unlike traditional Ethernet, DCB enables a link to be
partitioned into multiple logical links with the ability to assign each link a specific
priority setting (loss-less or lossy). The devices within the network can then
detect whether traffic is “lossy” or “loss-less”. If the traffic is lossy, then it is
treated in typical Ethernet fashion. If it is loss-less, then PFC is used to guarantee
that none of the data is lost.
In short, PFC allows any of the virtual links to be paused and restarted independently,
enabling the network to create a no-drop class of service for an individual virtual
link. It also allows differentiated Quality of Service (QoS) policies for the eight
unique virtual links. PFC is also referred to as Per Priority Pause (PPP).
Enhanced Transmission Selection (ETS)
ETS is a new standard that enables a more structured method of assigning
bandwidth based on traffic class. This way, an IT administrator can allocate
a specific percentage of bandwidth to SAN, LAN and inter-processor
communication (IPC) traffic.
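A back-of-the-envelope sketch of the ETS idea follows: each traffic class is guaranteed a percentage of the 10GbE link. The class names and percentages below are illustrative assumptions, not values prescribed by the standard.

LINK_GBPS = 10.0
shares = {"LAN": 0.4, "SAN (FCoE)": 0.5, "IPC": 0.1}   # illustrative split; must sum to 1.0

def guaranteed_bandwidth(shares: dict, link_gbps: float = LINK_GBPS) -> dict:
    """Translate per-class percentage shares into guaranteed Gb/s on the link."""
    assert abs(sum(shares.values()) - 1.0) < 1e-9
    return {cls: pct * link_gbps for cls, pct in shares.items()}

for cls, gbps in guaranteed_bandwidth(shares).items():
    print(f"{cls:12s} guaranteed {gbps:.1f} Gb/s")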
How FCoE Ties FC Protocol with Network Protocol
FCoE transports Fibre Channel frames over an Ethernet network while
preserving existing Fibre Channel management modes. A loss-less network
fabric is a requirement for proper operation. FCoE leverages DCB extensions to address congestion and traffic spikes and to support multiple data flows on one cable, achieving unified I/O.
Requirements to Deploy Loss-less Ethernet
A loss-less Ethernet environment requires the means to pause the link, such as
PFC (as described above) in a DCB environment. It also requires the means
to tie the pause commands from the ingress to the egress port across the
internal switch fabric. The pause option in Ethernet and PFC in DCB take care of
providing loss-less Ethernet on each link. Finally, a loss-less intra-switch fabric
architecture is required.
Non Fibre Channel Based Storage Protocols
iSCSI is an IP-based storage networking standard for linking data storage arrays
to servers. iSCSI, like Fibre Channel, is a method of transporting high-volume
data storage traffic and is designed to be a direct block-level protocol that reads
and writes directly to storage. However, unlike Fibre Channel, iSCSI carries
SCSI commands over Ethernet networks instead of a Fibre Channel network.
Because of the ubiquity of IP networks, iSCSI can be used to transmit data over
LANs, WANs or the Internet. iSCSI passes block data, similar to FCoE, and
communicates to target devices.
Chapter 5:
SAN Availability
In the event of an unexpected disruption, each IT infrastructure must be
designed to ensure the continuity of business operations. From a data center
storage perspective, this means that your SAN fabric must be extremely reliable, because data must be accessible at all times, whether for scheduled backups or unexpected recoveries.
Within the SAN fabric, high availability is needed across adapters, switches,
servers and storage. If a problem occurs with any of these components, a
combination of aggregation and failover techniques are used to meet availability
and reliability requirements. Figure 7 shows an example of multipathing and
failover in a SAN.
Figure 7: Multipathing and failover
Key Terminology
The following terminology is important to ensuring SAN high availability/fault
tolerance:
SAN Trunking
Trunking (also referred to as aggregation, link aggregation or port aggregation)
combines ports to form faster logical communication links between devices. For
example, by aggregating up to four inter-switch links (ISLs) into a single logical
8Gb/s trunk group, you optimize available switch resources, thereby decreasing
congestion.
Trunking increases data availability even if an individual link failure occurs. In
such an instance, the I/O traffic continues, though at a reduced bandwidth,
as long as at least one link in the trunk group remains available. Although this
type of aggregation requires more cabling and switch ports, it offers the benefit
of faster performance, load balancing and redundancy. It is often possible to
aggregate links between a host server and switch, or between a storage system
and a switch, or even between ISLs.
Failover and Load Balancing
Failover and load balancing in storage networks go hand-in-hand. By having
multiple physical connections, a failure in one adapter port or cable won’t
completely disrupt data traffic. Instead, data flow can continue at a reduced
speed until the failure is repaired.
Another benefit of multiple physical connections is load balancing. Normally,
unrelated physical links can transfer data at independent and frequently
unpredictable speeds, allowing a bottleneck on one or more of the physical
connections, which, in turn, can impact the overall performance of the SAN.
Once multiple physical connections are aggregated into a logical data path,
data can be distributed equally across the member links to balance the load and
reduce bottlenecks within the network.
SAN failover is a configuration where multiple connections are made; however,
not all of the connections carry data simultaneously. For example, a storage
array may be connected using two 8Gb/s Fibre Channel links, but only one of
the links might be active. The second link is connected, but is inactive. If the
first link fails, the data communication then fails over to the second link, allowing
communication to continue at the same speed until the original connection is
repaired.
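The toy Python sketch below mimics the failover and load-balancing behavior just described: I/O is distributed round-robin across healthy paths, and when a path fails, traffic continues on the surviving path. Real environments use OS- or vendor-supplied multipathing software; the path names and policy here are purely illustrative.

import itertools

class MultipathDevice:
    def __init__(self, paths):
        self.paths = {p: True for p in paths}     # path -> healthy?
        self._rr = itertools.cycle(paths)         # round-robin iterator over all paths

    def mark_failed(self, path):
        self.paths[path] = False                  # failover: this path is skipped from now on

    def next_path(self):
        for _ in range(len(self.paths)):
            candidate = next(self._rr)
            if self.paths[candidate]:             # load balance across the healthy paths only
                return candidate
        raise RuntimeError("all paths down")

dev = MultipathDevice(["hba0:switchA", "hba1:switchB"])
print([dev.next_path() for _ in range(4)])        # alternates between the two paths
dev.mark_failed("hba0:switchA")
print([dev.next_path() for _ in range(4)])        # all I/O continues on the surviving path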
SAN QoS
The server, adapter, switch and storage array are critical components when
attempting to deploy QoS within the SAN. The optimum QoS solution should be
based on an overall view of the SAN, be fully interoperable and focus on critical
bottlenecks.
Fibre Channel adapters usually have excess bandwidth and short response
times, and, as a result, do not impact overall QoS. This is particularly the
case when following best practices and installing multiple adapters for high
availability.
Storage arrays are often the limiting factor for I/O and are a critical component for
overall performance tuning. Array QoS is usually based on LUNs. For example,
high-priority applications could be combined on a LUN with RAID striping, high-
performance drives, a large amount of cache memory and a high QoS priority.
Another LUN could be used to support less critical background tasks with
inexpensive, lower performance disks and a lower QoS priority. When used in
combination with all of these variables, array-based QoS management can be a
very effective tool for storage administrators.
Switch-based QoS can be used to prioritize traffic within the SAN. Some
switches provide a variety of options to implement QoS. They include Fibre
Channel zones, virtual SANs (VSANs) and individual ports. Fibre Channel
switches are designed to be fully interoperable with industry-standard server-
to-SAN connectivity adapters. For example, Cisco QoS provides extensive
capabilities to create classes of traffic and assign the relative weight for queues.
Other switches have more proprietary designs. I/O traffic between the server and switch is not likely to be a bottleneck, particularly with high-performance adapters that usually have surplus bandwidth.
Configuring Failover in a SAN
At many levels, IP and storage networks share similar failover configuration
steps. The following are a few of the basic methods to configure failover in a
storage environment:
•	 Servers configured with a dual-port adapter
-	 Each port connected to a different switch
-	 Create virtual ports (vPorts) on top of the physical ports and have them
associated with a switch
•	 Servers configured with two dual-port adapters connected to two
different switches
-	 Each port of an adapter is connected to a port on one of the two
switches
-	 Create vPorts on top of the physical ports and have them associated
with different switches
•	 Server Clusters
-	 A group of independent servers working together as a single system to
provide high availability of services. When a failure occurs on a server
within the cluster, resources are rerouted, redistributing the workload
to another server within the cluster. Server clusters are designed to
increase availability of critical applications.
Effect of Converged Network
Converged networking will introduce new technologies and methodologies
that will change data center reliability and business resilience processes. The
following describes some of the changes to be considered.
QoS
Networks require much more than just “speeds and feeds.” 10GbE offers
increased speed and bandwidth, but you still need to control it. QoS technologies
are the means by which it can be controlled, and vendors will be providing these
technologies for converged networks.
Data Center Bridging eXchange (DCBX)
DCBX is used by DCB devices to exchange configuration information with
directly connected peers. The protocol may also be used for misconfiguration
detection and for configuration of the peer.
Ethernet is designed to be a “best-effort” network. This means data packets
may be dropped or delivered out of order if the network or devices are busy.
DCBX is an Ethernet discovery and configuration protocol that guarantees link
end points are configured in a manner that averts “soft errors.” DCBX enables:
•	 End-point consistency
•	 Identification of configuration irregularities
•	 Basic configuration capabilities to correct end-point misconfigurations
DCBX protocol is used for transmission of configurations between neighbors
within an Ethernet network to ensure reliable configuration across the network.
It uses Link Layer Discovery Protocol (LLDP) to exchange parameters between
two link peers.
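DCBX itself is carried in LLDP TLVs, but the net effect is a comparison of the DCB parameters each link peer advertises. The Python sketch below shows that kind of mismatch detection with made-up parameter names and values; it is illustrative only, not a protocol implementation.

# Hypothetical advertised settings for a switch port and the attached adapter port.
switch_port = {"pfc_enabled_priorities": {3}, "ets_shares": {"lan": 50, "san": 50}}
cna_port    = {"pfc_enabled_priorities": {3, 4}, "ets_shares": {"lan": 50, "san": 50}}

def find_mismatches(local: dict, peer: dict) -> list:
    """Return the parameters on which the two link peers disagree."""
    return [key for key in local if local[key] != peer.get(key)]

issues = find_mismatches(switch_port, cna_port)
print("configuration irregularities:", issues or "none")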
Failover
IT administrators typically use failover solutions supplied by their storage OEMs
or those integrated into the OS platform. Their implementation and management
may also be different in Fibre Channel and iSCSI environments. For Microsoft
Windows environments, some network interface card (NIC) vendors provide a
NIC teaming driver that provides failover capabilities. It is expected that this
capability may also be made available through the OS platform.
Chapter 6:
Performance
SAN performance and capacity management
SAN performance can be adversely affected when storage resources are low or become constrained. This can cause application performance problems and service level issues. Many IT organizations attempt to avert such issues by over-purchasing and over-provisioning storage. However, this approach frequently results in wasted capital, since the additional storage investment may not be fully utilized. An alternative approach is to adopt performance and capacity planning practices that avoid unexpected storage costs and disruptive upgrades. The objective is to predict storage needs over time and then budget capital and labor to make regular improvements to the storage infrastructure.
In practice, SAN performance and capacity planning can be quite challenging, as predicting the storage needs of an application or department over time without a careful assessment of past growth and a comprehensive evaluation of future plans is virtually impossible. Many organizations forgo the expense and effort of a formalized process unless a mission-critical project or serious performance problem requires it. Organizations choosing to sustain an ongoing performance and capacity planning effort will need either a comprehensive storage resource management (SRM)-type tool or a capacity planning application.
With regard to performance monitoring and tuning, there are various benchmarking tools available; some examples are described later in this chapter.
Effect of Converged Network
Converged networking will impact a data center’s performance processes,
where today there are more questions than answers.
1. How will traffic be segregated on a 10GbE pipe so that you can allocate
bandwidth for storage and network traffic?
2. What monitoring tools will track utilization? Currently, you independently
monitor loads on the Ethernet and Fibre Channel cables. So, in
converged environments, how do you do this?
3. Specific to Universal Converged Network Adapters (UCNAs), if the HBA
is configured as FCoE, can I also run software iSCSI off it? Will TOE
capabilities be available?
4. How will multipathing configurations be deployed? We currently have:
	 i. IP multipathing (two NIC connected to two switches)
	 ii. Fibre Channel multipathing
5. Will converged environments have special cabling requirements? (e.g.,
TYPE: CAT 5, CAT 6 or any special type cables, distance)
6. How do you implement and monitor QoS? Hardware-based network
analyzers at the network level need to support converged networks
to monitor traffic utilization. In converged environments, how can the
analyzers tell apart the traffic on a single physical cable?
Industry Benchmarks
Storage Performance Council (SPC)
SPC Benchmark 1:  Consists of a single workload designed to demonstrate
the performance of a storage subsystem while performing the typical functions
of business critical applications. Those applications are characterized by
predominately random I/O operations and require both queries as well as update
operations. Examples of those types of applications include OLTP, database
operations, and mail server implementations.
SPC Benchmark 2:  SPC-2 consists of three distinct workloads designed to
demonstrate the performance of a storage subsystem during the execution of
business critical applications that require the large-scale, sequential movement
of data. Those applications are characterized predominately by large I/Os
organized into one or more concurrent sequential patterns. A description of
each of the three SPC-2 workloads is listed below as well as examples of
applications characterized by each workload.
• Large File Processing: Applications in a wide range of fields that require simple sequential processing of one or more large files, such as scientific computing and large-scale financial processing.
• Large Database Queries: Applications that involve scans or joins of large
relational tables, such as those performed for data mining or business
intelligence.
• Video on Demand: Applications that provide individualized video
entertainment to a community of subscribers by drawing from a digital film
library.
For more information on Storage Performance Council benchmarks, please visit
www.storageperformance.org
Transaction Processing Performance Council (TPC)
TPC-C:  Simulates a complete computing environment where a population of
users executes transactions against a database. The benchmark is centered
around the principal activities (transactions) of an order-entry environment.
These transactions include entering and delivering orders, recording payments,
checking the status of orders, and monitoring the level of stock at the warehouses.
While the benchmark portrays the activity of a wholesale supplier, TPC-C is not
limited to the activity of any particular business segment but rather represents
any industry that must manage, sell, or distribute a product or service.
TPC-C involves a mix of five concurrent transactions of different types and
complexity either executed on-line or queued for deferred execution. It does
so by exercising a breadth of system components associated with such
environments, which are characterized by:
• The simultaneous execution of multiple transaction types that span a
breadth of complexity
• On-line and deferred transaction execution modes
• Multiple on-line terminal sessions
• Moderate system and application execution time
• Significant disk input/output
• Transaction integrity (ACID properties)
• Non-uniform distribution of data access through primary and secondary
keys
• Databases consisting of many tables with a wide variety of sizes, attributes,
and relationships
• Contention on data access and update
TPC-C performance is measured in new-order transactions per minute. The primary metrics are the transaction rate (tpmC), the associated price per transaction ($/tpmC), and the availability date of the priced configuration.
TPC-E: TPC Benchmark™ E (TPC-E) is a new On-Line Transaction Processing (OLTP)
workload developed by the TPC. The TPC-E benchmark uses a database to
model a brokerage firm with customers who generate transactions related
to trades, account inquiries, and market research. The brokerage firm in turn
interacts with financial markets to execute orders on behalf of the customers
and updates relevant account information.
The benchmark is “scalable,” meaning that the number of customers defined
for the brokerage firm can be varied to represent the workloads of different-
size businesses. The benchmark defines the required mix of transactions the
benchmark must maintain. The TPC-E metric is given in transactions per second
(tps). It specifically refers to the number of Trade-Result transactions the server
can sustain over a period of time.
Although the underlying business model of TPC-E is a brokerage firm, the
database schema, data population, transactions, and implementation rules
have been designed to be broadly representative of modern OLTP systems.
Benchmarking Software
Iometer
Iometer is an I/O subsystem measurement and characterization tool for single
and clustered systems. It is used as a benchmark and troubleshooting tool and
is easily configured to replicate the behavior of many popular applications. One commonly quoted measurement provided by the tool is I/O operations per second (IOPS). Iometer is one of the most popular tools among storage vendors and is available free from www.iometer.org.
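The Python sketch below is not a substitute for Iometer; it only shows, in a few lines, the kind of random-read IOPS figure such a tool reports. The file size, block size and duration are arbitrary assumptions, and because the test file sits in the OS page cache, the result is unrealistically high (one of the benchmarking pitfalls noted later in this chapter).

import os, random, tempfile, time

BLOCK, FILE_SIZE, DURATION = 4096, 16 * 1024 * 1024, 2.0

with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(os.urandom(FILE_SIZE))            # scratch file used as the test "LUN"
    path = f.name

ops, deadline = 0, time.time() + DURATION
with open(path, "rb") as f:
    while time.time() < deadline:
        f.seek(random.randrange(0, FILE_SIZE - BLOCK))   # random 4 KB read
        f.read(BLOCK)
        ops += 1

os.remove(path)
print(f"{ops / DURATION:.0f} random {BLOCK}-byte read IOPS (cached)")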
IOzone
IOzone is a file system benchmark tool. The benchmark generates and measures
a variety of file operations. IOzone has been ported to many machines and runs under many operating systems, performing a broad file system analysis of a vendor's computer platform.
IOzone is available free from www.iozone.org
While running benchmarks, care should be taken to avoid the following common mistakes:
• Testing storage performance with file copy commands
• Comparing storage devices back-to-back without clearing the server cache
• Testing with a data set so small that the benchmark rarely goes beyond the server or storage cache
• Forgetting to monitor processor utilization during testing
• Monitoring the wrong server's performance
Avoiding these mistakes ensures a more realistic and representative assessment of your environment.
Ixia IxChariot
IxChariot is a fee-based benchmarking tool that simulates application workloads to predict device and system performance under realistic load conditions. IxChariot performs thorough network performance assessment and device testing by simulating hundreds of protocols across thousands of network endpoints.
When vendors utilize such benchmarking tools to assess performance, they take into consideration the entire network, as the server, network and storage system all play a part in application performance. It's important to understand how to identify and eliminate latency bottlenecks to ensure superior application performance. While it may be logical to look for sources of performance degradation outside the server – in the network connectivity or storage components – it's important to understand that performance degradation can also occur within the server. For example, the number of cycles the server CPU has available to process application workloads can impact performance. This is referred to as a server's CPU efficiency; what affects CPU efficiency is discussed further below.
A properly designed SAN can therefore improve storage utilization, high availability and data protection.
When evaluating SAN performance, the following need to be considered:
•	 Latency
•	 Bandwidth
•	 Throughput
•	 Input/Output operations per second (IOPS)
Fibre Channel has evolved over the years, delivering faster and faster performance, as measured by throughput (megabytes per second). Today, however, 10Gb-based Ethernet networks provide performance equal to Fibre Channel-based networks. 10GbE is currently the fastest widely deployed Ethernet standard, with a nominal data rate of 10Gb/s, or 10 times as fast as Gigabit Ethernet. The following table provides a performance summary of the Fibre Channel evolution, along with Ethernet speeds for comparison.
Name                   Throughput (MB/s)*   Line Rate
1Gb Fibre Channel      200 MB/s             1.0625 GBaud
2Gb Fibre Channel      400 MB/s             2.125 GBaud
4Gb Fibre Channel      800 MB/s             4.25 GBaud
8Gb Fibre Channel      1600 MB/s            8.50 GBaud
16Gb Fibre Channel     3200 MB/s            17.00 GBaud
1 Gigabit Ethernet     1 Gb/s               1 Gb/s
10 Gigabit Ethernet    10 Gb/s              10 Gb/s
40 Gigabit Ethernet    40 Gb/s              40 Gb/s

* Throughput for duplex connections
Key Terminology
The following terminology is important to understanding SAN performance:
CPU Efficiency
CPU efficiency has various definitions. In the context of this document, CPU efficiency refers to the server processor's ability to process application workloads, or, simply put, the application workload's IOPS requirement divided by the server's CPU speed (GHz). The more IOPS that can be processed by each GHz, the higher the CPU's efficiency. A factor that can impact a server's CPU efficiency is the HBA selection. Some HBAs push certain protocol processing onto the server's processor rather than handling it in adapter hardware. As a result, the server processor has fewer cycles available for application workload processing, which in turn can lower network performance. Therefore, proper HBA selection can be one of the simplest methods of improving overall performance. CPU efficiency also affords other benefits, including reduction of capital and operational expenditures.
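A simple arithmetic sketch of this definition follows; the IOPS and CPU figures are hypothetical and only illustrate how protocol offload on the adapter shifts the IOPS-per-GHz ratio.

def cpu_efficiency(iops: float, cpu_ghz_consumed: float) -> float:
    """IOPS delivered per GHz of server CPU consumed (higher is better)."""
    return iops / cpu_ghz_consumed

software_initiator = cpu_efficiency(iops=100_000, cpu_ghz_consumed=4.0)
offload_adapter    = cpu_efficiency(iops=100_000, cpu_ghz_consumed=1.5)
print(f"software initiator: {software_initiator:,.0f} IOPS/GHz")
print(f"offload adapter:    {offload_adapter:,.0f} IOPS/GHz")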
Performance Tuning
Storage systems rely on a number of performance tuning processes described
below.
Driver Parameters
Another factor that can impact performance is the driver parameter (also known
as adapter parameter) settings. The optimum settings are either dynamically
managed by the driver or configured automatically during the adapter installation
using the adapter’s management application.
Queue depth setting
Queuing refers to the ability of a storage system to queue storage commands
for later processing. Queuing can take place at various points in your storage
environment, from the Host Bus Adapter (HBA) to the storage processor/
controller. For example, modifying the “HBA Queue Depth” is a performance
tuning tip for servers that are connected to SANs. Since the HBA is the storage
equivalent of a network card, the Queue Depth parameter controls how much
data is allowed to be “in flight” on the storage network from that card. Most cards
default to a queue depth of 32, which is perfect for a general purpose server
and prevents the SAN from getting too busy. Queue depth is adjustable.
Note that a little queuing may be acceptable depending on the transaction
workload, but too many outstanding I/Os can negatively impact performance,
as measured in latency.
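A useful rule of thumb (Little's Law) relates these quantities: the number of outstanding I/Os roughly equals IOPS multiplied by latency. The sketch below uses hypothetical numbers to show why a default queue depth of 32 suits many workloads and when it becomes the limiting factor.

def required_queue_depth(iops: float, latency_ms: float) -> float:
    """Outstanding I/Os needed to sustain the given IOPS at the given latency."""
    return iops * (latency_ms / 1000.0)

print(required_queue_depth(iops=20_000, latency_ms=1.0))   # ~20 outstanding I/Os, fits in 32
print(required_queue_depth(iops=20_000, latency_ms=5.0))   # ~100 -> queue depth becomes the limit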
Interrupt coalescing
Interrupt coalescing batches up interrupts from the NIC to the kernel, reducing per-packet overhead. Interrupt coalescing represents a trade-off between latency and throughput: coalescing interrupts always adds latency to arriving messages, but the resulting efficiency gains may be desirable where high throughput matters more than low latency. Troubleshooting latency problems often points to interrupt coalescing in Gigabit Ethernet NIC hardware. Fortunately, the behavior of interrupt coalescing is configurable and can generally be adjusted to the particular needs of an application. The default for some NICs or drivers is an "adaptive" or "dynamic" interrupt coalescing setting that tends to significantly favor high throughput over low latency. The details of configuring interrupt coalescing behavior will vary depending on the OS and perhaps even the type of NIC in use.
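The toy model below illustrates this trade-off. With coalescing, the NIC raises one interrupt per batch of packets, so per-packet CPU cost falls while worst-case latency grows with the batch size. All cost and timing figures are invented for illustration.

def per_packet_cpu_us(batch_size: int, interrupt_cost_us: float = 8.0,
                      per_packet_cost_us: float = 1.0) -> float:
    """CPU time per packet when one interrupt covers a whole batch."""
    return per_packet_cost_us + interrupt_cost_us / batch_size

def added_latency_us(batch_size: int, packet_gap_us: float = 2.0) -> float:
    """Worst case: the first packet in a batch waits for the rest to arrive."""
    return (batch_size - 1) * packet_gap_us

for batch in (1, 8, 32):
    print(f"batch={batch:3d}  CPU/pkt={per_packet_cpu_us(batch):4.1f} us  "
          f"worst-case added latency={added_latency_us(batch):5.1f} us")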
Key Metrics
The following are key SAN performance metrics:
Latency: I/O latency, also known as I/O response time, measures how fast an
I/O request can be processed by the disk I/O subsystem. For a given I/O path, it
is in proportion to the size of the I/O request. That is, a larger I/O request takes
longer to complete.
Bandwidth: The amount of available end-to-end SAN bandwidth is dependent
on back-end storage capacity on the SAN side. Improving SAN bandwidth
requires consideration of such factors as how the storage is configured, what
the application workload is and where a current bottleneck exists. For example,
if each server accesses a separate unique LUN, adding a second HBA would add more bandwidth, but you might not see a performance improvement; this would be the case if the LUN is still accessed via a single adapter path, or if neither the adapter nor the LUN is the bottleneck. If, on the other hand, each server accesses multiple LUNs and the LUNs are load balanced across adapters, there is potential for performance improvement.
Throughput: Throughput measures how much data can be pumped through
the disk I/O path. If you view the I/O path as a pipeline, throughput measures
how big the pipeline is and how much pressure it can sustain. So, the bigger
the pipeline is and the more pressure it can handle, the more data it can push
through. For a given I/O path, throughput is in direct proportion to the size of the
I/O requests. That is, the larger the I/O requests, the higher the megabytes per
second (MBps). Larger I/Os give you better throughput because they incur less
disk seek time penalty than smaller I/Os.
IOPS: I/O Operations Per Second (IOPS) is a measure of a device's or a network's
ability to send and receive pieces of data. The size of these pieces of data
depends on the application (e.g., transactional, database, etc.) and generally
ranges from 512 bytes to 8 kilobytes. IOPS-intensive workloads have a known
performance profile of raising CPU utilization through a combination of CPU
interrupt handling and wait time. The specific number of IOPS possible in any
server configuration will vary greatly depending on test variables, including the
balance of read and write operations, the mix of random and sequential access
patterns, the number of worker threads and the queue depth, as well as the
data block sizes.
Transfer Rate: Transfer rate is the amount of data that can be transferred over a
specific technology (e.g., 2Gb/s, 4Gb/s or 8Gb/s Fibre Channel) within a specific
time period. In storage-related tests, the transfer rate is expressed in megabytes or
gigabytes per second (MB/s and GB/s, respectively). A high sustainable transfer
rate plays a critical role in applications that "stream" data. These include backup
and restore, continuous data protection, RAID, video streaming, file copy and
data duplication applications.
CPU Efficiency (based on IOPS): This metric examines the ratio of IOPS
divided by average CPU utilization. The ratio illustrates the efficiency of a
given technology in terms of CPU utilization: higher CPU efficiency numbers
show that the given technology is friendlier to the host system's processors.
Higher bandwidth or IOPS with lower CPU utilization is the desired result.
This is important, as users are trying to maximize their investments while
keeping CPU cycles available for application workloads.
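To make the relationships between these metrics concrete, the short Python example below ties them together: throughput is roughly IOPS times I/O size, CPU efficiency is IOPS divided by CPU utilization, and Little's law relates outstanding I/Os (queue depth), IOPS and latency. The numbers are illustrative only.

```python
def throughput_mb_per_s(iops, io_size_kb):
    """Approximate throughput in MB/s: IOPS x I/O size."""
    return iops * io_size_kb / 1024.0

def cpu_efficiency(iops, cpu_util_percent):
    """IOPS delivered per percent of CPU consumed (higher is better)."""
    return iops / cpu_util_percent

def average_latency_ms(outstanding_ios, iops):
    """Little's law: outstanding I/Os = IOPS x latency, so latency = outstanding / IOPS."""
    return outstanding_ios / iops * 1000.0

# Illustrative numbers only
print(throughput_mb_per_s(iops=20000, io_size_kb=8))        # ~156 MB/s at 8KB I/Os
print(cpu_efficiency(iops=20000, cpu_util_percent=25))      # 800 IOPS per % CPU
print(average_latency_ms(outstanding_ios=32, iops=20000))   # 1.6 ms average latency
```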
IOPS
The most common performance characteristics that are measured or defined
are:
• Total IOPS: Total number of I/O operations per second (when performing a
mix of read and write tests)
• Random Read IOPS: Average number of random read I/O operations per
second
• Random Write IOPS: Average number of random write I/O operations per
second
• Sequential Read IOPS: Average number of sequential read I/O operations
per second
• Sequential Write IOPS: Average number of sequential write I/O operations
per second
Latency
SANs cannot tolerate delay. The performance of storage networks is extremely
sensitive to data/frame loss. While LAN traffic is less sensitive, slowing down
access to storage has a significant impact on server and application performance.
In addition, such delays also negatively impact server-to-server traffic. For that
reason, Fibre Channel has been the network protocol of choice for storage
networking, providing high-performance connectivity between servers and their
storage resources. Fibre Channel is an example of a loss-less network in the
sense that a data transmission from the sender (initiator/server) is only allowed if
the recipient (target/storage array) has sufficient buffer (memory) to receive the
data. This ensures data is not “dropped” by the recipient.
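The following toy Python model sketches the credit-based (buffer-to-buffer) behavior described above: a sender may transmit only while it holds credits, and the receiver returns a credit as each buffer is freed. It is a simplified illustration of the concept, not an implementation of the Fibre Channel protocol.

```python
from collections import deque

class CreditLink:
    """Toy model of buffer-to-buffer credit flow control on a lossless link."""
    def __init__(self, receiver_buffers):
        self.credits = receiver_buffers      # credits granted at login
        self.rx_queue = deque()

    def send(self, frame):
        if self.credits == 0:
            return False                     # sender must wait -- the frame is never dropped
        self.credits -= 1                    # one credit consumed per frame sent
        self.rx_queue.append(frame)
        return True

    def receiver_process_one(self):
        if self.rx_queue:
            self.rx_queue.popleft()          # buffer freed...
            self.credits += 1                # ...credit returned to the sender

link = CreditLink(receiver_buffers=2)
print(link.send("f1"), link.send("f2"), link.send("f3"))  # True True False
link.receiver_process_one()
print(link.send("f3"))                                    # True once a credit returns
```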
Chapter 7:
Security
Due to compliance or risk concerns, storage administrators must be aware of
the accessibility and vulnerabilities that storage systems are exposed to via
network interconnections. Protecting sensitive data residing in and flowing
through storage networks should be part of risk management assessments.
Defense in depth approaches to security include applying solutions that balance
the risks and costs with the desire to apply best practices for securing storage
systems.
Security controls, whether preventive, detective, deterrent or corrective
measures, can be categorized as physical, procedural, technical or legal/
regulatory compliance controls. There are several documents that promote
good security practices and define frameworks to structure the analysis and
design of information security controls. These include documents
from ISO (27001/27002) and NIST. SNIA also publishes best practices for storage
system security.
Technical solutions are available to implement controls for the confidentiality,
integrity and availability of information. In addition, accountability
and non-repudiation should be considered. Access controls and authorization
controls can prevent accidents and restrict privileges. Authentication of users
and devices can provide network access control. Protecting management
interfaces, including replacing default passwords, guards against
unauthorized changes. Audit and logging support enables validation of security
configurations and enforcement of organizational policies.
Security in Converged Networking Environments
Many IT organizations are acknowledging the benefits and advantages of
converged networking environments, primarily the sharing of infrastructure
and the reduction of costs. Network convergence allows unprecedented
connectivity options to information via platforms that are capable of supporting
block storage traffic such as iSCSI, FC and FCoE, as well as file service traffic
for NAS (NFS/CIFS/SMB2) storage. As networks and storage increasingly share
the same infrastructure, security aspects such as confidentiality, integrity
and availability must be considered in risk assessments. Authentication,
confidentiality, user ID and credential management, audit support and other
solutions relevant to converged or virtualized traffic flows can provide new
opportunities for efficiency when common security solutions are applied wherever
possible. Many customers are finding that protocol-agnostic and storage-agnostic
solutions prove economical in helping them meet security and compliance
requirements.
Security Breaches
The inherent architecture of a Fibre Channel SAN affords it a greater degree of
security. However, this is not to say a SAN is impervious to security breaches.
Common risks include:
• A compromised management path, which can occur when the organization has:
- Malicious administrators
- A compromised management console
- Unsecured management interfaces
To avoid such situations, organizations typically implement management
authorization and access control processes as well as authentication
measures. Therefore, it is critical to select components that support role-based
policies and authentication features.
• Unauthorized data access
- This typically occurs when a storage LUN becomes accessible beyond
the authorized hosts. The implication of such an event means that
people who should not have access to certain data will now be able to
access it. LUN masking/mapping, typically done at the array level, is
how such conditions are addressed.
• Impersonations and identity spoofing
- This condition occurs when initiators fake their identity through
worldwide name (WWN) spoofing, which enables a session to
be hijacked. To protect against such occurrences in the SAN,
organizations leverage DH-CHAP, an authentication protocol, and IKE,
which establishes shared security information between two network
entities to support secure communication.
Applying tighter controls to overall SAN configurations is also helpful as it would
prevent administrative errors which could leave a SAN vulnerable to such
attacks.
• Compromised communication
- This can be one of the costliest breaches for an organization.
Not only are there regulatory implications, in terms of fines, but also
business implications, in terms of loss of intellectual property and
loss of customer confidence. Therefore, great care must be taken to
protect data from interception or eavesdropping. Loss of data integrity
is another way communication can be compromised, as data can be
intercepted, modified and then sent on its way.
Organizations should leverage encryption to protect their data.
Although there are various encryption methodologies,
host-based encryption is the most
effective, as it encrypts data at its point of origin, protecting the data
in flight and at rest. Even in the case of a lost or stolen hard disk drive,
the data remains encrypted. To reduce data integrity incidents, SAN
administrators are showing greater interest in products from vendors
who support industry initiatives such as the Data Integrity Initiative (DII),
which provides application-to-disk data integrity protection.
Methods of Protecting a SAN
The following are methods storage administrators leverage to augment security
within SANs.
Zoning
Fabric Zoning
The zoning service within a Fibre Channel fabric was designed to provide
security between devices sharing the same fabric. The primary goal was to
prevent certain devices from accessing other devices within the fabric. With
many different types of servers and storage devices on the network, the need
for security is critical. For example, if a host were to gain access to a disk
being used by another host, potentially with a different OS, the data on this disk
could become corrupted. To avoid any compromise of critical data within the
SAN, zoning allows the user to overlay a security map dictating which devices,
namely hosts, can see which targets, thereby reducing the risk of data loss.
Zoning does, however, have its limitations. Zoning was designed to do nothing
more than prevent devices from communicating with other unauthorized
devices. It is a distributed service that is common throughout the fabric; any
change installed to a zoning configuration is therefore disruptive to the entire
connected fabric. Zoning also was not designed to address availability or
scalability of a Fibre Channel infrastructure. Therefore, while zoning provides
a necessary service within a fabric, the use of VSANs, described below, along
with zoning, provides an optimal solution.
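Conceptually, the access rule that fabric zoning enforces can be sketched as follows: an initiator and a target may communicate only if they share membership in at least one zone of the active zone set. The zone names and WWPNs below are hypothetical.

```python
# Hypothetical WWPNs; a real fabric distributes the active zone set to every switch.
active_zone_set = {
    "oracle_zone": {"10:00:00:00:c9:aa:aa:01", "50:06:01:60:3b:de:ad:01"},
    "backup_zone": {"10:00:00:00:c9:bb:bb:02", "50:06:01:60:3b:de:ad:02"},
}

def can_communicate(initiator_wwpn, target_wwpn):
    """True only if both ports are members of at least one common zone."""
    return any(initiator_wwpn in members and target_wwpn in members
               for members in active_zone_set.values())

print(can_communicate("10:00:00:00:c9:aa:aa:01", "50:06:01:60:3b:de:ad:01"))  # True
print(can_communicate("10:00:00:00:c9:aa:aa:01", "50:06:01:60:3b:de:ad:02"))  # False
```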
WWN Zoning
WWN zoning uses name servers in the switches to either allow or block access
to particular WWNs in the fabric. A major advantage of WWN zoning is the ability
to re-cable the fabric without having to redo the zone information. However,
WWN zoning is susceptible to unauthorized access, as a zone can be bypassed
if an attacker is able to spoof the WWN of an authorized adapter.
SAN zoning
SAN zoning is a method of arranging Fibre Channel devices into logical groups
over the physical configuration of the fabric. SAN zoning can be used to
compartmentalize data for security purposes. SAN zoning also enables each
device in a SAN to be placed into multiple zones.
Hard Zoning
Hard zoning occurs in hardware; therefore, the zone is physically isolated,
blocking access to the zone from any device outside of the zone.
Soft Zoning
Soft zoning occurs at the software level; thus, it is more flexible than hard zoning,
making rezoning processes easier. Soft zoning uses filtering implemented in
Fibre Channel switches to prevent ports from being seen from outside of their
assigned zones. It uses WWNs to assign security permissions. The security
vulnerability in soft zoning is that the ports are still accessible if the user in
another zone correctly guesses the Fibre Channel address.
Port Zoning
Port zoning uses physical ports to define security zones, enabling IT
administrators to control data access through port connections. With port
zoning, zone information must be updated every time a user changes switch
ports. In addition, port zoning does not allow zones to overlap. Port zoning is
normally implemented using hard zoning, but could also be implemented using
soft zoning.
Virtual SAN
VSAN is a Cisco technology designed to enhance scalability and availability
within Fibre Channel networks. It augments the security services available
through fabric zoning. VSANs enable IT administrators to take a physical SAN
and establish multiple VSANs on top of it, creating completely isolated fabric
topologies, each with its own set of fabric services. Since individual VSANs
possess their own zoning services, each is independent of the other and does
not affect zoning services of other VSANs.
Some benefits of VSANs include:
a.	Increased utilization of existing assets and reduced need to build
additional physically isolated SANs
b.	Improved SAN availability by not only providing hardware-based isolation,
but also the ability to fully replicate a set of Fibre Channel services for each
VSAN
c.	Greater flexibility through selective addition or deletion of VSANs from a
trunk link, controlling the propagation of VSANs through the fabric
As a side note, VLANs allow the extension of a LAN over a WAN interface,
overcoming the physical limitations of a regular LAN. Just as with VSANs,
VLANs enable IT administrators to take a physical LAN and overlay multiple
VLANs on top of it. VLAN technology also allows IT administrators to deploy several
VLANs over a single switch in such a manner that all of the VLANs operate as
independent networks.
LUN Masking
LUN masking is an authorization process that makes a LUN available to some
hosts and unavailable to other hosts. LUN masking is implemented primarily
at the HBA level. LUN masking implemented at this level is vulnerable to any
attack that compromises the HBA. Some storage controllers also support LUN
masking. An additional benefit of LUN masking is that it prevents Windows
operating systems from writing volume labels to all available/visible LUNs within the
network, which can render the LUNs unusable by other operating systems or
result in data loss.
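A minimal sketch of the LUN masking idea, with a hypothetical masking table mapping each initiator WWPN to the LUNs it is allowed to see; any host not in the table sees nothing.

```python
# Hypothetical masking table: initiator WWPN -> LUNs it is allowed to see.
lun_masking = {
    "10:00:00:00:c9:aa:aa:01": {0, 1, 2},     # database server
    "10:00:00:00:c9:bb:bb:02": {10},          # backup server
}

def visible_luns(initiator_wwpn):
    """Return only the LUNs masked-in for this host; unknown hosts see nothing."""
    return sorted(lun_masking.get(initiator_wwpn, set()))

print(visible_luns("10:00:00:00:c9:aa:aa:01"))   # [0, 1, 2]
print(visible_luns("10:00:00:00:c9:ff:ff:99"))   # [] -- unauthorized host sees no LUNs
```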
Security Protocols
Fibre Channel Authentication Protocol
Fibre Channel Authentication Protocol (FCAP) is an optional authentication
mechanism used between any two devices or entities on a Fibre Channel
network using certificates or optional keys.
Fibre Channel Password Authentication Protocol
Fibre Channel Password Authentication Protocol (FCPAP) is an optional
password-based authentication and key exchange protocol that is utilized in
Fibre Channel networks. FCPAP is used to mutually authenticate Fibre Channel
ports to each other. This includes E_Ports, N_Ports and Domain Controllers.
Switch Link Authentication Protocol
Switch Link Authentication Protocol (SLAP) was designed to prevent the
unauthorized addition of switches into a Fibre Channel network. It is an
authentication method for Fibre Channel switches that uses digital certificates
to authenticate switch ports.
Fibre Channel - Security Protocol
Fibre Channel - Security Protocol (FC-SP) is a security protocol for Fibre Channel
Protocol (FCP) and Fibre Connection (FICON). FC-SP is a project of Technical
Committee T11 of the InterNational Committee for Information Technology
Standards (INCITS). FC-SP is a security framework that includes protocols to
enhance Fibre Channel security in several areas, including authentication of Fibre
Channel devices, cryptographically secure key exchange and cryptographically
secure communication between Fibre Channel devices. FC-SP is focused on
protecting data in transit throughout the Fibre Channel network. FC-SP does
not address the security of data stored on the Fibre Channel network.
Diffie Hellman - Challenge Handshake Authentication Protocol
FC-SP defines Diffie Hellman - Challenge Handshake Authentication Protocol
(DH-CHAP) as the baseline authentication scheme. DH-CHAP prevents World
Wide Name (WWN) spoofing (i.e., impersonation, masquerading attacks) and is
designed to withstand replay, offline dictionary password lookup and challenge
reflection attacks. (See Figure 8 for an illustration of the threats prevented
through the implementation of DH-CHAP authentication by the HBA/CNA.) DH-
CHAP supports hash algorithms such as MD5 and SHA-1.
Figure 8: Host threats prevented by implementation of DH-CHAP authentication by the
HBA or UCNA.
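The heavily simplified sketch below illustrates why WWN spoofing alone is not enough against challenge-response authentication: the responder must also prove knowledge of a per-port shared secret by hashing it with a random challenge. Real DH-CHAP additionally performs a Diffie-Hellman exchange and follows the FC-SP message formats, which this illustration omits; the WWPN and secret are hypothetical.

```python
import hashlib
import os

SHARED_SECRETS = {"10:00:00:00:c9:aa:aa:01": b"per-port-secret"}  # hypothetical

def issue_challenge():
    return os.urandom(16)                        # random nonce, never reused

def respond(challenge, secret):
    return hashlib.sha1(challenge + secret).hexdigest()

def verify(wwpn, challenge, response):
    secret = SHARED_SECRETS.get(wwpn)
    return secret is not None and respond(challenge, secret) == response

challenge = issue_challenge()
good = respond(challenge, b"per-port-secret")
spoofed = respond(challenge, b"wrong-guess")     # attacker knows the WWPN but not the secret
print(verify("10:00:00:00:c9:aa:aa:01", challenge, good))     # True
print(verify("10:00:00:00:c9:aa:aa:01", challenge, spoofed))  # False
```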
Encapsulating Security Payload over Fibre Channel
Encapsulating Security Payload (ESP) is an Internet standard for the
authentication and encryption of IP packets. ESP is widely deployed in IP
networks and has been adapted for use in Fibre Channel networks. The Internet
Engineering Task Force (IETF) iSCSI proposal specifies ESP link authentication
and optional encryption. ESP over Fibre Channel is focused on protecting data
in transit throughout the Fibre Channel network. ESP over Fibre Channel does
not address the security of data stored on the Fibre Channel network.
Securing iSCSI, iFCP and FCIP over IP Networks
The IETF IP Storage (IPS) Working Group is responsible for defining standards
for the encapsulation and transport of Fibre Channel and SCSI protocols over
IP networks. The IPS Working Group's charter includes responsibility for data
security, including authentication, keyed cryptographic data integrity
and confidentiality, sufficient to defend against threats up to and including those
that can be expected on a public network. Implementation of basic security
functionality will be required, although usage may be optional. The IPS Working
Group defines the use of the existing IPsec and Internet Key Exchange (IKE)
protocols to secure block storage protocols over IP.
Effect of Converged Network
Given the unified nature of a converged environment, precautions have to
be put in place to address access control, preventing the network administrator
from undoing something the server administrator did. Currently SAN, server and
network administration are independent of each other; in converged
environments, however, management of these areas will overlap.
Native FCoE Storage
Storage arrays supporting native FCoE interfaces will enable end-to-end
network convergence and are expected to be the next logical progression
in the converged network environment. Besides the change in physical layer
connectivity that encapsulates Fibre Channel frames over Ethernet, the
functionality provided by native FCoE arrays remains equivalent to that of a Fibre
Channel array. Native FCoE arrays leverage the proven performance of the
Fibre Channel stack and retain the existing processes required for LUN masking
and storage backup (see Figure 9).
Zoning
Zoning practices used in Fibre Channel networking typically remain unaffected
in a converged network environment. Processes are transparently carried over
to the FCoE-capable lossless Ethernet switch.
Figure 9: Native FCoE storage connected to an FCoE-enabled network
LUN Masking
LUN masking practices used by the storage administrators in Fibre Channel
storage remain unaffected in a converged network environment. Processes are
transparently carried over to native FCoE storage.
Compliance
Internal business initiatives and external regulations are constantly adding to
compliance challenges and are testing the capabilities of status quo networks.
Although IT managers could continue to deploy multiple networks and ensure
compliance, the process gets tedious with the changing dynamics of SAN
expansion driven by virtual servers and blade servers. A simplified approach
to networking provides competitive advantages in the face of new business
initiatives and helps meet regulatory compliance obligations.
Chapter 8:
Management:
Configuration and Diagnostics
Network administrators are concerned with the movement of data or, to be more
specific, the reliable transport of user data from one point to another within the
network. Therefore, the network administrator is interested in factors that
affect that transport. Examples of such factors include bandwidth utilization,
provisioning of redundant links to ensure secondary data paths, support for
multiple protocols and so forth.
Storage administrators on the other hand, are less concerned about data
transport than about the organization and placement of data once it arrives at
its destination. LUN mapping, RAID levels, file integrity, data backup, storage
utilization and so forth comprise the bulk of a storage administrator’s daily
management routines.
These different views of management converge in a SAN, since the proper
operation of a SAN requires both management of data transport and management
of data placement. By introducing networking between servers and storage, a
SAN forces traditional storage management to broaden its scope to include
network administration and encourages traditional network management to
extend its reach to data placement and organization. Some of the most frequent
questions SAN administrators need to answer are:
•	 How much storage do I have available for my applications?
•	 Which applications, users and databases are the primary consumers of
storage?
•	 When do I need to buy more storage?
•	 How is storage being used?
A SAN's storage resources can be managed centrally, allowing administrators to
organize, provision and allocate that storage to users or applications operating
on the network across an organization. Centralization also allows administrators
to monitor performance, troubleshoot problems and manage the demands of
storage growth.
SAN provisioning
To centralize storage on a SAN while restricting access to authorized users
or applications, the entire storage environment should not be accessible to
every user. Administrators must carve up the storage space into segments
that are only accessible to specific users. This management process is known
as provisioning. For example, some amount of data center storage may be
provisioned for a purchasing related application that may only be accessible
by the purchasing department, while other space may be apportioned for
personnel records accessible only to the human resources department.
The major challenge with provisioning relates to storage utilization. Once
space is allocated, it cannot easily be changed. Thus, administrators typically
provision ample space for an application’s future use. Unfortunately, storage
capacity that is provisioned for one application cannot be used by another, so
space that is allocated, but unused, is basically wasted until called for by the
application. This need to allocate for future expansion often leads to significant
storage waste on the storage area network. One way to alleviate this problem is
through thin provisioning, which essentially allows an administrator to “tell” an
application that some amount of storage is available but actually commit far less
drive space — expanding that storage in later increments as the application’s
needs increase.
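A toy Python model of the thin provisioning idea described above: the volume advertises a large virtual capacity, but physical extents are allocated only as the application actually writes. The extent size and capacities are illustrative.

```python
class ThinVolume:
    """Toy thin-provisioned volume: capacity is promised, extents are allocated on write."""
    def __init__(self, virtual_gb, extent_gb=1):
        self.virtual_gb = virtual_gb
        self.extent_gb = extent_gb
        self.allocated = set()                 # extent indexes actually backed by disk

    def write(self, offset_gb):
        """Record a write at the given offset; back the touched extent with real space."""
        self.allocated.add(offset_gb // self.extent_gb)

    @property
    def consumed_gb(self):
        return len(self.allocated) * self.extent_gb

vol = ThinVolume(virtual_gb=1000)              # application is "told" it has 1 TB
for gb in (0, 1, 2, 500):                      # but only writes to a few regions
    vol.write(gb)
print(vol.virtual_gb, "GB promised,", vol.consumed_gb, "GB physically consumed")
```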
Provisioning is accomplished through the use of software tools, which typically
accompany major storage products. The issue for administrators is to seek
a provisioning tool that offers heterogeneous support for the storage
platforms currently in their data center.
Creating a SAN involves more than simply cabling servers and storage systems
together. Resources must be configured, allocated, tested and maintained.
Introducing new devices to the SAN can change the requirements; therefore,
management is a key consideration and it is important to select solutions that
minimize the time and effort needed to keep a SAN running.
Manageability has a significant impact on data centers. Streamlining deployment,
installation and configuration processes to improve efficiency is critical for IT
organizations that are challenged with servicing increasing business demands
with shrinking resources. Another key aspect of management is the ability to
monitor, diagnose and obtain information on the health of the SAN.
It is important to understand that storage traffic does not tolerate data loss;
therefore, it requires advanced management granularity. To that end, a more
comprehensive set of tools has been developed to provide administrative
capabilities for the switch fabric, initiators, targets (storage arrays) and LUNs. This
enables the storage network to be kept at an optimum level of performance.
Like Ethernet networks, Fibre Channel-based SANs have a robust set
of error checking and diagnostic capabilities designed to ensure the highest level
of network performance and connectivity. In addition, there is a broad range
of tools that enable storage administrators to address any issues that may arise
within their networks. These include diagnostic tools that help troubleshoot:
•	 Port functionality (initiator and target)
-	 Adapter port level
-	 Storage port level
-	 LUN and spindle
-	 Switch port
•	 I/O diagnostics
-	 Performance from IOPS perspective
-	 Performance from latency perspective
-	 Error detection
Adapter Management
Adapter management can be broken down in the following manner:
Installation
This entails the physical installation of the adapter within the server as well as
the adapter's software components. It is important to select adapters that
provide the greatest installation flexibility, as such capabilities can significantly help
to streamline deployments, improve server availability and reduce costs. One
example of such a capability is the ability to pre-configure a server with
the adapter's software without the adapter being present in the server, which helps
to pre-stage server resources for rapid deployment. Installation automation is
another feature to take into consideration; automation can speed up and
streamline adapter installation by deploying software components
in a "batch" fashion.
Configuration
Once the HBA has been installed, it must be configured. Using the HBA's
management application, SAN administrators set "driver parameter" settings
to customize the HBA's capabilities to match the needs of their environment.
There is a host of settings that administrators can adjust to activate features and
change performance characteristics of the adapter. Some examples include
queue depth settings for optimal operation with existing storage resources,
security settings, virtualization, timeouts, etc. Boot from SAN settings can also
be set during the configuration process. As server vendors shift to diskless
server designs, a boot device must be assigned to the server from the SAN.
Such servers can also be assigned a secondary boot device in case the
primary boot device becomes inaccessible. Certain adapter vendors also provide
configuration automation capabilities, enabling SAN administrators to
streamline management. An example of configuration automation
is the ability to centrally propagate adapter firmware and driver updates across
the entire network, helping to reduce server reboots, maximize network uptime
and increase overall management flexibility.
Management
Adapter management should be a critical consideration in selecting an adapter
for the server. IT administrators in general are tasked to do more with the same level
of resources. To that end, they need management tools that help them improve
administration of adapters within the data center. Convergence introduces a
new layer of management requirements. Fibre Channel adapter vendors now
also offer FCoE, NIC and iSCSI solutions. However, some
vendors have yet to integrate central management of their server-to-network
connectivity solutions. That is why it is important to select adapter vendors
that provide a centralized, cross-platform management solution for unified
administration of adapters, regardless of the protocol (Fibre Channel, FCoE,
iSCSI or NIC). Such solutions can centrally display all adapters within a SAN,
enabling effective and efficient management. By selecting the right
adapter, SAN administrators can simplify administrative tasks and improve data
center responsiveness to support the demands of dynamic business environments.
Diagnostics
Given the critical nature of a SAN, a robust set of diagnostics is a must for
the various pieces that comprise a SAN, including the adapter. While
there is a common set of diagnostic tools offered by adapter vendors, some
vendors have developed advanced diagnostic and I/O management
applications designed to truly optimize network availability, asset utilization and
responsiveness. Such tools can be used to identify and address intermittent
SAN issues, oversubscription conditions and end-to-end I/O performance
degradations.
Key Terminology
The following section defines some common terms and management functions
used by storage administrators.
HBA and CNA configuration
Compared with the NICs used in IP networks, there is more involved in
configuring connectivity (HBAs and CNAs) for storage networks. For
example, when configuring storage adapters, storage administrators need to:
•	 Know how to plan and provision storage resources
•	 Allocate storage resources based on user requirements, which requires
understanding of the user’s requirements (capacity needed, performance
required, availability, etc.)
•	 Tune adapter and storage fabric to match the optimum I/O transactional
capabilities of the storage arrays
Port Configuration
Initially, you have to make sure the port’s world wide port name (WWPN) is part
of a storage network zone. This ensures the server can access the storage on
the SAN fabric.
Boot from SAN
Similar to PXE boot in IP networks, Fibre Channel networks also support booting
of the server from a non-local hard disk. This is called “boot from SAN.” While
Ethernet networks require a host of intermediary (DHCP, PXE, along with an FTP
or HTTP) services, Fibre Channel does not have such a requirement. In Fibre
Channel networks, the server has direct access to the highly available storage
devices within the SAN, which it can use for booting. Enabling boot from SAN
requires configuring the adapter (HBA) with the boot image and boot disk
information and then installing the OS.
vPorts
Similar to creating virtual end-points in Ethernet environments, storage
administrators can create Fibre Channel vPorts. Using N_Port ID Virtualization
(NPIV), multiple vPorts can be assigned to one physical port. NPIV allows
each vPort to have its own WWPN, a unique identifier. Storage administrators
use vPorts to apply SAN best practices, such as zoning, in virtual server
environments.
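As a hedged illustration, on many Linux systems the FC transport class exposes a vport_create attribute under /sys/class/fc_host for NPIV-capable adapters; the sketch below probes for it and shows where a new vPort would be requested. The sysfs path, the exact WWPN:WWNN string format and the example WWNs are assumptions that depend on the HBA driver.

```python
import glob

def fc_hosts_supporting_npiv():
    """List fc_host entries that expose a vport_create attribute (NPIV-capable driver)."""
    return sorted(glob.glob("/sys/class/fc_host/host*/vport_create"))

def create_vport(vport_create_path, wwpn, wwnn):
    """Request a new virtual port; the exact 'wwpn:wwnn' string format is driver-dependent."""
    with open(vport_create_path, "w") as f:
        f.write(f"{wwpn}:{wwnn}")

if __name__ == "__main__":
    hosts = fc_hosts_supporting_npiv()
    print(hosts)
    # Hypothetical WWNs -- in practice these come from your SAN naming and zoning plan:
    # create_vport(hosts[0], "2101001b32aa0001", "2001001b32aa0001")
```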
SMI-S
Storage Management Initiative Specification (SMI-S) defines Distributed
Management Task Force (DMTF) management profiles for storage systems. A
profile describes the behavior characteristics of an autonomous, self-contained
management domain. SMI-S includes profiles for adapters, arrays, switches,
storage virtualizer, volume management and many other domains. A “provider”
is an implementation for a specific profile.
At a very basic level, SMI-S entities are divided into two categories:
•	 Clients are management software applications that can reside virtually
anywhere within a network provided they have a physical link (either
within the data path or outside the data path) to providers.
•	 Servers are the devices under management within the storage fabric.
Clients can be host-based management applications (storage resource
management, or SRM), enterprise management applications or SAN appliance-
based management applications (e.g., virtualization engines). Servers can be
disk arrays, host bus adapters, switches, tape drives, etc.
By leveraging SMI-S, vendors offer open, standards-based interfaces and
solutions (hardware or software), enabling easier integration, interoperability
and management.
CIM
The Common Information Model (CIM) is an open DMTF standard that defines
how managed elements in an IT environment are represented as a common set
of objects and the relationships between them. This is intended to allow
consistent management of these managed elements, independent of their
manufacturer or provider. CIM is also the basis for the SMI-S standard for
storage management.
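For illustration, the sketch below uses the open-source pywbem client to query an SMI-S/CIM provider; the provider URL, credentials, namespace and the classes actually available are assumptions that depend on the profiles the provider implements.

```python
import pywbem  # third-party WBEM/CIM client: pip install pywbem

# All connection details below are assumptions -- substitute your SMI-S provider's values.
conn = pywbem.WBEMConnection(
    "https://smi-provider.example.com:5989",
    creds=("admin", "password"),
    default_namespace="root/cimv2",
)

# Enumerate managed systems; the classes actually exposed depend on the
# SMI-S profiles (array, HBA, switch, ...) the provider implements.
for instance in conn.EnumerateInstances("CIM_ComputerSystem"):
    print(instance["ElementName"])
```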
Effect of Converged Network
Currently, storage administrators have a distinct set of diagnostic tools and
processes for fault isolation and diagnosis of issues within Fibre Channel
networks. Given that there will be a common infrastructure in converged
environments, fault isolation procedures must be adjusted or changed to
determine the best method for effectively identifying and resolving issues within the
converged network; for example, determining whether a Fibre Channel end device
(storage) can be accessed, what its response time is, and so on.
Other impacts include the following:
•	 Administrators need to configure 10GbE DCB ports to carry LAN and
storage traffic, as well as allocate bandwidth.
•	 When running Fibre Channel or iSCSI over Ethernet, both have direct
booting capabilities.
•	 As 10GbE DCB will be used for multi-traffic types, any physical disruption
will adversely affect storage, LAN and any other forms of data traffic.
FCoE Initialization Protocol (FIP)
FCoE Initialization Protocol (FIP) discovers all Fibre Channel devices
within an Ethernet network. It is the FCoE "control" protocol responsible
for establishing and maintaining Fibre Channel virtual links between FCoE
devices.
Port Configuration
The following describes new port configuration processes.
FCoE Port Configuration Process:
FCoE port configuration mirrors that of Fibre Channel port configuration.
The major difference, however, is that before port configuration can take place,
there must be a converged Ethernet connection to an FCoE Forwarder (FCF)
through an FCoE switch. The FCF establishes a connection
between the FCoE adapter and the FCoE switch. When this is operational, the
adapter will discover the presented SAN fabric, and all targets will be visible
through the FCoE switch.
iSCSI Port Configuration Process
Using the iSCSI adapter’s management application, the adapter must be given
an iSCSI-qualified name (IQN). The IQN of the iSCSI target device and the IP
address of the target portal must also be available. With the iSCSI adapter’s
management application, a connection to the target can then be initiated via
the target’s IP address.
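On a Linux host using the open-iscsi software initiator, the equivalent steps can be sketched as below: discover the targets offered by the portal, then log in to the desired target. The portal IP address and IQN are hypothetical; adapter vendors' management applications perform similar steps through their own interfaces.

```python
import subprocess

PORTAL = "192.168.10.20"                                     # hypothetical target portal IP
TARGET_IQN = "iqn.2010-01.com.example:array1.lun-group-a"    # hypothetical target IQN

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Discover the targets presented by the portal (open-iscsi sendtargets discovery).
run(["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", PORTAL])

# Log in to the discovered target; a session (and its block devices) appears afterwards.
run(["iscsiadm", "-m", "node", "-T", TARGET_IQN, "-p", PORTAL, "--login"])
```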
Chapter 9:
Emulex Solutions
About Emulex
Emulex® creates enterprise-class products that intelligently connect storage,
servers and networks, and is the leader in converged networking solutions for
the data center. Expanding on its traditional Fibre Channel solutions, Emulex’s
Connectivity Continuum architecture now provides intelligent networking
services that transition today’s infrastructure into tomorrow’s unified network
ecosystem. Through strategic collaboration and integrated partner solutions,
Emulex provides its customers with industry-leading business value, operational
flexibility and strategic advantage.
Emulex Server-to-network Connectivity Solutions
Emulex designs and offers a broad range of server-to-network connectivity
solutions, qualified for use with offerings from major server and storage OEMs.
The Emulex family of LightPulse™ Fibre Channel HBAs and OneConnect™ UCNAs
provides IT administrators the flexibility, performance and reliability they need to
keep pace with demanding and dynamic business environments.
Emulex OneConnect UCNA
The Emulex OneConnect UCNA is a single-chip, high-
performance 10GbE adapter with support for TCP/IP, FCoE
and iSCSI, enabling one adapter to support a broad range of
network protocols. OneConnect is designed to address the key
challenges of the evolving data center and improve the overall
operational efficiency. OneConnect UCNA is a flexible server
connectivity platform that enables IT administrators to consolidate multiple
1GbE links onto a single 10GbE link. With support for
TCP/IP, FCoE, iSCSI and RDMA over Converged Ethernet
(RoCE) on a single platform, IT administrators
can also meet the connectivity requirements of all
networking, storage and clustering applications. Such
flexibility simplifies server hardware configurations and
significantly reduces standard server configurations
deployed in the data center.
For greater performance at the adapter and server
level, OneConnect leverages iSCSI and FCoE offload technology. This not
only improves adapter performance, but also leaves more of the server’s
CPU cycles available for application workload processing. The end result is
more effective utilization of existing IT assets, which helps to reduce capital
expenditures. In fact, Emulex's OneConnect UCNA design is so innovative that
Network Computing recognized it not only as the "New Product of the Year"
but also as the "Network Infrastructure Product of the Year". But the true measure
of OneConnect’s success has been its acceptance and deployment in data
centers large and small.
Emulex LightPulse Fibre Channel HBAs
Emulex LightPulse HBAs leverage eight generations of advanced,
field-proven technologies to deliver a distinctive set of benefits
that are relied upon by the world's largest enterprises. From
the unique firmware-upgradeable architecture to the common
driver model, Emulex is considered to provide the most reliable
and scalable Fibre Channel HBAs, and has received various
industry accolades.
Emulex LightPulse 8Gb/s Fibre Channel HBAs provide the bandwidth required
to support the increase in data traffic brought about by organizations that are:
-	 Consolidating server resources through deployment of virtualization and
blade server technologies
-	 Leveraging higher performance next-generation server platforms
-	 Deploying or enhancing storage networking infrastructure to address
transaction intensive and data streaming applications
-	 Increasing data center power efficiency
Emulex Fibre Channel HBAs are designed with the enterprise customer in mind.
Working in close collaboration with IT organizations and system-level OEMs,
Emulex integrates features that streamline the deployment and simplify the
management of Fibre Channel HBAs within the data center.
Interoperability
Emulex server connectivity solutions are based on industry standards. Emulex
works closely with server, switch, storage and software OEMs to ensure the highest
level of interoperability within heterogeneous data center environments. This
is just one of the reasons why Emulex HBAs and UCNAs have been broadly
adopted and deployed by IT organizations large and small.
Broad Operating System Support with Investment Protection
Emulex provides support for the major enterprise class operating systems.
Leveraging the exclusive “common driver” model, Emulex ensures Fibre
Channel driver interoperability between generations of LightPulse HBAs and
OneConnect UCNAs. This approach helps to preserve IT investment, as well as
simplifying redeployment.
Emulex’s Service Level Interface (SLI™) architecture was developed to allow
deployment of new firmware releases on one server or multiple servers
throughout the network without rebooting. Firmware independence and the
common driver model also mean that Emulex adapters can easily be redeployed
in servers running different operating systems.
OneCommand™ Manager – Centralized, Multi-protocol Adapter Management
Emulex server connectivity solutions are not only designed for performance and
scalability, but also manageability. Emulex consolidated the management of its
HBAs and UCNAs under a single management application – OneCommand
Manager. With OneCommand Manager, IT administrators can remotely manage
Emulex Fibre Channel, iSCSI, FCoE and NIC resources from a centralized
location. Furthermore, powerful diagnostic and automation functions within
this application further help streamline administration functions, thus improving
management efficiency.
Regardless of the protocol, OneCommand Manager simplifies the administration,
maintenance and monitoring of server connectivity across the entire data
center.
Emulex – The Solution of Choice
With over 25 years of storage networking experience, Emulex server connectivity
solutions deliver the performance, flexibility, scalability and reliability organizations
need to address the demands of today’s dynamic business environment.
This experience, combined with close development partnerships with the
industry’s leading hardware and software OEMs, has made Emulex’s family of
LightPulse Fibre Channel HBAs and OneConnect CNAs the solution of choice for
the enterprise data center. Emulex HBA and CNA solutions have been qualified
and are used in a broad range of standard and blade server platforms.
Regardless of whether you are using a pure Fibre Channel network, or
transitioning to a converged network environment using 10GbE, Emulex has
the server to network connectivity to address your challenging needs. For more
information on Emulex solutions, please visit Emulex.com.
Chapter 10:
Conclusion
Converged networking is an emerging technology that will change the
way data center managers install and operate equipment, processes and
manpower. Converged networking results in an overlap of network and storage
administrators’ responsibilities. This guide explains networking and storage
basics to help each administrator better understand the changes resulting from
converged networking and how it will impact their role in the data center. Figure
10 provides an example of a converged network environment.
Figure 10: Converged network deployment
Look to Emulex to provide not only adapters to serve a converged network
environment, but also to help educate the industry as it evolves.
For more information on converged networking, download the Emulex
Convergenomics Guide from Emulex.com.
What Network Administrators Need to Know about Storage Management46

More Related Content

What's hot

What's hot (17)

Manual del usuario Satloc bantam
Manual del usuario Satloc bantamManual del usuario Satloc bantam
Manual del usuario Satloc bantam
 
SEAMLESS MPLS
SEAMLESS MPLSSEAMLESS MPLS
SEAMLESS MPLS
 
XORP manual
XORP manualXORP manual
XORP manual
 
Advanced Networking Concepts Applied Using Linux on IBM System z
Advanced Networking  Concepts Applied Using  Linux on IBM System zAdvanced Networking  Concepts Applied Using  Linux on IBM System z
Advanced Networking Concepts Applied Using Linux on IBM System z
 
Smart otdr JDSU
Smart otdr JDSUSmart otdr JDSU
Smart otdr JDSU
 
Jdsu mts 2000_manual
Jdsu mts 2000_manualJdsu mts 2000_manual
Jdsu mts 2000_manual
 
IBM Flex System Interoperability Guide
IBM Flex System Interoperability GuideIBM Flex System Interoperability Guide
IBM Flex System Interoperability Guide
 
Win plc engine-en
Win plc engine-enWin plc engine-en
Win plc engine-en
 
Hdclone
HdcloneHdclone
Hdclone
 
49529487 abis-interface
49529487 abis-interface49529487 abis-interface
49529487 abis-interface
 
thesis
thesisthesis
thesis
 
ScreenOS Idp policy creation en
ScreenOS Idp policy creation enScreenOS Idp policy creation en
ScreenOS Idp policy creation en
 
Manual quagga
Manual quaggaManual quagga
Manual quagga
 
Swi prolog-6.2.6
Swi prolog-6.2.6Swi prolog-6.2.6
Swi prolog-6.2.6
 
Embedded linux barco-20121001
Embedded linux barco-20121001Embedded linux barco-20121001
Embedded linux barco-20121001
 
Subscriber mgmt-solution-layer2-wholesale
Subscriber mgmt-solution-layer2-wholesaleSubscriber mgmt-solution-layer2-wholesale
Subscriber mgmt-solution-layer2-wholesale
 
인터맥프린터 Intermec PB50 감열 모바일프린터 매뉴얼
인터맥프린터 Intermec PB50 감열 모바일프린터 매뉴얼인터맥프린터 Intermec PB50 감열 모바일프린터 매뉴얼
인터맥프린터 Intermec PB50 감열 모바일프린터 매뉴얼
 

Viewers also liked

Bibliotecas ante el siglo XXI: nuevos medios y caminos
Bibliotecas ante el siglo XXI: nuevos medios y caminosBibliotecas ante el siglo XXI: nuevos medios y caminos
Bibliotecas ante el siglo XXI: nuevos medios y caminosJulián Marquina
 
Logaritmos caderno de exercícios
Logaritmos   caderno de exercíciosLogaritmos   caderno de exercícios
Logaritmos caderno de exercíciosprof. Renan Viana
 
Taller de Preparación para la Certificación (PMI-RMP)® - Realizar el Análisis...
Taller de Preparación para la Certificación (PMI-RMP)® - Realizar el Análisis...Taller de Preparación para la Certificación (PMI-RMP)® - Realizar el Análisis...
Taller de Preparación para la Certificación (PMI-RMP)® - Realizar el Análisis...David Salomon Rojas Llaullipoma
 
INFORME DE AUDITORIA GUBERNAMENTAL
INFORME DE  AUDITORIA GUBERNAMENTALINFORME DE  AUDITORIA GUBERNAMENTAL
INFORME DE AUDITORIA GUBERNAMENTALmalbertorh
 
Currículo Nacional de la Educación Básica
Currículo Nacional de la Educación BásicaCurrículo Nacional de la Educación Básica
Currículo Nacional de la Educación BásicaDiego Ponce de Leon
 
Magazine Het Ondernemersbelang de Baronie 0212
Magazine Het Ondernemersbelang de Baronie 0212Magazine Het Ondernemersbelang de Baronie 0212
Magazine Het Ondernemersbelang de Baronie 0212HetOndernemersBelang
 
Proyectos_de_innovacion
Proyectos_de_innovacionProyectos_de_innovacion
Proyectos_de_innovacionWebMD
 
Actualiteiten ICT Contracten en Partnerships (2012)
Actualiteiten ICT Contracten en Partnerships (2012)Actualiteiten ICT Contracten en Partnerships (2012)
Actualiteiten ICT Contracten en Partnerships (2012)Advocatenkantoor LEGALZ
 
Training Schrijven voor het Web
Training Schrijven voor het WebTraining Schrijven voor het Web
Training Schrijven voor het WebSimone Levie
 
Marco del buen desempeño docente
Marco del buen desempeño docenteMarco del buen desempeño docente
Marco del buen desempeño docente0013
 
Primer Paquete Económico 2017 Zacatecas (2/9)
Primer Paquete Económico 2017 Zacatecas (2/9)Primer Paquete Económico 2017 Zacatecas (2/9)
Primer Paquete Económico 2017 Zacatecas (2/9)Zacatecas TresPuntoCero
 
De Reis van de Heldin december 2015
De Reis van de Heldin december 2015De Reis van de Heldin december 2015
De Reis van de Heldin december 2015Peter de Kuster
 
Error messages
Error messagesError messages
Error messagesrtinkelman
 
Gfpi f-019 guia de aprendizaje 01 tda orientar fpi
Gfpi f-019 guia de aprendizaje 01 tda orientar fpiGfpi f-019 guia de aprendizaje 01 tda orientar fpi
Gfpi f-019 guia de aprendizaje 01 tda orientar fpilisbet bravo
 
Portafolio de Evidencias de mi Práctica Docente
Portafolio de Evidencias de mi Práctica DocentePortafolio de Evidencias de mi Práctica Docente
Portafolio de Evidencias de mi Práctica DocenteNorma Vega
 
JULIOPARI - Elaborando un Plan de Negocios
JULIOPARI - Elaborando un Plan de NegociosJULIOPARI - Elaborando un Plan de Negocios
JULIOPARI - Elaborando un Plan de NegociosJulio Pari
 

Viewers also liked (20)

Bibliotecas ante el siglo XXI: nuevos medios y caminos
Bibliotecas ante el siglo XXI: nuevos medios y caminosBibliotecas ante el siglo XXI: nuevos medios y caminos
Bibliotecas ante el siglo XXI: nuevos medios y caminos
 
Logaritmos caderno de exercícios
Logaritmos   caderno de exercíciosLogaritmos   caderno de exercícios
Logaritmos caderno de exercícios
 
Taller de Preparación para la Certificación (PMI-RMP)® - Realizar el Análisis...
Taller de Preparación para la Certificación (PMI-RMP)® - Realizar el Análisis...Taller de Preparación para la Certificación (PMI-RMP)® - Realizar el Análisis...
Taller de Preparación para la Certificación (PMI-RMP)® - Realizar el Análisis...
 
INFORME DE AUDITORIA GUBERNAMENTAL
INFORME DE  AUDITORIA GUBERNAMENTALINFORME DE  AUDITORIA GUBERNAMENTAL
INFORME DE AUDITORIA GUBERNAMENTAL
 
Currículo Nacional de la Educación Básica
Currículo Nacional de la Educación BásicaCurrículo Nacional de la Educación Básica
Currículo Nacional de la Educación Básica
 
Magazine Het Ondernemersbelang de Baronie 0212
Magazine Het Ondernemersbelang de Baronie 0212Magazine Het Ondernemersbelang de Baronie 0212
Magazine Het Ondernemersbelang de Baronie 0212
 
Proyectos_de_innovacion
Proyectos_de_innovacionProyectos_de_innovacion
Proyectos_de_innovacion
 
Actualiteiten ICT Contracten en Partnerships (2012)
Actualiteiten ICT Contracten en Partnerships (2012)Actualiteiten ICT Contracten en Partnerships (2012)
Actualiteiten ICT Contracten en Partnerships (2012)
 
Training Schrijven voor het Web
Training Schrijven voor het WebTraining Schrijven voor het Web
Training Schrijven voor het Web
 
Marco del buen desempeño docente
Marco del buen desempeño docenteMarco del buen desempeño docente
Marco del buen desempeño docente
 
Primer Paquete Económico 2017 Zacatecas (2/9)
Primer Paquete Económico 2017 Zacatecas (2/9)Primer Paquete Económico 2017 Zacatecas (2/9)
Primer Paquete Económico 2017 Zacatecas (2/9)
 
"Protección de la salud mental luego del terremoto y tsunami del 27 de febrer...
"Protección de la salud mental luego del terremoto y tsunami del 27 de febrer..."Protección de la salud mental luego del terremoto y tsunami del 27 de febrer...
"Protección de la salud mental luego del terremoto y tsunami del 27 de febrer...
 
Relatietips
RelatietipsRelatietips
Relatietips
 
De Reis van de Heldin december 2015
De Reis van de Heldin december 2015De Reis van de Heldin december 2015
De Reis van de Heldin december 2015
 
Error messages
Error messagesError messages
Error messages
 
Gfpi f-019 guia de aprendizaje 01 tda orientar fpi
Gfpi f-019 guia de aprendizaje 01 tda orientar fpiGfpi f-019 guia de aprendizaje 01 tda orientar fpi
Gfpi f-019 guia de aprendizaje 01 tda orientar fpi
 
Portafolio de Evidencias de mi Práctica Docente
Portafolio de Evidencias de mi Práctica DocentePortafolio de Evidencias de mi Práctica Docente
Portafolio de Evidencias de mi Práctica Docente
 
Geheugen verbeteren
Geheugen verbeterenGeheugen verbeteren
Geheugen verbeteren
 
JULIOPARI - Elaborando un Plan de Negocios
JULIOPARI - Elaborando un Plan de NegociosJULIOPARI - Elaborando un Plan de Negocios
JULIOPARI - Elaborando un Plan de Negocios
 
De impact van adhd
De impact van adhdDe impact van adhd
De impact van adhd
 

Similar to Emulex - Management Mind Meld (A. Ordoubadian)

VMware Network Virtualization Design Guide
VMware Network Virtualization Design GuideVMware Network Virtualization Design Guide
VMware Network Virtualization Design GuideEMC
 
Nsr Userguide
Nsr UserguideNsr Userguide
Nsr Userguidekerklaanm
 
MetaFabric™ Architecture Virtualized Data Center: Design and Implementation G...
MetaFabric™ Architecture Virtualized Data Center: Design and Implementation G...MetaFabric™ Architecture Virtualized Data Center: Design and Implementation G...
MetaFabric™ Architecture Virtualized Data Center: Design and Implementation G...Juniper Networks
 
NIC Virtualization on IBM Flex Systems
NIC Virtualization on IBM Flex SystemsNIC Virtualization on IBM Flex Systems
NIC Virtualization on IBM Flex SystemsAngel Villar Garea
 
Junipe 1
Junipe 1Junipe 1
Junipe 1Ugursuz
 
Intelligent Storage Enables Next Generation Surveillance & Security Infrastru...
Intelligent Storage Enables Next Generation Surveillance & Security Infrastru...Intelligent Storage Enables Next Generation Surveillance & Security Infrastru...
Intelligent Storage Enables Next Generation Surveillance & Security Infrastru...Personal Interactor
 
Cisco routers for the small business a practical guide for it professionals...
Cisco routers for the small business   a practical guide for it professionals...Cisco routers for the small business   a practical guide for it professionals...
Cisco routers for the small business a practical guide for it professionals...Mark Smith
 
Business and Economic Benefits of VMware NSX
Business and Economic Benefits of VMware NSXBusiness and Economic Benefits of VMware NSX
Business and Economic Benefits of VMware NSXAngel Villar Garea
 
Network Virtualization and Security with VMware NSX - Business Case White Pap...
Network Virtualization and Security with VMware NSX - Business Case White Pap...Network Virtualization and Security with VMware NSX - Business Case White Pap...
Network Virtualization and Security with VMware NSX - Business Case White Pap...Błażej Matusik
 
IBM Flex System Networking in an Enterprise Data Center
IBM Flex System Networking in an Enterprise Data CenterIBM Flex System Networking in an Enterprise Data Center
IBM Flex System Networking in an Enterprise Data CenterIBM India Smarter Computing
 
AltiGen Acm Administration Manual
AltiGen Acm Administration ManualAltiGen Acm Administration Manual
AltiGen Acm Administration ManualCTI Communications
 
Gdfs sg246374
Gdfs sg246374Gdfs sg246374
Gdfs sg246374Accenture
 
Chapter 1: Evolution of the Data Center

Drivers for Network Convergence
The combination of high bandwidth demand, increasing network sprawl and the need for a more adaptive networking infrastructure poses a major challenge for data center managers. Pain points in today's data center networks include:
•	Multiple network fabrics, each dedicated to a specific type of traffic (see Figure 1)
•	High numbers of adapters and switch port deployments
•	Complex cabling infrastructure
•	Long storage network provisioning times as a result of static configurations
•	Complexity of managing switch and adapter firmware and associated service contracts

Figure 1: Dedicated networks for SAN and LAN (servers with NICs, HBAs and HCAs connecting to separate Ethernet, Fibre Channel and InfiniBand switches)

The Data Center Networking Challenge
Data center managers are clearly in need of networking solutions that contain the sprawl of network infrastructure and enable an adaptive next-generation network. The solution for optimizing the data center network must be capable of addressing the following high-level requirements:
1.	Consolidate: The network solution must be capable of consolidating multiple low-bandwidth links into a faster high-bandwidth infrastructure and significantly reducing the number of switch and adapter ports and cables.
2.	Converge: The network solution must be capable of converging or unifying networking and storage traffic onto a single network, eliminating the need for dedicated networks for each traffic type. This functionality further contributes toward a reduction in network ports and cables, while simplifying deployment and management.
3.	Virtualize: The network solution must be capable of virtualizing the underlying physical network infrastructure and providing service level guarantees for each type of traffic. In addition, the solution must be capable of responding to dynamic changes in network services depending on the business demands of the data center applications.
Chapter 2: 10 Gigabit Ethernet, the Enabling Technology for Convergence

The 10GbE networking standard, ratified in 2002, enables multiple traffic types over a single link, as shown in Figure 2. In order to facilitate network convergence and carry Fibre Channel traffic over 10GbE, Ethernet technology had to support a "no-drop" behavior, because SAN traffic requires a loss-less transmission. To alleviate the "lossy" nature of traditional Ethernet environments, 10Gb Data Center Bridging (DCB) was developed to provide a loss-less connection, making it ideal for storage networking applications. 10GbE can operate both as a "loss-less" and a "lossy" network. Ports can be configured to carry various protocols:
•	TCP/IP
•	Internet Small Computer System Interface (iSCSI)
•	Fibre Channel over Ethernet (FCoE)

Figure 2: 10GbE enables multiple traffic types over a single link

The DCB Task Group of the IEEE 802.1 Working Group (LANs) provides the necessary framework for enabling 10GbE converged networking within a data center. The recent innovations of this task group that support the loss-less characteristic in 10GbE are summarized below:
10GbE Innovations
•	Enhanced physical media
	o	10Gb/s connectivity over UTP cabling
	o	10Gb/s connectivity over Direct Attach Twin-ax Copper cabling
•	Optimizations in 10Gb/s transceiver technology (SFP+ form factor)
•	Support for loss-less Ethernet infrastructure
•	New physical network designs such as top-of-rack switch architectures
•	Isolate and prioritize different traffic types using Priority Flow Control (PFC)
•	Maintain bandwidth guarantees for multiple traffic types
•	Assure that end-points and switches know about each other's capabilities through an enhanced management protocol using DCB

These innovations rely on the following four key protocols (Table 1):
•	Priority Flow Control (PFC), P802.1Qbb – Key functionality: management of a bursty, single traffic source on a multi-protocol link. Business value: enables storage traffic over a 10GbE link with "no-drop" in the network.
•	Enhanced Transmission Selection (ETS), P802.1Qaz – Key functionality: bandwidth management between traffic types for multi-protocol links. Business value: enables bandwidth assignments per traffic type; bandwidth is configurable on demand.
•	Data Center Bridging Capabilities Exchange Protocol (DCBCXP), 802.1Qaz – Key functionality: auto exchange of Ethernet parameters between peers (switch to NIC, switch to switch). Business value: facilitates interoperability by exchanging the capabilities supported across the nodes.
•	Congestion Management (CM), P802.1Qau – Key functionality: addresses the problem of sustained congestion, driving corrective action to the edge. Business value: facilitates larger end-to-end deployment of network convergence.

Table 1: Protocol standards are enabling convergence

In addition to providing lowered costs, 10GbE enables much-needed scalability by providing additional network bandwidth. 10GbE also simplifies management by reducing the number of ports and facilitating flexible bandwidth assignments for individual traffic types.
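To make the per-priority treatment of a converged link more concrete, the short Python sketch below models how a DCB-capable port might handle frames of different 802.1p priorities under congestion: loss-less classes are paused with PFC, while best-effort classes fall back to normal Ethernet drop behavior. The priority assignments and traffic names are illustrative assumptions, not values mandated by the standards in Table 1.

```python
# Minimal sketch of DCB per-priority treatment on a converged 10GbE link.
# The priority-to-traffic assignments below are illustrative, not mandated by the standards.

priority_map = {
    3: {"traffic": "FCoE storage", "lossless": True},   # PFC-protected (no-drop) class
    4: {"traffic": "iSCSI",        "lossless": True},
    0: {"traffic": "LAN/TCP-IP",   "lossless": False},  # best-effort (lossy) class
}

def on_congestion(priority: int) -> str:
    """Describe how a congested egress queue treats frames of a given 802.1p priority."""
    entry = priority_map.get(priority, {"traffic": "unknown", "lossless": False})
    if entry["lossless"]:
        return f"priority {priority} ({entry['traffic']}): send PFC pause, hold frames in buffer"
    return f"priority {priority} ({entry['traffic']}): drop frames, rely on TCP to retransmit"

for p in sorted(priority_map):
    print(on_congestion(p))
```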
Chapter 3: Technology Overview

Fibre Channel over Ethernet
In parallel with the emergence of loss-less 10GbE, newer standards such as the FCoE standard are accelerating the adoption of Ethernet as the medium of network convergence. FCoE is a standard developed by INCITS T11 that fully leverages the enhanced features of 10GbE for I/O consolidation in the data center. 10GbE networks address the requirements of consolidation, convergence and virtualization. FCoE expands Fibre Channel into the Ethernet environment, combining two leading technologies, Fibre Channel and Ethernet, to provide more options to end users for SAN connectivity and networking. Network convergence, enabled by FCoE, helps address network infrastructure sprawl, while fully complementing server consolidation efforts and improving the efficiency of the enterprise data center.

FCoE is a new protocol that encapsulates Fibre Channel frames within an Ethernet frame traveling on a 10GbE DCB network. FCoE leverages 10Gb DCB connections. Although FCoE traffic shares the physical Ethernet link with other types of data traffic, FCoE data delivery is ensured because it is given a loss-less priority status, matching the loss-less behavior guaranteed in Fibre Channel. FCoE is one of the technologies that makes I/O convergence possible, enabling a single network to support storage and traditional network traffic.

Figure 3: Ability of technology to meet needs of network segments
Fibre Channel Characteristics Preserved
The FCoE protocol specification maps a complete Fibre Channel frame (including checksum and framing bits) directly onto the Ethernet payload and avoids the overhead of any intermediate protocols.

Figure 4: FCoE encapsulation in Ethernet

This light-weight encapsulation ensures that FCoE-capable Ethernet switches are less compute-intensive, thus providing the high performance and low latencies of a typical Fibre Channel network. By retaining Fibre Channel as the upper layer protocol, the technology fully leverages existing Fibre Channel constructs such as fabric login, zoning and logical unit number (LUN) masking, and ensures secure access to the networked storage. Data center managers are looking for solutions to transition to a more dynamically provisioned network that is highly responsive and addresses the quality and service level requirements of business applications.

iSCSI
The iSCSI protocol, ratified by the Internet Engineering Task Force (IETF) in 2003, brought SANs within the reach of small and mid-sized businesses. The protocol encapsulates native SCSI commands using TCP/IP and transmits the packets over the Ethernet network infrastructure. The emergence of 10GbE addressed IT managers' concerns regarding the bandwidth and latency issues of 1Gb Ethernet and laid the foundation for more widespread adoption of network convergence in data centers. iSCSI-enabled convergence offers several advantages:
•	Highly suitable for convergence in small and medium businesses, remote offices and department-level data centers where customers are transitioning from Direct Attach Storage (DAS) to SANs.
•	Reduces labor and management costs while increasing reach.
•	The ubiquitous nature of Ethernet means that IP networks can be deployed quickly and easily in organizations of all sizes. Ethernet is also readily understood, so IT personnel can deploy and maintain an IP environment without specialized Fibre Channel training.
•	Major operating systems include an iSCSI driver in their distribution.
•	iSCSI performance can be improved by deploying adapters that support iSCSI offload or TCP/IP offload to reduce the CPU demands for packet processing.

Although optimal for small and medium businesses, iSCSI-enabled convergence does have limitations:
•	Because the underlying Ethernet network is prone to packet losses under network congestion, network designers typically recommend the use of separate Ethernet networks for storage and data networking. This reduces some of the cost advantages of convergence.
•	Large enterprise data centers have a sizable deployment of Fibre Channel SANs and use Fibre Channel-specific tools to effectively manage storage assets. From the perspective of these customers, iSCSI is a different storage technology that requires an incremental investment in hardware, software and training.

The decision to deploy iSCSI or FCoE is largely based on current deployments. Enterprise data centers with Fibre Channel SANs already in place typically choose FCoE, while smaller data centers with no Fibre Channel typically choose iSCSI.
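As a rough illustration of the encapsulation described above, the Python sketch below wraps a complete Fibre Channel frame in an Ethernet frame using the FCoE EtherType (0x8906). The header and trailer byte layout is simplified, and the SOF/EOF codes and MAC addresses are placeholders; consult the FC-BB-5 specification for the exact encoding.

```python
import struct

FCOE_ETHERTYPE = 0x8906  # EtherType assigned to FCoE

def encapsulate_fcoe(dst_mac: bytes, src_mac: bytes, fc_frame: bytes) -> bytes:
    """Wrap a complete Fibre Channel frame (FC header + payload + CRC) in an
    Ethernet frame. Field layout is simplified; see FC-BB-5 for the exact
    FCoE header/trailer encoding."""
    eth_header = dst_mac + src_mac + struct.pack("!H", FCOE_ETHERTYPE)
    fcoe_header = bytes(13) + b"\x2e"   # version/reserved bytes + SOF code (illustrative)
    fcoe_trailer = b"\x41" + bytes(3)   # EOF code + reserved bytes (illustrative)
    return eth_header + fcoe_header + fc_frame + fcoe_trailer

# Dummy 36-byte FC frame (24-byte FC header + 8-byte payload + 4-byte CRC), placeholder MACs.
wire_frame = encapsulate_fcoe(b"\x0e\xfc\x00\x00\x00\x01", b"\x00\x00\xc9\x00\x00\x01", bytes(36))
print(len(wire_frame), "bytes on the wire (before Ethernet FCS and any padding)")
```

Because the Fibre Channel frame rides inside the Ethernet payload unchanged, an FCoE-capable switch or adapter can strip the outer headers and hand a native Fibre Channel frame to the fabric.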
Chapter 4: Storage Area Networks

Understanding SAN technology requires familiarity with the terms and components described in this section.

SAN
A SAN is an architecture that attaches remote computer storage devices (such as disk arrays, tape libraries and optical jukeboxes) to servers in a manner where the devices appear as locally attached to the operating system (OS). A SAN is generally its own network of storage devices that are typically not accessible through the LAN by other devices. Historically, by virtue of their design, data centers first created "islands" of SCSI disk arrays as DAS, each dedicated to an application and visible as a number of "virtual hard drives" (i.e., LUNs, defined below). Essentially, a SAN consolidates such storage islands together using a high-speed network (see Figure 5).

Figure 5: Storage Area Network
Common uses of a SAN include the provision of transactionally accessed data that requires high-speed, block-level access to the hard drives, such as e-mail servers, databases and high-usage file servers. Storage sharing typically simplifies storage administration and adds flexibility, since cables and storage devices do not have to be physically moved to shift storage from one server to another. Other benefits include the ability to allow servers to boot from the SAN itself. This allows for quick and easy replacement of faulty servers, since the SAN can be reconfigured so that a replacement server uses the boot LUN of the faulty server. This process can take as little as half an hour and is a relatively new idea being pioneered in newer data centers.

SANs also tend to enable more effective and robust disaster recovery capabilities. A SAN can also span distant locations, enabling more effective data replication implemented by disk array controllers, by server software or by specialized SAN devices. Since IP-based Wide Area Networks (WANs) are often the least costly method of long-distance transport, the Fibre Channel over IP (FCIP) and iSCSI protocols have been developed to allow physical extension of a SAN over IP networks, overcoming the distance limitations of the physical SCSI layer and ensuring business continuance in a disaster. The economic consolidation of disk arrays has accelerated the advancement of several features, including I/O caching, snapshotting and volume cloning (Business Continuance Volumes, or BCVs).

Logical Unit Number
In computer storage, a LUN is the identifier of a SCSI logical unit and, by extension, of a Fibre Channel or iSCSI logical unit. A logical unit is a SCSI protocol entity that performs classic storage operations such as read and write. Each SCSI target provides one or more logical units. A logical unit typically corresponds to a storage volume and is represented within an OS as a device. In current SCSI, a LUN is a 64-bit identifier. Note that even though it is named "Logical Unit Number," it is not a number. It is divided into four 16-bit pieces that reflect a multilevel addressing scheme, and it is unusual to see any but the first of these used.

To provide a practical example, a typical disk array has multiple physical SCSI ports, each with one SCSI target address assigned. The disk array is formatted as a redundant array of independent disks (also known as a redundant array of inexpensive disks, or RAID), and this RAID is partitioned into several separate storage volumes. To represent each volume, a SCSI target is configured to provide a logical unit. Each SCSI target may provide multiple logical units and thus represent multiple volumes, but this does not mean that those volumes are concatenated. The computer that accesses a volume on the disk array identifies which volume to read or write with the LUN of the associated logical unit. Another example is a single disk drive with one physical SCSI port. It usually provides just a single target, which, in turn, usually provides just a single logical unit whose LUN is zero. This logical unit represents the entire storage of the disk drive.

Fibre Channel Protocol
Fibre Channel is a high-speed network technology primarily used for storage networking. It uses the Fibre Channel Protocol (FCP) transport protocol to transport SCSI commands over Fibre Channel networks. The following summarizes the differences between Fibre Channel and Ethernet:
•	Fibre Channel passes block data, similar to FCoE, when talking to target devices, whereas Ethernet passes files/packets. Block data is much larger and moved in a loss-less manner; Ethernet traffic is smaller, "lossy" and can be sent out of order.
•	Fibre Channel talks to target devices (storage devices), whereas Ethernet typically talks to other hosts (servers). In the storage world, the distinction between the "target" and the "initiator" is important. Ethernet treats these as one and the same.
•	With storage connectivity, there is a finite number of end points, whereas in LANs, there is an effectively unlimited number of end points that need to talk to each other.
•	In a LAN, the bandwidth requirement to any particular endpoint is generally much smaller than the bandwidth requirement for storage networks. The significance of this fact is that in a SAN, you have better predictability of traffic patterns and requirements, and you would likely create traffic zones between the finite number of host connections and storage connections.

Layers of Fibre Channel Protocol
Fibre Channel protocol consists of five layers. Given that Fibre Channel is also a type of "networking" protocol, there are some similarities to the Open Systems Interconnect (OSI) model used in networks. The Fibre Channel layers are noted below:
•	FC0 – The physical layer, which covers cables, transceivers, connectors, pin-outs, etc.
•	FC1 – The data link layer, which encodes and decodes signals.
•	FC2 – The network layer, consisting of the core of Fibre Channel and defining the main protocols.
•	FC3 – The common services layer, a thin layer that could, in the future, support functions like encryption or RAID.
•	FC4 – The protocol mapping layer. This layer encapsulates other protocols such as SCSI into an information unit for delivery to the network (FC2) layer.
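Since the text above notes that a LUN is a 64-bit, four-level identifier of which usually only the first level is used, the following minimal Python sketch splits a raw 64-bit LUN value into its four 16-bit addressing levels. The sample value is hypothetical.

```python
def split_lun(lun64: int) -> tuple:
    """Split a 64-bit SCSI LUN into its four 16-bit addressing levels,
    most significant level first, as described above."""
    if not 0 <= lun64 < 2**64:
        raise ValueError("LUN must be a 64-bit value")
    return tuple((lun64 >> shift) & 0xFFFF for shift in (48, 32, 16, 0))

# A simple flat LUN 5, addressed only in the first level:
print(split_lun(0x0005_0000_0000_0000))   # -> (5, 0, 0, 0)
```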
Internet FCP (iFCP)
The iFCP protocol enables the implementation of Fibre Channel functionality over an IP network, within which the Fibre Channel switching and routing infrastructure is replaced by IP components and technology. Congestion control, error detection and recovery are provided through the use of TCP (Transmission Control Protocol). The primary objective of iFCP is to allow existing Fibre Channel devices to be networked and interconnected over an IP-based network at wire speeds.

OSI Model vs. FC/FCoE
The OSI Layered Model is an architectural abstraction that helps to describe the operation of protocols. Unfortunately, the Fibre Channel protocol layers cannot be mapped to OSI layers in a straightforward manner. FCoE, which leverages the Fibre Channel protocol, has an inherent awkwardness when applied to Ethernet networks, whereas the iSCSI protocol originated from a traditional Ethernet and IP environment. Figure 6 shows the mapping of Fibre Channel layers to OSI layers.

World Wide Name
A World Wide Name (WWN) is a 64-bit address used in Fibre Channel networks to uniquely identify each element in a Fibre Channel network. The use of WWNs for security purposes is inherently insecure, because the WWN of a device is a user-configurable parameter.

Figure 6: Storage protocols mapped to the OSI model
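For illustration, the small Python helper below renders a 64-bit WWN in the colon-separated hexadecimal form administrators typically see in switch and adapter tools; the example value is hypothetical.

```python
def format_wwn(wwn64: int) -> str:
    """Render a 64-bit World Wide Name in the usual colon-separated hexadecimal notation."""
    return ":".join(f"{b:02x}" for b in wwn64.to_bytes(8, "big"))

# Hypothetical WWN value, shown only to illustrate the notation.
print(format_wwn(0x10000000C9A1B2C3))   # -> 10:00:00:00:c9:a1:b2:c3
```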
Converged Networking

Data Center Bridging (DCB)
The DCB Task Group is a part of the IEEE 802.1 Working Group. DCB is based on a collection of open-standard Ethernet extensions. It is designed to improve and expand Ethernet networking and management capabilities within the data center. DCB helps to ensure data delivery over loss-less fabrics, consolidate I/O over a unified fabric and improve bandwidth through multipathing at Layer 2 (the Data Link Layer). With DCB, Ethernet will provide solutions for consolidating I/O and carrying multiple protocols, such as IP and FCoE, on the same network fabric, as opposed to separate networks. The ability to consolidate traffic is now available with the deployment of 10GbE networks due to the following components of DCB:
1.	Priority-based Flow Control (PFC) – Enables management of a bursty, single traffic source on a multi-protocol link
2.	Enhanced Transmission Selection (ETS) – Enables management of bandwidth by traffic category for multi-protocol links
3.	Data Center Bridging Exchange (DCBX) protocol – Allows auto-exchange of Ethernet parameters between switches and endpoints
4.	Congestion notification – Resolves sustained congestion by moving corrective action to the network edge
5.	Layer 2 Multipathing – Uses all bisectional bandwidth of Layer 2 topologies
6.	Loss-less Service – Helps ensure guaranteed delivery service for applications that require it

With DCB, a 10GbE connection can support multiple traffic types simultaneously, while preserving the respective traffic treatments. The same 10GbE link can also support Fibre Channel storage traffic by offering a "no data drop" capability via FCoE.

Priority Flow Control (PFC)
PFC is an enhancement to the existing pause mechanism in Ethernet. The current Ethernet pause option stops all traffic on a link; essentially, it is a link pause for the entire link. Unlike traditional Ethernet, DCB enables a link to be partitioned into multiple logical links, with the ability to assign each logical link a specific priority setting (loss-less or lossy). The devices within the network can then detect whether traffic is "lossy" or "loss-less". If the traffic is lossy, it is treated in typical Ethernet fashion. If it is loss-less, PFC is used to guarantee that none of the data is lost. In short, PFC allows any of the virtual links to be paused and restarted independently, enabling the network to create a no-drop class of service for an individual virtual link. It also allows differentiated Quality of Service (QoS) policies for the eight unique virtual links. PFC is also referred to as Per Priority Pause (PPP).

Enhanced Transmission Selection (ETS)
ETS is a new standard that enables a more structured method of assigning bandwidth based on traffic class. This way, an IT administrator can allocate a specific percentage of bandwidth to SAN, LAN and inter-processor communication (IPC) traffic.

How FCoE Ties FC Protocol with Network Protocol
FCoE transports Fibre Channel frames over an Ethernet network while preserving existing Fibre Channel management modes. A loss-less network fabric is a requirement for proper operation. FCoE leverages DCB extensions to address congestion and traffic spikes, and to support multiple data flows on one cable to achieve unified I/O.

Requirements to Deploy Loss-less Ethernet
A loss-less Ethernet environment requires the means to pause the link, such as PFC (as described above) in a DCB environment. It also requires the means to tie the pause commands from the ingress to the egress port across the internal switch fabric. The pause option in Ethernet and PFC in DCB take care of providing loss-less Ethernet on each link. Finally, a loss-less intra-switch fabric architecture is required.

Non Fibre Channel Based Storage Protocols
iSCSI is an IP-based storage networking standard for linking data storage arrays to servers. iSCSI, like Fibre Channel, is a method of transporting high-volume data storage traffic and is designed to be a direct block-level protocol that reads and writes directly to storage. However, unlike Fibre Channel, iSCSI carries SCSI commands over Ethernet networks instead of a Fibre Channel network. Because of the ubiquity of IP networks, iSCSI can be used to transmit data over LANs, WANs or the Internet. Like FCoE, iSCSI passes block data and communicates with target devices.
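A minimal sketch of the ETS idea described above: translating per-class percentage shares into the guaranteed minimum bandwidth each class receives on a single converged 10GbE link. The 50/30/20 split is an assumed example, not a recommended configuration.

```python
def ets_allocation(link_gbps: float, shares_pct: dict) -> dict:
    """Translate ETS percentage shares per traffic class into the guaranteed
    minimum bandwidth each class receives on one converged link."""
    if sum(shares_pct.values()) != 100:
        raise ValueError("ETS shares should total 100 percent")
    return {cls: link_gbps * pct / 100 for cls, pct in shares_pct.items()}

# Hypothetical split of a 10GbE link between SAN, LAN and IPC traffic.
print(ets_allocation(10, {"SAN (FCoE)": 50, "LAN (TCP/IP)": 30, "IPC": 20}))
# -> {'SAN (FCoE)': 5.0, 'LAN (TCP/IP)': 3.0, 'IPC': 2.0}
```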
Chapter 5: SAN Availability

In the event of an unexpected disruption, each IT infrastructure must be designed to ensure the continuity of business operations. From a data center storage perspective, this means that your SAN fabric must be extremely reliable, because data must be accessible at all times, whether for scheduled backups or unexpected recoveries. Within the SAN fabric, high availability is needed across adapters, switches, servers and storage. If a problem occurs with any of these components, a combination of aggregation and failover techniques is used to meet availability and reliability requirements. Figure 7 shows an example of multipathing and failover in a SAN.

Figure 7: Multipathing and failover

Key Terminology
The following terminology is important to ensuring SAN high availability/fault tolerance:
SAN Trunking
Trunking (also referred to as aggregation, link aggregation or port aggregation) combines ports to form faster logical communication links between devices. For example, by aggregating up to four inter-switch links (ISLs) into a single logical 8Gb/s trunk group, you optimize available switch resources, thereby decreasing congestion. Trunking increases data availability even if an individual link failure occurs. In such an instance, the I/O traffic continues, though at a reduced bandwidth, as long as at least one link in the trunk group remains available. Although this type of aggregation requires more cabling and switch ports, it offers the benefit of faster performance, load balancing and redundancy. It is often possible to aggregate links between a host server and a switch, between a storage system and a switch, or even between switches (ISLs).

Failover and Load Balancing
Failover and load balancing in storage networks go hand-in-hand. By having multiple physical connections, a failure in one adapter port or cable won't completely disrupt data traffic. Instead, data flow can continue at a reduced speed until the failure is repaired. Another benefit of multiple physical connections is load balancing. Normally, unrelated physical links transfer data at independent and frequently unpredictable speeds, allowing a bottleneck on one or more of the physical connections, which, in turn, can impact the overall performance of the SAN. Once multiple physical connections are aggregated into a logical data path, data can be distributed equally across the member links to balance the load and reduce bottlenecks within the network.

SAN failover is a configuration where multiple connections are made; however, not all of the connections carry data simultaneously. For example, a storage array may be connected using two 8Gb/s Fibre Channel links, but only one of the links might be active. The second link is connected, but inactive. If the first link fails, the data communication fails over to the second link, allowing communication to continue at the same speed until the original connection is repaired.

SAN QoS
The server, adapter, switch and storage array are critical components when attempting to deploy QoS within the SAN. The optimum QoS solution should be based on an overall view of the SAN, be fully interoperable and focus on critical bottlenecks. Fibre Channel adapters usually have excess bandwidth and short response times and, as a result, do not impact overall QoS. This is particularly the case when following best practices and installing multiple adapters for high availability.
Storage arrays are often the limiting factor for I/O and are a critical component for overall performance tuning. Array QoS is usually based on LUNs. For example, high-priority applications could be combined on a LUN with RAID striping, high-performance drives, a large amount of cache memory and a high QoS priority. Another LUN could be used to support less critical background tasks with inexpensive, lower performance disks and a lower QoS priority. When used in combination with all of these variables, array-based QoS management can be a very effective tool for storage administrators.

Switch-based QoS can be used to prioritize traffic within the SAN. Some switches provide a variety of options to implement QoS, including Fibre Channel zones, virtual SANs (VSANs) and individual ports. Fibre Channel switches are designed to be fully interoperable with industry-standard server-to-SAN connectivity adapters. For example, Cisco QoS provides extensive capabilities to create classes of traffic and assign the relative weight for queues. Other switches have more proprietary designs. I/O traffic between the server and switch is not likely to be a bottleneck, as high-performance adapters usually have surplus bandwidth.

Configuring Failover in a SAN
At many levels, IP and storage networks share similar failover configuration steps. The following are a few of the basic methods to configure failover in a storage environment:
•	Servers configured with a dual-port adapter connected to switches
	-	Each port connected to two different switches
	-	Create virtual ports (vPorts) on top of the physical ports and have them associated with a switch
•	Servers configured with two dual-port adapters connected to two different switches
	-	Each port of an adapter is connected to a port on one of the two switches
	-	Create vPorts on top of the physical ports and have them associated with different switches
•	Server clusters – A group of independent servers working together as a single system to provide high availability of services. When a failure occurs on a server within the cluster, resources are rerouted, redistributing the workload to another server within the cluster. Server clusters are designed to increase availability of critical applications.

Effect of Converged Network
Converged networking will introduce new technologies and methodologies that will change data center reliability and business resilience processes. The following describes some of the changes to be considered.

QoS
Networks require much more than just "speeds and feeds." 10GbE offers increased speed and bandwidth, but you still need to control it. QoS technologies are the means by which it can be controlled, and vendors will be providing these technologies for converged networks.

Data Center Bridging eXchange (DCBX)
DCBX is used by DCB devices to exchange configuration information with directly connected peers. The protocol may also be used for misconfiguration detection and for configuration of the peer. Ethernet is designed to be a "best-effort" network, which means data packets may be dropped or delivered out of order if the network or devices are busy. DCBX is an Ethernet discovery and configuration protocol that guarantees link end points are configured in a manner that averts "soft errors." DCBX enables:
•	End-point consistency
•	Identification of configuration irregularities
•	Basic configuration capabilities to correct end-point misconfigurations

The DCBX protocol is used for transmission of configurations between neighbors within an Ethernet network to ensure reliable configuration across the network. It uses the Link Layer Discovery Protocol (LLDP) to exchange parameters between two link peers.

Failover
IT administrators typically use failover solutions supplied by their storage OEMs or those integrated into the OS platform. Their implementation and management may also differ between Fibre Channel and iSCSI environments. For Microsoft Windows environments, some network interface card (NIC) vendors provide a NIC teaming driver that provides failover capabilities. It is expected that this capability may also be made available through the OS platform.
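The failover and load-balancing behavior described in this chapter can be pictured with the toy Python model below: I/O is spread round-robin across the paths that are up, and when a path fails the surviving path carries all traffic at reduced aggregate bandwidth. The path names are hypothetical, and real multipathing drivers implement far richer policies.

```python
import itertools

class MultipathLun:
    """Toy model of multipathing: distribute I/O round-robin across the paths
    that are still up, and keep going (at reduced aggregate bandwidth) when a
    path fails. Path names are hypothetical."""

    def __init__(self, paths):
        self.state = {p: "up" for p in paths}
        self._cycle = itertools.cycle(paths)

    def fail(self, path):
        self.state[path] = "down"

    def next_path(self):
        for _ in range(len(self.state)):
            p = next(self._cycle)
            if self.state[p] == "up":
                return p
        raise RuntimeError("all paths down - LUN unavailable")

lun = MultipathLun(["hba0-switchA", "hba1-switchB"])
print([lun.next_path() for _ in range(4)])   # alternates across both paths
lun.fail("hba0-switchA")
print([lun.next_path() for _ in range(4)])   # all I/O continues on the surviving path
```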
Chapter 6: Performance

SAN performance and capacity management
SAN performance can be adversely affected when storage resources are low or become constrained. This can cause application performance problems and service level issues. Many IT organizations attempt to avert such issues by over-purchasing and over-provisioning storage. However, this approach frequently results in wasted capital, since the additional storage investment may not necessarily be fully utilized. An alternative approach is to adopt performance and capacity planning practices to avoid unexpected storage costs and disruptive upgrades. The objective is to predict storage needs over time and then budget capital and labor to make regular improvements to the storage infrastructure.

In practice, SAN performance and capacity planning can be quite challenging, as predicting the storage needs of an application or department over time without a careful assessment of past growth and a comprehensive evaluation of future plans is virtually impossible. Many organizations tend to forego the expense and effort of a formalized process unless a mission-critical project or serious performance problem requires it. Organizations choosing to sustain an ongoing performance and capacity planning effort will need either a comprehensive storage resource management (SRM)-type tool or a capacity planning application. With regard to performance monitoring and tuning, various benchmarking tools are available; some examples are given below.

Effect of Converged Network
Converged networking will impact a data center's performance processes, where today there are more questions than answers:
1.	How will traffic be segregated on a 10GbE pipe so that you can allocate bandwidth for storage and network traffic?
2.	What monitoring tools will track utilization? Currently, you independently monitor loads on the Ethernet and Fibre Channel cables. So, in converged environments, how do you do this?
3.	Specific to Universal Converged Network Adapters (UCNAs), if the HBA is configured as FCoE, can I also run software iSCSI off it? Will TOE capabilities be available?
4.	How will multipathing configurations be deployed? We currently have:
	i.	IP multipathing (two NICs connected to two switches)
	ii.	Fibre Channel multipathing
5.	Will converged environments have special cabling requirements (e.g., cable type such as CAT 5 or CAT 6, and distance)?
6.	How do you implement and monitor QoS? Hardware-based network analyzers at the network level need to support converged networks to monitor traffic utilization. In converged environments, how can the analyzers tell apart the traffic on a single physical cable?

Industry Benchmarks

Storage Performance Council (SPC)
SPC Benchmark 1: Consists of a single workload designed to demonstrate the performance of a storage subsystem while performing the typical functions of business-critical applications. Those applications are characterized by predominantly random I/O operations and require both query and update operations. Examples of these types of applications include OLTP, database operations and mail server implementations.

SPC Benchmark 2: SPC-2 consists of three distinct workloads designed to demonstrate the performance of a storage subsystem during the execution of business-critical applications that require the large-scale, sequential movement of data. Those applications are characterized predominantly by large I/Os organized into one or more concurrent sequential patterns. A description of each of the three SPC-2 workloads is listed below, along with examples of applications characterized by each workload:
•	Large File Processing: Applications in a wide range of fields that require simple sequential processing of one or more large files, such as scientific computing and large-scale financial processing.
•	Large Database Queries: Applications that involve scans or joins of large relational tables, such as those performed for data mining or business intelligence.
•	Video on Demand: Applications that provide individualized video entertainment to a community of subscribers by drawing from a digital film library.

For more information on Storage Performance Council benchmarks, please visit www.storageperformance.org

Transaction Processing Performance Council (TPC)
TPC-C: Simulates a complete computing environment where a population of users executes transactions against a database. The benchmark is centered around the principal activities (transactions) of an order-entry environment. These transactions include entering and delivering orders, recording payments, checking the status of orders, and monitoring the level of stock at the warehouses. While the benchmark portrays the activity of a wholesale supplier, TPC-C is not limited to the activity of any particular business segment, but rather represents any industry that must manage, sell or distribute a product or service.

TPC-C involves a mix of five concurrent transactions of different types and complexity, either executed on-line or queued for deferred execution. It does so by exercising a breadth of system components associated with such environments, which are characterized by:
•	The simultaneous execution of multiple transaction types that span a breadth of complexity
•	On-line and deferred transaction execution modes
•	Multiple on-line terminal sessions
•	Moderate system and application execution time
•	Significant disk input/output
•	Transaction integrity (ACID properties)
•	Non-uniform distribution of data access through primary and secondary keys
•	Databases consisting of many tables with a wide variety of sizes, attributes, and relationships
•	Contention on data access and update

TPC-C performance is measured in new-order transactions per minute. The primary metrics are the transaction rate (tpmC), the associated price per transaction ($/tpmC), and the availability date of the priced configuration.

TPC-E: TPC Benchmark™ E (TPC-E) is a new On-Line Transaction Processing (OLTP) workload developed by the TPC. The TPC-E benchmark uses a database to model a brokerage firm with customers who generate transactions related to trades, account inquiries, and market research. The brokerage firm in turn interacts with financial markets to execute orders on behalf of the customers and updates relevant account information. The benchmark is "scalable," meaning that the number of customers defined for the brokerage firm can be varied to represent the workloads of different-size businesses. The benchmark defines the required mix of transactions the benchmark must maintain. The TPC-E metric is given in transactions per second (tps). It specifically refers to the number of Trade-Result transactions the server can sustain over a period of time. Although the underlying business model of TPC-E is a brokerage firm, the database schema, data population, transactions, and implementation rules have been designed to be broadly representative of modern OLTP systems.
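The TPC-C price/performance metric mentioned above is a simple ratio; the short sketch below computes it from a total priced-configuration cost and sustained tpmC result. The figures used are hypothetical, not a published benchmark result.

```python
def tpcc_price_performance(total_system_price: float, tpmc: float) -> float:
    """TPC-C price/performance ($/tpmC): total priced-configuration cost
    divided by the sustained new-order transaction rate."""
    return total_system_price / tpmc

# Hypothetical result: a $500,000 priced configuration sustaining 1,000,000 tpmC.
print(f"${tpcc_price_performance(500_000, 1_000_000):.2f} per tpmC")   # -> $0.50 per tpmC
```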
Benchmarking Software

Iometer
Iometer is an I/O subsystem measurement and characterization tool for single and clustered systems. It is used as a benchmark and troubleshooting tool and is easily configured to replicate the behavior of many popular applications. One commonly quoted measurement provided by the tool is I/Os per second (IOPS). Iometer is one of the most popular tools among storage vendors and is available free from www.iometer.org

IOzone
IOzone is a file system benchmark tool. The benchmark generates and measures a variety of file operations. IOzone has been ported to many machines and runs under many operating systems, performing a broad file system analysis of a vendor's computer platform. IOzone is available free from www.iozone.org

While running benchmarks, care should be taken to avoid the following common mistakes:
•	Testing storage performance with file copy commands
•	Comparing storage devices back-to-back without clearing the server cache
•	Testing where the data set is so small that the benchmark rarely goes beyond the server or storage cache
•	Forgetting to monitor processor utilization during testing
•	Monitoring the wrong server's performance

Avoiding these mistakes will ensure a more realistic and representative assessment of your environment.

Ixia IxChariot
IxChariot is a fee-based benchmarking tool that simulates application workloads to predict device and system performance under realistic load conditions. IxChariot performs thorough network performance assessment and device testing by simulating hundreds of protocols across thousands of network endpoints.

When vendors utilize such benchmarking tools to assess performance, they take into consideration the entire network, as the server, network and storage system all play a part in application performance. It's important to understand how to identify and eliminate latency bottlenecks to ensure superior application performance. While it may be logical to look for sources of performance degradation outside the server – in the network connectivity or storage components – it's important to understand that performance degradation can also occur within the server. For example, the cycles the server CPU has available to process application workloads can impact performance. This is referred to as a server's CPU efficiency. What affects CPU efficiency is further discussed below. A properly designed SAN can improve storage utilization, high availability and data protection. When evaluating SAN performance, the following need to be considered:
•	Latency
•	Bandwidth
•	Throughput
•	Input/Output operations per second (IOPS)

Fibre Channel has evolved over the years, delivering faster and faster performance, as measured by throughput (megabytes per second). Today, however, 10Gb-based Ethernet networks provide performance equal to Fibre Channel-based networks. 10GbE is currently the fastest of the Ethernet standards, with a nominal data rate of 10Gb/s, or 10 times as fast as Gigabit Ethernet. The following summarizes the performance of the Fibre Channel evolution, with Ethernet speeds for comparison (throughput figures are for duplex connections):
•	1Gb Fibre Channel – 200 MB/s throughput – 1.0625 GBaud line rate
•	2Gb Fibre Channel – 400 MB/s throughput – 2.125 GBaud line rate
•	4Gb Fibre Channel – 800 MB/s throughput – 4.25 GBaud line rate
•	8Gb Fibre Channel – 1600 MB/s throughput – 8.50 GBaud line rate
•	16Gb Fibre Channel – 3200 MB/s throughput – 17.00 GBaud line rate
•	1 Gigabit Ethernet – 1Gb/s line rate
•	10 Gigabit Ethernet – 10Gb/s line rate
•	40 Gigabit Ethernet – 40Gb/s line rate

Key Terminology
The following terminology is important to understanding SAN performance:

CPU Efficiency
CPU efficiency has various definitions.
In the context of this document, CPU efficiency refers to the server processor's ability to process application workloads – or, simply put, application workload IOPS requirements divided by the server's CPU speed (GHz). The more IOPS that can be processed by each GHz, the higher the CPU's efficiency. A factor that can impact a server's CPU efficiency is HBA selection. Some HBAs leave certain protocol processing to the server's processor; as a result, the server processor has fewer cycles available for application workload processing, which can in turn lower network performance. Therefore, proper HBA selection can be one of the simplest methods of improving overall performance. CPU efficiency also affords other benefits, including reduction of capital and operational expenditures.

Performance Tuning
Storage systems rely on a number of performance tuning processes, described below.

Driver Parameters
Another factor that can impact performance is the driver parameter (also known as adapter parameter) settings. The optimum settings are either dynamically managed by the driver or configured automatically during the adapter installation using the adapter's management application.

Queue depth setting
Queuing refers to the ability of a storage system to queue storage commands for later processing. Queuing can take place at various points in your storage environment, from the Host Bus Adapter (HBA) to the storage processor/controller. For example, modifying the "HBA Queue Depth" is a performance tuning tip for servers that are connected to SANs. Since the HBA is the storage equivalent of a network card, the Queue Depth parameter controls how much data is allowed to be "in flight" on the storage network from that card. Most cards default to a queue depth of 32, which is suitable for a general purpose server and prevents the SAN from getting too busy. Queue depth is adjustable. Note that a little queuing may be acceptable depending on the transaction workload, but too many outstanding I/Os can negatively impact performance, as measured in latency.

Interrupt coalescing
Interrupt coalescing batches up interrupts from the NIC to the kernel, reducing per-packet overhead. Interrupt coalescing represents a trade-off between latency and throughput. Coalescing interrupts always adds latency to arriving messages, but the resulting efficiency gains may be desirable where high throughput is preferred over low latency. Troubleshooting of latency problems often points to interrupt coalescing in Gigabit Ethernet NIC hardware. Fortunately, the behavior of interrupt coalescing is configurable and can generally be adjusted to the particular needs of an application. The default for some NICs or drivers is an "adaptive" or "dynamic" interrupt coalescing setting that tends to significantly favor high throughput over low latency. The details of configuring interrupt coalescing behavior will vary depending on the OS and perhaps even the type of NIC in use.

Key Metrics
The following are key SAN performance metrics:

Latency: I/O latency, also known as I/O response time, measures how fast an I/O request can be processed by the disk I/O subsystem. For a given I/O path, it is in proportion to the size of the I/O request. That is, a larger I/O request takes longer to complete.

Bandwidth: The amount of available end-to-end SAN bandwidth is dependent on back-end storage capacity on the SAN side. Improving SAN bandwidth requires consideration of such factors as how the storage is configured, what the application workload is and where a current bottleneck exists. For example, if each server accesses a separate unique LUN, adding a second HBA would add more bandwidth, but you might not see a performance improvement. This would be the case if the LUN is being accessed via a single adapter path, as well as if the adapter or the LUN is not the bottleneck. Or consider if each server accesses multiple LUNs; if the LUNs are load balanced across adapters, there is the potential for performance improvement.

Throughput: Throughput measures how much data can be pumped through the disk I/O path. If you view the I/O path as a pipeline, throughput measures how big the pipeline is and how much pressure it can sustain. The bigger the pipeline is and the more pressure it can handle, the more data it can push through. For a given I/O path, throughput is in direct proportion to the size of the I/O requests. That is, the larger the I/O requests, the higher the megabytes per second (MBps). Larger I/Os give you better throughput because they incur less disk seek time penalty than smaller I/Os.

IOPS: I/O Operations Per Second (IOPS) is a measure of a device's or network's ability to send and receive pieces of data. The size of these pieces of data depends on the application (i.e., transactional, database, etc.) and generally ranges from 512 bytes to 8 kilobytes. IOPS have a known performance profile of raising CPU utilization from a combination of CPU interrupt and wait times. The specific number of IOPS possible in any server configuration will vary greatly depending upon the variables entered into the benchmark program, including the balance of read and write operations, the mix of random or sequential access patterns and the number of worker threads and queue depth, as well as the data block sizes.

Transfer Rate: Transfer rate is the amount of data that can be transferred on a specific technology (i.e., 2Gb, 4Gb or 8Gb Fibre Channel) within a specific time period. In storage-related tests, the transfer rate is expressed in megabytes or gigabytes per second (MB/s and GB/s, respectively). High sustainable transfer rates play a critical role in applications that "stream" data. These include backup and restore, continuous data protection, RAID, video streaming, file copy and data duplication applications.

CPU Efficiency (based on IOPS): This metric examines the ratio of IOPS divided by average CPU utilization. This ratio illustrates the efficiency of a given technology in terms of CPU utilization. Higher CPU efficiency numbers show that the given technology is friendlier to the host system's processors. Higher bandwidth or IOPS with lower CPU utilization is the desired result. This is important, as users are trying to maximize their investments and CPU utilization.

IOPS
The most common performance characteristics that are measured or defined are:
•	Total IOPS: Total number of I/O operations per second (when performing a mix of read and write tests)
•	Random Read IOPS: Average number of random read I/O operations per second
•	Random Write IOPS: Average number of random write I/O operations per second
•	Sequential Read IOPS: Average number of sequential read I/O operations per second
•	Sequential Write IOPS: Average number of sequential write I/O operations per second

Latency
SANs cannot tolerate delay. The performance of storage networks is extremely sensitive to data/frame loss. While LAN traffic is less sensitive, slowing down access to storage has a significant impact on server and application performance. In addition, such delays also negatively impact server-to-server traffic. For that reason, Fibre Channel has been the network protocol of choice for storage networking, providing high-performance connectivity between servers and their storage resources. Fibre Channel is an example of a loss-less network in the sense that a data transmission from the sender (initiator/server) is only allowed if the recipient (target/storage array) has sufficient buffer (memory) to receive the data. This ensures data is not "dropped" by the recipient.
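To tie the key metrics together, the short Python sketch below computes the CPU efficiency ratio defined above (IOPS divided by average CPU utilization) and the approximate throughput implied by an IOPS rate at a given I/O size. The adapter names and workload numbers are hypothetical.

```python
def cpu_efficiency(iops: float, avg_cpu_utilization_pct: float) -> float:
    """CPU efficiency (based on IOPS), as defined above: IOPS divided by
    average CPU utilization. Higher is friendlier to the host processors."""
    return iops / avg_cpu_utilization_pct

def throughput_mb_per_s(iops: float, io_size_kb: float) -> float:
    """Approximate throughput implied by an IOPS rate at a given I/O size."""
    return iops * io_size_kb / 1024

# Hypothetical comparison of two adapters under the same 8KB random-read workload.
for name, iops, cpu_pct in [("adapter A", 200_000, 40.0), ("adapter B", 200_000, 25.0)]:
    print(f"{name}: efficiency {cpu_efficiency(iops, cpu_pct):.0f} IOPS per %CPU, "
          f"~{throughput_mb_per_s(iops, 8):.0f} MB/s")
```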
Chapter 7: Security
Due to compliance or risk concerns, storage administrators must be aware of the accessibility and vulnerabilities that storage systems are exposed to via network interconnections. Protecting sensitive data residing in and flowing through storage networks should be part of risk management assessments. Defense-in-depth approaches to security include applying solutions that balance the risks and costs with the desire to apply best practices for securing storage systems. Security controls, whether they are preventive, detective, deterrent or corrective measures, can be categorized into physical, procedural, technical or legal/regulatory compliance controls.

There are several documents that promote good security practices and define frameworks to structure the analysis and design for managing information security controls. These include documents from ISO (27001/2) and NIST. SNIA publishes best practices for storage system security.

Technical solutions are available to implement controls for the confidentiality, integrity and availability of information. In addition, concerns about accountability and non-repudiation should be considered. Access controls and authorization controls can prevent accidents and restrict privileges. Authentication of users and devices can provide network access controls. Protecting management interfaces, including replacement of default passwords, assures protection from unauthorized changes. Audit and logging support provides for validation of security configurations and supports organizations' policies.

Security in Converged Networking Environments
Many IT organizations are acknowledging the benefits and advantages of converged networking environments, primarily the sharing of infrastructure and the reduction of costs. Network convergence allows unprecedented connectivity options to information via platforms that are capable of supporting block storage traffic such as iSCSI, FC and FCoE, as well as file service traffic for NAS (NFS/CIFS/SMB2) storage. As networks and storage increasingly share the same infrastructures, security aspects such as confidentiality, integrity and availability must be considered in risk assessments. Authentication, confidentiality, user ID and credential management, audit support and other solutions relevant to converged or virtualized traffic flows create new opportunities for efficiency when common security solutions are applied wherever possible. Many customers are finding that protocol-agnostic and storage-agnostic solutions are an economical way to meet security and compliance requirements.
Security Breaches
The inherent architecture of a Fibre Channel SAN affords it a greater degree of security. However, this is not to say a SAN is impervious to security breaches. Common risks include:

• Compromised management path - This can occur when the organization has:
  - Mal-intentioned administrators
  - A compromised management console
  - Unsecured management interfaces
  To avoid such situations, organizations typically implement management authorization and access control processes, as well as authentication measures. It is therefore critical to select components that support role-based policies and authentication features.
• Unauthorized data access - This typically occurs when a storage LUN becomes accessible beyond the authorized hosts. The implication of such an event is that people who should not have access to certain data will now be able to access it. LUN masking/mapping, typically done at the array level, is how such conditions are addressed.
• Impersonation and identity spoofing - This condition occurs when initiators fake their identity through worldwide name (WWN) spoofing, enabling a session to be hijacked. To protect against such occurrences in the SAN, organizations leverage DH-CHAP, a type of authentication, and IKE, which establishes shared security information between two network entities to support secure communication. Applying tighter controls to overall SAN configurations is also helpful, as it prevents administrative errors which could leave a SAN vulnerable to such attacks.
• Compromised communication - This can be one of the costliest breaches for an organization. Not only are there regulatory implications, in terms of fines, but also business implications, in terms of loss of intellectual property and loss of customer confidence. Therefore, great care must be taken to protect data from interception or eavesdropping. Loss of data integrity is another way communication can be compromised: data can be intercepted, modified and then sent on its way. Organizations should leverage data encryption to protect against both risks. Although there are various encryption methodologies, host-based encryption is the most effective, as it encrypts data at the source of its origin, protecting the data both in flight and at rest.
Even in the case of a lost or stolen hard disk drive, the data remains encrypted. To reduce data integrity incidents, SAN administrators are showing greater interest in products from vendors who support industry initiatives such as the Data Integrity Initiative (DII), which provides application-to-disk data integrity protection.

Methods of Protecting a SAN
The following are methods storage administrators leverage to augment security within SANs.

Zoning

Fabric Zoning
The zoning service within a Fibre Channel fabric was designed to provide security between devices sharing the same fabric. The primary goal was to prevent certain devices from accessing other devices within the fabric. With many different types of servers and storage devices on the network, the need for security is critical. For example, if a host were to gain access to a disk being used by another host, potentially with a different OS, the data on this disk could become corrupted. To avoid any compromise of critical data within the SAN, zoning allows the user to overlay a security map dictating which devices, namely hosts, can see which targets, thereby reducing the risk of data loss.

Zoning does, however, have its limitations. Zoning was designed to do nothing more than prevent devices from communicating with other unauthorized devices. It is a distributed service that is common throughout the fabric, so any change installed to a zoning configuration is disruptive to the entire connected fabric. Zoning also was not designed to address availability or scalability of a Fibre Channel infrastructure. Therefore, while zoning provides a necessary service within a fabric, the use of VSANs, described below, along with zoning, provides an optimal solution.

WWN Zoning
WWN zoning uses name servers in the switches to either allow or block access to particular WWNs in the fabric. A major advantage of WWN zoning is the ability to re-cable the fabric without having to redo the zone information. However, WWN zoning is susceptible to unauthorized access, as a zone can be bypassed if an attacker is able to spoof the WWN of an authorized adapter.

SAN Zoning
SAN zoning is a method of arranging Fibre Channel devices into logical groups overlaid on the physical configuration of the fabric. SAN zoning can be used to compartmentalize data for security purposes, and it enables each device in a SAN to be placed into multiple zones.
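Conceptually, WWN zoning is a membership check: two ports may talk only if they share a zone. The toy Python model below illustrates that check; the zone names and WWPNs are invented for illustration and are not drawn from any real fabric.

```python
# Toy model of WWN-based zoning: a zone set maps zone names to member WWPNs,
# and two ports may communicate only if they share at least one zone.
# Zone names and WWPNs below are invented for illustration.

zone_set = {
    "oracle_prod": {"10:00:00:00:c9:aa:bb:01",   # server HBA port
                    "50:06:01:60:3b:de:ad:01"},  # array target port
    "backup":      {"10:00:00:00:c9:aa:bb:02",
                    "50:06:01:60:3b:de:ad:02"},
}

def can_communicate(wwpn_a: str, wwpn_b: str) -> bool:
    """Return True if both WWPNs are members of at least one common zone."""
    return any(wwpn_a in members and wwpn_b in members
               for members in zone_set.values())

print(can_communicate("10:00:00:00:c9:aa:bb:01", "50:06:01:60:3b:de:ad:01"))  # True
print(can_communicate("10:00:00:00:c9:aa:bb:01", "50:06:01:60:3b:de:ad:02"))  # False
```

The WWN spoofing risk mentioned above is visible here as well: the check trusts the WWPN that the initiator presents.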
Hard Zoning
Hard zoning is enforced in hardware; the zone is physically isolated, blocking access to the zone from any device outside of it.

Soft Zoning
Soft zoning occurs at the software level; thus, it is more flexible than hard zoning, making rezoning easier. Soft zoning uses filtering implemented in Fibre Channel switches to prevent ports from being seen from outside of their assigned zones, and it uses WWNs to assign security permissions. The security vulnerability in soft zoning is that the ports are still accessible if a user in another zone correctly guesses the Fibre Channel address.

Port Zoning
Port zoning uses physical ports to define security zones, enabling IT administrators to control data access through port connections. With port zoning, zone information must be updated every time a device is moved to a different switch port. In addition, port zoning does not allow zones to overlap. Port zoning is normally implemented using hard zoning, but could also be implemented using soft zoning.

Virtual SAN
VSAN is a Cisco technology designed to enhance scalability and availability within Fibre Channel networks. It augments the security services available through fabric zoning. VSANs enable IT administrators to take a physical SAN and establish multiple VSANs on top of it, creating completely isolated fabric topologies, each with its own set of fabric services. Since individual VSANs possess their own zoning services, each is independent of the others and does not affect the zoning services of other VSANs. Some benefits of VSANs include:
a. Increased utilization of existing assets and reduced need to build additional physically isolated SANs
b. Improved SAN availability by not only providing hardware-based isolation, but also the ability to fully replicate a set of Fibre Channel services for each VSAN
c. Greater flexibility through selective addition or deletion of VSANs from a trunk link, controlling the propagation of VSANs through the fabric

As a side note, VLANs allow the extension of a LAN over the WAN interface, overcoming the physical limitations of a regular LAN. Just as with VSANs, VLANs enable IT administrators to take a physical LAN and overlay multiple VLANs on top of it. VLAN technology also allows IT administrators to deploy several VLANs over a single switch in such a manner that all the LANs operate as independent networks.
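To make the VSAN isolation property concrete, the following toy Python model gives each VSAN its own zoning table scoped to its own member ports, so a zoning change in one VSAN cannot touch another. The VSAN IDs, zone names and port names are invented for illustration.

```python
# Toy model of VSANs layered on one physical fabric: each VSAN keeps its own
# independent zoning service, so changes in one VSAN cannot affect another.
# VSAN IDs, zone names and port names are invented for illustration.

from dataclasses import dataclass, field

@dataclass
class Vsan:
    vsan_id: int
    member_ports: set = field(default_factory=set)
    zones: dict = field(default_factory=dict)    # zone name -> set of ports

    def add_zone(self, name: str, ports: set) -> None:
        # Zoning changes are scoped to this VSAN's member ports only.
        self.zones[name] = ports & self.member_ports

physical_fabric = {
    10: Vsan(10, member_ports={"host-a", "array-1"}),
    20: Vsan(20, member_ports={"host-b", "array-2"}),
}

physical_fabric[10].add_zone("prod", {"host-a", "array-1"})
# VSAN 20 is untouched by the change above:
print(physical_fabric[20].zones)   # {}
```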
LUN Masking
LUN masking is an authorization process that makes a LUN available to some hosts and unavailable to others. LUN masking is implemented primarily at the HBA level, where it is vulnerable to any attack that compromises the HBA. Some storage controllers also support LUN masking. An additional benefit of LUN masking is that it prevents Windows operating systems from writing volume labels to all available/visible LUNs within the network, which could render the LUNs unusable by other operating systems or result in data loss.

Security Protocols

Fibre Channel Authentication Protocol
Fibre Channel Authentication Protocol (FCAP) is an optional authentication mechanism used between any two devices or entities on a Fibre Channel network using certificates or optional keys.

Fibre Channel Password Authentication Protocol
Fibre Channel Password Authentication Protocol (FCPAP) is an optional password-based authentication and key exchange protocol that is utilized in Fibre Channel networks. FCPAP is used to mutually authenticate Fibre Channel ports to each other. This includes E_Ports, N_Ports and domain controllers.

Switch Link Authentication Protocol
Switch Link Authentication Protocol (SLAP) was designed to prevent the unauthorized addition of switches into a Fibre Channel network. It is an authentication method for Fibre Channel switches that uses digital certificates to authenticate switch ports.

Fibre Channel - Security Protocol
Fibre Channel - Security Protocol (FC-SP) is a security protocol for Fibre Channel Protocol (FCP) and Fibre Connection (FICON). FC-SP is a project of Technical Committee T11 of the InterNational Committee for Information Technology Standards (INCITS). FC-SP is a security framework that includes protocols to enhance Fibre Channel security in several areas, including authentication of Fibre Channel devices, cryptographically secure key exchange and cryptographically secure communication between Fibre Channel devices. FC-SP is focused on protecting data in transit throughout the Fibre Channel network; it does not address the security of data stored on the Fibre Channel network.

Diffie Hellman - Challenge Handshake Authentication Protocol
FC-SP defines Diffie Hellman - Challenge Handshake Authentication Protocol (DH-CHAP) as the baseline authentication scheme.
DH-CHAP prevents World Wide Name (WWN) spoofing (i.e., impersonation and masquerading attacks) and is designed to withstand replay, offline dictionary password lookup and challenge reflection attacks. (See Figure 8 for an illustration of the threats prevented by the implementation of DH-CHAP authentication at the HBA/CNA.) DH-CHAP supports hash-based authentication algorithms such as MD5 and SHA-1.

Figure 8: Host threats prevented by implementation of DH-CHAP authentication by the HBA or UCNA.

Encapsulating Security Payload over Fibre Channel
Encapsulating Security Payload (ESP) is an Internet standard for the authentication and encryption of IP packets. ESP is widely deployed in IP networks and has been adapted for use in Fibre Channel networks. The Internet Engineering Task Force (IETF) iSCSI proposal specifies ESP link authentication and optional encryption. ESP over Fibre Channel is focused on protecting data in transit throughout the Fibre Channel network; it does not address the security of data stored on the Fibre Channel network.

Securing iSCSI, iFCP and FCIP over IP Networks
The IETF IP Storage (IPS) Working Group is responsible for defining standards for the encapsulation and transport of Fibre Channel and SCSI protocols over IP networks. The IPS Working Group's charter includes responsibility for security, including authentication, keyed cryptographic data integrity and confidentiality, sufficient to defend against threats up to and including those that can be expected on a public network. Implementation of basic security functionality is required, although its use may be optional. The IPS Working Group defines the use of the existing IPsec and Internet Key Exchange (IKE) protocols to secure block storage protocols over IP.
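Both DH-CHAP in Fibre Channel and the CHAP authentication commonly used with iSCSI rest on the same challenge/response idea, sketched below in heavily simplified Python. Real DH-CHAP adds a Diffie-Hellman exchange and runs over Fibre Channel frames, none of which is modeled here; the shared secret is a placeholder.

```python
# Heavily simplified challenge/response illustration in the spirit of CHAP.
# Real DH-CHAP layers a Diffie-Hellman exchange on top and runs over Fibre
# Channel; none of that is modeled here. The secret below is a placeholder.

import hashlib
import os

shared_secret = b"example-secret-known-to-both-ports"   # provisioned out of band

def make_challenge() -> bytes:
    """Authenticator (e.g., the switch) issues a random challenge."""
    return os.urandom(16)

def compute_response(challenge: bytes, secret: bytes) -> bytes:
    """Claimant (e.g., the HBA port) proves knowledge of the secret
    without ever sending the secret itself."""
    return hashlib.sha1(challenge + secret).digest()

challenge = make_challenge()
response = compute_response(challenge, shared_secret)

# The authenticator recomputes the expected response and compares.
expected = hashlib.sha1(challenge + shared_secret).digest()
print("authenticated:", response == expected)   # True

# A captured response is useless against a new challenge (replay resistance).
print("replay works :", response == compute_response(make_challenge(), shared_secret))  # False
```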
Effect of a Converged Network
Given the unified nature of a converged environment, precautions have to be put in place to address access control, preventing the network administrator from undoing something the server administrator did. Currently, SAN, server and network administration are independent of each other; in converged environments, however, management of these areas will overlap.

Native FCoE Storage
Storage arrays supporting native FCoE interfaces will enable end-to-end network convergence and are expected to be the next logical progression in the converged network environment. Aside from the change in physical-layer connectivity that encapsulates Fibre Channel frames over Ethernet, the functionality provided by native FCoE arrays remains equivalent to that of a Fibre Channel array. Native FCoE arrays will leverage the proven performance of the Fibre Channel stack and retain the existing processes required for LUN masking and storage backup (see Figure 9).

Zoning
Zoning practices used in Fibre Channel networking typically remain unaffected in a converged network environment. Processes are transparently carried over to the FCoE-capable lossless Ethernet switch.
Figure 9: Native FCoE storage connected to an FCoE-enabled network

LUN Masking
LUN masking practices used by storage administrators with Fibre Channel storage remain unaffected in a converged network environment. Processes are transparently carried over to native FCoE storage.

Compliance
Internal business initiatives and external regulations are constantly adding to compliance challenges and are testing the capabilities of status quo networks. Although IT managers could continue to deploy multiple networks and ensure compliance, the process becomes tedious with the changing dynamics of SAN expansion driven by virtual servers and blade servers. A simplified approach to networking provides competitive advantages in the face of new business initiatives and helps meet regulatory compliance obligations.
Chapter 8: Management: Configuration and Diagnostics
Network administrators are concerned with the movement of data or, to be more specific, the reliable delivery of user data from one point to another within the network. The network administrator is therefore interested in the factors that affect that delivery. Examples of such factors include bandwidth utilization, provisioning of redundant links to ensure secondary data paths, support for multiple protocols and so forth. Storage administrators, on the other hand, are less concerned about data transport than about the organization and placement of data once it arrives at its destination. LUN mapping, RAID levels, file integrity, data backup, storage utilization and so forth comprise the bulk of a storage administrator's daily management routines.

These different views of management converge in a SAN, since the proper operation of a SAN requires both management of data transport and management of data placement. By introducing networking between servers and storage, a SAN forces traditional storage management to broaden its scope to include network administration and encourages traditional network management to extend its reach to data placement and organization.

Some of the most frequent questions SAN administrators need to answer are:
• How much storage do I have available for my applications?
• Which applications, users and databases are the primary consumers of storage?
• When do I need to buy more storage?
• How is storage being used?

A SAN's storage resources can be managed centrally, allowing administrators to organize, provision and allocate that storage to users or applications operating on the network across an organization. Centralization also allows administrators to monitor performance, troubleshoot problems and manage the demands of storage growth.

SAN Provisioning
To centralize storage on a SAN while restricting access to authorized users or applications, the entire storage environment should not be accessible to every user. Administrators must carve up the storage space into segments
that are only accessible to specific users. This management process is known as provisioning. For example, some amount of data center storage may be provisioned for a purchasing-related application and only be accessible by the purchasing department, while other space may be apportioned for personnel records accessible only to the human resources department.

The major challenge with provisioning relates to storage utilization. Once space is allocated, it cannot easily be changed. Thus, administrators typically provision ample space for an application's future use. Unfortunately, storage capacity that is provisioned for one application cannot be used by another, so space that is allocated but unused is essentially wasted until called for by the application. This need to allocate for future expansion often leads to significant storage waste on the storage area network. One way to alleviate this problem is thin provisioning, which essentially allows an administrator to "tell" an application that some amount of storage is available while actually committing far less drive space, expanding that storage in later increments as the application's needs increase.

Provisioning is accomplished through the use of software tools, which typically accompany major storage products. The issue for administrators is to seek a provisioning tool that offers heterogeneous support for the storage platforms currently in their data center.
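To make the thin provisioning idea concrete, the following is a minimal Python sketch of a thin-provisioned volume that advertises a large logical size while committing physical space only as data is written. The volume name, sizes and 1GB allocation granularity are invented for illustration.

```python
# Minimal illustration of thin provisioning: the volume advertises a large
# logical capacity but consumes physical space only as blocks are written.
# Sizes and the 1GB allocation unit are illustrative.

class ThinVolume:
    def __init__(self, logical_gb: int):
        self.logical_gb = logical_gb        # what the application is "told"
        self.written_blocks = set()         # physical allocations, 1GB granules

    def write(self, block_index: int) -> None:
        if block_index >= self.logical_gb:
            raise ValueError("write beyond advertised capacity")
        self.written_blocks.add(block_index)   # allocate on first write only

    @property
    def committed_gb(self) -> int:
        return len(self.written_blocks)

vol = ThinVolume(logical_gb=1000)   # the application sees 1TB
for block in range(50):             # but has only written 50GB so far
    vol.write(block)
print(vol.logical_gb, vol.committed_gb)   # 1000 advertised, 50 actually consumed
```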
Creating a SAN involves more than simply cabling servers and storage systems together. Resources must be configured, allocated, tested and maintained. The introduction of new devices to the SAN can change the requirements; therefore, management is a key consideration, and it is important to select solutions that can minimize the time and effort needed to keep a SAN running. Manageability has a significant impact on data centers. Streamlining deployment, installation and configuration processes to improve efficiency is critical for IT organizations that are challenged with servicing increasing business demands with shrinking resources.

Another key aspect of management is the ability to monitor, diagnose and obtain information on the health of the SAN. It is important to understand that storage traffic does not tolerate data loss; therefore, it requires advanced management granularity. To that end, a more comprehensive set of tools has been developed to provide administrative capabilities for the switch fabric, initiators, targets (storage arrays) and LUNs. This enables the storage network to be kept at an optimum level of performance. In addition, like Ethernet networks, Fibre Channel-based SANs have a robust set of error checking and diagnostic capabilities designed to ensure the highest level of network performance and connectivity. There is also a broad range of tools that enable storage administrators to address any issues that may arise within their networks. These include diagnostic tools that help troubleshoot:
• Port functionality (initiator and target)
  - Adapter port level
  - Storage port level
  - LUN and spindle
  - Switch port
• I/O diagnostics
  - Performance from an IOPS perspective
  - Performance from a latency perspective
  - Error detection

Adapter Management
Adapter management can be broken down in the following manner:

Installation
This entails the physical installation of the adapter within the server, as well as the adapter's software components. It is important to select adapters which provide the greatest installation flexibility, as such capabilities can significantly help to streamline deployments, improve server availability and reduce costs. One example of such a capability is the ability to pre-configure a server with the adapter's software without the adapter being present in the server, which helps to pre-stage server resources for rapid deployment. Installation automation is another feature which should be taken into consideration. Automation can speed up and streamline adapter installation by deploying software components in a "batch" fashion.

Configuration
Once the HBA has been installed, it must be configured. Using the HBA's management application, SAN administrators set the "driver parameter" settings to customize the HBA's capabilities to match the needs of their environment. There is a host of settings which administrators can adjust to activate features and change the performance characteristics of the adapter. Examples include queue depth settings for optimal operation with existing storage resources, security settings, virtualization settings and timeouts. Boot from SAN settings can also be set during the configuration process. As server vendors shift to diskless server designs, a boot device must be assigned to the server from the SAN. Such servers can also be assigned a secondary boot device, in case the primary boot device becomes inaccessible. Certain adapter vendors also provide configuration automation capabilities, enabling SAN administrators to streamline management. An example of configuration automation is the ability to centrally propagate adapter firmware and driver updates across the entire network, helping to reduce server reboots, maximize network uptime and increase overall management flexibility.
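As an illustration of the kind of driver-parameter tuning described above, the short sketch below merges administrator overrides onto defaults and range-checks them before they would be applied. The parameter names, defaults and limits are invented for illustration and are not tied to any specific vendor's management application.

```python
# Hypothetical adapter driver parameters; the names, defaults and limits are
# invented for illustration and are not tied to any specific vendor tool.

DEFAULTS = {"queue_depth": 32, "login_timeout_s": 30, "npiv_enabled": False}
LIMITS = {"queue_depth": (1, 254), "login_timeout_s": (1, 255)}

def build_config(**overrides) -> dict:
    """Merge administrator overrides onto the defaults and range-check them."""
    config = dict(DEFAULTS)
    for name, value in overrides.items():
        if name not in config:
            raise KeyError(f"unknown parameter: {name}")
        if not isinstance(value, bool) and name in LIMITS:
            lo, hi = LIMITS[name]
            if not lo <= value <= hi:
                raise ValueError(f"{name}={value} is outside {lo}..{hi}")
        config[name] = value
    return config

# Example: deepen the queue for a high-IOPS array and enable NPIV for vPorts.
print(build_config(queue_depth=128, npiv_enabled=True))
```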
Management
Adapter management should be a critical consideration in selecting an adapter for the server. IT administrators in general are tasked to do more with the same level of resources. To that end, they need management tools which help them improve the administration of adapters within the data center. Convergence introduces a new layer of management requirements. Fibre Channel adapter vendors are now also offering FCoE, NIC and iSCSI solutions; however, some vendors have yet to integrate central management of their server-to-network connectivity solutions. That is why it is important to select adapter vendors which provide a centralized, cross-platform management solution for unified administration of adapters, regardless of the protocol (Fibre Channel, FCoE, iSCSI or NIC). Such solutions can centrally display all adapters within a SAN, enabling effective and efficient management. By selecting the right adapter, SAN administrators can simplify administrative tasks and improve data center responsiveness to support the demands of dynamic business environments.

Diagnostics
Given the critical nature of a SAN, a robust set of diagnostics is a must for the various pieces that comprise a SAN, including the adapter. While there is a common set of diagnostic tools offered by adapter vendors, some vendors have developed advanced diagnostic and I/O management applications designed to truly optimize network availability, asset utilization and responsiveness. Such tools can be used to identify and address intermittent SAN issues, oversubscription conditions and end-to-end I/O performance degradations.

Key Terminology
The following section defines some common terms and management functions used by storage administrators.

HBA and CNA Configuration
There is more involved in configuring connectivity for storage networks (HBAs and CNAs) than for IP networks (NICs). For example, when configuring storage adapters, storage administrators need to:
• Know how to plan and provision storage resources
• Allocate storage resources based on user requirements, which requires an understanding of those requirements (capacity needed, performance required, availability, etc.)
• Tune the adapter and storage fabric to match the optimum I/O transactional capabilities of the storage arrays
Port Configuration
Initially, you have to make sure the port's worldwide port name (WWPN) is part of a storage network zone. This ensures the server can access the storage on the SAN fabric.

Boot from SAN
Similar to PXE boot in IP networks, Fibre Channel networks also support booting a server from a non-local hard disk. This is called "boot from SAN." While Ethernet networks require a host of intermediary services (DHCP and PXE, along with an FTP or HTTP server), Fibre Channel does not have such a requirement. In Fibre Channel networks, the server has direct access to the highly available storage devices within the SAN, which it can use for booting. Enabling boot from SAN requires configuring the boot device, such as the HBA, with the boot image and boot disk information, and then installing the OS.

vPorts
Similar to creating virtual end-points in Ethernet environments, storage administrators can create Fibre Channel vPorts. Using N_Port ID Virtualization (NPIV), multiple vPorts can be assigned to one physical port. NPIV allows each vPort to have its own WWPN, a unique identifier. Storage administrators use vPorts to apply SAN best practices, such as zoning, in virtual server environments.

SMI-S
The Storage Management Initiative Specification (SMI-S) defines Distributed Management Task Force (DMTF)-based management profiles for storage systems. A profile describes the behavior characteristics of an autonomous, self-contained management domain. SMI-S includes profiles for adapters, arrays, switches, storage virtualizers, volume management and many other domains. A "provider" is an implementation of a specific profile. At a very basic level, SMI-S entities are divided into two categories:
• Clients are management software applications that can reside virtually anywhere within a network, provided they have a physical link (either within the data path or outside the data path) to providers.
• Servers are the devices under management within the storage fabric.
Clients can be host-based management applications (storage resource management, or SRM), enterprise management applications or SAN appliance-based management applications (e.g., virtualization engines). Servers can be disk arrays, host bus adapters, switches, tape drives, etc. By leveraging SMI-S, vendors offer open, standards-based interfaces and solutions (hardware or software), enabling easier integration, interoperability and management.
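For a sense of what an SMI-S/CIM client interaction looks like, the sketch below uses the open-source pywbem library to enumerate storage volumes from a WBEM/SMI-S provider. The provider URL, credentials and namespace are placeholders, and the namespace and classes actually exposed vary by vendor, so treat this as an outline rather than a working recipe for any particular array.

```python
# Sketch of querying an SMI-S (CIM/WBEM) provider with the pywbem library.
# The URL, credentials and namespace below are placeholders; the namespace
# and classes actually exposed depend on the vendor's provider.

import pywbem

conn = pywbem.WBEMConnection(
    "https://smi-provider.example.com:5989",   # placeholder provider URL
    ("monitor", "secret"),                     # placeholder credentials
    default_namespace="root/cimv2",            # vendor-specific in practice
)

# CIM_StorageVolume is the DMTF class that SMI-S array profiles build on;
# each returned CIMInstance describes one volume (LUN).
for volume in conn.EnumerateInstances("CIM_StorageVolume"):
    props = volume.properties
    name = props["ElementName"].value if "ElementName" in props else volume.path
    size_bytes = None
    if "BlockSize" in props and "NumberOfBlocks" in props:
        size_bytes = props["BlockSize"].value * props["NumberOfBlocks"].value
    print(name, size_bytes)
```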
CIM
The Common Information Model (CIM) is an open DMTF standard that defines how managed elements in an IT environment are represented as a common set of objects and the relationships between them. This is intended to allow consistent management of these managed elements, independent of their manufacturer or SMI-S provider. It is also the basis for the SMI-S standard for storage management.

Effect of a Converged Network
Currently, storage administrators have a distinct set of diagnostic tools and processes for fault isolation and diagnosis of issues within Fibre Channel networks. Given that there will be a common infrastructure in converged environments, fault isolation procedures must be adjusted or changed to determine the best method to effectively identify and resolve issues within the converged network; for example, determining whether a Fibre Channel end device (storage) can be accessed, what its response time is, and so on. Other impacts include the following:
• Administrators need to configure 10GbE DCB ports to carry LAN and storage traffic, as well as allocate bandwidth.
• When running Fibre Channel or iSCSI over Ethernet, both retain direct booting capabilities.
• Because 10GbE DCB will be used for multiple traffic types, any physical disruption will adversely affect storage, LAN and any other forms of data traffic.

FCoE Initialization Protocol (FIP)
The FCoE Initialization Protocol (FIP) discovers all Fibre Channel devices within an Ethernet network. It is the FCoE "control" protocol responsible for establishing and maintaining Fibre Channel virtual links between FCoE devices.

Port Configuration
The following describes the new port configuration processes.

FCoE Port Configuration Process
FCoE port configuration mirrors that of Fibre Channel port configuration. The major difference, however, is that before port configuration can take place, there must be a converged Ethernet connection to an FCF through an FCoE switch. The FCF establishes a connection between the FCoE adapter and the FCoE switch. Once this is operational, the adapter discovers the presented SAN fabric and all targets become visible through the FCoE switch.
iSCSI Port Configuration Process
Using the iSCSI adapter's management application, the adapter must be given an iSCSI Qualified Name (IQN). The IQN of the iSCSI target device and the IP address of the target portal must also be available. With the iSCSI adapter's management application, a connection to the target can then be initiated via the target's IP address.
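On a Linux host using the open-iscsi command-line tools rather than a vendor management application, the same discovery and login steps look roughly like the sketch below (the initiator's own IQN is normally set in /etc/iscsi/initiatorname.iscsi). The target IQN and portal address are placeholders.

```python
# Rough Linux equivalent of the iSCSI steps above, driving the open-iscsi
# command-line tools via subprocess. The target IQN and the portal address
# are placeholders.

import subprocess

TARGET_PORTAL = "192.0.2.10"                               # placeholder portal IP
TARGET_IQN = "iqn.2010-01.com.example:storage.array1"      # placeholder target IQN

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1. Discover the targets advertised by the portal (SendTargets discovery).
run(["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", TARGET_PORTAL])

# 2. Log in to the discovered target, creating the iSCSI session.
run(["iscsiadm", "-m", "node", "-T", TARGET_IQN, "-p", TARGET_PORTAL, "--login"])
```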
Chapter 9: Emulex Solutions

About Emulex
Emulex® creates enterprise-class products that intelligently connect storage, servers and networks, and is the leader in converged networking solutions for the data center. Expanding on its traditional Fibre Channel solutions, Emulex's Connectivity Continuum architecture now provides intelligent networking services that transition today's infrastructure into tomorrow's unified network ecosystem. Through strategic collaboration and integrated partner solutions, Emulex provides its customers with industry-leading business value, operational flexibility and strategic advantage.

Emulex Server-to-Network Connectivity Solutions
Emulex designs and offers a broad range of server-to-network connectivity solutions, qualified for use with offerings from major server and storage OEMs. The Emulex family of LightPulse™ Fibre Channel HBAs and OneConnect™ UCNAs provides IT administrators the flexibility, performance and reliability they need to keep pace with demanding and dynamic business environments.

Emulex OneConnect UCNA
The Emulex OneConnect UCNA is a single-chip, high-performance 10GbE adapter with support for TCP/IP, FCoE and iSCSI, enabling one adapter to support a broad range of network protocols. OneConnect is designed to address the key challenges of the evolving data center and improve overall operational efficiency. The OneConnect UCNA is a flexible server connectivity platform that enables IT administrators to consolidate multiple 1GbE links onto a single 10GbE link. With support for TCP/IP, FCoE, iSCSI and RDMA over Converged Ethernet (RoCE) on a single platform, IT administrators can also meet the connectivity requirements of all networking, storage and clustering applications. Such flexibility simplifies server hardware configurations and significantly reduces the number of standard server configurations deployed in the data center. For greater performance at the adapter and server level, OneConnect leverages iSCSI and FCoE offload technology. This not only improves adapter performance, but also leaves more of the server's CPU cycles available for application workload processing. The end result is more effective utilization of existing IT assets, which helps to reduce capital
expenditures. In fact, Emulex's OneConnect UCNA design is so innovative that Network Computing recognized it not only as the "New Product of the Year" but also as the "Network Infrastructure Product of the Year". But the true measure of OneConnect's success has been its acceptance and deployment in data centers large and small.

Emulex LightPulse Fibre Channel HBAs
Emulex LightPulse HBAs leverage eight generations of advanced, field-proven technologies to deliver a distinctive set of benefits that are relied upon by the world's largest enterprises. From the unique firmware-upgradeable architecture to the common driver model, Emulex is considered to provide the most reliable and scalable Fibre Channel HBAs, and has received various industry accolades.

Emulex LightPulse 8Gb/s Fibre Channel HBAs provide the bandwidth required to support the increase in data traffic brought about by organizations that are:
  - Consolidating server resources through deployment of virtualization and blade server technologies
  - Leveraging higher performance next-generation server platforms
  - Deploying or enhancing storage networking infrastructure to address transaction-intensive and data streaming applications
  - Increasing data center power efficiency
Emulex Fibre Channel HBAs are designed with the enterprise customer in mind. Working in close collaboration with IT organizations and system-level OEMs, Emulex integrates features that streamline the deployment and simplify the management of Fibre Channel HBAs within the data center.

Interoperability
Emulex server connectivity solutions are based on industry standards. Emulex works closely with server, switch, storage and software OEMs to ensure the highest level of interoperability within heterogeneous data center environments. This is just one of the reasons why Emulex HBAs and UCNAs have been broadly adopted and deployed by IT organizations large and small.

Broad Operating System Support with Investment Protection
Emulex provides support for the major enterprise-class operating systems. Leveraging the exclusive "common driver" model, Emulex ensures Fibre Channel driver interoperability between generations of LightPulse HBAs and OneConnect UCNAs. This approach helps to preserve IT investment, as well as simplify redeployment. Emulex's Service Level Interface (SLI™) architecture was developed to allow deployment of new firmware releases on one server or on multiple servers throughout the network without rebooting. Firmware independence and the common driver model also mean that Emulex adapters can easily be redeployed in servers running different operating systems.

OneCommand™ Manager – Centralized, Multi-protocol Adapter Management
Emulex server connectivity solutions are designed not only for performance and scalability, but also for manageability. Emulex consolidated the management of its HBAs and UCNAs under a single management application, OneCommand Manager. With OneCommand Manager, IT administrators can remotely manage Emulex Fibre Channel, iSCSI, FCoE and NIC resources from a centralized location. Furthermore, powerful diagnostic and automation functions within this application help streamline administration functions, thus improving management efficiency. Regardless of the protocol, OneCommand Manager simplifies the administration, maintenance and monitoring of server connectivity across the entire data center.

Emulex – The Solution of Choice
With over 25 years of storage networking experience, Emulex server connectivity solutions deliver the performance, flexibility, scalability and reliability organizations need to address the demands of today's dynamic business environment. This experience, combined with close development partnerships with the
industry's leading hardware and software OEMs, has made Emulex's family of LightPulse Fibre Channel HBAs and OneConnect UCNAs the solution of choice for the enterprise data center. Emulex HBA and UCNA solutions have been qualified and are used in a broad range of standard and blade server platforms. Regardless of whether you are using a pure Fibre Channel network or transitioning to a converged network environment using 10GbE, Emulex has the server-to-network connectivity to address your challenging needs. For more information on Emulex solutions, please visit Emulex.com.
Chapter 10: Conclusion
Converged networking is an emerging technology that will change the way data center managers deploy and operate equipment, processes and staff. Converged networking results in an overlap of network and storage administrators' responsibilities. This guide has explained networking and storage basics to help each administrator better understand the changes resulting from converged networking and how it will impact their role in the data center. Figure 10 provides an example of a converged network environment.

Figure 10: Converged network deployment

Look to Emulex to provide not only adapters to serve a converged network environment, but also to help educate the industry as it evolves. For more information on converged networking, download the Emulex Convergenomics Guide from Emulex.com.