This paper reviews the trends and technologies in Unified Computing, describes the Data Center Ethernet technologies for implementing Fibre Channel over Ethernet, and describes Cisco's Unified Computing System (UCS).
Unified Computing in Servers
June 8, 2010
UCSC Extension, Network Storage Essentials, 21940
Unified Computing (UC) is an industry term for a common network fabric that carries both data
(TCP/IP) and storage (Fibre Channel or iSCSI) traffic. Servers that implement Unified
Computing use a converged network adapter (CNA), which can manage the transport of multiple
network protocols including TCP/IP, Fibre Channel over Ethernet (FCoE) and iSCSI. In addition, practical
implementation of UC includes 10Gb Ethernet (10GbE), support for network virtualization and
implementation of Data Center Bridging (DCB). Cisco's implementation of Unified Computing is
described in the Unified Computing System (UCS) section below.
Converged Network Adapter
A converged network adapter (CNA) is a network interface card (NIC) which supports IP data as well as
storage network protocols such as Fibre Channel over Ethernet (FCoE) and iSCSI. CNAs replace the Fibre
Channel host bus adapter (HBA), thereby reducing the amount of network cabling, the number of add-in
adapter cards and the power those cards require. Since Fibre Channel cards run at 4 Gbps or 8 Gbps,
CNAs typically use 10GbE so that a single link can carry both storage and data traffic. Like an HBA, a CNA
implements a Fibre Channel initiator, which can be realized either in the CNA hardware controller or as a
software-based initiator in the driver. While hardware-based initiators have traditionally been used to
minimize host-server CPU loading [CHE], faster CPUs have made efficient software initiators practical,
and these have been implemented by Intel and other CNA providers [INT].
The diagram above from [INT] describes the architecture of Intel's CNA using an "open" FCoE approach.
Intel uses a software-based FC initiator rather than the hardware-based approach used in FC
HBAs. The diagram shows that the adapter implements both FCoE and iSCSI off-load in the
adapter hardware, for functions such as checksum calculation. Above the adapter, the functions
implemented by the host server are shown. Note that most of the FCoE logic, including the protocols,
is implemented in software by the host system.
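As a concrete illustration of the kind of per-frame work that checksum off-load removes from the host CPU, the sketch below computes the CRC-32 used for the Ethernet frame check sequence in plain Python. This is only an illustration (the function name is mine, and a real CNA computes this in hardware at line rate); the result is checked against Python's zlib, which implements the same CRC:

```python
import zlib

def crc32_ethernet(data: bytes) -> int:
    """Bitwise CRC-32 (reflected polynomial 0xEDB88320), the checksum used
    for the Ethernet frame check sequence. Off-load hardware performs this
    per frame so the host CPU does not have to."""
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = (crc >> 1) ^ (0xEDB88320 if crc & 1 else 0)
    return crc ^ 0xFFFFFFFF

payload = b"example frame payload"
# zlib.crc32 implements the same CRC-32, so the two results agree
assert crc32_ethernet(payload) == zlib.crc32(payload)
```

Doing this per byte in software for every frame at 10GbE rates is exactly the load that off-load engines are designed to absorb.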
In contrast, the diagrams above from [CHE] show Chelsio's implementation of a CNA with
FCoE implemented in hardware. The top diagram shows that the "T4" hardware in the
CNA handles most of the protocol functions, with a lower-level driver allowing communication with the
iSCSI, FCoE and RDMA protocols. In the lower diagram, Chelsio also implements a TCP/IP Offload Engine
(TOE) to accelerate iSCSI protocol traffic.
Data Center Bridging
Data Center Bridging (DCB), also called Converged Enhanced Ethernet (CEE) or Data Center Ethernet
(DCE), is a set of extensions to the Ethernet specification (IEEE 802.1) which optimize the Ethernet
protocol for large-scale data centers and unified computing. These enhancements include:
802.1aq -- Shortest Path Bridging
802.1Qau -- Congestion Notification
802.1Qaz -- Enhanced Transmission Selection
802.1Qbb -- Priority-based Flow Control
802.1Qbb (Priority-based Flow Control) builds on the eight priority levels specified in the existing
802.1p standard, adding the ability to pause lower-priority traffic while allowing higher-priority
traffic to pass [JAC]. 802.1Qaz (Enhanced Transmission Selection) allows 802.1p priorities to be
grouped into classes requiring the same class of service, with a specific share of network bandwidth
allocated to each group. A priority group can also be defined which overrides the priority of all other
groups and may consume all of the network bandwidth if required. 802.1Qau (Congestion Notification)
provides a rate-matching mechanism: an end-point whose receive buffers are nearing overflow notifies
the sending nodes of the congestion and requests that they throttle their transmission; when the
congestion clears, the receiving node notifies the sending nodes to resume their full transmission
rate. 802.1aq (Shortest Path Bridging) and the related Transparent Interconnection of Lots of Links
(TRILL) replace the Spanning Tree Protocol with link-state routing, which quickly determines the
optimal route through the network, reacts quickly to network changes and spreads traffic among
multiple routes.
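To make the 802.1Qbb behavior concrete, the toy model below pauses a single congested priority while letting traffic in the other classes continue. The class and method names are my own invention for illustration, not from the standard or any vendor API, and real PFC uses per-priority PAUSE frames and buffer thresholds in switch hardware:

```python
class PfcPort:
    """Toy model of a switch port with 802.1Qbb-style per-priority pause."""

    def __init__(self, threshold: int = 4):
        self.queues = {p: [] for p in range(8)}  # one queue per 802.1p priority
        self.paused = set()                      # priorities currently paused
        self.threshold = threshold               # queue depth that triggers pause

    def receive(self, priority: int, frame: str) -> bool:
        if priority in self.paused:
            return False                         # sender must hold this priority only
        self.queues[priority].append(frame)
        if len(self.queues[priority]) >= self.threshold:
            self.paused.add(priority)            # "send PAUSE" for this one priority
        return True

    def drain(self, priority: int) -> None:
        if self.queues[priority]:
            self.queues[priority].pop(0)
        if len(self.queues[priority]) < self.threshold:
            self.paused.discard(priority)        # resume the paused priority

port = PfcPort(threshold=2)
port.receive(3, "fcoe-1")
port.receive(3, "fcoe-2")            # priority 3 reaches threshold and is paused
assert 3 in port.paused
assert not port.receive(3, "fcoe-3") # lossless: this frame is held, not dropped
assert port.receive(0, "ip-1")       # other priorities continue to flow
```

The key contrast with classic 802.3x PAUSE is visible in the last two lines: only the congested priority stops, so FCoE can be made lossless without stalling ordinary IP traffic on the same link.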
In order to support the lossless nature of Fibre Channel networking over FCoE, an end-to-end DCB-
enabled network is required. DCB can be deployed in a unified computing environment in three
different ways [MAR2].
In the first method, a server with a converged network adapter (CNA) is connected to a DCB-enabled
unified computing switch which routes FCoE traffic to a traditional Fibre Channel switch. The FC
switch then routes the FC traffic to the disk storage array over Fibre Channel.
In the second method, the same server with the CNA routes FCoE storage traffic to the same DCB-
enabled UC switch. The switch routes the FC storage traffic directly to the target storage array.
In the third method, the CNA-enabled server is connected to a DCB-enabled UC switch which is
connected to a CNA-enabled disk array. Storage traffic using FCoE is carried over the Ethernet network
until it is received by the FCoE-enabled storage array, where the Ethernet headers are stripped off and
the native FC commands are delivered to the storage array. [Diagrams from MAR2].
The figure above from [GAI2] illustrates the end-to-end transmission of an FC packet from a storage array
with FC ID 7.1.1 to a server with FC ID 1.1.1. In this example, the packet first transits an FC fabric
switch which looks up the destination 1.1.1 and forwards it using the Fabric Shortest Path First (FSPF)
routing algorithm. Switch 3, an Ethernet switch capable of forwarding FCoE traffic, encapsulates the FC
packet with Ethernet headers and forwards it to Switch 4, which de-encapsulates the packet and then re-
encapsulates it for forwarding to the destination server.
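The encapsulation step above can be sketched as follows. This is a deliberately simplified illustration, not the full FCoE frame format: real FCoE adds an FCoE header with version, SOF/EOF delimiters, padding and an FCS, all omitted here, and the MAC addresses and FC payload shown are made up. The EtherType 0x8906 is the value actually assigned to FCoE:

```python
import struct

FCOE_ETHERTYPE = 0x8906  # IEEE-assigned EtherType for FCoE

def encapsulate(fc_frame: bytes, dst_mac: bytes, src_mac: bytes) -> bytes:
    """Wrap a native FC frame in an Ethernet header (simplified: the FCoE
    header, SOF/EOF delimiters and FCS are omitted)."""
    return dst_mac + src_mac + struct.pack("!H", FCOE_ETHERTYPE) + fc_frame

def decapsulate(eth_frame: bytes) -> bytes:
    """Strip the 14-byte Ethernet header, recovering the native FC frame."""
    (ethertype,) = struct.unpack("!H", eth_frame[12:14])
    assert ethertype == FCOE_ETHERTYPE
    return eth_frame[14:]

# Hypothetical FC frame addressed to FC ID 1.1.1, with made-up MAC addresses
fc_frame = b"\x01\x01\x01" + b"FC payload"
wire = encapsulate(fc_frame,
                   dst_mac=b"\x0e\xfc\x00\x01\x01\x01",
                   src_mac=b"\x0e\xfc\x00\x07\x01\x01")
assert decapsulate(wire) == fc_frame  # Switch 4 recovers the original FC frame
```

This round trip mirrors what Switches 3 and 4 do in the figure: the FC frame itself is never modified, only wrapped and unwrapped as it crosses the Ethernet segment.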
Cisco’s Unified Computing System (UCS)
In March 2009, Cisco introduced the Unified Computing System (UCS), previously called "Project
California". UCS is one of the first implementations of a unified computing platform in both blade and
rack server form factors. UCS is based on Intel's "Nehalem" server platform, also called the Xeon
5500. As described in [GAI], UCS implements the CNA using a custom ASIC called "Palo" which
provides FCoE and iSCSI functionality over a 10GbE transport. The UCS system also supports
virtualization features including network, storage and workload virtualization.
The blades in the UCS 5100 chassis are connected to Cisco's UCS 2100 series or Nexus 2000 fabric
extenders, which provide a switched fabric between blades, aggregate traffic to/from the
blade chassis and provide a 10GbE up-link to the top-of-rack switch, which can be a UCS 6100 Series
fabric interconnect or a Nexus 2000 series switch. The Nexus 5000 switch aggregates traffic from the
top-of-rack switches and forwards it to a core data center switch, typically a Nexus 7000.
[CHE] Noureddine, W., "Converged LANs, SANs and Clusters on a Unified Wire", Chelsio, presented at the
Linley Group Spring Conference on Ethernet in the Data Center, May 11, 2010.
[GAI] Gai, S., Salli, T., Andersson, R., "Project California: a Data Center Virtualization Server – UCS
(Unified Computing System)", Cisco Systems, 2009, ISBN 978-0-557-05739-9.
[GAI2] Gai, S., Lemasa, G., "Fibre Channel over Ethernet in the Data Center: an Introduction", Fibre
Channel Industry Association, 2007.
[INT] Dhawan, M., "Open FCoE: Cost Effective Access to Storage", Intel, presented at the Linley Group
Spring Conference on Ethernet in the Data Center, May 11, 2010.
[JAC] Jacobs, D., "Converged Enhanced Ethernet: New protocols enhance data center Ethernet",
April 15, 2009.
[MAR] Martin, D., "Unified data center fabric primer: FCoE and data center bridging", January 11,
[MAR2] Martin, D., "Unified fabric: Data center bridging and FCoE implementation", January 22, 2010.
[McG] McGillicuddy, S., "FCoE network roadmap: Do you need a unified fabric strategy?",
October 22, 2009.