Unified Computing in Servers

Wendell Wenjen
Wendell.wenjen@gmail.com

June 8, 2010

UCSC Extension, Network Storage Essentials, 21940



Abstract:
Unified Computing (UC) is an industry term for a common network fabric that carries both data
(TCP/IP) and storage (Fibre Channel or iSCSI) traffic. Servers that implement Unified Computing use a
converged network adapter (CNA), which manages the transport of multiple network protocols
including TCP/IP, Fibre Channel over Ethernet (FCoE) and iSCSI. In practice, a UC implementation also
includes 10Gb Ethernet (10GbE), support for network virtualization and Data Center Bridging (DCB).
Cisco’s implementation of Unified Computing is examined.


Converged Network Adapter

A converged network adapter (CNA) is a network interface card (NIC) that supports IP data traffic as
well as storage network protocols such as Fibre Channel over Ethernet (FCoE) and iSCSI. CNAs replace
the Fibre Channel host bus adapter (HBA), thereby reducing the amount of network cabling, the number
of add-in adapter cards and the power those cards require. Because Fibre Channel HBAs operate at
4Gbps or 8Gbps, CNAs typically use 10GbE so that a single link has enough bandwidth for both storage
and data traffic. Like an HBA, a CNA implements a Fibre Channel initiator; the initiator can reside in the
CNA hardware controller or be implemented as a software initiator in the driver. Hardware-based
initiators have traditionally been used to minimize host-server CPU loading [CHE], but faster CPUs and
more efficient software initiators have led Intel and other CNA providers to ship software-based
initiators as well [INT].
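
As a rough illustration of what either style of initiator ultimately puts on the wire, the sketch below wraps a complete FC frame in an Ethernet frame using the FCoE EtherType (0x8906). It is a minimal sketch for illustration only, not any vendor's implementation; the real FCoE encapsulation also carries version bits, SOF/EOF delimiters and padding, which are omitted here.

# Illustrative sketch (not a vendor implementation): FCoE encapsulation of a
# Fibre Channel frame inside an Ethernet frame. Simplified field layout.
import struct

FCOE_ETHERTYPE = 0x8906  # EtherType registered for FCoE

def fcoe_encapsulate(fc_frame: bytes, src_mac: bytes, dst_mac: bytes) -> bytes:
    """Wrap a complete FC frame (header + payload + CRC) in an Ethernet frame."""
    eth_header = dst_mac + src_mac + struct.pack("!H", FCOE_ETHERTYPE)
    return eth_header + fc_frame

def fcoe_decapsulate(eth_frame: bytes) -> bytes:
    """Strip the Ethernet header and return the embedded FC frame."""
    ethertype = struct.unpack("!H", eth_frame[12:14])[0]
    if ethertype != FCOE_ETHERTYPE:
        raise ValueError("not an FCoE frame")
    return eth_frame[14:]

# An FC payload can be up to 2112 bytes, so FCoE links must support
# "baby jumbo" Ethernet frames of roughly 2.5 KB.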
In the diagram above from [INT], the architecture of Intel’s CNA using an “open” FCoE approach is
described. Intel uses a software-based FC initiator rather than the hardware-based approach used in FC
HBAs. The diagram shows that the adapter implements FCoE and iSCSI offload functions, such as
checksum calculation, in the adapter hardware. Above the adapter, the functions implemented by the
host server are shown. Note that most of the FCoE logic, including the protocol processing, is
implemented in software on the host system.
In contrast, the diagrams above from [CHE] show Chelsio’s implementation of a CNA with FCoE
implemented in hardware. The top diagram shows that the “T4” hardware in the CNA handles most of
the protocol functions, with a lower-level driver providing communication for the iSCSI, FCoE and RDMA
protocols. In the lower diagram, Chelsio also implements a TCP/IP Offload Engine (TOE) to accelerate
iSCSI protocol traffic as well.
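
To make the contrast between the two approaches concrete, the sketch below models which protocol layers run on the host CPU versus on the adapter for a software initiator (the Intel-style approach) and a full hardware offload (the Chelsio-style approach). The layer names and the exact split are a simplification drawn from the discussion above, not taken from either vendor's documentation.

# Simplified model of where FCoE protocol work runs for the two CNA styles
# described above. The layer split is illustrative, not vendor-specified.
FCOE_LAYERS = [
    "SCSI mid-layer",        # builds SCSI commands
    "FC initiator / FCP",    # maps SCSI to FC exchanges
    "FCoE encapsulation",    # wraps FC frames in Ethernet (EtherType 0x8906)
    "Ethernet MAC",          # framing, checksums, DCB pause handling
]

def work_split(style: str) -> dict:
    """Return which layers run on the host CPU vs. the adapter."""
    if style == "software_initiator":
        # "Open" FCoE: only MAC-level work and stateless offloads
        # (e.g. checksums) are in adapter hardware.
        on_adapter = {"Ethernet MAC"}
    elif style == "hardware_offload":
        # Full offload: the FC/FCoE state machines live on the CNA.
        on_adapter = {"FC initiator / FCP", "FCoE encapsulation", "Ethernet MAC"}
    else:
        raise ValueError(style)
    return {
        "host": [layer for layer in FCOE_LAYERS if layer not in on_adapter],
        "adapter": [layer for layer in FCOE_LAYERS if layer in on_adapter],
    }

print(work_split("software_initiator"))
print(work_split("hardware_offload"))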

Data Center Bridging

Data Center Bridging (DCB), also called Converged Enhanced Ethernet (CEE) or Data Center Ethernet
(DCE), is a set of extensions to the Ethernet specifications (IEEE 802.1) that optimize Ethernet for
large-scale data centers and unified computing. These enhancements include:

        802.1aq -- Shortest Path Bridging
        802.1Qau -- Congestion Notification
        802.1Qaz -- Enhanced Transmission Selection
        802.1Qbb -- Priority-based Flow Control

802.1Qbb (Priority-based Flow Control) builds on the eight priority levels already defined in 802.1p,
adding the ability to pause lower-priority traffic while allowing higher-priority traffic to continue [JAC].
802.1Qaz (Enhanced Transmission Selection) allows 802.1p priorities to be grouped into classes
requiring the same class of service and allocates a specific share of network bandwidth to each group.
A priority group is also defined which can override the priority of all other groups and consume all of
the network bandwidth if required. 802.1Qau provides a rate-matching mechanism: an endpoint whose
receive buffers are nearing overflow notifies all sending nodes of the congestion and requests that they
throttle their transmissions. When the congestion clears, the receiving node notifies the sending nodes
to resume their full transmission rate. 802.1aq (Shortest Path Bridging) and the related Transparent
Interconnection of Lots of Links (TRILL) improve on the Spanning Tree Protocol by using link-state
information to quickly determine the best route through the network, to react quickly to network
changes, and to take advantage of multiple paths by spreading traffic across them.
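
A minimal sketch of how two of these mechanisms might interact on a converged link is shown below: traffic classes are grouped and given bandwidth shares in the style of 802.1Qaz, and a per-priority pause in the style of 802.1Qbb stops only the storage class when its receive buffer nears overflow. The class names, percentages and thresholds are invented for illustration and are not taken from the standards.

# Illustrative sketch of ETS-style bandwidth groups and PFC-style per-priority
# pause on a converged 10GbE link. All values are invented for illustration.
LINK_GBPS = 10

# 802.1Qaz-style priority groups: 802.1p priorities mapped to a group with a
# guaranteed share of link bandwidth.
priority_groups = {
    "lan":     {"priorities": [0, 1, 2, 5, 6, 7], "bandwidth_pct": 50},
    "storage": {"priorities": [3],                "bandwidth_pct": 40},  # FCoE
    "ipc":     {"priorities": [4],                "bandwidth_pct": 10},
}

def guaranteed_gbps(group: str) -> float:
    """Bandwidth guaranteed to a priority group under the ETS-style allocation."""
    return LINK_GBPS * priority_groups[group]["bandwidth_pct"] / 100

# 802.1Qbb-style per-priority flow control: pause only the priorities whose
# receive buffers are nearly full; traffic on other priorities keeps flowing.
def pfc_paused_priorities(buffer_fill: dict, threshold: float = 0.9) -> list:
    return [prio for prio, fill in buffer_fill.items() if fill >= threshold]

print(guaranteed_gbps("storage"))                  # 4.0 Gbps reserved for FCoE
print(pfc_paused_priorities({3: 0.95, 0: 0.40}))   # [3] -> pause only FCoE traffic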

In order to support the lossless behavior that Fibre Channel traffic requires when carried as FCoE, an
end-to-end DCB-enabled network is required. DCB can be deployed in a unified computing environment
in three different ways [MAR2].




In the first method, a server with a converged network adapter (CNA) is connected to a DCB-enabled
unified computing switch, which routes FCoE traffic to a traditional Fibre Channel switch. The FC switch
then routes the FC traffic to the disk storage array over Fibre Channel.




In the second method, the same server with the CNA sends FCoE storage traffic to the same DCB-
enabled UC switch, which routes the FC storage traffic directly to the target storage array.
In the third method, the CNA-enabled server is connected to a DCB-enabled UC switch which is
connected to a CNA-enabled disk array. Storage traffic travels as FCoE over the Ethernet network until it
reaches the FCoE-enabled storage array, where the Ethernet headers are stripped off and the native FC
commands are delivered to the storage array. [Diagrams from MAR2].




The figure above from [GAI2] illustrates the end-to-end transmission of an FC frame from a storage array
with FC ID 7.1.1 to a server with FC ID 1.1.1. In this example, the frame first transits an FC fabric switch,
which looks up the destination 1.1.1 and forwards the frame using the Fabric Shortest Path First (FSPF)
routing protocol. Switch 3, an Ethernet switch capable of forwarding FCoE traffic, encapsulates the FC
frame with an Ethernet header and forwards it to Switch 4, which de-encapsulates the frame and then
re-encapsulates it for forwarding to the destination server.
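
The hop-by-hop behavior in this example can be sketched as follows. The switch roles and FC IDs mirror the figure; the frame representation and helper functions are invented purely for illustration.

# Illustrative walk-through of the forwarding example above: an FC frame from
# storage (FC ID 7.1.1) to a server (FC ID 1.1.1) crossing native FC and FCoE hops.
def fc_switch_forward(fc_frame: dict) -> dict:
    # Native FC switch: FSPF lookup on the destination FC ID, forward unchanged.
    fc_frame["path"].append(f"FC switch: FSPF lookup of {fc_frame['dst_fcid']}")
    return fc_frame

def fcoe_encap(fc_frame: dict, next_hop_mac: str) -> dict:
    # FCoE-capable Ethernet switch: wrap the FC frame in an Ethernet header.
    fc_frame["path"].append(f"Switch 3: encapsulate in Ethernet toward {next_hop_mac}")
    return {"eth_dst": next_hop_mac, "inner": fc_frame}

def fcoe_decap_and_reencap(eth_frame: dict, server_mac: str) -> dict:
    # Next FCoE switch: strip the Ethernet header, then re-encapsulate toward
    # the destination server's CNA.
    fc_frame = eth_frame["inner"]
    fc_frame["path"].append(f"Switch 4: de-encapsulate, re-encapsulate toward {server_mac}")
    return {"eth_dst": server_mac, "inner": fc_frame}

frame = {"src_fcid": "7.1.1", "dst_fcid": "1.1.1", "path": []}
frame = fc_switch_forward(frame)
eth = fcoe_encap(frame, next_hop_mac="switch4-mac")
eth = fcoe_decap_and_reencap(eth, server_mac="server-cna-mac")
print("\n".join(eth["inner"]["path"]))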

Cisco’s Unified Computing System (UCS)

In March 2009, Cisco introduced the Unified Computing System (UCS), previously called “Project
California”. UCS is one of the first unified computing platforms, offered in both blade and rack server
form factors. UCS is based on Intel’s “Nehalem” server platform, also called the Xeon 5500. As described
in [GAI], UCS implements the CNA with a custom ASIC called “Palo”, which provides FCoE and iSCSI
functionality over a 10GbE transport. The UCS system also supports virtualization features including
network, storage and workload virtualization.

The blades in the UCS 5000 chassis connect to Cisco’s UCS 2100 Series or Nexus 2000 fabric extenders,
which provide a switched fabric between blades, aggregate traffic to and from the blade chassis, and
provide 10GbE uplinks to a top-of-rack switch, which can be a UCS 6100 Series fabric interconnect or a
Nexus 5000 Series switch. The Nexus 5000 aggregates traffic from the top-of-rack layer and forwards it
to a core data center switch, typically a Nexus 7000.
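
The aggregation hierarchy described above can be summarized as a simple path from blade to core. The listing below is only a reading aid for the component roles named in this section, not a configuration of any kind.

# Reading aid: the traffic aggregation path described above, from blade server
# to the data center core. Roles follow the text of this section.
ucs_aggregation_path = [
    ("UCS blade server",        "CNA ('Palo' ASIC), FCoE/iSCSI over 10GbE"),
    ("Fabric extender",         "UCS 2100 Series / Nexus 2000, in-chassis aggregation"),
    ("Top-of-rack switch",      "UCS 6100 Series fabric interconnect / Nexus 5000"),
    ("Core data center switch", "Nexus 7000"),
]

for hop, role in ucs_aggregation_path:
    print(f"{hop:25s} -> {role}")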




References:

[CHE] Noureddine, W., “Converged LANs, SANs and Clusters on a Unified Wire”, Chelsio
Communications, presented at the Linley Group Spring Conference on Ethernet in the Data Center,
May 11, 2010.

[GAI] Gai, S., Salli, T., Andersson, R., “Project California: a Data Center Virtualization Server – UCS
(Unified Computing System)”, Cisco Systems, 2009, ISBN 976-0-557-05739-9.

[GAI2] Gai, S., Lemasa, G., “Fibre Channel over Ethernet in the Data Center: an Introduction”, Fibre
Channel Industry Association, 2007.

[INT] Dhawan, M., “Open FCoE: Cost Effective Access to Storage”, Intel, presented at the Linley Group
Spring Conference on Ethernet in the Data Center, May 11, 2010.

[JAC] Jacobs, D., “Converged Enhanced Ethernet: New protocols enhance data center Ethernet”,
April 15, 2009,
http://searchnetworking.techtarget.com/tip/0,289483,sid7_gci1353927_mem1,00.html

[MAR] Martin, D., “Unified data center fabric primer: FCoE and data center bridging”, January 11, 2010,
http://searchnetworking.techtarget.com/tip/0,289483,sid7_gci1378613_mem1,00.html

[MAR2] Martin, D., “Unified fabric: Data center bridging and FCoE implementation”, January 22, 2010,
http://searchnetworking.techtarget.com/tip/0,289483,sid7_gci1379716_mem1,00.html

[McG] McGillicuddy, S., “FCoE network roadmap: Do you need a unified fabric strategy?”,
October 22, 2009,
http://searchnetworking.techtarget.com/news/article/0,289142,sid7_gci1372107,00.html
