FCoE vs. iSCSI - Making the Choice
Stephen Foskett, Interop Las Vegas 2011
stephen@fosketts.net | @SFoskett
FoskettServices.com | Blog.Fosketts.net | GestaltIT.com
This is Not a Rah-Rah Session
This is Not a Cage Match
First, let’s talk about convergence and Ethernet
Converging on Convergence
- Data centers rely more on standard ingredients
- What will connect these systems together?
- IP and Ethernet are logical choices
Drivers of Convergence
What's in it for you?
The Storage Network Roadmap
Performance: Throughput vs. Latency
- High throughput
- Low latency
Serious Performance
- 10 GbE is faster than most storage interconnects
- iSCSI and FCoE both can perform at wire rate
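As a rough sanity check on that claim, the sketch below compares usable data rates after line encoding overhead. The baud rates and encodings (8b/10b for 1 GbE and 4G/8G FC, 64b/66b for 10 GbE) are standard published figures, not numbers taken from this deck.

```python
# Rough comparison of usable data rates after line encoding overhead.
# Signaling rates and encodings are standard published figures
# (assumption: not taken from the slides themselves).

links = {
    # name: (signaling rate in Gbaud, encoding efficiency)
    "1 GbE (8b/10b)":   (1.25,    8 / 10),
    "4G FC (8b/10b)":   (4.25,    8 / 10),
    "8G FC (8b/10b)":   (8.5,     8 / 10),
    "10 GbE (64b/66b)": (10.3125, 64 / 66),
}

for name, (gbaud, eff) in links.items():
    gbps = gbaud * eff                  # usable bits per second
    mbytes = gbps * 1000 / 8            # approximate MB/s before protocol overhead
    print(f"{name:18s} ~{gbps:5.2f} Gbps (~{mbytes:4.0f} MB/s)")
```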
Latency is Critical Too
- Latency is even more critical in shared storage
Benefits Beyond Speed
- 10 GbE takes performance off the table (for now…)
- But performance is only half the story:
  - Simplified connectivity
  - New network architecture
  - Virtual machine mobility
Server Connectivity
- Today: separate 1 GbE network, 1 GbE cluster, and 4G FC storage links
- Converged: a single 10 GbE link carries all of them (plus 6 Gbps extra capacity)
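A minimal sketch of the consolidation arithmetic behind this slide follows; the legacy port mix and the effective FC throughput are assumed illustrative values, not a measured configuration from the deck.

```python
# Back-of-the-envelope port consolidation: sum the legacy links a server
# carries today and compare against one 10 GbE pipe. The port mix below is
# an assumed example, not a measured configuration.

legacy_links_gbps = {
    "LAN (1 GbE)":     1.0,
    "Cluster (1 GbE)": 1.0,
    "Storage (4G FC)": 3.4,   # 4G FC delivers roughly 3.4 Gbps of data
}

converged_link_gbps = 10.0

used = sum(legacy_links_gbps.values())
headroom = converged_link_gbps - used

print(f"Legacy traffic:  {used:.1f} Gbps across {len(legacy_links_gbps)} links")
print(f"10 GbE headroom: {headroom:.1f} Gbps left over (before DCB sharing)")
```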
Flexibility
- No more rat's nest of cables
- Servers become interchangeable units:
  - Swappable
  - Brought online quickly
  - Few cable connections
- Less concern about availability of I/O slots, cards, and ports
- CPU, memory, and chipset become the deciding factors, not the HBA or network adapter
Changing Data Center
- Placement and cabling of SAN switches and adapters dictate where servers can be installed
- Considerations for placing SAN-attached servers:
  - Cable types and lengths
  - Switch location
  - Logical SAN layout
- Applies to both FC and GbE iSCSI SANs
- A unified 10 GbE network allows the same data and storage networking in any rack position
Virtualization: Performance and Flexibility
- Performance and flexibility benefits are amplified with virtual servers
- 10 GbE accelerates storage performance, especially latency (the "I/O blender")
- Can allow performance-sensitive applications to run on virtual servers
Virtual Machine Mobility
- Moving virtual machines is the next big challenge
- Physical servers are difficult to move around and between data centers
- Pent-up desire to move virtual machines from host to host and even to different physical locations
Enhanced 10 Gb Ethernet
- Ethernet and SCSI were not made for each other:
  - SCSI expects a lossless transport with guaranteed delivery
  - Ethernet expects higher-level protocols to take care of issues
- "Data Center Bridging" (DCB) is a project to create lossless Ethernet
  - AKA Data Center Ethernet (DCE), Converged Enhanced Ethernet (CEE)
- iSCSI and NFS are happy with or without DCB
- DCB is a work in progress:
  - FCoE requires PFC (802.1Qbb or PAUSE) and DCBX (802.1Qaz)
  - QCN (802.1Qau) is still not ready
- The DCB family of standards:
  - Priority Flow Control (PFC), 802.1Qbb
  - Congestion Management (QCN), 802.1Qau
  - Bandwidth Management (ETS), 802.1Qaz
  - Data Center Bridging Exchange Protocol (DCBX)
  - PAUSE, 802.3x
  - Traffic Classes, 802.1p/Q
Flow Control
- PAUSE (802.3x):
  - Reactive rather than proactive (unlike FC's proactive buffer-credit approach)
  - Allows a receiver to block incoming traffic on a point-to-point Ethernet link
- Priority Flow Control (PFC, 802.1Qbb):
  - Uses an 8-bit mask in the PAUSE frame to specify 802.1p priorities
  - Blocks a class of traffic, not an entire link
  - Ratified and shipping
- Result of PFC:
  - Handles transient spikes
  - Makes Ethernet lossless
  - Required for FCoE
- (Graphic courtesy of EMC: PFC pausing a single traffic class between Switch A and Switch B)
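To make the "8-bit mask in PAUSE" concrete, here is a minimal sketch that packs an 802.1Qbb PFC frame in Python. The paused priority and pause quanta are illustrative assumptions, and in practice these frames are generated by NIC and switch hardware, not host code.

```python
import struct

# Minimal sketch of an 802.1Qbb Priority Flow Control (PFC) frame.
# The paused priority and quanta below are illustrative assumptions.

MAC_CONTROL_ETHERTYPE = 0x8808            # same EtherType as classic 802.3x PAUSE
PFC_OPCODE = 0x0101                       # PFC opcode; classic PAUSE uses 0x0001
DEST_MAC = bytes.fromhex("0180C2000001")  # reserved MAC control multicast address
SRC_MAC = bytes.fromhex("020000000001")   # placeholder source MAC

def build_pfc_frame(pause_quanta_by_priority: dict[int, int]) -> bytes:
    """Pause only the listed 802.1p priorities, leaving the rest flowing."""
    enable_vector = 0
    quanta = [0] * 8
    for prio, quantum in pause_quanta_by_priority.items():
        enable_vector |= 1 << prio       # 8-bit mask of priorities to pause
        quanta[prio] = quantum           # pause time in 512-bit-time units
    payload = struct.pack("!HH8H", PFC_OPCODE, enable_vector, *quanta)
    # Real frames are padded to the 64-byte Ethernet minimum; omitted here.
    return DEST_MAC + SRC_MAC + struct.pack("!H", MAC_CONTROL_ETHERTYPE) + payload

# Pause only priority 3 (commonly used for FCoE) for 0xFFFF quanta;
# LAN traffic on the other priorities keeps flowing.
frame = build_pfc_frame({3: 0xFFFF})
print(frame.hex())
```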
Bandwidth Management
- Enhanced Transmission Selection (ETS), 802.1Qaz:
  - Latest in a series of attempts at Quality of Service (QoS)
  - Allows "overflow" traffic to better utilize bandwidth
- Data Center Bridging Exchange (DCBX) protocol:
  - Allows devices to determine mutual capabilities
  - Required for ETS, useful for other DCB features
  - Ratified and shipping
- (Graphic courtesy of EMC: offered vs. realized traffic for HPC, storage, and LAN classes sharing a 10 GE link)
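A minimal sketch of the ETS idea under assumed class weights: each class is guaranteed its share, and bandwidth a class does not use overflows to classes that want more. The weights and offered loads are made-up examples, not values from the standard or this deck.

```python
# Conceptual sketch of Enhanced Transmission Selection (ETS, 802.1Qaz):
# guaranteed shares per traffic class, with unused bandwidth redistributed.
# Class weights and offered loads are assumed examples.

LINK_GBPS = 10.0
ETS_WEIGHTS = {"LAN": 0.5, "Storage": 0.3, "HPC": 0.2}   # must sum to 1.0

def ets_allocate(offered: dict[str, float]) -> dict[str, float]:
    """Give each class min(offered, guaranteed), then share the leftovers."""
    alloc = {c: min(offered[c], LINK_GBPS * w) for c, w in ETS_WEIGHTS.items()}
    spare = LINK_GBPS - sum(alloc.values())
    # Redistribute spare capacity to classes still wanting more, by weight.
    wanting = {c: offered[c] - alloc[c] for c in alloc if offered[c] > alloc[c]}
    total_w = sum(ETS_WEIGHTS[c] for c in wanting)
    for c in wanting:
        alloc[c] += min(wanting[c], spare * ETS_WEIGHTS[c] / total_w)
    return alloc

# Storage offers less than its guarantee, so LAN absorbs the slack.
print(ets_allocate({"LAN": 6.0, "Storage": 2.0, "HPC": 2.0}))
```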
Congestion Notification
- Need a more proactive approach to persistent congestion
- QCN, 802.1Qau:
  - Notifies edge ports of congestion, allowing traffic to flow more smoothly
  - Improves end-to-end network latency (important for storage)
  - Should also improve overall throughput
  - Not quite ready
- (Graphic courtesy of Broadcom)
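For intuition, here is a conceptual sketch of the feedback a QCN congestion point computes from its queue and how a reaction point responds. The setpoint, weight, and gain are assumed sample values drawn from published QCN descriptions, not from this deck.

```python
# Conceptual sketch of 802.1Qau (QCN) congestion-point feedback.
# Constants below are illustrative assumptions; real values come from the
# standard and from switch implementations.

Q_EQ = 26.0   # desired (equilibrium) queue occupancy, in KB (assumed)
W = 2.0       # weight on the queue-growth term (assumed)

def congestion_feedback(qlen: float, qlen_old: float) -> float:
    """Negative feedback means 'congested: the sender should slow down'."""
    q_off = qlen - Q_EQ          # how far above the setpoint the queue sits
    q_delta = qlen - qlen_old    # how fast the queue is growing
    return -(q_off + W * q_delta)

def rate_after_cnm(rate_gbps: float, fb: float, gain: float = 1 / 128) -> float:
    """Reaction point cuts its rate in proportion to |Fb|, by at most half."""
    if fb >= 0:
        return rate_gbps                      # no congestion notification sent
    return rate_gbps * max(0.5, 1.0 - gain * abs(fb))

fb = congestion_feedback(qlen=40.0, qlen_old=30.0)
print(fb, rate_after_cnm(8.0, fb))
```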
Now we can talk about iSCSI vs. FCoE
Why Choose a Protocol?
SAN History: SCSI
- Early storage protocols were system-dependent and short-distance:
  - Microcomputers used internal ST-506 disks
  - Mainframes used external bus-and-tag storage
- SCSI allowed systems to use external disks:
  - Block protocol, one-to-many communication
  - External enclosures, RAID
- Replaced ST-506 and ESDI in UNIX systems
- SAS dominates in servers; PCs use IDE (SATA)
- (Copyright 2006, GlassHouse Technologies)
The Many Faces of SCSI
- "SCSI", SAS, iSCSI, "FC", FCoE
Comparing Protocols
Why Go iSCSI?
- iSCSI targets are robust and mature:
  - Just about every storage vendor offers iSCSI arrays
  - Software targets abound, too (Nexenta, Microsoft, StarWind)
- Client-side iSCSI is strong as well:
  - Wide variety of iSCSI adapters/HBAs
  - Software initiators for UNIX, Windows, VMware, Mac
- Smooth transition from 1- to 10-gigabit Ethernet
- Plug it in and it works; no extensions required
- iSCSI over DCB is rapidly appearing
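To illustrate "plug it in and it works", here is a minimal sketch of driving the Linux open-iscsi software initiator from Python. The target portal address and IQN are hypothetical placeholders, and it assumes a host with iscsiadm installed and root privileges.

```python
import subprocess

# Minimal sketch: discover and log in to an iSCSI target using the Linux
# open-iscsi initiator (iscsiadm). The portal address and IQN below are
# hypothetical placeholders.

PORTAL = "192.0.2.10"                          # documentation/example address
TARGET_IQN = "iqn.2001-05.com.example:array1"  # hypothetical target name

def run(cmd: list[str]) -> str:
    """Run a command and return its output, raising on failure."""
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

# 1. SendTargets discovery: ask the portal which targets it offers.
print(run(["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", PORTAL]))

# 2. Log in to the chosen target; a new SCSI block device then appears
#    (check lsblk or /dev/disk/by-path) just like a local disk.
run(["iscsiadm", "-m", "node", "-T", TARGET_IQN, "-p", PORTAL, "--login"])
```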
iSCSI Support Matrix
iSCSI Reality Check
The Three-Fold Path of Fibre Channel
FCoE Spotters’ Guide
- Fibre Channel over Ethernet (FCoE) is defined in FC-BB-5
- It layers Fibre Channel over Ethernet enhanced with the DCB standards:
  - Bandwidth Management (ETS), 802.1Qaz
  - Priority Flow Control (PFC), 802.1Qbb
  - Congestion Management (QCN), 802.1Qau
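As a rough illustration of what "Fibre Channel over Ethernet" means on the wire, the sketch below wraps an FC frame in an Ethernet frame with the FCoE EtherType. The field layout is simplified (reserved padding, FIP, and FCS handling are omitted), the delimiter codes should be treated as illustrative, and the MAC addresses are placeholders.

```python
import struct

# Simplified sketch of FCoE encapsulation (FC-BB-5): an entire FC frame is
# carried as the payload of an Ethernet frame with EtherType 0x8906.
# Reserved padding, FIP, and FCS are omitted; MACs and delimiter codes are
# illustrative placeholders.

FCOE_ETHERTYPE = 0x8906
SOF_I3, EOF_T = 0x2E, 0x42        # example start/end-of-frame delimiter codes

def encapsulate(fc_frame: bytes,
                dst_mac: bytes = bytes.fromhex("0EFC00010203"),   # FPMA-style
                src_mac: bytes = bytes.fromhex("0EFC00040506")) -> bytes:
    """Wrap a raw Fibre Channel frame for transport over lossless Ethernet."""
    ether_header = dst_mac + src_mac + struct.pack("!H", FCOE_ETHERTYPE)
    fcoe_body = bytes([SOF_I3]) + fc_frame + bytes([EOF_T])
    return ether_header + fcoe_body

# A dummy 36-byte stand-in for an FC frame (real frames carry a 24-byte
# FC header, payload, and CRC).
print(encapsulate(b"\x00" * 36).hex())
```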
Why Go FCoE?
- Large FC install base/investment:
  - Storage arrays and switches
  - Management tools and skills
- Allows for incremental adoption:
  - FCoE as an edge protocol promises to reduce connectivity costs
  - End-to-end FCoE would be implemented later
- I/O consolidation and virtualization capabilities:
  - Many DCB technologies map to the needs of server virtualization architectures
- Also leverages Ethernet infrastructure and skills

"FCoE vs. iSCSI - Making the Choice" from Interop Las Vegas 2011

  • 1.
    FCoE vs. iSCSIMakingthe ChoiceStephen Foskettstephen@fosketts.net@SFoskettFoskettServices.comBlog.Fosketts.netGestaltIT.com
  • 2.
    This is Nota Rah-Rah Session
  • 3.
    This is Nota Cage Match
  • 4.
    First, let’s talkabout convergence and Ethernet
  • 5.
    Converging on ConvergenceDatacenters rely more on standard ingredientsWhat will connect these systems together?IP and Ethernet are logical choices
  • 6.
  • 7.
    What's in itfor you?
  • 8.
  • 9.
    Performance: Throughput vs.LatencyHigh-ThroughputLow Latency
  • 10.
    Serious Performance10 GbEis faster than most storage interconnectsiSCSI and FCoE both can perform at wire-rate
  • 11.
    Latency is CriticalTooLatency is even more critical in shared storage
  • 12.
    Benefits Beyond Speed10GbE takes performance off the table (for now…)But performance is only half the story:Simplified connectivityNew network architectureVirtual machine mobility
  • 13.
    Server Connectivity1 GbENetwork1 GbE Cluster4G FC Storage10 GbE(Plus 6 Gbps extra capacity)
  • 14.
    FlexibilityNo more rats-nestof cablesServers become interchangeable unitsSwappableBrought on line quicklyFew cable connections Less concern about availability of I/O slots, cards and portsCPU, memory, chipset are deciding factor, not HBA or network adapter
  • 15.
    Changing Data CenterPlacementand cabling of SAN switches and adapters dictates where to install serversConsiderations for placing SAN-attached servers:Cable types and lengthsSwitch locationLogical SAN layoutApplies to both FC and GbEiSCSISANsUnified 10 GbE network allows the same data and storage networking in any rack position
  • 16.
    Virtualization: Performance andFlexibilityPerformance and flexibility benefits are amplified with virtual servers10 GbE acceleration of storage performance, especially latency – “the I/O blender”Can allow performance-sensitive applications to use virtual servers
  • 17.
    Virtual Machine MobilityMovingvirtual machines is the next big challengePhysical servers are difficult to move around and between data centersPent-up desire to move virtual machines from host to host and even to different physical locations
  • 18.
    Enhanced 10 GbEthernetEthernet and SCSI were not made for each other
  • 19.
    SCSI expects alossless transport with guaranteed delivery
  • 20.
    Ethernet expects higher-levelprotocols to take care of issues
  • 21.
    “Data Center Bridging”is a project to create lossless Ethernet
  • 22.
    AKA Data CenterEthernet (DCE), Converged Enhanced Ethernet (CEE)
  • 23.
    iSCSI and NFSare happy with or without DCB
  • 24.
    DCB is awork in progress
  • 25.
    FCoE requires PFC(Qbb or PAUSE), DCBX (Qaz)
  • 26.
    QCN (Qau) isstill not readyPriority Flow Control (PFC)802.1QbbCongestion Management (QCN)802.1QauBandwidth Management (ETS)802.1QazPAUSE802.3xData Center Bridging Exchange Protocol (DCBX)Traffic Classes 802.1p/Q
  • 27.
    Flow ControlPAUSE (802.3x)Reactivenot proactive (like FC credit approach)Allows a receiver to block incoming traffic in a point-to-point Ethernet linkPriority Flow Control 802.1Qbb)Uses an 8-bit mask in PAUSE to specify 802.1p prioritiesBlocks a class of traffic, not an entire linkRatified and shippingSwitch ASwitch BGraphic courtesy of EMCResult of PFC:Handles transient spikesMakes Ethernet losslessRequired for FCoE
  • 28.
    Bandwidth ManagementEnhanced TransmissionSelection (ETS) 802.1QazLatest in a series of attempts at Quality of Service (QoS)Allows “overflow” to better-utilize bandwidthData Center Bridging Exchange (DCBX) protocolAllows devices to determine mutual capabilitiesRequired for ETS, useful for othersRatified and shipping10 GE Link Realized Traffic UtilizationOffered TrafficHPC Traffic3G/s3G/s2G/s3G/s3G/s2G/sStorage Traffic3G/s3G/s3G/s3G/s3G/s3G/sLAN Traffic4G/s5G/s3G/s3G/s4G/s6G/sGraphic courtesy of EMC
  • 29.
    Congestion NotificationNeed amore proactive approach to persistent congestionQCN 802.1QauNotifies edge ports of congestion, allowing traffic to flow more smoothlyImproves end-to-end network latency (important for storage)Should also improve overall throughputNot quite readyGraphic courtesy of Broadcom
  • 30.
    Now we cantalk aboutiSCSI vs. FCoE
  • 31.
    Why Choose aProtocol?
  • 32.
    SAN History: SCSIEarlystorage protocols were system-dependent and short distanceMicrocomputers used internal ST-506 disksMainframes used external bus-and-tag storageSCSI allowed systems to use external disksBlock protocol, one-to-many communicationExternal enclosures, RAIDReplaced ST-506 and ESDI in UNIX systemsSAS dominates in servers; PCs use IDE (SATA)Copyright 2006, GlassHouse Technologies
  • 33.
    The Many Facesof SCSI“SCSI”SASiSCSI“FC”FCoE
  • 34.
  • 35.
    Why Go iSCSI?iSCSItargets are robust and matureJust about every storage vendor offers iSCSI arraysSoftware targets abound, too (Nexenta, Microsoft, StarWind)Client-side iSCSI is strong as wellWide variety of iSCSI adapters/HBAsSoftware initiators for UNIX, Windows, VMware, MacSmooth transition from 1- to 10-gigabit EthernetPlug it in and it works, no extensions requirediSCSI over DCB is rapidly appearing
  • 36.
  • 37.
  • 38.
    The Three-Fold Pathof Fibre Channel
  • 39.
    FCoE Spotters’ GuideFibreChannel over Ethernet (FCoE)FC-BB-5Bandwidth Management (ETS)802.1QazPriority Flow Control (PFC)802.1QbbCongestion Management (QCN)802.1QauFibre ChannelEthernetFibre Channel over Ethernet (FCoE)
  • 40.
    Why Go FCoE?LargeFC install base/investmentStorage arrays and switchesManagement tools and skillsAllows for incremental adoptionFCoE as an edge protocol promises to reduce connectivity costsEnd-to-end FCoE would be implemented laterI/O consolidation and virtualization capabilitiesMany DCB technologies map to the needs of server virtualization architecturesAlso leverages Ethernet infrastructure and skills
Who’s Pushing FCoE and Why?
- Cisco wants to move to an all-Ethernet future
- Brocade sees it as a way to knock off Cisco in the Ethernet market
- QLogic, Emulex, and Broadcom see it as a differentiator to push silicon
- Intel wants to drive CPU upgrades
- NetApp thinks its unified storage will win as native FCoE targets
- EMC and HDS want to extend their dominance of high-end FC storage
- HP, IBM, and Oracle don’t care about FC anyway
Comparing Protocol Efficiency, Throughput, and CPU Efficiency (charts)
- Source: Ujjwal Rajbhandari, Dell Storage Product Marketing, http://www.delltechcenter.com/page/Comparing+Performance+Between+iSCSI,+FCoE,+and+FC
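The per-frame header arithmetic below gives a feel for why such efficiency comparisons differ. The header sizes are standard figures for Ethernet, TCP/IP, iSCSI, FC, and FCoE, while the "one PDU per frame" framing and payload sizes are simplifying assumptions rather than the configuration used in the Dell tests.

```python
# Rough per-frame efficiency comparison (payload bytes / bytes on the wire).
# Header sizes are standard; the framing and payload sizes are simplifying
# assumptions, not the Dell test configuration.

def efficiency(payload: int, overhead: int) -> float:
    return payload / (payload + overhead)

cases = {
    # iSCSI data in one standard 1500-byte Ethernet frame:
    # Ethernet(14) + FCS(4) + IP(20) + TCP(20) + iSCSI BHS(48)
    "iSCSI, 1500 MTU": efficiency(1412, 14 + 4 + 20 + 20 + 48),
    # Same stack with 9000-byte jumbo frames
    "iSCSI, 9000 MTU": efficiency(8912, 14 + 4 + 20 + 20 + 48),
    # FCoE: Ethernet(14) + FCS(4) + FCoE header/trailer(~18) + FC header+CRC(28)
    "FCoE":            efficiency(2112, 14 + 4 + 18 + 28),
    # Native FC: FC header(24) + CRC(4) around a full 2112-byte payload
    "Native FC":       efficiency(2112, 24 + 4),
}

for name, eff in cases.items():
    print(f"{name:16s} {eff:.1%} payload per frame")
```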
Counterpoint: Why Ethernet?
- Why converge on Ethernet at all?
  - Lots of work just to make Ethernet perform unnatural acts!
- Why not InfiniBand?
  - Converged I/O already works
  - Excellent performance and scalability
  - Wide hardware availability and support
  - Kinda pricey; another new network
- Why not something else entirely?
  - Token Ring would have been great!
  - FCoTR: Get on the Ring

Editor's Notes

  • #14 The average server includes quite a few back-end ports, from traditional data networking to storage connectivity, management, and clustering. Many server manufacturers are consolidating most or all of these functions on 10 GbE, and improving performance at the same time. Rather than four bonded Gigabit Ethernet ports, these functions need less than half the capacity of a single 10 GbE connection. Next, server management moves to the same IP/Ethernet connection, and capacity is left over for iSCSI to replace a Gigabit Ethernet or Fibre Channel port or two. Even cluster communication will be bundled over 10 GbE, whether a simple IP heartbeat or advanced DMA using RoCE. The average server then needs just two redundant 10 GbE ports rather than up to a dozen ports and cables.