Fibre Channel over Ethernet (FCoE), iSCSI and the Converged Data Center
1. Fibre Channel over Ethernet (FCoE), iSCSI and the Converged Data Center. Stuart Miniman, Technologist, Office of the CTO, EMC Corporation
6. 10 Gigabit Ethernet Cabling
- Optical (multimode) / LC: OM2 (orange) 82m or OM3 (aqua) 300m. Pro: provides extended distance for backbone or core. Con: optical is historically only 1% of overall Ethernet ports.
- Twinax Copper / SFP+ DA (direct attach): 5m. Pro: passive = very low power. Con: distance limited to the rack.
- Copper (10GBase-T) / RJ-45: Cat6 55m or Cat6a 100m. Pro: keeps existing cabling layout (>1B ports) and patch panel infrastructure. Con: high power and cost today.
*10GBase-CX is another copper option, not expected in most 10 Gb Ethernet deployments.
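As a worked example of the table, a toy selector can map a required run length to a viable option; the function name, preference order, and error message are illustrative, not a standard:

```python
# Toy selector applying the distance limits from the cabling table above.
def pick_10gbe_cabling(distance_m: float) -> str:
    if distance_m <= 5:
        return "Twinax copper / SFP+ DA (passive, very low power)"
    if distance_m <= 55:
        return "10GBase-T over Cat6 (or Cat6a)"
    if distance_m <= 100:
        return "10GBase-T over Cat6a"
    if distance_m <= 300:
        return "Multimode optical / LC (OM3; OM2 only reaches 82m)"
    raise ValueError("beyond the distances in the table above")

print(pick_10gbe_cabling(3))    # in-rack: Twinax
print(pick_10gbe_cabling(90))   # structured cabling: Cat6a
```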
7. Why iSCSI? The initiator and target run identical stacks across an IP network; each layer contributes:
- Link: provides physical network capability (Layer 2 Ethernet, Cat 5, MAC, etc.)
- IPsec: IP-layer security
- IP: provides routing (Layer 3) capability so packets can find their way through the network
- TCP: reliable data transport and delivery (TCP windows, ACKs, ordering, etc.)
- iSCSI: delivery of iSCSI Protocol Data Units (PDUs) for SCSI functionality (initiator, target, data read/write, etc.)
- SCSI: the storage command set carried end to end
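To make the layering concrete, here is a minimal sketch assuming the 48-byte Basic Header Segment layout of RFC 3720; the LUN encoding, flag bits, and transfer length are simplified examples, not a working initiator. A SCSI READ(10) command is wrapped in an iSCSI SCSI Command PDU; TCP, IP, and the link layer beneath it come from an ordinary socket:

```python
import struct

def scsi_read10_cdb(lba: int, blocks: int) -> bytes:
    """SCSI READ(10) CDB: opcode 0x28, 4-byte LBA, 2-byte transfer length."""
    return struct.pack(">BBIBHB", 0x28, 0, lba, 0, blocks, 0)

def iscsi_scsi_command_pdu(cdb: bytes, itt: int, cmd_sn: int) -> bytes:
    """Simplified 48-byte Basic Header Segment for a SCSI Command PDU
    (opcode 0x01). AHS, digests, and the data segment are omitted, and
    the LUN field is left zeroed for brevity."""
    flags = 0x80 | 0x40          # Final + Read (illustrative, for a READ)
    return struct.pack(
        ">BBHB3s8sIIII16s",
        0x01, flags, 0,          # opcode, flags, reserved
        0, b"\x00\x00\x00",      # TotalAHSLength, DataSegmentLength
        bytes(8),                # LUN (zeroed here)
        itt,                     # Initiator Task Tag
        512 * 8,                 # Expected Data Transfer Length (example)
        cmd_sn, 0,               # CmdSN, ExpStatSN
        cdb.ljust(16, b"\x00"),  # CDB padded to 16 bytes
    )

pdu = iscsi_scsi_command_pdu(scsi_read10_cdb(lba=0, blocks=8), itt=1, cmd_sn=1)
assert len(pdu) == 48
# A real initiator writes this PDU to a TCP socket (well-known iSCSI
# port 3260); TCP/IP and Ethernet supply the layers beneath it, exactly
# as in the stack above.
```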
9. Protocol Comparisons (SCSI application on top, encapsulation layers in the middle, base transport at the bottom):
- iSCSI: SCSI / iSCSI / TCP / IP / Ethernet. Block storage with TCP/IP.
- iFCP: SCSI / FC / iFCP / TCP / IP / Ethernet. FC replication over IP.
- FCIP: SCSI / FC / FCIP / TCP / IP / Ethernet. FC replication over IP.
- SRP: SCSI / SRP / InfiniBand. New transport and drivers; low latency, high bandwidth.
- FCoE: SCSI / FC / FCoE / Ethernet. FC over Ethernet (no TCP/IP); keeps FC management.
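The same comparison can be written down as data; this small sketch simply mirrors the labels in the list above and makes the "layers between SCSI and the wire" point easy to eyeball:

```python
# Each protocol's stack from the SCSI application down to the base
# transport, as in the comparison above.
stacks = {
    "iSCSI": ["SCSI app", "iSCSI", "TCP", "IP", "Ethernet"],
    "iFCP":  ["SCSI app", "FC", "iFCP", "TCP", "IP", "Ethernet"],
    "FCIP":  ["SCSI app", "FC", "FCIP", "TCP", "IP", "Ethernet"],
    "SRP":   ["SCSI app", "SRP", "InfiniBand"],
    "FCoE":  ["SCSI app", "FC", "FCoE", "Ethernet"],  # no TCP/IP
}
for name, layers in stacks.items():
    print(f"{name:6s} {' / '.join(layers)}")
```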
10. FCoE Extends FC on a Single Network. A Converged Network Switch joins the Ethernet network to the existing FC network and FC storage over lossless Ethernet links; the SAN sees the host as FC. Two options on the server side:
- Converged Network Adapter (CNA): exposes a network driver and an FC driver, so the server sees storage traffic as FC.
- Standard 10G NIC with an FCoE software stack.
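A sketch of the encapsulation itself may help. EtherType 0x8906 is the registered FCoE type, but the version field and SOF/EOF framing are collapsed here, and the MAC addresses are made-up placeholders:

```python
import struct

FCOE_ETHERTYPE = 0x8906  # registered EtherType for FCoE

def fcoe_frame(dst_mac: bytes, src_mac: bytes, fc_frame: bytes) -> bytes:
    """Carry a complete FC frame directly in Ethernet: no TCP/IP at all.
    Real FCoE also inserts a version field, reserved bytes, and SOF/EOF
    delimiters around the FC frame; collapsed here for brevity."""
    return dst_mac + src_mac + struct.pack(">H", FCOE_ETHERTYPE) + fc_frame

# A full-size FC frame is ~2148 bytes, larger than the standard 1500-byte
# Ethernet payload, which is why FCoE needs "baby jumbo" frames and the
# lossless Ethernet links shown on the slide.
frame = fcoe_frame(bytes(6), bytes(6), fc_frame=bytes(2148))
print(len(frame), "bytes on the wire")
```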
11. Time To Widespread Adoption
- Ethernet: defined '73, standard '83, widespread '93
- Fibre Channel: defined '85, standard '94, widespread '03
- iSCSI: defined '00, standard '02, widespread '08
- 10 Gigabit Ethernet: standard '02, widespread '09?
- FCoE: defined '07, standard '09?, widespread ??
16. SCSI-to-iSCSI Mapping. A SCSI command and its data are carried as a sequence of iSCSI PDUs, each with a header and a data segment. Because the PDUs travel in a TCP byte stream, iSCSI PDU alignment with IP packets varies: a PDU may span several packets, and one packet may carry pieces of two PDUs.
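A few lines of Python show why the alignment varies; the PDU sizes and the 1460-byte segment size (a typical Ethernet TCP MSS) are illustrative assumptions:

```python
# iSCSI PDUs become one TCP byte stream; TCP then segments that stream
# with no regard for PDU boundaries.
pdu_sizes = [48 + 4096, 48, 48 + 8192]          # header + data segment
stream = b"".join(bytes(n) for n in pdu_sizes)  # what TCP actually sees
MSS = 1460                                      # typical Ethernet TCP MSS
segments = [stream[i:i + MSS] for i in range(0, len(stream), MSS)]
print(f"{len(pdu_sizes)} PDUs ride in {len(segments)} IP packets")
# One PDU spans several packets, and a packet can end mid-PDU, so the
# receiver reassembles using the DataSegmentLength in each PDU header.
```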
18. Storage Drivers and Virtualization. Each VM sees a vNIC and a vSCSI device. In the hypervisor, LAN traffic flows from the vNICs through the vSwitch to the NICs, while FC and iSCSI traffic flow through the VMkernel storage stack to the FC HBAs (*the iSCSI initiator can also be in the VM). With CNAs, LAN and storage traffic share the adapter, and FCoE follows the FC path.
19. Storage Drivers and Virtualization (continued). A software FCoE stack could sit in the hypervisor under the VMkernel storage stack, or in the guest; FCoE software in the guest would send traffic through the vSwitch to the vNIC, but the vSwitch is not lossless, so there is currently no FCoE access by that path.
Questions for the audience: How many have FC today? How many have iSCSI today? How many have 10GbE today?
WHY we need it? WHAT it is? HOW it will happen!
Let's look at the rack server environment today. In today's environment:
- Servers connect to the LAN with multiple network interface cards (NICs)
- Servers require HBAs to connect to an existing Fibre Channel SAN
- Many of today's data centers are running at 1 Gigabit Ethernet speed
- Multiple server adapters drive up power and cooling costs
Buy the "best available" grade of cable to help future-proof: OM4 is under discussion (buy OM3 today), and Cat6a is available today. For 2008 FCoE deployments, Twinax helps meet time-to-market needs with a low-power, low-cost solution until 10GBase-T has more time to mature. Very few customers have deployed 10Gb Ethernet; some may question adding a new cabling infrastructure, but this should be limited to early "Phase 1" testing and deployments.
FC has a large and well-managed install base (EMC has shipped over 4M Connectrix switch ports; a $10B install base according to Brocade), and there is a drive for I/O consolidation. iSCSI's strength was using the existing network infrastructure and skill set to offer lower-cost connectivity for consolidation, but management of large iSCSI environments is cumbersome. FCoE is not rip-and-replace or a new management paradigm, but an incremental transition.
Replication first (iFCP is EOL). InfiniBand is a special transport layer with new management, used in HPC. FCoE is Layer 2 only, so for replication either use FCIP under a shared FC management umbrella (FC + FCIP + FCoE), or use native Ethernet replication from Symmetrix/CLARiiON.
Same host drivers, FC management of storage. With motion: bubble analogy of an FC frame riding inside a jumbo Ethernet frame and "popped" out at the Converged Network Switch. Discuss CNA vs. NIC (NIC on 2nd click) for deployment options: the CNA has hardware offload, the NIC does not.
FCoE is getting traction because it maintains the channel-like performance of FC with low protocol overhead. Note that FCoE is data-center only (Layer 2, no routing). It uses encapsulation (not bridging) to attach to FC.
Discussion after the arrows are gone: a CNA (not a NIC) for the 1st generation, since FC traffic needs to reach the hypervisor as FC, not pass through the vSwitch (not lossless) or run in the guest.
Also useful for iSCSI
FCoE is NOT the death of iSCSI (just like iSCSI was not the death of FC); there is plenty of room for both, and both add to the overall SAN market, meeting the needs of the growing virtualization market (which wants more networked devices). This is also the place to say reassuring things about EMC as the "arms supplier" in the protocol wars: whatever protocol fits your (the customer's) needs, we (EMC) are the experts.