Converged Data Center: FCoE, iSCSI and the Future of Storage Networking

(EMC World 2012) This session explores the opportunities and challenges of using a single network to support both storage and networking. The Fibre Channel over Ethernet (FCoE) and iSCSI (SCSI over TCP/IP) protocols offer two approaches for supporting storage over Ethernet. Standards, technologies and deployment scenarios for both protocols are covered, along with the future of storage networking technology.

Transcript

  • 1. CONVERGED DATA CENTER: FCoE, iSCSI AND THE FUTURE OF STORAGE NETWORKING. David L. Black, Ph.D., Distinguished Engineer. © Copyright 2012 EMC Corporation. All rights reserved.
  • 2. Agenda • Network Convergence • Protocols & Standards • Server Virtualization • Solution Evolution • Conclusion
  • 3. 10Gb Ethernet Converged Data Center. Maturation of 10 Gigabit Ethernet – Replace 1Gb adapters with fewer (start with 2) 10Gb adapters – A single network simplifies mobility for virtualization/cloud deployments – Single wire for network and storage (SAN and LAN). 10 Gigabit Ethernet simplifies infrastructure – Reduces the number of cables and server adapters – Lowers capital expenditures and administrative costs – Reduces server power and cooling costs – Blade servers and server virtualization drive consolidated bandwidth. FCoE and iSCSI both leverage this inflection point.
  • 4. Conventional Rack Servers. Servers connect to the LAN, NAS and iSCSI SAN with Ethernet NICs; servers connect to the FC SAN with Fibre Channel HBAs. Many environments today are still Gigabit Ethernet – Multiple server adapters mean higher power/cooling costs – Separate storage network (incl. iSCSI). Note: NAS is part of the converged approach; everywhere that Ethernet or 10Gb Ethernet is used in this presentation, NAS can be considered part of the unified storage solution.
  • 5. Agenda • Network Convergence • Protocols & Standards • Server Virtualization • Solution Evolution • Conclusion
  • 6. iSCSI Introduction. Transport storage (SCSI) over standard Ethernet – Reliability through TCP – More flexible than FC due to IP routing – Good performance. iSCSI has thrived – Especially where the server, storage and network administrators are the same person. (Protocol stack: SCSI / iSCSI / TCP / IP / Link, over an IP network.)
  • 7. iSCSI Introduction (continued). Standardized in 2004: IETF RFC 3720 – Stable: no major changes since 2004 – iSCSI Corrections and Clarifications: IETF RFC 5048 (2007) – Now underway: consolidated spec, minor updates. iSCSI Session: one initiator and one target – Multiple TCP connections allowed in a session. Important iSCSI additions to SCSI – Immediate and unsolicited data to avoid a round trip – Login phase for connection setup – Explicit logout for clean teardown.
  • 8. iSCSI Read Example. The initiator sends a SCSI Read Command; the target returns Data-In PDUs; the initiator receives the data, then status, and the command completes. Optimization: good status can be included with the last Data-In PDU.
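The read exchange on slide 8 can be sketched as a toy simulation (hypothetical function and PDU names, not a real iSCSI stack) showing how piggybacking status on the final Data-In PDU saves one message:

```python
# Toy model of the iSCSI read exchange (hypothetical names, not RFC 3720
# code). The target streams Data-In PDUs; good status may be piggybacked
# on the last one instead of being sent as a separate Response PDU.

def target_read(data, pdu_size, piggyback_status=True):
    """Yield (pdu_type, payload) tuples the target sends for a READ."""
    chunks = [data[i:i + pdu_size] for i in range(0, len(data), pdu_size)]
    for i, chunk in enumerate(chunks):
        last = (i == len(chunks) - 1)
        if last and piggyback_status:
            # Optimization from the slide: status rides on the final Data-In
            yield ("DATA_IN+STATUS", chunk)
        else:
            yield ("DATA_IN", chunk)
    if not piggyback_status:
        yield ("RESPONSE", b"")  # separate status PDU: one extra message

pdus = list(target_read(b"x" * 10, pdu_size=4))
print(len(pdus))          # 3 PDUs: the data fits in 3 chunks
print(pdus[-1][0])        # DATA_IN+STATUS
```

Without the optimization the same read costs four PDUs instead of three.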
  • 9. iSCSI Write Example. The initiator sends a SCSI Write Command; the target responds Ready to Transmit (R2T); the initiator sends Data-Out PDUs; the target receives the data and returns status; the command completes. Optimization: immediate and/or unsolicited data avoids a round trip.
  • 10. iSCSI Encapsulation. On the wire: Ethernet Header | IP | TCP | iSCSI | Data | CRC. iSCSI: delivery of the iSCSI Protocol Data Unit (PDU) for SCSI functionality (initiator, target, data read/write, etc.). TCP: reliable data transport and delivery (windows, ACKs, ordering, etc.), plus demux within a node (port numbers). IP: routing capability so packets can find their way through the network. Ethernet: physical network capability (Cat 6, MAC, etc.).
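The layering on slide 10 can be illustrated with a small size calculation (the header sizes below are typical minimum values, used only for illustration; a real stack adds options, checksums, etc.):

```python
# Illustrative sketch of the slide-10 layering: iSCSI PDU -> TCP -> IP ->
# Ethernet. Header sizes are typical minimums, not authoritative.

ETH_HDR, IP_HDR, TCP_HDR, ISCSI_BHS = 14, 20, 20, 48  # bytes

def encapsulate(scsi_data):
    """Return the cumulative size at each layer wrapping the SCSI data."""
    iscsi_pdu = ISCSI_BHS + len(scsi_data)  # iSCSI Basic Header Segment + data
    tcp_seg   = TCP_HDR + iscsi_pdu         # reliable delivery, port demux
    ip_pkt    = IP_HDR + tcp_seg            # routable through the IP network
    eth_frame = ETH_HDR + ip_pkt + 4        # + 4-byte Ethernet CRC
    return {"iscsi": iscsi_pdu, "tcp": tcp_seg, "ip": ip_pkt, "ethernet": eth_frame}

sizes = encapsulate(b"\x00" * 512)
print(sizes)  # each layer adds its header around the layer above it
```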
  • 11. FCoE: Why a New Option for FC? FC has a large and well managed installed base – Leverage FC expertise and investment – Other convergence options are not incremental for existing FC. A Data Center solution for I/O consolidation. Leverages Ethernet infrastructure and skill set. FCoE allows an Ethernet-based SAN to be introduced into an FC-based Data Center without breaking existing administrative tools and workflows.
  • 12. FCoE Extends FC on a Single Network. The server sees storage traffic as FC (Ethernet driver plus FC driver over a Converged Network Adapter); the SAN sees the host as FC. Lossless Ethernet carries the traffic to an FCoE switch, which connects to both the Ethernet network and the FC network.
  • 13. FCoE Frames. FC frames encapsulated in Layer 2 Ethernet frames – No TCP; Lossless Ethernet required – No IP routing. 1:1 frame encapsulation – An FC frame is never segmented across multiple Ethernet frames. Requires at least mini jumbo (2.5k) Ethernet frames – Max FC payload size: 2180 bytes – Max FCoE frame size: 2240 bytes. (Frame layout: Ethernet Header | FCoE Header | FC Header | FC Payload | CRC | EOF | FCS.)
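The mini jumbo requirement on slide 13 follows directly from the 1:1 encapsulation: since an FC frame is never split, the whole FCoE frame must fit the Ethernet frame-size limit. A quick check using the slide's numbers:

```python
# Frame-size arithmetic from slide 13: one FC frame rides in exactly one
# Ethernet frame, so standard 1500-byte Ethernet is too small for FCoE.

MAX_FC_PAYLOAD = 2180   # max FC payload carried by FCoE (per slide)
MAX_FCOE_FRAME = 2240   # resulting max FCoE Ethernet frame size (per slide)
STANDARD_FRAME = 1500   # standard Ethernet payload limit
MINI_JUMBO     = 2500   # "mini jumbo" size FCoE requires

def fits(fcoe_frame, frame_limit):
    """True if the FCoE frame fits within the link's frame-size limit."""
    return fcoe_frame <= frame_limit

print(fits(MAX_FCOE_FRAME, STANDARD_FRAME))  # False: standard Ethernet too small
print(fits(MAX_FCOE_FRAME, MINI_JUMBO))      # True: mini jumbo is sufficient
```

This is also why FIP tests large-frame support during discovery (slide 16): a link that silently drops mini jumbo frames would break every full-size FC frame.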
  • 14. FCoE Initialization: Ethernet is more than a cable. Native FC link: optical fiber has 2 endpoints (simple) – Discovery: who's at the other end? – Liveness: is the other end still there? FCoE virtual link: Ethernet LAN or VLAN, 3+ endpoints possible – Discovery: a choice of FCoE switches – Liveness: an FCoE virtual link may span multiple Ethernet links ▪ A single link liveness check isn't enough; where's the problem? – Configuration: do mini jumbo (or larger) frames work? FIP: FCoE Initialization Protocol – Discovers endpoints, creates and initializes the virtual link with the FCoE switch – Mini jumbo frame support: a large frame is part of discovery – Periodic LKA (Link Keep Alive) messages after initialization.
  • 15. FCoE Switch Discovery, Step 1: FIP Solicitation. Select the FCoE VLAN first (pre-configured or via FIP). Multicast solicitation: the server can discover multiple switches. The solicitation identifies the server (FC WWN for an FCoE CNA) – CNA = Converged Network Adapter (FCoE analog of an HBA) – The switch chooses which servers to respond to (default: respond to all).
  • 16. FCoE Switch Discovery, Step 2: FIP Advertisement. The advertisement identifies the switch – Multiple switches may respond (e.g., priority 1 and priority 25); each advertisement includes a priority – The server chooses an FCoE switch by priority (smallest number wins). Advertisements are padded to max FC frame size to test mini jumbo frames.
  • 17. FCoE Switch Discovery, Step 3: FIP-based FC Login. FIP-encapsulated FC login – The server sends FC Fabric Login (FLOGI) to the selected switch – The switch responds with FC FLOGI ACC (accept) carrying the assigned FCID. All further traffic is standard FC frames, FCoE encapsulated.
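The three FIP steps above can be sketched end to end (hypothetical data model; real FIP is a frame-level protocol, not Python dicts): solicit, collect advertisements, pick the switch with the smallest priority number, then send FLOGI to it.

```python
# Sketch of FIP discovery from slides 15-17 (hypothetical model).
# Step 1: multicast solicitation reaches multiple switches.
# Step 2: each responding switch advertises itself with a priority.
# Step 3: the server logs in (FLOGI) to the switch it selected.

def select_fcf(advertisements):
    """Server-side selection: smallest advertised priority wins."""
    return min(advertisements, key=lambda adv: adv["priority"])

# Advertisements received after the solicitation (values from the slides)
ads = [
    {"switch": "fcf-a", "priority": 1},
    {"switch": "fcf-b", "priority": 25},
]
chosen = select_fcf(ads)
print(chosen["switch"])   # fcf-a: priority 1 beats priority 25
flogi_target = chosen["switch"]  # FLOGI goes to the selected switch only
```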
  • 18. FCoE and Ethernet Standards – two complementary standards efforts. Fibre Channel over Ethernet (FCoE): developed by the International Committee for Information Technology Standards (INCITS) T11 Fibre Channel Interfaces Technical Committee; enables FC traffic over Ethernet; FC-BB-5 standard: June 2009; FC-BB-6 standard in process to expand the solution; DCB is required for FCoE. Data Center Bridging (DCB) Ethernet: developed by the IEEE Data Center Bridging (DCB) Task Group; DCB Ethernet drops frames as rarely as FC; the technology is commonly referred to as Lossless Ethernet; IEEE standards: final approval to March 2011; DCB is an enhancement for iSCSI. Key participants on the standards committees: Brocade, Cisco, EMC, Emulex, HP, IBM, Intel, QLogic, others.
  • 19. FC-BB-6 – New FCoE features. Direct connection of servers to storage – PT2PT [point to point]: single cable – VN2VN [VN_Port to VN_Port]: single Ethernet LAN or VLAN. Better support for FC fabric scaling (switch count) – Distributes logical FC fabric switch functionality – Enables every DCB Ethernet switch to participate in FCoE. For more, see Erik Smith's (EMC E-Lab) presentation: FCoE – Topologies, Protocol, and Limitations (Tues 10:00am and Wed 4:15pm).
  • 20. Lossless Ethernet (DCB). IEEE 802.1 Data Center Bridging (DCB) link-level enhancements: 1. Enhanced Transmission Selection (ETS) 2. Priority Flow Control (PFC) 3. Data Center Bridging Exchange Protocol (DCBX). DCB: the network portion that must be lossless – Generally limited to data center distances per link – Can use long-distance optics, but uncommon in practice. DCB Ethernet provides the lossless infrastructure that enables FCoE. DCB also improves iSCSI.
  • 21. Enhanced Transmission Selection – DCB part 1: IEEE 802.1Qaz [ETS]. A management framework for link bandwidth: priority configuration and bandwidth reservation – HPC and storage traffic have higher priority and reserve bandwidth – Low latency for high priority traffic – Unused bandwidth is available to other traffic. (Slide chart: offered vs. realized utilization of HPC, storage and LAN traffic on a 10 GE link over intervals t1–t3.)
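The ETS behavior described above can be sketched as a simplified allocator (hypothetical model; real ETS is hardware scheduling, and this single-pass redistribution ignores weighted sharing among multiple hungry classes):

```python
# Simplified model of ETS from slide 21: each traffic class is guaranteed
# its reservation, and bandwidth a class does not use is made available
# to classes with excess demand.

def ets_allocate(link, classes):
    """classes: name -> (reserved, offered), in Gb/s. Returns realized Gb/s."""
    # Each class first gets what it offers, capped at its reservation
    alloc = {n: min(res, off) for n, (res, off) in classes.items()}
    spare = link - sum(alloc.values())
    # Hand spare capacity to classes still wanting more (single pass)
    for n, (res, off) in classes.items():
        extra = min(off - alloc[n], spare)
        if extra > 0:
            alloc[n] += extra
            spare -= extra
    return alloc

# HPC offers 3G, storage offers 3G, LAN offers 6G on a 10G link:
# LAN gets its 3G reservation plus the 1G the link has spare.
print(ets_allocate(10, {"HPC": (3, 3), "Storage": (3, 3), "LAN": (3, 6)}))
```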
  • 22. PAUSE and Priority Flow Control – DCB part 2: IEEE 802.1Qbb & 802.3bd [PFC]. PAUSE can produce lossless Ethernet behavior – The original 802.3x PAUSE stops all traffic and was rarely implemented. New PAUSE: Priority Flow Control (PFC) – Pause per priority level – No effect on traffic at other priority levels – Creates lossless virtual lanes. Per-priority flow control – Enable/disable per priority ▪ Only for traffic that needs it – Better link management than 8-way PAUSE.
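The contrast between legacy PAUSE and PFC can be shown in a few lines (a toy model with hypothetical function names; priority 3 below is only an example of a priority carrying storage traffic):

```python
# Toy contrast of 802.3x PAUSE vs. PFC from slide 22: legacy PAUSE stops
# the whole link, while PFC pauses only the congested priority level.

def legacy_pause(priorities):
    """802.3x behaviour: PAUSE stops traffic at every priority."""
    return {p: "paused" for p in priorities}

def pfc_pause(priorities, congested):
    """PFC behaviour: only congested priorities stop; others keep flowing."""
    return {p: ("paused" if p in congested else "flowing") for p in priorities}

prios = range(8)                         # Ethernet has 8 priority levels
print(pfc_pause(prios, congested={3}))   # one lossless lane pauses, 7 flow
```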
  • 23. Data Center Bridging Capability eXchange – DCB part 3: IEEE 802.1Qaz (again) [DCBX]. Ethernet link configuration (single link) – Extends the Link Layer Discovery Protocol (LLDP). Reliably enables lossless behavior (DCB) – e.g., exchanges Ethernet priority values for FCoE and FIP. FCoE virtual links should not be instantiated without DCBX.
  • 24. Ethernet Spanning Trees and FCoE. Reminder: FCoE is Ethernet only, no IP routing – Ethernet (layer 2) is bridged, not routed. Spanning Tree Protocol (STP) prevents (deadly) loops – Elects a root switch and disables redundant paths. STP causes problems in large layer 2 networks – No network multipathing – Inefficient link utilization.
  • 25. TRILL – Transparent Interconnection of Lots of Links. Layer 2 routing for Ethernet switches [IP is layer 3] – IS-IS routing protocol for inter-switch Ethernet traffic – Blocks the Spanning Tree Protocol, so all links stay active. TRILL encapsulates Ethernet frames – Not used with end systems (NICs) – NICs use link teaming/aggregation.
  • 26. Ethernet Cabling Choices:
    – Copper 10GBase-T (Cat6 or Cat6a, RJ-45): 1Gb – most existing cabling (lots of Cat 5e); 10Gb – some products on the market, but not for FCoE yet (Cat6 55m, Cat6a 100m); 40/100Gb – not supported (insufficient bandwidth).
    – Optical multimode (OM2 orange, OM3/OM4 aqua, LC connector): 1Gb – rare for Ethernet; 10Gb – most backbone deployments are optical, standard for FC (OM2 82m, OM3 300m, OM4 380m); 40/100Gb – expect a shift to optical (OM3 100m, OM4 125m).
    – Copper Twinax (SFP+ direct attach): 1Gb – N/A; 10Gb – low power, 5–10m distance (rack solution); 40/100Gb – a different short-distance option (QSFP); think of short-distance as part of connected equipment.
  • 27. Agenda • Network Convergence • Protocols & Standards • Server Virtualization • Solution Evolution • Conclusion
  • 28. Live Virtual Machine Migration. Shared storage: move a VM without moving its stored data. Storage networking: the enabler of shared storage.
  • 29. Storage Drivers and Server Virtualization. A VM's vNIC and vSCSI devices map through the hypervisor's virtual switch and storage driver to physical NICs and FC HBAs – LAN and iSCSI traffic exit via the NICs; FC traffic exits via the FC HBA. *The iSCSI initiator can also be in the VM (private storage).
  • 30. Storage Drivers and Server Virtualization (with CNAs). The same paths apply when the NICs are replaced by CNAs – LAN and iSCSI traffic exit via the CNA's NIC function; FCoE follows the FC path through the CNA. *The iSCSI initiator can also be in the VM (private storage).
  • 31. Software FCoE and Server Virtualization. FCoE software in a VM would send traffic through the virtual switch to the NICs, but virtual switches in ESX/ESXi (including the Cisco Nexus 1000v) and Hyper-V are not lossless (no DCB). This is not a problem for iSCSI, NFS or CIFS in a VM.
  • 32. Software FCoE and Server Virtualization (continued). SW FCoE works in the hypervisor or in a CNA (just not in a VM) – FCoE software in VMs would send traffic through the virtual switch to the NICs.
  • 33. Agenda • Network Convergence • Protocols & Standards • Server Virtualization • Solution Evolution • Conclusion
  • 34. FCoE and iSCSI:
    – Both: Ethernet; 10 Gigabit Ethernet connectivity.
    – FCoE: FC expertise / installed base; FC management; leverages Layer 2 Ethernet; Lossless Ethernet; use FCIP for distance.
    – iSCSI: no FC expertise needed; Ethernet/IP expertise; supports distance (L3 IP routing); strong virtualization affinity.
  • 35. iSCSI Deployment. 10 Gb iSCSI solutions are available – Traditional Ethernet (recover from dropped packets using TCP) or – Lossless Ethernet (DCB) environment (TCP still used). iSCSI is natively routable (IP) – Can use VLAN(s) to isolate traffic. iSCSI solutions: smaller scale than FC – Larger SANs: usually FC.
  • 36. Convergence: Server Phase. Converged Network Switch at top of rack or end of row – Tightly controlled solution – Server 10 GE adapters: CNA or NIC. iSCSI and FCoE reach the Ethernet LAN and Fibre Channel SAN via the Converged Network Switch; existing servers keep 1 Gb NICs and FC HBAs while new servers use 10 GbE CNAs.
  • 37. Convergence: Network Phase. Converged Network Switches move out of the rack. Maintains existing SAN and network management. Overlapping admin domains may compel cultural adjustments. The Ethernet network carries IP and FCoE between the Converged Network Switch and the Fibre Channel SAN; servers attach with 10 GbE CNAs.
  • 38. Convergence at 10 Gigabit Ethernet. Two paths to a Converged Network – iSCSI: purely Ethernet – FCoE: mix FC and Ethernet (or all Ethernet) ▪ FC compatibility now and in the future. Choose (one or both) based on scalability, management, and skill set. Storage attaches via Fibre Channel and FCoE; servers use 10 GbE CNAs on an FC & FCoE SAN.
  • 39. EMC and Ethernet. TechBooks (Google: "FCoE Tech Book") – Fibre Channel over Ethernet (FCoE) and Data Center Bridging (DCB) Concepts and Protocols – Fibre Channel over Ethernet (FCoE) and Data Center Bridging (DCB) Case Studies ▪ Includes blade server case studies. Services – Design, Implementation, Performance and Security offerings for networks. Products – Ethernet equipment for creating Converged Network Environments.
  • 40. Agenda • Network Convergence • Protocols & Standards • Server Virtualization • Solution Evolution • Conclusion
  • 41. Summary. Converged data centers can be built using 10Gb Ethernet – Continued use of FC and adoption of FCoE can be flexible due to shared management – iSCSI solutions work well for all-IP/Ethernet networks. 10 Gigabit Ethernet solutions are maturing – Active industry participation is creating standards that allow solutions to integrate into existing data centers – FCoE and iSCSI will follow the Ethernet roadmap to 40 and 100 Gigabits/sec. Achieving a converged network: consider technology, processes/best practices, and organizational dynamics.
  • 42. Network Virtualization: Background. Benefits of virtual networks: common network links with the access control properties of separate links (e.g., VLANs A, B and C sharing one trunk through a switch); manage virtual networks instead of physical networks; Virtual SANs provide similar benefits for storage area networks. Each application (or VM) sees its own virtual network, independent of the physical network.
  • 43. Network Virtualization: What's new? The network version of DOS's 640k memory limit – The Ethernet VLAN tag has only 12 bits! Not enough for large data centers – Run any workload, anywhere? – Configure every VLAN, everywhere! New approach: IP-based encapsulation – Encapsulate Ethernet frames in IP – Use IP routing (e.g., OSPF ECMP) to run the network – Hypervisor virtual switches can encapsulate for VMs. Example encapsulations: VXLAN, NVGRE – Initially no DCB Ethernet support (so no FCoE, initially) – iSCSI, NFS, CIFS all work fine (all use TCP). Watch this space! – E.g., the IETF nvo3 (Network Virtualization Overlays) Working Group.
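The "640k limit" analogy on slide 43 is just bit arithmetic: a 12-bit VLAN tag caps the number of segments at about 4k, while VXLAN's 24-bit segment identifier raises it to about 16 million.

```python
# The slide-43 arithmetic: segment ID width determines how many isolated
# virtual networks a data center can carry.

VLAN_ID_BITS = 12    # 802.1Q VLAN tag identifier width
VXLAN_VNI_BITS = 24  # VXLAN Network Identifier (VNI) width

print(2 ** VLAN_ID_BITS)    # 4096 possible VLANs
print(2 ** VXLAN_VNI_BITS)  # 16777216 possible VXLAN segments
```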
  • 44. Related Sessions and Resources. FCoE – Topologies, Protocol, and Limitations – Tuesday 10:00a & Wednesday 4:15p. Birds of a Feather: The Future of Storage Networking – Tuesday 1:30p. Brocade: Storage Networking For the Virtual Enterprise – Tuesday 4:15p. FCoE in the EMC Support Matrix – http://elabnavigator.emc.com. EMC FCoE Videos: search for "FCoE" on YouTube. EMC FCoE Introduction whitepaper – http://www.emc.com/collateral/hardware/white-papers/h5916-intro-to-fcoe-wp.pdf. FCoE Blog by Erik Smith (E-Lab) – http://www.brasstacksblog.typepad.com
  • 45. Q&A
  • 46. Provide Feedback & Win! 125 attendees will receive $100 iTunes gift cards. To enter the raffle, simply complete: – 5 session surveys – The conference survey. Download the EMC World Conference App to learn more: emcworld.com/app