1. 1© Copyright 2013 EMC Corporation. All rights reserved.
CONVERGED DATA CENTER:
FCoE, iSCSI AND THE FUTURE OF STORAGE NETWORKING
David L. Black, Ph.D.
Distinguished Engineer
Office of the CTO
2.
Roadmap Information Disclaimer
EMC makes no representation and undertakes no obligations with
regard to product planning information, anticipated product
characteristics, performance specifications, or anticipated release
dates (collectively, “Roadmap Information”).
Roadmap Information is provided by EMC as an accommodation to the
recipient solely for purposes of discussion and without intending to be
bound thereby.
Roadmap Information is EMC Restricted Confidential and is provided
under the terms, conditions and restrictions defined in the EMC Non-
Disclosure Agreement in place with your organization.
3.
Agenda
Network Convergence
Protocols & Standards
Server Virtualization
Solution Evolution
Conclusion
4.
10Gb Ethernet Converged Data Center
Maturation of 10 Gigabit Ethernet
– Single network simplifies mobility for virtualization/cloud deployments
10 Gigabit Ethernet simplifies infrastructure
– Reduces the number of cables and server adapters
– Lowers capital expenditures and administrative costs
– Reduces server power and cooling costs
– Blade servers and server virtualization drive consolidated bandwidth
FCoE and iSCSI both leverage this inflection point
[Diagram: a single 10 GbE wire carries both network (LAN) and storage (SAN) traffic]
5.
Un-Converged Rack Servers
• Servers connect to LAN, NAS
and iSCSI SAN with NICs
• Servers connect to FC SAN with
HBAs
• Some environments are still
Gigabit Ethernet
• Multiple server adapters,
higher power/cooling costs
[Diagram: rack-mount servers connect to the Ethernet LAN and the Ethernet iSCSI SAN via NICs, and to the Fibre Channel SAN via FC HBAs; storage sits on both SANs]
Note: NAS is part of the converged
approach. Everywhere that Ethernet is
used in this presentation, NAS can be
part of the unified storage solution
6.
Agenda
• Network Convergence
• Protocols & Standards
• Server Virtualization
• Solution Evolution
• Conclusion
7.
iSCSI Introduction
Transport storage (SCSI) over standard Ethernet
– Reliability through TCP
More flexible than FC due to IP routing
– Effectively reaches lower-tier servers than FC
Good performance
iSCSI has thrived
– Especially where server, storage, and
network admins are the same person
– Example: IaaS Clouds
▪ E.g., OpenStack
[Diagram: iSCSI protocol stack — SCSI over iSCSI over TCP over IP over the link layer, carried across an IP network]
8.
iSCSI Introduction (continued)
Standardized in 2004: IETF RFC 3720
– Stable: No major changes since 2004
– iSCSI Corrections and Clarifications: IETF RFC 5048 (2007)
– Now underway: consolidated spec, minor updates
iSCSI Session: One Initiator and one Target
– Multiple TCP connections allowed in a session
Important iSCSI additions to SCSI
– Immediate and unsolicited data to avoid round trip
– Login phase for connection setup
– Explicit logout for clean teardown
9.
iSCSI Read Example
[Sequence diagram:
Initiator → Target: SCSI Read Command
Target → Initiator: Data in PDU (×3)
Target → Initiator: Status
Initiator: Receive Data, Command Complete]
Optimization: Good status can be included with the last "Data in" PDU
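The target side of this exchange can be sketched in a few lines of Python. This is a hypothetical simulation of the PDU sequence, not a real iSCSI stack: the point is that GOOD status rides on the final Data-In PDU, so no separate SCSI Response PDU is needed.

```python
def target_read_response(chunks):
    """Build the target's reply to a SCSI Read as a list of Data-In PDUs.
    Status is piggybacked on the last PDU (the optimization on the slide),
    so the initiator sees Command Complete without an extra Response PDU."""
    pdus = []
    for i, chunk in enumerate(chunks):
        last = (i == len(chunks) - 1)
        pdus.append({"type": "Data-In",
                     "data": chunk,
                     "status": "GOOD" if last else None})
    return pdus

pdus = target_read_response([b"aaa", b"bbb", b"ccc"])
```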
10.
iSCSI Write Example
[Sequence diagram:
Initiator → Target: SCSI Write Command
Target → Initiator: Ready to Transmit
Initiator → Target: Data out PDUs (Target: Receive Data)
Target → Initiator: Ready to Transmit
Initiator → Target: Data out PDUs (Target: Receive Data)
Target → Initiator: Status
Initiator: Command Complete]
Optimization: Immediate and/or unsolicited data avoids a round trip
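The round-trip saving can be illustrated with a small Python sketch (hypothetical, not a real initiator): when unsolicited data is negotiated, up to a first-burst worth of data travels with the command instead of waiting for the first Ready to Transmit (R2T). The `first_burst` parameter stands in for iSCSI's FirstBurstLength.

```python
def write_sequence(total_len, first_burst):
    """Message flow for a SCSI Write with unsolicited data enabled:
    up to first_burst bytes ride with the command (no R2T round trip);
    any remainder is solicited by an R2T from the target."""
    seq = ["SCSI Write Command"]
    unsolicited = min(total_len, first_burst)
    if unsolicited:
        seq.append("Data-Out (unsolicited, %d bytes)" % unsolicited)
    if total_len > unsolicited:
        seq += ["R2T", "Data-Out (solicited, %d bytes)" % (total_len - unsolicited)]
    seq.append("SCSI Response (status)")
    return seq

small = write_sequence(4096, first_burst=65536)     # fits in one unsolicited burst
large = write_sequence(1 << 20, first_burst=65536)  # remainder waits for R2T
```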
11.
iSCSI Encapsulation
[Frame layout: Ethernet Header | IP | TCP | iSCSI | Data | CRC]
Ethernet: Provides physical network capability (Cat 6, MAC, etc.)
IP: Provides IP routing capability so packets can find their way through the network
TCP: Reliable data transport and delivery (TCP windows, ACKs, ordering, etc.); also demux (port numbers)
iSCSI: Delivery of iSCSI Protocol Data Unit (PDU) for SCSI functionality (initiator, target, data read/write, etc.)
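A quick way to see what this layering costs per frame: with nominal header sizes (IPv4 and TCP without options, the 48-byte iSCSI Basic Header Segment, no header/data digests), the data budget on a standard MTU works out as below. These are simplifying assumptions; options and digests shrink the budget further.

```python
# Nominal per-frame overhead for iSCSI over TCP/IP/Ethernet.
IP_HDR = 20        # IPv4 header, no options
TCP_HDR = 20       # TCP header, no options
ISCSI_BHS = 48     # iSCSI Basic Header Segment

def iscsi_payload_per_frame(mtu=1500):
    """SCSI data bytes that fit in one Ethernet frame. The MTU already
    excludes the Ethernet header and CRC, so only inner layers count."""
    return mtu - IP_HDR - TCP_HDR - ISCSI_BHS

budget = iscsi_payload_per_frame()      # standard MTU
jumbo = iscsi_payload_per_frame(9000)   # one reason jumbo frames help iSCSI
```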
12.
FCoE: Another Option for FC
FC: large and well managed installed base
– Leverage FC expertise / investment
– Other convergence options not incremental for existing FC
Data Center solution for I/O consolidation
Leverage Ethernet infrastructure and skill set
FCoE allows an Ethernet-based SAN to be introduced
into an FC-based Data Center
without breaking existing administrative tools and workflows
13.
FCoE Extends FC on a Single Network
[Diagram: a server's Converged Network Adapter runs the network driver and FC driver over one lossless Ethernet link to an FCoE switch, which connects onward to the Ethernet network and to the FC network and FC storage. The server sees storage traffic as FC; the SAN sees the host as FC.]
14.
FCoE Frames
FC frames encapsulated in Layer 2 Ethernet frames
– No TCP, Lossless Ethernet (DCB) required
– No IP routing
1:1 frame encapsulation
– FC frame never segmented across multiple Ethernet frames
Requires at least Mini Jumbo (2.5k) Ethernet frames
– Max FC payload size: 2180 bytes
– Max FCoE frame size: 2240 bytes
[Frame layout: Ethernet Header | FCoE Header | FC Frame (FC Header | FC Payload | CRC | EOF) | FCS]
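Because the encapsulation is 1:1 and FC frames are never segmented, every Ethernet link on the path must pass a max-size FCoE frame whole. Using the sizes quoted on this slide, a small check function makes the mini-jumbo requirement concrete:

```python
# Sizes taken from the slide: a max-size FCoE frame is 2240 bytes, so it
# needs at least mini-jumbo (2.5 KB) frames and never fits standard Ethernet.
MAX_FCOE_FRAME = 2240
MINI_JUMBO = 2500

def link_can_carry_fcoe(frame_limit):
    """FC frames are never segmented across Ethernet frames (1:1
    encapsulation), so the link must pass a max-size FCoE frame whole."""
    return frame_limit >= MAX_FCOE_FRAME

ok_mini_jumbo = link_can_carry_fcoe(MINI_JUMBO)
ok_standard = link_can_carry_fcoe(1518)   # standard Ethernet frame limit
```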
15.
FCoE Initialization
Ethernet is more than a cable
Native FC link: Optical fiber has 2 endpoints (simple)
– Discovery: Who’s at the other end?
– Liveness: Is the other end still there?
FCoE virtual link: Ethernet LAN or VLAN, 3+ endpoints possible
– Discovery: Choice of FCoE switches
– Liveness: FCoE virtual link may span multiple Ethernet links
▪ Single link liveness check isn't enough: where's the problem?
FCoE configuration: Do mini jumbo (or larger) frames work?
FIP: FCoE Initialization Protocol
– Discover endpoints, create and initialize virtual link with FCoE switch
– Mini jumbo frame support: Large frame is part of discovery
– Periodic LKA (Link Keep Alive) messages after initialization
16.
FCoE Switch Discovery
Step 1: FIP Solicitation
[Diagram: a server on the DCB Ethernet network multicasts a FIP Solicitation toward the FCoE/FC switches that bridge to the FC SAN]
Select FCoE VLAN first (pre-config or FIP)
Multicast Solicitation: Server can discover multiple switches
Solicitation identifies Server (FC WWN for FCoE CNA)
– CNA = Converged Network Adapter (FCoE analog of HBA)
– Switch chooses servers to respond to (default: respond to all)
17.
FCoE Switch Discovery
Step 2: FIP Advertisement
[Diagram: two FCoE/FC switches respond to the server with FIP Advertisements, one at priority 1 and one at priority 25]
Advertisement identifies switch
– Multiple switches may respond, advertisement includes priority
– Server chooses FCoE switch by priority (smallest number wins)
Advertisement padded to max FC frame size: Test mini jumbo frames
18.
FCoE Switch Discovery
Step 3: FIP-based FC Login
[Diagram: the server sends a FIP-encapsulated FLOGI to the selected priority-1 switch, which responds with FLOGI ACC]
FIP encapsulated FC Login
– Server sends FC Fabric Login (FLOGI) to selected switch
– Switch responds with FC FLOGI ACC (accept) with assigned FCID
All further traffic is standard FC frames (FCoE encapsulated)
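The selection rule from step 2 — smallest priority number wins — reduces to a one-liner. A hypothetical sketch (the `switch`/`priority` dict keys are illustrative, not protocol fields):

```python
def select_fcf(advertisements):
    """Choose the FCoE switch (FCF) from received FIP Advertisements:
    the smallest priority number wins."""
    return min(advertisements, key=lambda adv: adv["priority"])

chosen = select_fcf([
    {"switch": "A", "priority": 25},
    {"switch": "B", "priority": 1},
])
# The server then sends its FIP-encapsulated FLOGI to the chosen switch.
```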
19.
FCoE and Ethernet Standards – Two Complementary Standards

Fibre Channel over Ethernet (FCoE)
– Developed by INCITS T11 Fibre Channel Interfaces Technical Committee
– Enables FC traffic over Ethernet
– FC-BB-5 standard: June 2009
– FC-BB-6 standard in process to expand solution

Data Center Bridging (DCB) Ethernet
– Developed by IEEE Data Center Bridging (DCB) Task Group
– DCB Ethernet drops frames as rarely as FC
– Technology commonly referred to as Lossless Ethernet
– DCB: Required for FCoE
– DCB: Enhancement for iSCSI

Participants: Brocade, Cisco, EMC, Emulex, HP, IBM, Intel, QLogic, others
20.
FC-BB-6 – New FCoE features (soon)
Direct connection of servers to storage
– PT2PT [point to point]: Single cable
– VN2VN [VN_Port to VN_Port]: Single Ethernet LAN or VLAN
Better support for FC fabric scaling (switch count)
– Distribute logical FC fabric switch functionality
– Enables every DCB Ethernet switch to participate in FCoE
More on FCoE from E-Lab:
SAN Technology Update & Best Practice Deep Dive
for FC, FCoE & iSCSI SANs
Mon 1:00pm and Thu 11:30am
21.
Lossless Ethernet (DCB)
IEEE 802.1 Data Center Bridging (DCB)
Link level enhancements:
1. Enhanced Transmission Selection (ETS)
2. Priority Flow Control (PFC)
3. Data Center Bridging Exchange Protocol (DCBX)
DCB: network portion that must be lossless
– Generally limited to data center distances per link
– Can use long-distance optics, but uncommon in practice
DCB Ethernet provides the Lossless Infrastructure
that enables FCoE. DCB also improves iSCSI.
22.
Enhanced Transmission Selection
DCB part 1: IEEE 802.1Qaz [ETS]
Management framework for link bandwidth
Priority configuration and bandwidth reservation
– HPC & storage traffic: higher priority, reserved bandwidth
Low latency for
high priority traffic
– Unused bandwidth
available to other
traffic
[Chart: offered traffic vs. link utilization on a 10 Gig link at times t1–t3 for three classes (3G/s HPC traffic, 3G/s storage traffic, LAN traffic); when a class offers less than its reservation, the other classes absorb the spare bandwidth]
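The sharing behavior in the chart can be sketched as a toy allocator (a simplification of ETS, not the 802.1Qaz algorithm itself): each class is guaranteed its reserved share, and bandwidth a class leaves idle is handed to classes that still have demand.

```python
def ets_allocate(link_bw, classes):
    """Toy ETS-style allocator. classes maps name -> (reserved_share,
    offered_load). Each class first gets up to its reservation; unused
    bandwidth is then given to classes with remaining demand."""
    alloc = {name: min(offered, share * link_bw)
             for name, (share, offered) in classes.items()}
    spare = link_bw - sum(alloc.values())
    for name, (_share, offered) in classes.items():
        extra = min(offered - alloc[name], spare)
        alloc[name] += extra
        spare -= extra
    return alloc

# 10G link: HPC and storage reserved 30% each, LAN 40%.
# HPC is idle below its reservation, so LAN absorbs the slack.
alloc = ets_allocate(10, {"hpc": (0.3, 1), "storage": (0.3, 3), "lan": (0.4, 6)})
```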
23.
PAUSE and Priority Flow Control
DCB part 2: IEEE 802.1Qbb & 802.3bd [PFC]
PAUSE can produce lossless Ethernet behavior
– Original 802.3x PAUSE affects all traffic: rarely implemented
New PAUSE: Priority Flow Control (PFC)
– Pause per priority level
– No effect on traffic at other priority levels
– Creates lossless virtual lanes
Per priority flow control
– Enable/disable per priority
▪ Only for traffic that needs it
– Better link management
than 8-way PAUSE
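A toy model makes the "lossless virtual lanes" idea concrete (a hypothetical simulation, not a switch implementation): each priority has its own queue, a PFC PAUSE stops exactly one priority, and paused frames wait in their queue instead of being dropped.

```python
from collections import deque

class PfcPort:
    """Toy transmit port with one queue per priority. A PFC frame from
    the peer pauses a single priority; the others keep flowing."""
    def __init__(self, num_priorities=8):
        self.queues = [deque() for _ in range(num_priorities)]
        self.paused = [False] * num_priorities

    def enqueue(self, prio, frame):
        self.queues[prio].append(frame)

    def receive_pfc(self, prio, pause):
        self.paused[prio] = pause   # per-priority, unlike 802.3x PAUSE

    def transmit(self):
        """Drain unpaused queues; paused traffic stays buffered (lossless)."""
        sent = []
        for prio, q in enumerate(self.queues):
            while q and not self.paused[prio]:
                sent.append((prio, q.popleft()))
        return sent

port = PfcPort()
port.receive_pfc(3, True)          # peer paused priority 3 (e.g., storage)
port.enqueue(3, "storage-frame")
port.enqueue(0, "lan-frame")
sent = port.transmit()             # only the LAN frame goes out
```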
24.
Data Center Bridging Capability eXchange
DCB part 3: IEEE 802.1Qaz (again) [DCBX]
• Ethernet Link configuration (single link)
– Extends Link Layer Discovery Protocol (LLDP)
• Reliably enables lossless behavior (DCB)
– e.g., exchange Ethernet priority values for FCoE and FIP
• FCoE virtual links should not be instantiated without DCBX
[Diagram: DCBX runs on the link between the server and the FCoE/FC switch at the edge of the DCB Ethernet network, which bridges to the FC SAN]
25.
Ethernet Spanning Trees
Reminder: FCoE is Ethernet only, no IP routing
– Ethernet (layer 2) is bridged, not routed
Spanning Tree Protocol (STP): Prevents (deadly) loops
– Elects a Root Switch, disables redundant paths
Causes problems in large layer 2 networks (for both FCoE and iSCSI)
– No network multipathing
– Inefficient link utilization
[Diagram: switch topology with a Root Switch elected by STP; redundant paths are disabled]
26.
Ethernet Multipathing: SPBM and TRILL
SPBM = Shortest Path Bridging-MAC [IEEE 802.1aq]
TRILL = Transparent Interconnection of Lots of Links [IETF RFC 6325]
Layer 2 routing for Ethernet switches
– Encapsulate Ethernet traffic, use IS-IS routing protocol
– Block Spanning Tree Protocol
Transparent to NICs
All links active
27.
Ethernet Cabling Choices

Copper (10GBase-T) / RJ-45 connector, Cat6 or Cat6a cable
– 1Gb: Most existing cabling (lots of Cat 5e)
– 10Gb: Some products on market, but not for FCoE yet. For 10Gb Ethernet: Cat6 55m, Cat6a 100m
– 40/100Gb: Not supported (insufficient bandwidth)

Optical (multimode) / LC connector, OM2 (orange), OM3 (aqua), OM4 (aqua) cable
– 1Gb: Rare for Ethernet; standard for FC
– 10Gb: Most backbone deployments are optical. OM2 82m, OM3 300m, OM4 380m
– 40/100Gb: Primarily optical (QSFP+ connector). OM3 100m, OM4 125m

Copper / SFP+ DA (direct attach), Twinax cable
– 1Gb: N/A
– 10Gb: Low power, 5-10m distance (rack solution)
– 40/100Gb: Different short-distance option (QSFP+)
– Note: think of these cables as part of the connected equipment
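The reach limits above are easy to get wrong when planning a run, so a small lookup helper can sanity-check a design. The table below is transcribed from this slide's distances (the function and dict names are illustrative):

```python
# Max distances (meters) from the cabling table above.
MAX_REACH_M = {
    ("Cat6", "10Gb"): 55, ("Cat6a", "10Gb"): 100,
    ("OM2", "10Gb"): 82, ("OM3", "10Gb"): 300, ("OM4", "10Gb"): 380,
    ("OM3", "40/100Gb"): 100, ("OM4", "40/100Gb"): 125,
    ("Twinax", "10Gb"): 10,   # SFP+ direct attach, rack distances
}

def reach_ok(cable, speed, distance_m):
    """True if the planned run is within the table's maximum reach;
    False for too-long runs or unlisted cable/speed combinations."""
    limit = MAX_REACH_M.get((cable, speed))
    return limit is not None and distance_m <= limit

run1 = reach_ok("Cat6", "10Gb", 70)    # 70 m over Cat6: too far
run2 = reach_ok("Cat6a", "10Gb", 70)   # Cat6a reaches 100 m
```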
28.
Agenda
• Network Convergence
• Protocols & Standards
• Server Virtualization
• Solution Evolution
• Conclusion
29.
Live Virtual Machine Migration
[Diagram: a VM moves between servers while its C: disk stays on shared storage]
Shared storage: Move VM without moving stored data
Storage networking: Enabler of shared storage
30.
Storage Drivers and Server Virtualization
[Diagram: VMs expose vNICs and vSCSI devices to the hypervisor; the virtual switch and hypervisor drivers map LAN and iSCSI traffic to the NICs and FC traffic to the FC HBAs]
*iSCSI initiator can also be in the VM
31.
Storage Drivers and Server Virtualization
[Diagram: the same mapping with the FC HBAs replaced by CNAs; FCoE traffic follows the FC path while LAN and iSCSI traffic use the NICs]
*iSCSI initiator can also be in the VM
32.
Software FCoE and Server Virtualization
[Diagram: software FCoE initiators inside VMs would send traffic through the virtual switch to the NICs]
FCoE software in VMs would send traffic through the virtual switch to the NICs
Virtual switches in ESX/ESXi (including Cisco Nexus 1000v) and Hyper-V are not lossless (no DCB)
Not a problem for iSCSI, NFS or CIFS in a VM
33.
Software FCoE and Server Virtualization
[Diagram: software FCoE running in the hypervisor over a NIC, or in a CNA, below the virtual switch]
FCoE works in the hypervisor or CNA (just not in a VM)
34.
Agenda
• Network Convergence
• Protocols & Standards
• Server Virtualization
• Solution Evolution
• Conclusion
35.
FCoE and iSCSI

FCoE
– FC expertise / install base
– FC management
– Layer 2 Ethernet
– Use FCIP for distance

Ethernet (common to both)
– Leverage Ethernet/IP expertise
– 10 Gigabit Ethernet
– Lossless Ethernet

iSCSI
– No FC expertise needed
– Supports distance connectivity (L3 IP routing)
– Strong virtualization affinity
36.
iSCSI Deployment
10 Gb iSCSI solutions
– Traditional Ethernet (recover from
dropped packets using TCP) or
– Lossless Ethernet (DCB)
environment (TCP still used)
iSCSI: natively routable (IP)
– Can use VLAN(s) to isolate traffic
iSCSI: smaller scale solutions
– Larger SANs: usually FC
(e.g., for robustness, management)
[Diagram: servers and storage on an Ethernet iSCSI SAN]
37.
Top Tips From E-Lab
Some iSCSI Best Practices
Use a separate VLAN for iSCSI
– Direct visibility and control of iSCSI traffic
Avoid mixing 1Gb/sec and 10Gb/sec Ethernet
– Congestion can occur where speed changes
DCB (lossless) Ethernet helps iSCSI, but not a panacea
– E.g., Still shouldn’t mix 1Gb/sec and 10Gb/sec Ethernet
38.
Convergence: Server Phase
iSCSI and FCoE via Converged Switch
Converged Switch at top of rack or end of row
– Tightly controlled solution
– Server 10 GE adapters: CNA or NIC
[Diagram: rack-mount servers with 10 GbE CNAs connect to a Converged Switch, which attaches via Ethernet to the LAN and iSCSI storage, and via FC to the Fibre Channel SAN and storage]
39.
Convergence: Network Phase
Converged Switches move out of rack
FCoE: Multi-hop, may be end-to-end
Maintains existing SAN/network management
Overlapping admin domains may compel cultural adjustments
[Diagram: rack-mount servers with 10 GbE CNAs connect through Converged Switches to an Ethernet network (IP, FCoE), reaching the Ethernet LAN and, via FC/FCoE, the Fibre Channel SAN and storage]
40.
Convergence at 10 Gigabit Ethernet
Two paths to a Converged Network
– iSCSI: purely Ethernet
– FCoE: mix FC and Ethernet (or all Ethernet)
▪ FC compatibility now and in the future
Choose (one or both) based on scalability, management, and skill set
[Diagram: rack-mount servers with 10 GbE CNAs connect through a Converged Switch to the Ethernet LAN and the FC & FCoE SAN; iSCSI/FCoE storage attaches via Ethernet, and other storage via Fibre Channel & FCoE attach]
41.
EMC and Ethernet
TechBooks (Google: “FCoE Tech Book”)
– Fibre Channel over Ethernet (FCoE) and Data Center Bridging
(DCB) Concepts and Protocols
– Fibre Channel over Ethernet (FCoE) and Data Center Bridging
(DCB) Case Studies
▪ Includes blade server case studies
Services
– Design, Implementation, Performance and Security offerings
for networks
Products
– Ethernet equipment for creating Converged Network
Environments
42.
Agenda
• Network Convergence
• Protocols & Standards
• Server Virtualization
• Solution Evolution
• Conclusion
43.
Conclusion
Converged data centers can be built using 10Gb Ethernet
– FCoE: Compatible with continued use of FC under common
management
– iSCSI solutions work well for all IP/Ethernet networks
10 Gigabit Ethernet solutions are maturing
– Standards enable integration into existing data centers
– FCoE and iSCSI will follow Ethernet roadmap to 40 and 100
Gigabits/sec
FC will follow FC roadmap to 16GFC and 32GFC speeds
Achieving a converged network: Consider technology,
processes/best practices and organizational dynamics
44.
Network Virtualization: Background
Each application (or VM) sees its own virtual network, independent of the physical network
[Diagram: VLANs A, B, and C as separate virtual networks carried over a VLAN trunk between two switches]
Benefits of Virtual Networks
– Common network links with access control properties of separate links
– Manage virtual networks instead of physical networks
– Virtual SANs provide similar benefits for storage area networks
45.
Network Virtualization: Overview
Network version of DOS’s 640k memory limit
– Ethernet VLAN tag has only 12 bits!
– Not enough for large data centers!
– Run any workload, anywhere? Configure every VLAN, everywhere!
New approach: IP-based encapsulation
– Encapsulate Ethernet frames in IP
– Use IP routing (e.g., OSPF ECMP) to run network
Hypervisor virtual switches encapsulate traffic to/from VMs
– Changes network provisioning from VLAN practices (e.g., more responsive)
Example encapsulations: VXLAN, NVGRE
– Initially: No DCB Ethernet support (so, no FCoE, initially)
– iSCSI, NFS, CIFS all work fine (all use TCP)
Storage implications: Birds of a Feather session – Wednesday, 1:00pm
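The "640k limit" comparison is just bit arithmetic, and it's worth seeing the numbers. The 802.1Q VLAN ID is 12 bits (with two reserved values), while VXLAN's VNI and NVGRE's VSID are 24 bits:

```python
VLAN_ID_BITS = 12     # 802.1Q VLAN tag
VXLAN_VNI_BITS = 24   # VXLAN Network Identifier (NVGRE's VSID is also 24 bits)

vlan_ids = 2 ** VLAN_ID_BITS - 2   # IDs 0 and 4095 are reserved in 802.1Q
vxlan_ids = 2 ** VXLAN_VNI_BITS    # ~16 million virtual segments
```

4094 usable VLANs is not enough to give every tenant or workload its own segment in a large data center; 16 million is.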
46.
Related Session and Resources
Breakout Session: SAN Technology Update & Best Practice Deep Dive
for FC, FCoE & iSCSI SANs
– Monday, 1:00pm and Thursday, 11:30am
Birds of a Feather: Storage Networking and Network Virtualization
– Wednesday, 1:00pm
FCoE in the EMC Support Matrix
– http://elabnavigator.emc.com
EMC FCoE Introduction whitepaper
– http://www.emc.com/collateral/hardware/white-papers/h5916-intro-to-fcoe-wp.pdf
FCoE Blog by Erik Smith (E-Lab)
– http://www.brasstacksblog.typepad.com