FCoE ─ Topologies, Protocols, and Limitations (2011 EMC World) ─ Presentation Transcript

    • FCoE ─ Topologies, Protocols, and Limitations. Erik Smith. © Copyright 2011 EMC Corporation. All rights reserved.
    • Objectives
      –  Understand the topologies that are currently supported and what is currently being developed
      –  Understand the essential components of FCoE, including DCBX, PFC, FIP, and DCB
      –  Understand the known limitations when using FCoE
      © Copyright 2011 EMC Corporation. All rights reserved. 2
    • EMC Product Roadmap & Futures Disclaimer
      –  EMC makes no representation and undertakes no obligations with regard to product planning information, anticipated product characteristics, performance specifications, or anticipated release dates (collectively, “Roadmap Information”).
      –  Roadmap Information is provided by EMC as an accommodation to the recipient solely for purposes of discussion and without intending to be bound thereby.
      © Copyright 2011 EMC Corporation. All rights reserved. 3
    • Agenda
      –  Topology overview: Fibre Channel (FC); Fibre Channel over Ethernet (FCoE) overview
      –  Enabling technologies: DCBX, PFC, ETS, FCoE Initialization Protocol (FIP)
      –  ENode-to-ENode / Direct connect configuration
      –  Zero-hop
      –  Multi-hop
      –  Real world use cases
      © Copyright 2011 EMC Corporation. All rights reserved. 4
    • Topology Overview – Fibre Channel (slides 5-8) [diagram, built up over four slides: a host with HBA A and HBA B connects through two independent fabrics, SAN A (Switch 1A, Switch 2A) and SAN B (Switch 1B, Switch 2B), to storage; the host and storage ports are labeled N_Port, the switch ports facing them F_Port, and the inter-switch links E_Port]. © Copyright 2011 EMC Corporation. All rights reserved.
    • Topology Overview – FCoE (slides 9-15) [diagram, built up over seven slides: the same dual-fabric design with CNAs in place of HBAs and FCFs in place of FC switches; the port roles become VN_Port (CNA and storage), VF_Port (FCF facing the ENode), and VE_Port (FCF to FCF); additional NICs, a LAN, and multiple VLANs (VLAN 1 through VLAN n) ride the same converged links; the final slides show the FCF role merged into the Ethernet switches, with a vPC domain between the switch pair]. © Copyright 2011 EMC Corporation. All rights reserved.
    • Agenda (repeated). © Copyright 2011 EMC Corporation. All rights reserved. 16
    • DCBX ─ Data Center Bridging Capability eXchange Protocol [diagram: after LINK UP, the ENode and the lossless Ethernet switch exchange DCBX frames, each carrying priority map TLVs for both FIP and FCoE, followed by FIP VLAN Discovery]
      –  An extension of the Link Layer Discovery Protocol (LLDP)
      –  Allows for the exchange of priority map values for both FCoE and the FCoE Initialization Protocol (FIP)
      –  Enables lossless behavior
      © Copyright 2011 EMC Corporation. All rights reserved. 17
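To make the bullets above concrete, here is a minimal sketch (editor's illustration, not from the deck) of the information DCBX carries: an application-to-priority map plus a per-priority lossless bitmap. The Ethertypes 0x8906 (FCoE) and 0x8914 (FIP) are standard; the priority value 3, the class layout, and the agreement check are illustrative assumptions, since real DCBX uses IEEE 802.1Qaz LLDP TLV encodings.

```python
# Editor's sketch: the information DCBX conveys, modeled as plain data.
# Ethertypes 0x8906 (FCoE) and 0x8914 (FIP) are standard; everything else
# (priority 3, the dataclass layout, the readiness check) is illustrative.
from dataclasses import dataclass, field

FCOE_ETHERTYPE = 0x8906
FIP_ETHERTYPE = 0x8914

@dataclass
class DcbxAdvertisement:
    # application -> 802.1p priority (the "priority map" TLVs on the slide)
    app_priority: dict = field(default_factory=dict)
    # which priorities the peer will treat as lossless (PFC-enabled)
    pfc_enabled: set = field(default_factory=set)

def fcoe_ready(local: DcbxAdvertisement, peer: DcbxAdvertisement) -> bool:
    """True if both ends map FCoE/FIP to the same priority and that priority
    is lossless on both ends, which is what the CNA waits for before
    starting FIP VLAN discovery."""
    for ethertype in (FCOE_ETHERTYPE, FIP_ETHERTYPE):
        lp = local.app_priority.get(ethertype)
        pp = peer.app_priority.get(ethertype)
        if lp is None or lp != pp:
            return False
        if lp not in local.pfc_enabled or lp not in peer.pfc_enabled:
            return False
    return True

# Example: both sides agree on priority 3 for FCoE and FIP.
switch = DcbxAdvertisement({FCOE_ETHERTYPE: 3, FIP_ETHERTYPE: 3}, {3})
cna = DcbxAdvertisement({FCOE_ETHERTYPE: 3, FIP_ETHERTYPE: 3}, {3})
assert fcoe_ready(cna, switch)
```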
    • PFC ─ Priority Flow Control [diagram: transmit queues on one end of an Ethernet link paired with receive buffers on the other, divided into eight virtual lanes]
      –  Necessary since FC requires a lossless environment to operate properly
      –  Without PFC, normal periodic SAN congestion will cause frames to drop and the entire exchange will need to be retransmitted
      –  Recovery can take up to 60 seconds, depending on which frame in the exchange is lost
      © Copyright 2011 EMC Corporation. All rights reserved. 18
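A hedged sketch of what per-priority pausing looks like on the wire: per my reading of IEEE 802.1Qbb (not something stated on the slide), a PFC frame is a MAC Control frame (Ethertype 0x8808, opcode 0x0101) carrying a priority-enable vector and eight per-priority pause timers, so only the FCoE priority is paused while the other lanes keep flowing.

```python
# Editor's sketch of a PFC (per-priority PAUSE) frame, per my reading of
# IEEE 802.1Qbb; treat the exact constants and layout as assumptions.
import struct

PFC_DA = bytes.fromhex("0180c2000001")   # MAC Control multicast address
MAC_CONTROL_ETHERTYPE = 0x8808
PFC_OPCODE = 0x0101

def build_pfc_frame(src_mac: bytes, pause_quanta: dict) -> bytes:
    """pause_quanta maps priority (0-7) -> pause time in 512-bit-time quanta.
    Priorities not listed are left running (enable bit clear, timer 0).
    Minimum-frame padding and FCS are omitted."""
    enable_vector = 0
    timers = [0] * 8
    for prio, quanta in pause_quanta.items():
        enable_vector |= 1 << prio
        timers[prio] = quanta
    payload = struct.pack("!HH8H", PFC_OPCODE, enable_vector, *timers)
    header = PFC_DA + src_mac + struct.pack("!H", MAC_CONTROL_ETHERTYPE)
    return header + payload

# Pause only priority 3 (the FCoE class) for the maximum time, leaving the
# LAN priorities untouched; this is what keeps FCoE lossless without
# stalling the whole 10GbE link.
frame = build_pfc_frame(bytes.fromhex("0025b501a2c3"), {3: 0xFFFF})
print(len(frame), frame.hex())
```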
    • ETS – Enhanced Transmission Selection [figure: a 10GbE link carrying iSCSI (priority 1), FCoE (priority 3), and LAN (priority 5) traffic classes; at times t1, t2, and t3 the desired traffic, the ETS configuration, and the actual throughput per class are compared, showing each class receiving at least its configured share while bandwidth left idle by one class is used by the others]
      –  ETS information is exchanged in DCBX
      –  Ensures storage traffic has a guaranteed minimum amount of bandwidth
      –  Utilized by all FCoE devices that are currently supported by EMC
      © Copyright 2011 EMC Corporation. All rights reserved. 19
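The figure's point can be reproduced with a deliberately simplified model (editor's sketch, not the 802.1Qaz scheduler itself, and the example numbers below are mine rather than the slide's): each class is guaranteed its configured fraction of the link, and whatever a class does not use is lent to the classes that want more.

```python
# Editor's sketch: a simplified ETS allocator. Real 802.1Qaz schedulers work
# per-frame in hardware; this only illustrates "guaranteed minimum, unused
# bandwidth is shared", which is the behavior the slide's figure shows.
def ets_allocate(link_gbps: float, shares: dict, offered: dict) -> dict:
    """shares: class -> guaranteed fraction of the link (sums to 1.0)
    offered: class -> offered load in Gbps
    returns: class -> allocated throughput in Gbps"""
    alloc = {}
    spare = 0.0
    for cls, frac in shares.items():
        guarantee = link_gbps * frac
        alloc[cls] = min(offered.get(cls, 0.0), guarantee)
        spare += guarantee - alloc[cls]
    # Hand the spare bandwidth to classes that still want more,
    # proportionally to their configured shares.
    while spare > 1e-9:
        hungry = {c: shares[c] for c in shares
                  if offered.get(c, 0.0) - alloc[c] > 1e-9}
        if not hungry:
            break
        total = sum(hungry.values())
        given = 0.0
        for cls, frac in hungry.items():
            extra = min(spare * frac / total, offered[cls] - alloc[cls])
            alloc[cls] += extra
            given += extra
        spare -= given
    return alloc

# Illustrative numbers only: with 30/30/40 shares, an idle LAN class lets
# FCoE burst past its 3 Gbps guarantee, but FCoE can never be squeezed below it.
print(ets_allocate(10.0, {"iSCSI": 0.3, "FCoE": 0.3, "LAN": 0.4},
                   {"iSCSI": 3.0, "FCoE": 5.0, "LAN": 2.0}))
```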
    • FIP ─ FCoE Initialization Protocol
      FIP bridges the gap between the expectations of the FC protocol and the reality of an FCoE topology:
      –  The ENode can discover who to log in with
      –  LKA (Link Keep Alive) and FIP CVL (Clear Virtual Links) allow for logout from the fabric should the logical link be lost
      –  Implicit security (a man-in-the-middle attack is difficult), provided that FIP snooping and dynamic ACLs are implemented
      For an in-depth description of the FIP protocol, visit www.brasstacksblog.typepad.com
      © Copyright 2011 EMC Corporation. All rights reserved. 20
    • Agenda (repeated). © Copyright 2011 EMC Corporation. All rights reserved. 21
    • ENode-to-ENode – Direct Connect [diagrams: CNA-to-CNA connections between server and storage, both cabled directly and through a DCB-capable Ethernet switch]
      Important information:
      –  Not supported today
      –  FC-BB-5 requires an FCF to be present
      –  FC-BB-6 removes this requirement and allows for the PT2PT / VN2VN protocol to be used
      © Copyright 2011 EMC Corporation. All rights reserved. 22
    • Agenda (repeated). © Copyright 2011 EMC Corporation. All rights reserved. 23
    • Zero-hop Topology ─ Available Today [diagram: server and storage CNAs connect directly to a pair of FCFs, which uplink over native FC to the existing fabrics]
      –  Features: SAN A / SAN B (optional)
      –  Benefits: easier to manage; SAN congestion is less likely
      –  Physical topologies supported: Top of Rack; End of Row / Middle of Row; Out of Row
      –  Products available: Brocade 8000, DCX, VCS/VDX (coming soon); Cisco 7000, 5000, 5000 / 2000
      EMC makes no representation and undertakes no obligations with regard to product planning information, anticipated product characteristics, performance specifications, or anticipated release dates (collectively, “Roadmap Information”). Roadmap Information is provided by EMC as an accommodation to the recipient solely for purposes of discussion and without intending to be bound thereby.
      © Copyright 2011 EMC Corporation. All rights reserved. 24
    • Zero-hop ─ Top of Rack (ToR) (slides 25-35) [diagram, built up across eleven slides: each rack of servers gets a pair of ToR FCFs (FCF nA / FCF nB); multiple racks then connect to a pair of end-of-row Ethernet switches, with storage attached either at the ToR FCFs or out of row, and the A and B sides forming SAN A and SAN B]
      –  Physical connectivity options: Twinax, optical fiber
      –  Scale: up to 96 ports (native FCoE); up to 50 racks (per SAN)
      –  Storage connectivity: ToR, OoR
      © Copyright 2011 EMC Corporation. All rights reserved.
    • Zero-hop ─ End of Row (EoR) / Middle of Row (MoR) [diagram: the servers in each rack connect directly to a pair of end-of-row FCFs (FCF A / FCF B), which also attach the storage and the SAN A / SAN B uplinks]
      –  Scale: up to 512 ports (native FCoE); up to 512 servers
      –  Physical connectivity options: Twinax (<=10 m), optical fiber
      –  Storage connectivity: EoR, OoR
      © Copyright 2011 EMC Corporation. All rights reserved. 36
    • Zero-hop ─ Out of Row (OoR) [diagram: the servers in each rack connect across the row to a pair of out-of-row FCFs (FCF A / FCF B), which also attach the storage and the SAN A / SAN B uplinks]
      –  Scale: up to 512 ports (native FCoE); up to 512 servers
      –  Physical connectivity options: Twinax (<=10 m), optical fiber
      –  Storage connectivity: EoR, OoR
      –  Cost comparison to FC: X % +- (Elias)
      © Copyright 2011 EMC Corporation. All rights reserved. 37
    • Zero-hop ─ Out of Row (OoR) [diagram: multiple rows of racks (Row 1 through Row n) connect through an existing data center patch panel to FCF A / FCF B and the attached storage on SAN A / SAN B]. © Copyright 2011 EMC Corporation. All rights reserved. 38
    • Agenda (repeated). © Copyright 2011 EMC Corporation. All rights reserved. 39
    • Multi-hop Topology ─ Available Today [diagram: server and storage CNAs connect over FCoE to access FCFs, which connect over ISLs to core switches and the existing FC SAN]
      –  Features: SAN A / SAN B (optional)
      –  Benefits: increased scalability
      –  Physical topologies supported: Top of Rack; End of Row / Middle of Row; Out of Row
      –  Products available: Brocade DCX (FC ISLs only), 8000 (FC ISLs only), VDX (coming soon); Cisco MDS, 7000, 5000
      EMC makes no representation and undertakes no obligations with regard to product planning information, anticipated product characteristics, performance specifications, or anticipated release dates (collectively, “Roadmap Information”). Roadmap Information is provided by EMC as an accommodation to the recipient solely for purposes of discussion and without intending to be bound thereby.
      © Copyright 2011 EMC Corporation. All rights reserved. 40
    • Multi-hop Topology ─ Available Today (slides 41-42) [same diagram and feature/product list as the previous slide, with the FCF-to-FCF links labeled first as VE_Ports and then as ISLs]. © Copyright 2011 EMC Corporation. All rights reserved.
    • Multi-hop (Network Admin POV) (slides 43-48) [diagram, built up across six slides: a distribution layer of L2/L3 Ethernet switches (EoR/OoR) above an access layer of L2/L3 Ethernet switches (ToR/EoR/OoR); servers and FCoE storage attach at the access layer in each row (Row 1 through Row n); a hardware callout lists 8 x N7K-F132XP-15 line cards and 2 x N7K-SUP1 supervisors]. © Copyright 2011 EMC Corporation. All rights reserved.
    • Multi-hop (Storage Admin POV) [diagram: the same rows seen as storage fabrics; the access-layer switches appear as FCFs grouped into SAN A and SAN B, with servers and FCoE storage logging in through them in each row (Row 1 through Row n)]. © Copyright 2011 EMC Corporation. All rights reserved. 49
    • Agenda (repeated). © Copyright 2011 EMC Corporation. All rights reserved. 50
    • Case Study Overview
      Each of the topologies will:
      –  Assume a server-to-storage ratio of 10 server ports to one storage port
      –  Provide the number of server ports available
      –  Provide the oversubscription ratio
      –  Use Cisco products
      © Copyright 2011 EMC Corporation. All rights reserved. 51
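The case studies that follow each report a server port count, an oversubscription ratio, and a server-to-storage port ratio. A quick sketch of the arithmetic behind those figures (editor's illustration; only the 10:1 design assumption comes from the slide, and the function names and the example counts are mine):

```python
# Editor's sketch of the ratios reported in the case studies below.
from math import gcd

def ratio(a: int, b: int) -> str:
    g = gcd(a, b)
    return f"{a // g}:{b // g}"

def case_study(server_ports: int, fabric_facing_ports: int,
               storage_ports: int) -> None:
    # Oversubscription: edge bandwidth contending for core/ISL bandwidth,
    # assuming all ports run at the same speed.
    print("oversubscription      ", ratio(server_ports, fabric_facing_ports))
    # The deck assumes 10 server ports per storage port.
    print("server : storage ports", ratio(server_ports, storage_ports))

# Hypothetical counts chosen to hit the deck's design targets:
# 320 server ports sharing 80 fabric-facing ports, with 32 storage ports.
case_study(320, 80, 32)   # -> 4:1 and 10:1
```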
    • Case Study 1 ─ MDS Reference Architecture (slides 52-57) [diagram, built up across six slides]
      –  21 racks of 16 servers each = 336 servers, cabled with 672 OM3 (30 m) multimode optical fibers (336 to each fabric)
      –  Dual fabrics: MDS 9513 A and MDS 9513 B; each MDS has (7) 48-port line cards = 336 server-facing ports at 4:1 oversubscription, plus (4) 24-port line cards = 96 ports for storage at 1:1 with proper layout
      –  66 FC storage ports on EMC storage arrays (33 per fabric); server-to-storage port ratio = 10:1
      © Copyright 2011 EMC Corporation. All rights reserved.
    • Case Study 2 ─ Nexus 7000 (slides 58-59) [diagram]
      –  24 racks of 16 servers each = 384 servers, cabled with 768 OM3 (30 m) multimode optical fibers (384 to each fabric)
      –  Dual fabrics: Nexus 7018 A and Nexus 7018 B; each Nexus 7018 has (14) 32-port 10G “F1” modules = 448 ports at 32:23 oversubscription, (4) spare slots, and (25) spare ports for VE_Ports
      –  39 storage ports per fabric; server-to-storage port ratio = 10:1
      © Copyright 2011 EMC Corporation. All rights reserved.
    • Case Study 3 ─ Nexus 5596 / 2232 (slides 60-68) [diagram, built up across nine slides]
      –  30 racks (15 A/B pairs) of 16 servers each = 480 servers, attached to top-of-rack Nexus 2232 fabric extenders (2232 1A/1B through 15A/15B) over FCoE Twinax
      –  Each 2232 uplinks with 8 FET ports to the Nexus 5596 layer (240 OM3 (30 m) multimode optical fibers in total); 480 servers at 4:1 oversubscription
      –  Four Nexus 5596 switches (1A/2A on fabric A, 1B/2B on fabric B) carry 96 FCoE storage ports (24 per switch, FCoE optical) plus 4 LAN uplinks each (10GbE optical)
      –  Server-to-storage port ratio = 10:1
      © Copyright 2011 EMC Corporation. All rights reserved.
    • Case study summary
      –  It is possible to create a fully converged FCoE-based solution for about half of what it costs to deploy a similar topology based on MDS and 6509 (considering port count only)
      –  When you evaluate each topology on a per connected server port basis in terms of dollars per MB/s and Watts per MB/s, a fully converged topology costs less and saves power
      –  Check this out for yourselves using the Cisco “End-to-End Unified Fabric TCO Calculator”: www.cisco.com/go/fcoe
      © Copyright 2011 EMC Corporation. All rights reserved. 69
    • Available Now! EMC Fibre Channel over Ethernet (FCoE) TechBook (Updated!)
      –  Authored by Erik Smith, Mark Lippitt, Erik Paine, Mark Anthony De Castro, and Shreedhan Nikam
      –  Available for purchase at EMC World
      –  Various new E-Lab TechBooks, created from the former EMC Networking Topology Guide, are available at elabNavigator.EMC.com (Topology Resource Center tab)
      © Copyright 2011 EMC Corporation. All rights reserved. 70
    • Want More?
      –  Visit my blog: www.brasstacksblog.typepad.com
      –  Contact me via email: erik.smith@emc.com
      –  Drop by the EMC Select booth!
      © Copyright 2011 EMC Corporation. All rights reserved. 71
    • Summary and Questions
      –  Discussed topologies currently supported and what is being developed
      –  Explained the essential components of FCoE, including DCBX, PFC, FIP, and DCB Clouds
      –  Highlighted known limitations when using FCoE
      –  Explained three real world use cases
      © Copyright 2011 EMC Corporation. All rights reserved. 72
    • Related Sessions at EMC World (Session Title / Type)
      –  Converged Data Center: FCoE, iSCSI, and the Future of Storage Networking (Lecture)
      –  The Future of Storage Networking (Birds-of-a-Feather)
      –  EMC Select Pavilion (Solutions)
      © Copyright 2011 EMC Corporation. All rights reserved. 73
    • Related Technical Documentation (TechBooks)
      –  Fibre Channel over Ethernet (FCoE) - Data Center Bridging (DCB) Concepts and Protocols
      –  Fibre Channel SAN Topologies
      –  Networked Storage Concepts and Protocols
      © Copyright 2011 EMC Corporation. All rights reserved. 74
    • © Copyright 2011 EMC Corporation. All rights reserved. 75
    • THANK YOU. © Copyright 2011 EMC Corporation. All rights reserved. 76
    • Backup Slides. © Copyright 2011 EMC Corporation. All rights reserved. 77
    • FIP ─ FCoE Initialization Protocol (cont.) [diagram: a CNA (Universal MAC, ENode-MAC, VN_Port-MAC, WWNN) connects through a lossless Ethernet switch in a DCB cloud to two FCFs (FCF-MAC, Fabric WWNN = FABRIC-WWNN), one with FCF priority 1 and one with FCF priority 128]
      FIP allows an ENode to:
      –  Perform VLAN and FCF discovery
      –  Ensure that the Layer 2 network is capable of supporting mini-jumbo frames
      –  Perform fabric login
      –  Use LKA (Link Keep Alive) to maintain the virtual link with the FCF, and vice versa
      © Copyright 2011 EMC Corporation. All rights reserved. 78
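The backup slides that follow walk through the FIP login sequence one frame at a time. As an editor's summary sketch (the step names and print loop are mine; the ordering mirrors the slides, and only the two Ethertypes, 0x8914 for FIP and 0x8906 for FCoE, are standard constants), the ENode side looks roughly like this:

```python
# Editor's sketch of the ENode side of FIP login, in the order the backup
# slides walk through it.
FIP_ETHERTYPE = 0x8914   # used for every frame listed below
FCOE_ETHERTYPE = 0x8906  # used for data frames once login completes

FIP_LOGIN_SEQUENCE = [
    ("VLAN Request",      "multicast to ALL-FCF-MACs, sent untagged"),
    ("VLAN Notification", "each FCF replies with the FCoE VID to use"),
    ("Solicitation",      "multicast on the FCoE VLAN, advertises Max FCoE size"),
    ("Advertisement",     "each FCF replies, padded to Max FCoE size; CNA builds its FCF list"),
    ("FLOGI",             "unicast to the selected FCF"),
    ("FLOGI ACC",         "FCF returns the VN_Port MAC (FPMA)"),
    ("Keep Alive / CVL",  "LKA maintains the virtual link; CVL tears it down"),
]

for step, summary in FIP_LOGIN_SEQUENCE:
    print(f"{step:17s} - {summary}")
# After FLOGI ACC, data frames switch to the FCoE Ethertype (0x8906).
```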
    • FIP ─ VLAN Request [diagram: the ENode multicasts a FIP VLAN Request to both FCFs; frame fields: DA = ALL-FCF-MACs, SA = ENode MAC, 802.1Q Tag = (untagged), MAC Address descriptor = ENode-MAC]
      –  Multicast
      –  Allows a CNA to discover on which VLANs FCoE services are being provided
      –  All FIP requests and responses use a pre-defined set of TLV (Type, Length, Value) data structures
      © Copyright 2011 EMC Corporation. All rights reserved. 79
    • FIP ─ VLAN Notification [diagram: each FCF unicasts a FIP VLAN Notification back to the ENode; frame fields: DA = ENode MAC, SA = FCF-MAC, 802.1Q Tag = (untagged) from one FCF and VLAN 1 from the other, MAC Address descriptor = FCF-MAC, FCoE VID = 100]
      –  Unicast
      –  Both FCFs respond
      –  Note the 802.1Q tag and the FCoE VID TLV
      © Copyright 2011 EMC Corporation. All rights reserved. 80
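A small sketch of this two-frame exchange as data (editor's illustration; the class and field names are mine, the values mirror the example on these two slides, and real FIP frames encode this information as TLV descriptors rather than Python objects):

```python
# Editor's sketch of the FIP VLAN discovery exchange shown on slides 79-80.
from dataclasses import dataclass
from typing import Optional

ALL_FCF_MACS = "ALL-FCF-MACs"  # well-known multicast group defined by FC-BB-5

@dataclass
class VlanRequest:             # ENode -> all FCFs, sent untagged
    da: str = ALL_FCF_MACS
    sa: str = "enode-mac"
    dot1q_tag: Optional[int] = None        # untagged
    mac_descriptor: str = "enode-mac"

@dataclass
class VlanNotification:        # each FCF -> ENode, unicast
    da: str = "enode-mac"
    sa: str = "fcf-mac"
    dot1q_tag: Optional[int] = None        # one FCF replies untagged, one on VLAN 1
    mac_descriptor: str = "fcf-mac"
    fcoe_vid: int = 100                    # the VLAN the CNA should use for FIP/FCoE

def fcoe_vlans(replies: list) -> set:
    """Collect every FCoE VID advertised by the responding FCFs."""
    return {r.fcoe_vid for r in replies}

print(fcoe_vlans([VlanNotification(), VlanNotification(dot1q_tag=1)]))  # {100}
```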
    • FIP ─ Solicitation [diagram: the ENode multicasts a FIP Solicitation on the discovered VLAN; frame fields: DA = ALL-FCF-MACs, SA = ENode MAC, 802.1Q Tag = VLAN 100, MAC Address descriptor = FIP-MAC, Name Identifier = WWNN, Max FCoE size = 2240]
      –  Multicast
      –  Allows the CNA to discover which FCFs are available for login
      –  Note the 802.1Q Tag and the Max FCoE size field
      © Copyright 2011 EMC Corporation. All rights reserved. 81
    • FIP ─ Advertisement [diagram: each FCF unicasts a FIP Advertisement back to the ENode; frame fields: DA = FIP-MAC, SA = FCF-MAC, 802.1Q Tag = VLAN 100, Priority = 1 (or 128 from the second FCF), MAC Address descriptor = FCF-MAC, Name Identifier = SWITCH-WWNN, Max FCoE size = 2158; the CNA builds an internal FCF list with one entry per advertisement: Priority, Name Identifier = SWITCH-WWNN, DA = FCF-MAC, Max FCoE size verified = 1]
      –  Unicast
      –  Note the Priority, Name ID, and Max FCoE size
      –  Max FCoE size is a field padded to the proper size
      –  Dynamic ACLs are updated (FIP snooping)
      © Copyright 2011 EMC Corporation. All rights reserved. 82
    • FIP ─ FLOGI [diagram: using its internal FCF list, the ENode unicasts a FIP FLOGI; frame fields: DA = FCF-MAC (priority 1), SA = ENode MAC, 802.1Q Tag = VLAN 100, FIP FLOGI descriptor = FLOGI frame]
      –  Unicast
      –  Since both FCFs are connected to the same fabric (as determined by the Name ID field), the FCF with the lower priority value is sent the FLOGI
      © Copyright 2011 EMC Corporation. All rights reserved. 83
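A minimal sketch of the selection rule on this slide (editor's illustration; the entries match the "Internal FCF list" shown, the class and function names are mine): among FCFs that advertised the same fabric name and whose Max FCoE size padding was verified, the one with the numerically lowest priority receives the FLOGI.

```python
# Editor's sketch of the FCF selection rule described on slide 83.
from dataclasses import dataclass

@dataclass
class FcfEntry:
    priority: int            # from the FIP Advertisement (lower wins)
    name_identifier: str     # SWITCH-WWNN, identifies the fabric
    fcf_mac: str             # DA to use for the FIP FLOGI
    max_size_verified: bool  # the padded advertisement arrived intact

def choose_fcf(fcf_list, fabric_name):
    candidates = [f for f in fcf_list
                  if f.name_identifier == fabric_name and f.max_size_verified]
    if not candidates:
        raise RuntimeError("no usable FCF discovered on this fabric")
    return min(candidates, key=lambda f: f.priority)

fcf_list = [
    FcfEntry(1, "SWITCH-WWNN", "fcf-mac-a", True),
    FcfEntry(128, "SWITCH-WWNN", "fcf-mac-b", True),
]
print(choose_fcf(fcf_list, "SWITCH-WWNN").fcf_mac)   # -> fcf-mac-a (priority 1)
```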
    • FIP ─ FLOGI Accept [diagram: the selected FCF unicasts a FIP FLOGI ACC back to the ENode; frame fields: DA = ENode MAC, SA = FCF-MAC (priority 1), 802.1Q Tag = VLAN 100, FIP FLOGI descriptor = FLOGI frame, MAC Descriptor = VN_Port MAC]
      –  Unicast
      –  Dynamic ACLs updated
      –  The CNA will use the Fabric Provided MAC Address (FPMA) and FCoE Ethertype frames to log in with the Name Server and perform discovery
      © Copyright 2011 EMC Corporation. All rights reserved. 84
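The FPMA mentioned in the last bullet is assigned by the fabric rather than configured on the CNA: per FC-BB-5 it is the 24-bit FC-MAP prefix concatenated with the 24-bit FC_ID granted at login. A small closing sketch (editor's illustration; the default FC-MAP value 0x0EFC00 comes from the standard, while the function name and example FC_ID are mine):

```python
# Editor's sketch of Fabric Provided MAC Address (FPMA) construction:
# FPMA = FC-MAP (upper 24 bits) || FC_ID (lower 24 bits), per FC-BB-5.
DEFAULT_FC_MAP = 0x0EFC00   # default FC-MAP prefix defined by FC-BB-5

def fpma(fc_id: int, fc_map: int = DEFAULT_FC_MAP) -> str:
    """Return the VN_Port MAC for a 24-bit FC_ID granted in the FLOGI ACC."""
    if not 0 <= fc_id <= 0xFFFFFF:
        raise ValueError("FC_ID must be a 24-bit value")
    value = (fc_map << 24) | fc_id
    return ":".join(f"{(value >> shift) & 0xFF:02x}" for shift in range(40, -8, -8))

# Example: FC_ID 0x010203 -> VN_Port MAC 0e:fc:00:01:02:03
print(fpma(0x010203))
```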