Interop: The 10GbE Top 10

Shaun Walsh, VP of marketing, Emulex, presented "The 10GbE Top 10" at Interop Las Vegas in May 2011. If you missed it, here are the slides from that discussion.


The 10GbE Top 10
Shaun Walsh, Emulex
The Data Center of the Future
10G Ethernet and FCoE are enabling technologies to support the virtual data center and cloud computing.

Discrete data center:
  • 3 discrete networks
  • Equipment proliferation
  • Management complexities
  • Expanding OPEX & CAPEX

Virtual data center:
  • Converged networks
  • Virtualized
  • Simplifies I/O management
  • Reduces CAPEX & OPEX

Cloud data center (private cloud):
  • Cloud computing (private & public)
  • On-demand provisioning and scale
  • Modular building blocks ("Legos")
  • Avoids CAPEX & OPEX

Drivers of the Move to 10/40GbE
Storage universe:
  • 7ZB by 2013
  • Mobile and VDI
  • Device-centric

Virtual networking:
  • VM I/O density
  • Scalable vDevices
  • End-to-end vI/O

Cloud connectivity:
  • New I/O models
  • I/O isolation
  • New server models

Network convergence:
  • Multi-fabric I/O
  • Evolutionary steps
  • RoCEE low latency

VM Density Drives More I/O Throughput
[Survey chart of expected bandwidth needs: 40Gb; 10GbE & 16Gb FC; don't know. © 2011 Enterprise Strategy Group]
ESG's Evolution of Network Convergence
[Chart. © 2011 Enterprise Strategy Group]
Evolving Network Models
Discrete networks evolve to converged networks and then to converged fabric networks, at both the host connect and the target connect.
Emulex Connect I/O Roadmap (2010-2013)
  • Fibre Channel: 8Gb → 16Gb → 32Gb; PCIe Gen3
  • Converged networking: 10Gb → 10GBASE-T → 40Gb → 100Gb; SR-IOV, multichannel
  • Universal LOMs: 10Gb → 10GBASE-T → 40Gb
  • I/O management: value-added I/O services
  • Unified storage: multi-fabric technology
  • Ethernet high-performance computing: low-latency RoCEE RDMA
  • Networked server/power management: 3rd gen BMC → 4th gen BMC
Waves of 10GbE Adoption (2010 through 2013 and beyond)
  • Wave 1: 10Gb LOM and adapters for blade servers; 50% of IT managers cite virtualization as the driver of 10GbE in blades
  • Wave 2: 2-4 socket x86 and Unix rack servers drive 10Gb with modular LOMs and UCNAs; 10Gb NAS and iSCSI drive IP SAN storage for unstructured data; 1-2 socket servers with NICs and modular LOMs; cloud container servers for web giants and telcos
  • Wave 3: FCoE storage at 25% of I/O; 10GbE LOM everywhere
Sources: Dell'Oro 2010 Adapter Forecast; ESG Deployment Paper 2010; IDC WW External Storage Report 2010; IT Brand Pulse 10GbE Survey April 2010
Wave 2 of 10GbE Market Adoption
x86 & Unix rack servers:
  • 65% of server shipments
  • More than 2X the revenue opportunity
  • Romley from Q4:11 to Q2:12
  • Modular LOM on rack
  • More cards vs. LOMs
10GbE IP storage:
  • Cisco, Juniper and Brocade announced multi-hop FCoE in the last 90 days
  • 10GbE NAS, iSCSI & FCoE convergence
Web giants:
  • Next-gen Internet-scale applications (search, games & social media)
  • Container and high-density servers migrating to 10GbE
  • Growing to 15% of servers
Wave 2 more than doubles the 10GbE revenue opportunity in CY12/13.
The 10Gb Transition: Crossover in 2012
[Chart. Source: Dell'Oro Group]
#1) Switch Architecture: Top of Rack
  • Modular unit design, managed by rack
  • Servers connect to 1-2 Ethernet switches inside the rack
  • Rack connected to the data center aggregation layer with SR optics or twisted pair
  • Blades effectively incorporate the ToR design with an integrated switch
Switch Architecture: End of Row
  • Server cabinets lined up side by side
  • A rack or cabinet at the end (or middle) of the row holds the network switches
  • Fewer switches in the topology
  • Managed by row, so fewer switches to manage (see the sketch below)
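To make the management difference concrete, here is a minimal sketch comparing managed-switch counts under the two designs. The rack and row sizes are illustrative assumptions, not figures from the deck.

```python
# Illustrative comparison of managed switch counts for top-of-rack (ToR)
# vs. end-of-row (EoR) designs. Rack and row sizes are assumptions for
# the example, not figures from the presentation.

def tor_switches(racks: int, switches_per_rack: int = 2) -> int:
    """Each rack houses its own 1-2 Ethernet switches."""
    return racks * switches_per_rack

def eor_switches(rows: int, switches_per_row: int = 2) -> int:
    """Switches are concentrated in one cabinet per row (or row middle)."""
    return rows * switches_per_row

racks, racks_per_row = 40, 10
rows = racks // racks_per_row
print(f"ToR: {tor_switches(racks)} switches to manage")   # 80
print(f"EoR: {eor_switches(rows)} switches to manage")    # 8
```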
#2) 10Gb LOM on Rack Servers
Blade servers:
  • Typically limited I/O expansion (1-2 slots)
  • Much cheaper 10G physical interconnect (backplane Ethernet and an integrated switch), so they are leading the 10G transition
  • Earlier 10G LOM
Rack servers:
  • Generally more I/O and memory expansion
  • More expensive 10G physical interconnect
  • Later 10G LOM due to 10GBASE-T
#3) Cables: Today's Data Center
  • Typically twisted pair for Gigabit Ethernet; some Cat 6, more likely Cat 5
  • Optical cable for Fibre Channel: a mix of older OM1 and OM2 (orange) and newer OM3 (green)
  • Fibre Channel may not be wired to every rack
10GbE Cable Options
Ethernet cabling history: 10Mb on UTP Cat 3 (mid 1980s); 100Mb on UTP Cat 5 (mid 1990s); 1Gb on UTP Cat 5 and SFP fiber (early 2000s); 10Gb on SFP+ copper/fiber, X2 and Cat 6/7 (late 2000s).

  • SFP+ direct attach: copper twinax; 10m reach; ~0.1μs link latency; ~0.1W per side
  • 10GBase-SR (short range): multi-mode optic; 82m over 62.5μm fiber, 300m over 50μm; ~0 latency; 1W per side
  • 10GBase-LR (long range): single-mode optic; 10km reach; variable latency; 1W per side
  • 10GBASE-T: copper twisted pair; 55m over Cat 6, 100m over Cat 6a or Cat 7; ~2.5μs latency; ~2.5W per side at 30m, ~3.5W at 100m, ~1W at EEE idle
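As a rough feel for what the per-side wattages above mean at rack scale, here is a small sketch totaling link power for each option (a link burns power at both ends). The link count per rack is an illustrative assumption.

```python
# Per-link and per-rack power for the 10GbE cabling options above,
# using the table's per-side wattages. The rack size is assumed.

PER_SIDE_WATTS = {
    "SFP+ direct attach (10m)": 0.1,
    "10GBase-SR (300m)": 1.0,
    "10GBASE-T (30m)": 2.5,
    "10GBASE-T (100m)": 3.5,
    "10GBASE-T (EEE idle)": 1.0,
}

links_per_rack = 40  # assumed: 20 servers x 2 ports each
for option, watts in PER_SIDE_WATTS.items():
    link_w = 2 * watts  # both ends of the link
    print(f"{option}: {link_w:.1f} W/link, {link_w * links_per_rack:.0f} W/rack")
```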
3X Cost and 10X Performance
10GBASE-T Will Dominate IT Connectivity
[Chart: Crehan Research, 2011]
New 10GBASE-T Options
Top of rack:
  • SFP+ is the lowest-cost option today, but doesn't support existing Gigabit Ethernet LOM ports
  • 10GBASE-T support is emerging: lower power at 10-30m reach (2.5W)
  • Energy Efficient Ethernet reduces that to ~1W at idle
End of row:
  • Optical is the only option today
  • 10GBASE-T support is emerging: lower power at 10-30m reach (2.5W this year), and 100m reach at 3.5W
  • Energy Efficient Ethernet reduces that to ~1W at idle
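A back-of-the-envelope sketch of what the EEE idle figure is worth over a year. The 60% idle fraction is a hypothetical assumption for illustration; the wattages are the slide's.

```python
# Rough annual energy saving from Energy Efficient Ethernet on a
# 10GBASE-T link at 10-30m reach: ~2.5 W per side active, ~1 W idle.
# The idle fraction is an assumption, not a figure from the deck.

active_w, idle_w, sides = 2.5, 1.0, 2
idle_fraction = 0.6
hours_per_year = 24 * 365

avg_w = sides * (active_w * (1 - idle_fraction) + idle_w * idle_fraction)
peak_w = sides * active_w
kwh_saved = (peak_w - avg_w) * hours_per_year / 1000
print(f"average draw {avg_w:.1f} W vs {peak_w:.1f} W peak, "
      f"saving ~{kwh_saved:.0f} kWh/yr per link")
```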
#4) Data Center Bridging
Ethernet is a "best-effort" network:
  • Packets may be dropped
  • Packets may be delivered out of order
  • Transmission Control Protocol (TCP) is used to reassemble packets in the correct order (see the sketch after this list)
Data Center Bridging (aka "lossless Ethernet"):
  • The "bridge to somewhere": pause-based links between nodes
  • Provides the low latency required for FCoE support
  • Expected to benefit iSCSI and enable iSCSI convergence

IEEE Data Center Bridging Standards
[Table of features, benefits and standards activity]
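A toy illustration of the best-effort behavior described above: frames arrive reordered (and some are lost), and a TCP-style receiver restores order by sequence number and identifies gaps for retransmission. This sketches the motivation for DCB, not DCB itself.

```python
# Toy illustration of "best-effort" delivery: the network may reorder
# or drop frames; TCP puts segments back in order by sequence number
# before handing data to the application, and asks for what is missing.

import random

segments = [(seq, f"data-{seq}") for seq in range(10)]
random.shuffle(segments)                                  # out-of-order arrival
arrived = [s for s in segments if random.random() > 0.2]  # ~20% loss

reassembled = sorted(arrived, key=lambda s: s[0])
missing = set(range(10)) - {seq for seq, _ in reassembled}
print("in-order payloads:", [p for _, p in reassembled])
print("sequence gaps to retransmit:", sorted(missing))
```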
Savings with Convergence
Before convergence: 140 cables. After convergence: 60 cables. (Based on 2 LOM, 8 IP and 4 FC ports on 10 servers; see the arithmetic below.)
Savings up to:
  • 28% on switches and adapters
  • 80% on cabling
  • 42% on power and cooling
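The slide's cable counts can be reproduced with simple arithmetic. That the 60-cable result corresponds to 2 LOM plus 4 CNA ports per server is my reading of the diagram, not stated on the slide; the 80% cabling saving quoted above presumably reflects cost rather than raw count.

```python
# Reproducing the slide's cable counts: 10 servers, each with 2 LOM,
# 8 IP and 4 FC ports before convergence. After convergence the IP and
# FC ports collapse onto CNAs; 4 CNA ports per server is an assumption
# that matches the stated total of 60 cables.

servers = 10
before = servers * (2 + 8 + 4)   # LOM + IP + FC ports per server
after = servers * (2 + 4)        # LOM + CNA ports per server
print(before, after)             # 140 60
print(f"raw cable-count reduction: {(before - after) / before:.0%}")  # 57%
```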
To Converge or Not To Converge?
Volume x86 systems:
  • Best business case for convergence
  • Systems actually benefit by reducing adapters, switch ports and cabling
High-end Unix systems:
  • Limited convergence benefit: typically a large number of SAN and LAN ports
  • A system may go from 24 Ethernet and 24 Fibre Channel ports to 48 converged Ethernet ports (see the arithmetic below)
  • Benefits come in later stages of data center convergence, when the Fibre Channel SAN runs fully over the Ethernet physical layer (FCoE)
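The Unix example as arithmetic: convergence swaps port types but not port count, which is why the near-term business case is weaker there.

```python
# The slide's high-end Unix example: 24 Ethernet + 24 FC ports become
# 48 converged Ethernet ports, so the port count does not shrink.

eth_ports, fc_ports = 24, 24
converged_ports = eth_ports + fc_ports
print(f"before: {eth_ports} Ethernet + {fc_ports} FC = {eth_ports + fc_ports} ports")
print(f"after:  {converged_ports} converged Ethernet ports (no reduction)")
```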
#7) Select a 10Gb Adapter
  • 10Gb LOM is becoming standard on high-end servers
  • Add a second adapter for high availability
  • The adapter should support FCoE and iSCSI offload for network convergence
  • Compare performance for all protocols
Convergence Means Performance
40 Gb/sec, 900K IOPS
  • Wider track (more data lanes): no virtualization conflicts; capacity for provisioning
  • Faster engine (more transactions): more VMs per CPU; the QoS you demand; performance for storage
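One derived way to read the two headline numbers together: if 40 Gb/sec and 900K IOPS were both sustained, the implied average transfer size is about 5.4 KiB per I/O. This is an illustration, not a figure from the deck.

```python
# Relating the quoted 40 Gb/sec and 900K IOPS: the implied average
# transfer size if both peaks were sustained simultaneously.

bits_per_sec = 40e9
iops = 900e3
bytes_per_io = bits_per_sec / 8 / iops
print(f"~{bytes_per_io / 1024:.1f} KiB per I/O")   # ~5.4 KiB
```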
#8) Bandwidth and Redundancy
NIC teaming (link aggregation), sketched below:
  • Multiple physical links (NIC ports) combined into one logical link
  • Bandwidth aggregation
  • Failover redundancy
FC multipathing:
  • Load balancing over multiple paths
  • Failover redundancy
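A minimal sketch of the teaming behavior described above: a per-flow hash pins each flow to one member link (preserving in-flow packet order), and losing a link simply rehashes flows across the survivors. This models the general mechanism, not any particular operating system's bonding API.

```python
# Minimal sketch of NIC teaming: a per-flow hash spreads flows across
# member links, and a link failure rehashes flows onto the survivors.

import zlib

def pick_link(flow: tuple, links: list) -> str:
    """Hash a flow key (e.g. src, dst, port) onto an active link."""
    key = repr(flow).encode()
    return links[zlib.crc32(key) % len(links)]

links = ["eth0", "eth1"]
flows = [("10.0.0.1", "10.0.0.2", 5000 + i) for i in range(6)]

print({f: pick_link(f, links) for f in flows})   # typically spread over both
links.remove("eth0")                              # simulate a link failure
print({f: pick_link(f, links) for f in flows})   # all flows now on eth1
```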
#9) Management Convergence
A converged network management framework that spans SAN and LAN should be future-proof and protect existing investments:
  • Protects your configuration investment
  • Protects your LAN & SAN investment
  • Protects your management investment
Convergence and IT Management
  • Traditional IT management is organized by server, storage and networking
  • Convergence is the latest technology to perturb IT management (prior examples: blades, virtualization, PODs)
  • It drives innovation: the domains become storage, networking, and server/virtualization
#10) Deployment Plan
  • Upgrade switches as needed to support 10Gb links to servers
  • Plan for network convergence with switches that support DCB
  • Focus on new servers
  • Use a unified 10GbE platform for LOM, blade and stand-up adapters
The 10GbE Top 10
Shaun Walsh, Emulex
Flexible Converged Fabric Adapter Technology
Port configurations:
  • 4x8: quad-port 8Gb FC
  • 2x8: dual-port 8Gb FC
  • 2x16: dual-port 16Gb FC
  • 1x40: single-port 40Gb CNA
  • 2x10: dual-port 10Gb CNA
  • 4x10: quad-port 10Gb CNA
  • 1x16 + 2x10: single-port 16Gb FC plus dual-port 10Gb CNA
  • 2x8 + 2x10: dual-port 8Gb FC plus dual-port 10Gb CNA
Emulex at Interop, Booth #743
  • First 16Gb Fibre Channel HBA demonstration (Interop, May 9, 2011)
  • First UCNA 10GBase-T demonstration (EMC World, May 9, 2011)
  • First UCNA 40GbE demonstration (Interop, May 9, 2011)
Wrap Up
  • There are many phases to the market transition and implementation
  • 10GBASE-T will drive 10Gb costs down in racks
  • Network convergence is coming; best to be ready
  • Management across domains will be the most challenging part
  • 10GBASE-T is like hanging out with an old friend
THANK YOU
For more information, please contact Shaun Walsh: 949.922.7472, shaun.walsh@emulex.com
