Interop: The 10GbE Top 10
Shaun Walsh, VP of marketing at Emulex, presented "The 10GbE Top 10" at Interop Las Vegas in May 2011. If you missed it, here are the slides from that discussion.

Uploaded as Microsoft PowerPoint under a CC Attribution License.


Presentation Transcript

  • Shaun Walsh
    Emulex
    The 10GbE Top 10
  • The Data Center of the Future
    10G Ethernet and FCoE are enabling technologies to support the
    Virtual Data Center and Cloud Computing.
    Cloud Data Center
    Virtual
    Data Center
    Discrete
    Data Center
    Private cloud
    • 3 Discrete Networks
    • Equipment Proliferation
    • Management Complexities
    • Expanding OPEX & CAPEX
    • Converged Networks
    • Virtualized
    • Simplifies I/O Management
    • Reduces CAPEX & OPEX
    • Cloud Computing (Private & Public)
    • On-Demand Provisioning and Scale
    • Modular Building Blocks “Legos”
    • Avoid CAPEX & OPEX
  • Drivers of the Move to 10/40GbE
    Storage Universe
    Virtual Networking
    Cloud Connectivity
    Network Convergence
    • 7ZB by 2013
    • Mobile and VDI
    • Device-Centric
    • VM I/O Density
    • Scalable vDevices
    • End-to-End vI/O
    • New I/O Models
    • I/O Isolation
    • New Server Models
    • Multi-Fabric I/O
    • Evolutionary Steps
    • RoCEE Low Latency
  • VM Density Drives More I/O Throughput
    40 Gb
    10GbE & 16 Gb FC
    Don’t Know
    © 2011 Enterprise Strategy Group
  • ESG’s Evolution of Network Convergence
    © 2011 Enterprise Strategy Group
  • Evolving Network Models
    Target Connect
    Host Connect
    Discrete
    Networks
    Converged
    Fabric
    Networks
    Converged
    Networks
  • Emulex Connect I/O Roadmap
    2010
    2011
    2012
    2013
    Unified Storage
    Multi-Fabric Technology
    Ethernet High
    Performance
    Computing
    Low Latency RoCEE
    RDMA
    I/O Management
    Value Added
    I/O Services
    16Gb
    32Gb
    8Gb
    Fibre Channel
    PCIe Gen3
    Converged Networking
    10Gb
    100Gb
    10GBaseT
    40Gb
    SR-IOV, Multichannel
    Universal
    LOMs
    10Gb
    10GBaseT
    40Gb
    Networked Server/ Power Management
    3rd Gen BMC
    4th Gen BMC
  • Waves of 10GbE Adoption
    Sources:
    Dell’Oro 2010 Adapter Forecast
    ESG Deployment Paper 2010
    IDC WW External Storage Report 2010
    IT Brand Pulse 10GbE Survey April 2010
    FCoE Storage 25% of I/O
    10GbE LOM Everywhere
    Wave 3
    Cloud Container Servers for Web Giants and Telcos
    1-2 Socket Server
    with NICs and Modular LOMs
    Wave 2
    10GbE Adoption Waves
    2-4 Socket X86 and Unix Rack Servers Drive 10Gb with Modular LOMs and UCNAs
    10Gb NAS and iSCSI Drive IP SAN Storage for Unstructured Data
    Wave 1
    10Gb LOM and Adapters for Blade Servers
    50% of IT Managers Cite Virtualization as the Driver of 10GbE in Blades
    2010
    2011
    2013
    Beyond
    2012
  • Wave 2 of 10GbE Market Adoption
    10GbE IP Storage
    Web Giants
    x86 & Unix Rack Servers
    65% of Server Shipments
    More than 2X the Revenue Opportunity
    Romley from Q4:11 to Q2:12
    Modular LOM on Rack
    More Cards vs. LOMs
    Cisco, Juniper, Brocade announced Multi-Hop FCoE in last 90 days
    10GbE NAS, iSCSI & FCoE Convergence
    Next Gen Internet Scale Applications (Search, Games & Social Media)
    Container & High Density servers migrating to 10GbE
    Growing to 15% of Servers
    Wave 2 – More Than Doubles 10GbE Revenue Opportunity CY12/13
  • The 10Gb Transition – Cross Over in 2012
    Source: Dell’Oro Group
  • #1) Switch Architecture Top of Rack
    Modular unit design – managed by rack
    Servers connect to 1-2 Ethernet switches inside rack
    Rack connected to data center aggregation layer with SR optic or twisted pair
    Blades effectively incorporate TOR design with integrated switch
    SAN
    Eth Aggregation
    FIBER
    FIBER
    FIBER
    SERVER CABINET ROW
  • Switch Architecture End of Row
    Server cabinets lined up side-by-side
    Rack or cabinet at end (or middle) of row with network switches
    Fewer switches in the topology
    Managed by row – fewer switches to manage
    SAN
    Eth Aggregation
    FIBER
    FIBER
    FIBER
    FIBER
    COPPER
    SERVER CABINET ROW
  • #2) 10Gb LOM on Rack Servers
    vs.
    Typically limited I/O expansion (1-2 slots)
    Much cheaper 10G physical interconnect – backplane Ethernet & integrated switch – leading 10G transition
    Earlier 10G LOM
    Generally more I/O & memory expansion
    More expensive 10G physical interconnect
    Later 10G LOM due to 10GBASE-T
  • Today’s Data Center
    Typically Twisted Pair for Gigabit Ethernet
    Some Cat 6, more likely Cat 5
    Optical cable for Fibre Channel
    Mix of older OM1 and OM2 (orange) and newer OM3 (green)
    Fibre Channel may not be wired to every rack
    #3) Cables
  • Ethernet Speed and Cabling Timeline
    Mid 1980’s: 10Mb, UTP Cat 3
    Mid 1990’s: 100Mb, UTP Cat 5
    Early 2000’s: 1Gb, UTP Cat 5 / SFP Fiber
    Late 2000’s: 10Gb, SFP+ Cu/Fiber; X2; Cat 6/7
  • 10GbE Cable Options (Technology: Cable, Distance, Transceiver Latency per link, Power each side)
    SFP+ Direct Attach: Copper Twinax, 10m, ~0.1µs, ~0.1W
    10GBase-SR (short range): Optic Multi-Mode, 82m (62.5µm core) / 300m (50µm core), ~0, 1W
    10GBase-LR (long range): Optic Single-Mode, 10km, varies, 1W
    10GBASE-T: Copper Twisted Pair, 55m (Cat 6) / 100m (Cat 6a, Cat 7), ~2.5µs, ~2.5W (30m) / ~3.5W (100m) / ~1W (EEE idle)
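The trade-offs in the table above lend themselves to a simple lookup. The sketch below is purely illustrative (function and option names are my own, figures are taken from the table), picking the first option, in cheapest-first order, that meets a reach, power, and latency requirement:

```python
# Illustrative sketch: choose a 10GbE cabling option by reach, power,
# and latency limits. Figures come from the cable-options table above.

OPTIONS = [
    # (name, cable, max_reach_m, latency_us, power_w_per_side)
    ("SFP+ Direct Attach", "copper twinax",       10, 0.1, 0.1),
    ("10GBase-SR",         "multi-mode optic",   300, 0.0, 1.0),
    ("10GBASE-T (Cat6a)",  "twisted pair",       100, 2.5, 3.5),
    ("10GBase-LR",         "single-mode optic", 10000, 0.0, 1.0),
]

def pick(reach_m, max_power_w=None, max_latency_us=None):
    """Return the first option (cheapest-first order) that satisfies the limits."""
    for name, cable, max_reach, lat, power in OPTIONS:
        if reach_m > max_reach:
            continue  # cannot cover the required distance
        if max_power_w is not None and power > max_power_w:
            continue  # exceeds the per-side power budget
        if max_latency_us is not None and lat > max_latency_us:
            continue  # too slow for latency-sensitive traffic
        return name
    return None

print(pick(5))                        # in-rack run -> SFP+ Direct Attach
print(pick(80, max_latency_us=1.0))   # end-of-row, latency-sensitive -> 10GBase-SR
```

This mirrors the slide's point: direct-attach copper wins inside the rack, optics win at row and campus distances, and 10GBASE-T trades latency and power for reuse of twisted-pair plant.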
  • 3X Cost and 10X Performance
  • 10G BASE-T Will Dominate IT Connectivity
    Crehan Research 2011
  • New 10GBASE-T Options
    Top of Rack
    SFP+ is lowest cost option today
    Doesn’t support existing gigabit Ethernet LOM ports
    10GBASE-T support emerging
    Lower power at 10-30m reach (2.5W)
    Energy Efficient Ethernet reduces to ~1W on idle
    End of Row
    Optical is only option today
    10GBASE-T support emerging
    Lower power at 10-30m reach (2.5W this year)
    Can do 100m reach at 3.5W
    Energy Efficient Ethernet reduces to ~1W on idle
  • #4) Data Center Bridging
    Ethernet is a “best-effort” network
    • Packets may be dropped
    • Packets may be delivered out of order
    • Transmission Control Protocol (TCP) is used to reassemble packets in correct order
    Data Center Bridging(aka “lossless Ethernet”)
    • The “bridge to somewhere” – pause-based link between nodes
    • Provides low latency required for FCoE support
    • Expected to benefit iSCSI, enable iSCSI convergence
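The drop-versus-pause distinction at the heart of Data Center Bridging can be shown with a toy queue model. This is an illustration of the concept only, not any vendor's implementation; class names and the fixed buffer size are my own:

```python
from collections import deque

class BestEffortLink:
    """Best-effort Ethernet: frames arriving at a full buffer are dropped."""
    def __init__(self, capacity):
        self.q = deque()
        self.capacity = capacity
        self.dropped = 0

    def send(self, frame):
        if len(self.q) >= self.capacity:
            self.dropped += 1      # receiver buffer full -> frame lost,
        else:                      # TCP must detect and retransmit
            self.q.append(frame)

class LosslessLink(BestEffortLink):
    """DCB-style 'lossless' link: sender is paused instead of losing frames."""
    def __init__(self, capacity):
        super().__init__(capacity)
        self.paused = 0

    def send(self, frame):
        while len(self.q) >= self.capacity:
            self.paused += 1       # PAUSE: sender waits while the
            self.q.popleft()       # receiver drains a frame
        self.q.append(frame)       # frame is always delivered

best, lossless = BestEffortLink(capacity=4), LosslessLink(capacity=4)
for i in range(10):
    best.send(i)
    lossless.send(i)
print(best.dropped)      # frames lost on the best-effort link
print(lossless.dropped)  # 0: DCB back-pressures rather than drops
```

That never-drop guarantee is what makes the link usable for FCoE, since Fibre Channel assumes the transport does not lose frames.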
  • IEEE Data Center Bridging Standards
    [Table: Feature / Benefit / Standards Activity]
  • Savings with Convergence
    Before Convergence: 140 cables (2 LOM, 8 IP and 4 FC ports on each of 10 servers)
    After Convergence: 60 cables, with CNAs replacing the discrete IP and FC adapters
    Savings Up To:
    • 28% on switches, adapters
    • 80% on cabling
    • 42% on power and cooling
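The cable arithmetic behind the before/after figure can be checked directly. The after-convergence split of 2 LOM plus 4 CNA ports per server is an assumption on my part, chosen because it reproduces the slide's 60-cable total:

```python
SERVERS = 10

# Before convergence: per the slide, each server carries 2 LOM,
# 8 IP (NIC) and 4 Fibre Channel ports, each needing its own cable.
before = SERVERS * (2 + 8 + 4)

# After convergence: assumed 2 LOM + 4 CNA ports per server
# (hypothetical breakdown that matches the slide's stated total).
after = SERVERS * (2 + 4)

print(before, after)  # 140 60
print(f"cable-count reduction: {1 - after / before:.0%}")
```

Note the ~57% reduction here is by cable count; the slide's "up to 80%" cabling figure is presumably a cost-based number.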
  • To Converge or Not To Converge?
    Volume x86 systems
    Best business case for convergence
    Systems actually benefit by reducing adapters, switch ports and cabling
    High End Unix Systems
    Limited convergence benefit
    Typically large number of SAN and LAN ports
    System may go from 24 Ethernet and 24 Fibre Channel ports to 48 Converged Ethernet ports
    Benefits in later stages of data center convergence when Fibre Channel SAN fully running over Ethernet physical layer (FCoE)
  • #7) Select 10Gb Adapter
    10Gb LOM becoming standard on high-end servers
    Second adapter for high availability
    Should support FCoE and iSCSI offload for network convergence
    Compare performance for all protocols
  • Convergence Means Performance
    40 Gb/Sec
    900K IOPS/Sec
    Wider Track: More Data Lanes
    No Virtualization Conflicts
    Capacity for Provisioning
    Faster Engine: More Transactions
    More VMs/CPU
    QOS you demand
    Performance for Storage
    24
  • #8) Bandwidth and Redundancy
    NIC Teaming – Link Aggregation
    Multiple physical links (NIC ports) combined into one logical link
    Bandwidth aggregation
    Failover redundancy
    FC Multipathing
    Load balancing over multiple paths
    Failover redundancy
    Core
    Core
    Core
    Core
    a
    b
    Redundancy
    Redundancy
    Poor Density
    c
    d
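On Linux, the NIC teaming described above is handled by the standard kernel bonding driver, configurable with iproute2. A minimal sketch, assuming two 10GbE ports named eth0 and eth1 (placeholder names) and root privileges:

```shell
# Sketch: aggregate two 10GbE ports into one logical link using
# LACP (802.3ad). Interface names are placeholders; requires root
# and a switch configured for link aggregation on those ports.

ip link add bond0 type bond mode 802.3ad   # create the bond (LACP mode)
ip link set eth0 down
ip link set eth1 down
ip link set eth0 master bond0              # enslave both physical ports
ip link set eth1 master bond0
ip link set bond0 up
ip addr add 192.0.2.10/24 dev bond0        # example address (TEST-NET-1)

# Bandwidth aggregates across both links; if one fails, traffic
# continues on the survivor (failover redundancy).
cat /proc/net/bonding/bond0                # inspect bond and slave state
```

Other bonding modes (e.g. active-backup) trade aggregation for simpler switch requirements; the right choice depends on the switch topology shown on this slide.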
  • NETWORK
    FRAMEWORK
    SAN
    LAN
    NETWORK CONVERGENCE
    #9) Management Convergence
    Future Proof
    SAN
    LAN
    Investment Protection
    Protects Your
    Configuration
    Investment
    Protects Your
    LAN & SAN Investment
    Protects Your
    Management
    Investment
  • Convergence and IT Management
    Traditional IT management
    Server
    Storage
    Networking
    Convergence is latest technology to perturb IT management
    Prior: Blades, Virtualization, PODs
    Drives innovation
    Storage
    Networking
    Server/Virtualization
  • #10) Deployment Plan
    Upgrade switches as needed to support 10Gb links to servers
    Plan for network convergence with switches that support DCB
    Focus on new servers
    Unified 10GbE platform for LOM, blade and stand-up adapters
  • Shaun Walsh
    Emulex
    The 10GbE Top 10
  • Flexible Converged Fabric Adapter Technology
    4x8: Quad Port 8Gb FC
    2x8: Dual Port 8Gb FC
    2x16: Dual Port 16Gb FC
    1x40: Single Port 40Gb CNA
    2x10: Dual Port 10Gb CNA
    4x10: Quad Port 10Gb CNA
    1x16 + 2x10: Single Port 16Gb FC + Dual Port 10Gb CNA
    2x8 + 2x10: Dual Port 8Gb FC + Dual Port 10Gb CNA
  • Emulex at Interop Booth #743
    First 16Gb Fibre Channel HBA Demonstration (Interop, May 9, 2011)
    First UCNA 10GBase-T Demonstration (EMC World, May 9, 2011)
    First UCNA 40GbE Demonstration (Interop, May 9, 2011)
  • Wrap Up
    Many Phases of Market Transition and Implementation
    10GBASE-T Will Drive Down 10Gb Cost in Racks
    Network Convergence is Coming; Best to be Ready
    Management of Domains Will be the Most Challenging
    10GBASE-T is Like Hanging Out with an Old Friend
  • THANK YOU
    For more information, please contact Shaun Walsh
    949.922.7472 shaun.walsh@emulex.com