Converged Networks: FCoE, iSCSI and the Future of Storage Networking
 

Presentation from EMC World 2010 (Boston)

    Converged Networks: FCoE, iSCSI and the Future of Storage Networking (Presentation Transcript)

    • Converged Data Center: FCoE, iSCSI, and the Future of Storage Networking. Stuart Miniman, Technologist, Office of the CTO, EMC Corporation
    • Agenda
      • The Journey to Convergence
      • Protocols & Standards Update
      • Solution Evolution
      • Conclusion and Summary
    • Rack Server Environment Today
      • Servers connect to LAN, NAS and iSCSI SAN with NICs
      • Servers connect to FC SAN with HBAs
      • Many environments today are still 1 Gigabit Ethernet
      • Multiple server adapters, multiple cables, power and cooling costs
        • Storage is a separate network (including iSCSI)
      [Diagram: rack-mounted servers with 1 Gigabit Ethernet NICs connecting to the Ethernet LAN and iSCSI SAN, and Fibre Channel HBAs connecting to the Fibre Channel SAN and storage. Today < 30% of servers in the data center are SAN attached to storage.]
    • The iSCSI Story
      • Transport storage (SCSI) over standard Ethernet
      • Reliability through TCP
      • SCSI itself has limited distance; iSCSI provides even more flexibility than FC thanks to IP routing
      • 1Gb iSCSI has good performance
      • iSCSI has thrived, especially where the server, storage and network administrators are the same person
      [Diagram: iSCSI protocol stack from the initiator across an IP network: SCSI over iSCSI over TCP over IP (optionally with IPsec) over the link layer; a layering sketch follows below]
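To make the layering concrete, here is a minimal sketch in Python (illustrative only; a real initiator also handles login, segmentation and error recovery). It shows the stack pictured above: the SCSI command rides in an iSCSI PDU, TCP provides reliable delivery, and IP provides routing over standard Ethernet. The header sizes are the nominal minimums.

```python
# Minimal layering sketch of iSCSI: SCSI carried in an iSCSI PDU, delivered
# reliably by TCP and routed by IP over ordinary Ethernet. Sizes are the
# nominal minimum header lengths; segmentation and options are ignored.

HEADER_BYTES = {
    "Ethernet": 14,     # link layer
    "IP": 20,           # routing - this is what lets iSCSI cross subnets
    "TCP": 20,          # reliability (recovery from dropped packets)
    "iSCSI BHS": 48,    # iSCSI Basic Header Segment carrying the SCSI CDB
}

def bytes_on_wire(scsi_payload: int) -> int:
    """Total bytes for one (unsegmented) SCSI data transfer of this size."""
    return scsi_payload + sum(HEADER_BYTES.values())

if __name__ == "__main__":
    for layer, size in HEADER_BYTES.items():
        print(f"{layer:10s}: {size} bytes")
    print("4 KB SCSI write on the wire:", bytes_on_wire(4096), "bytes")
```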
    • Why a New Option for FC Customers?
      • FC has a large and well managed install base
        • Want a solution that is attractive for customers with FC expertise / investment
        • Previous convergence options did not allow for incremental adoption
      • Requirement for a Data Center solution that can provide I/O consolidation
      • Leveraging Ethernet infrastructure and skill set has always been attractive
      FCoE allows an Ethernet-based SAN to be introduced into the FC-based Data Center without breaking existing administrative tools and workflows
    • Non-Ethernet Convergence Options
      • InfiniBand
        • Used broadly for High Performance Computing (HPC) environments
        • Low cost and ultra-low latency geared for server to server cluster
        • Separate use from general network (Ethernet) or storage (FC or Ethernet)
      [Diagram: the server's PCI bus extended to an appliance; the appliance connects to existing infrastructure (FC, Ethernet)]
      • PCIe
        • Extension of the server bus to an I/O aggregation box
          • Single network from server -> top of rack, similar to early FCoE deployments
        • Not a standard (parallel effort to MR-IOV standard), small players
        • Still using Ethernet and FC network and storage from the aggregation box
    • 10Gb Ethernet allows for Converged Data Center
      • Maturation of 10 Gigabit Ethernet
        • 10 Gigabit Ethernet allows replacement of n x 1Gb with a much smaller number (start with 2) of 10Gb Adapters
        • Single network allows for easier mobility for virtualization/cloud deployments
      • 10 Gigabit Ethernet simplifies server, network and storage infrastructure
        • Reduces the number of cables and server adapters
        • Lowers capital expenditures and administrative costs
        • Reduces server power and cooling costs
        • Blade servers and server virtualization drive consolidated bandwidth
      10 Gigabit Ethernet is the answer! iSCSI and FCoE both leverage this inflection point. [Diagram: a single 10 GbE wire carrying both LAN and SAN traffic]
    • FCoE Extends FC on a Single Network
      [Diagram: the server connects over lossless Ethernet links to an FCoE switch, which joins the Ethernet network and the FC network with FC storage. Two server-side options: a Converged Network Adapter, or a software FCoE stack on a standard 10G NIC. The SAN sees the host as FC; the server sees storage traffic as FC.]
    • Time To Widespread Adoption
      • Ethernet: defined '73, standard '83, widespread '93
      • Fibre Channel: defined '85, standard '94, widespread '03
      • iSCSI: defined '00, standard '02, widespread '08
      • 10 Gigabit Ethernet: standard '02, widespread '09
      • FCoE: defined '07, standard '09, widespread ??
    • Future
      • 40 & 100 Gb Ethernet (IEEE) standards will be completed in June 2010
      • 16Gb FC (T11) standard is targeted for completion at the end of 2010
      • Server adoption of FC ~ 3+ years, of Ethernet ~ 5+ years
        • Backbone typically faster adoption
      [Roadmap diagram: 16 Gb FC, 40/100 Gb Ethernet, then 32 Gb FC]
    • Ethernet Cabling (cable type / connector, by speed)
      • Twinax copper / SFP+ DA (direct attach)
        • 1Gb: N/A
        • 10Gb: low power, 5-10m distance (rack solution)
        • 40/100Gb: different short-distance option (QSFP)
      • Optical multimode / LC: OM2 (orange), OM3 (aqua), OM4* (aqua)
        • 1Gb: rare (< 1% of Ethernet), but standard for FC
        • 10Gb: OM2 82m, OM3 300m; most backbone deployments are optical
        • 40/100Gb: OM3 100m, OM4 125m; expect a shift to optical with 40/100Gb
      • Copper (10GBase-T) / RJ-45: Cat6 or Cat6a
        • 1Gb: > 99% of existing cabling (lots of Cat 5e)
        • 10Gb: Cat6 55m, Cat6a 100m; some products on market, but not for FCoE yet
        • 40/100Gb: not supported in the initial standard
    • Agenda
      • The Journey to Convergence
      • Protocols & Standards Update
      • Solution Evolution
      • Conclusion and Summary
    • Standards for Next Generation Data Center
      • Fibre Channel over Ethernet (FCoE) protocol
        • Developed by International Committee for Information Technology Standards (INCITS) T11 Fibre Channel Interfaces Technical Committee
        • Fibre Channel over Ethernet allows native Fibre Channel to travel unaltered over Ethernet
        • FC-BB-5 standard ratified in June 2009
        • FC-BB-6 in process to expand solution
      • Converged Enhanced Ethernet (CEE)
        • Developed by IEEE Data Center Bridging (DCB) Task Group
        • DCB/CEE creates an Ethernet environment that drops frames as rarely as Fibre Channel
        • Technology commonly referred to as Lossless Ethernet
        • IEEE standards targeting ratification in mid 2010
        • Requirement for FCoE; Enhancement for iSCSI
      Two parallel industry standards seek to drive I/O consolidation in large data centers over time.
      • Companies working on the standards committees
        • Key participants: Brocade, Cisco, EMC, Emulex, HP, IBM, Intel, QLogic, Oracle (Sun), others
    • iSCSI and FCoE Framing
      • iSCSI is SCSI functionality transported using TCP/IP for delivery and routing in a standard Ethernet/IP environment
      • FCoE is FC frames encapsulated in Layer 2 Ethernet frames designed to utilize a Lossless Ethernet environment
        • Large maximum size of FC requires Ethernet Jumbo Frames
        • No TCP, so Lossless environment required
        • No IP routing
      [Diagram: frame layouts. iSCSI frame: Ethernet header | IP | TCP | iSCSI | data | CRC. FCoE frame: Ethernet header | FCoE header | FC header | FC payload | CRC | EOF | FCS.]
    • FCoE Frame Formats
      • Ethernet frames give a 1:1 encapsulation of FC frames
        • No segmenting FC frames across multiple Ethernet frames
        • FCoE flow control is Ethernet based
          • BB Credit/R_RDY replaced by Pause/PFC mechanism
      • FC frames are large, require Jumbo frames
        • Max FC payload size is 2180 bytes
        • Max FCoE frame size is 2240 bytes
      • FCoE Initialization Protocol (FIP) used for discovery and login
      [Diagram: FCoE frame format (bit 0 to bit 31): Destination MAC Address, Source MAC Address, IEEE 802.1Q Tag, Ethertype = FCoE, version, reserved fields, SOF, Encapsulated FC Frame (including FC-CRC), EOF, reserved, FCS; an encapsulation sketch follows below]
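The 1:1 mapping above can be sketched in a few lines of Python. This is a rough illustration, not a conformant FC-BB-5 codec: the version, SOF, EOF and reserved fields use placeholder values, and the frame-size figure is the one quoted on the slide.

```python
# A rough sketch (not a conformant FC-BB-5 codec) of the 1:1 encapsulation
# described above: one complete FC frame per Ethernet frame, never segmented,
# which is why every link in the path must support baby jumbo frames.

FCOE_ETHERTYPE = 0x8906        # Ethertype assigned to FCoE
MAX_FC_FRAME_BYTES = 2180      # largest FC frame carried (figure from the slide)

def encapsulate_fcoe(fc_frame: bytes, dst_mac: bytes, src_mac: bytes) -> bytes:
    """Wrap one FC frame (FC header + payload + FC-CRC) in one Ethernet frame."""
    if len(fc_frame) > MAX_FC_FRAME_BYTES:
        raise ValueError("FCoE does not segment FC frames across Ethernet frames")
    eth_header = dst_mac + src_mac + FCOE_ETHERTYPE.to_bytes(2, "big")
    fcoe_header = bytes(13) + b"\x00"   # version/reserved fields + SOF (placeholder values)
    trailer = b"\x00" + bytes(3)        # EOF + reserved (placeholder values)
    return eth_header + fcoe_header + fc_frame + trailer  # the NIC appends the Ethernet FCS

# A full-size FC frame yields an Ethernet frame far beyond the classic 1518-byte limit:
print(len(encapsulate_fcoe(bytes(MAX_FC_FRAME_BYTES), bytes(6), bytes(6))))
```

The size check is the practical takeaway: because an FC frame is never split across Ethernet frames, every switch and adapter in the FCoE path has to support frames larger than the standard 1500-byte MTU.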
    • Storage Drivers and Server Virtualization
      [Diagram: VMs with vNICs and vSCSI devices on a hypervisor. LAN and iSCSI traffic flow through the hypervisor's virtual switch and NICs (the iSCSI initiator can also run in the VM); FCoE follows the FC path through the hypervisor FC driver to FC HBAs or CNAs.]
    • Storage Drivers and Server Virtualization
      • FCoE software in the guest would send traffic through the vSwitch to the vNIC
      • No FCoE access via this path currently: vSwitches from ESX (including the Cisco 1000v option) and Hyper-V are not lossless
    • FC-BB-6
      • Not required for multi-hop FCoE or other current deployments
      • Currently in the “herding cats” phase of defining goals
      • Likely to support point-to-point configuration which allows 2 FCoE devices to communicate without going through an FCF (or switch)
      For more, see Erik Smith of EMC E-Lab's presentation "FCoE - Topologies, Protocol, and Limitations" (Tues 8am and Thurs 8:30am)
    • Lossless Ethernet
      • IEEE 802.1 Data Center Bridging (DCB) is the standards task group
      • Converged Enhanced Ethernet (CEE) is an industry consensus term
      • Link level enhancements (Priority Flow Control, Enhanced Transmission Selection, Data Center Bridging Exchange Protocol) are shipping in products today
        • Standards expected to be ratified in June ‘10
      • The “CEE cloud” or DCB-enabled LAN is only for the portion of your network that requires lossless functionality
        • Currently limited to multimode (300m) distances per link (no singlemode)
        • Limit the environment to the Data Center; Layer 2 (no routing)
      Enhanced Ethernet provides the Lossless Infrastructure which enables FCoE
    • PAUSE and Priority Flow Control
      • PAUSE transforms Ethernet into a lossless fabric
      • Classical 802.3x PAUSE is rarely implemented since it stops all traffic
      • A new PAUSE mechanism, Priority Flow Control (PFC), can halt traffic for a given priority tag while allowing traffic at other priority levels to continue
        • Creates lossless virtual lanes
        • PFC will be limited to Data Center
      • Per-priority, link-level flow control
        • Only affects traffic that needs it
        • Ability to enable it per priority
        • Not simply 8 x 802.3x PAUSE
      [Diagram: PFC on the link between Switch A and Switch B; a per-priority pause sketch follows below]
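The contrast between classic PAUSE and PFC can be shown with a small sketch. The buffer model and the 80% threshold below are assumptions for illustration, not values from the 802.1Qbb work.

```python
# Contrast classic 802.3x PAUSE (whole link stops) with Priority Flow Control
# (only the congested priority stops). Thresholds and the queue model are
# illustrative assumptions.

PAUSE_THRESHOLD_PCT = 80   # receive-buffer occupancy that triggers a pause

def pause_802_3x(buffer_fill_pct: int) -> bool:
    """Classic PAUSE: one decision for the whole link - every class stops."""
    return buffer_fill_pct >= PAUSE_THRESHOLD_PCT

def pause_pfc(fill_by_priority: dict) -> list:
    """PFC: a separate decision per 802.1p priority (0-7); only the lanes
    that need lossless behavior (e.g. the FCoE priority) get paused."""
    return [prio for prio, fill in fill_by_priority.items()
            if fill >= PAUSE_THRESHOLD_PCT]

occupancy = {0: 20, 3: 95, 5: 40}              # priority 3 (FCoE lane) is congested
print(pause_802_3x(max(occupancy.values())))   # True: everything would stop
print(pause_pfc(occupancy))                    # [3]: only the lossless lane pauses
```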
    • Enhanced Transmission Selection and Data Center Bridging Exchange Protocol (DCBX)
      • Enhanced Transmission Selection (ETS) provides a common management framework for bandwidth management
      • Allows configuration of HPC & storage traffic to have appropriately higher priority
      • When a given load in a class does not fully utilize its allocated bandwidth, ETS allows other traffic classes to use the available bandwidth
      • Maintain low latency treatment of certain traffic classes
      • Data Center Bridging Exchange Protocol (DCBX) is responsible for configuration of link parameters for DCB functions
      • Determines which devices support Enhanced Ethernet functions
      [Chart: offered vs. realized traffic utilization on a 10 GE link over three intervals (t1-t3) for HPC, storage and LAN traffic classes, showing idle allocation being borrowed by busy classes; a bandwidth-sharing sketch follows below]
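The borrowing behavior in the chart reduces to simple arithmetic. The sketch below assumes class guarantees of 3/3/4 Gb/s on a 10G link (illustrative numbers; a real ETS scheduler makes these decisions per frame in hardware).

```python
# ETS bandwidth sharing on a 10G link: each class is guaranteed its
# allocation, and any allocation a class leaves idle is handed to classes
# that still have traffic queued. Guarantees and offered loads are examples.

LINK_GBPS = 10.0

def ets_share(guarantee: dict, offered: dict) -> dict:
    """Give each class min(offered, guarantee), then redistribute leftover capacity."""
    realized = {c: min(offered[c], guarantee[c]) for c in offered}
    leftover = LINK_GBPS - sum(realized.values())
    for c in offered:                       # single-pass redistribution for simplicity
        extra = min(offered[c] - realized[c], leftover)
        realized[c] += extra
        leftover -= extra
    return realized

guarantees = {"HPC": 3.0, "Storage": 3.0, "LAN": 4.0}
print(ets_share(guarantees, {"HPC": 3.0, "Storage": 3.0, "LAN": 4.0}))
print(ets_share(guarantees, {"HPC": 2.0, "Storage": 3.0, "LAN": 6.0}))  # LAN borrows HPC's idle 1G
```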
    • Beyond Link Level
      • Congestion notification
        • IEEE 802.1Qau ratified
          • Allows a switch to notify attached ports to slow down transmission due to heavy traffic, in order to reduce the chances of packet drops or network deadlocks
          • Moves the management of congestion back to the edge, which helps alleviate network-wide bottlenecks
      • Layer 2 multipathing
        • IETF TRILL (TRansparent Interconnection of Lots of Links)
          • Used in place of the Spanning Tree Protocol to provide more efficient bridging and bandwidth aggregation
          • Focuses on a bridging capability that increases bandwidth by allowing multiple active network paths and aggregating them
          • Standards are stable; products are coming soon
      [Diagram: congestion notification: a switch's receive buffer signals the upstream switch transmit queue to throttle; a throttling sketch follows below]
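A very rough sketch of the 802.1Qau idea: the congested switch sends notifications back toward the traffic source, and the reaction point at the edge throttles its transmit rate and later probes back up. The rate-adjustment rule and constants here are illustrative assumptions, not the QCN algorithm as specified.

```python
# Congestion notification (802.1Qau-style) sketch: a core switch's feedback
# throttles the sender at the network edge, then the sender slowly recovers.
# The adjustment rule and constants are illustrative, not the standard's.

class EdgeReactionPoint:
    def __init__(self, line_rate_gbps: float = 10.0):
        self.line_rate = line_rate_gbps
        self.rate = line_rate_gbps

    def on_congestion_notification(self, severity: float) -> None:
        """severity in (0, 1]: higher means the switch queue is more congested."""
        self.rate *= (1.0 - 0.5 * severity)          # back off at the edge

    def on_quiet_interval(self) -> None:
        """No notifications lately: probe back toward line rate."""
        self.rate = min(self.line_rate, self.rate + 0.5)

src = EdgeReactionPoint()
src.on_congestion_notification(0.8)    # heavy congestion reported from the core
print(round(src.rate, 2))              # 6.0 Gb/s - throttled before frames are dropped
src.on_quiet_interval()
print(round(src.rate, 2))              # 6.5 Gb/s - recovering
```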
    • Agenda
      • The Journey to Convergence
      • Protocols & Standards Update
      • Solution Evolution
      • Conclusion and Summary
    • iSCSI Deployment
      • iSCSI was > 15% of revenue ($1.8B in ‘09) and > 20% capacity in SAN market in 2009 *
      • 10 Gb iSCSI solutions are available
        • Can work on both Traditional Ethernet (recover from dropped packets using TCP) or Lossless Ethernet (DCB) environment
      • iSCSI natively routable (IP)
      • iSCSI solutions are much smaller scale than FC
        • A single FC director is larger than most iSCSI environments
      * According to IDC, 2009. [Diagram: Ethernet iSCSI SAN]
    • FCoE Solutions in 2009
      • FCoE with direct attach of server to Converged Network Switch at top of rack or end of row
      • Tightly controlled solution
      • Server 10 GE adapters may be CNA or NIC
      • Storage is still a separate network
      [Diagram: rack-mounted servers with 10 GbE CNAs attach to a Converged Network Switch, which connects to the Ethernet LAN and, via FC attach, to the Fibre Channel SAN and storage]
    • Expansion of FCoE beyond a single switch
      • First solutions are with an FCoE-aware Ethernet switch (also known as FIP snooping)
      • Enables rack bundle solutions
      A Rack Area Network (RAN) allows for the rack as a unit of design and a unit of management. [Diagram: FCoE-enabled blade server with embedded CNAs and an embedded FCoE-aware Ethernet switch, connecting to an FCoE switch, FC switch and storage]
    • Vblock1 example in 3 floor tiles
      • CX4-480 or Celerra NS-960
      • 2 x Nexus 6140
      • FCoE, Ethernet, FC ports
      • 8 x UCS 5100 chassis
      • Includes CNA blade (2 x FCoE)
      • vSphere/ESX 4.0
      • 16:1 consolidation
      • 1024 VMs
    • Cisco & VCE FCoE Rack Area Network (RAN)
      • Cisco UCS
        • Optimized compute environment utilizing FCoE technology
        • Includes embedded FCoE adapter and ToR switch equivalent, plus the option for embedded FCoE switch
      • Vblock
        • Integrated Virtual Computing Environment combining best-in-class networking, compute, storage, security and management solutions
        • Cisco UCS + EMC storage + VMware vSphere/ESX
    • IBM & HP FCoE offerings
      • IBM BladeServers with an embedded FCoE adapter feeding either a pass-thru switch module or an embedded FCoE switch module, which attaches to a ToR switch
      [Diagram: 1x CNA card (2-port 10Gb CNA) per blade; 2 x 10Gb blade switches (10-port 10Gb switch or 14-port IBM 10Gb pass-thru); the ToR switch splits LAN & SAN traffic. Traffic converges at the server CNAs; the converged edge will evolve from ToR/EoR to the aggregation and core layers.]
    • Challenges for FCoE
      • FCoE solution development will take time to expand and mature, just as with other technologies – customers looking to create topologies similar to existing FC configurations:
        • Director support
        • Edge-core support (multi-hop)
      Storage Network vs
      • Organizational domain overlap
    • EMC and Ethernet
      • Best Practices
        • Google “FCoE Tech Book” (FCoE & Ethernet)
      • Services
        • Design, Implementation, Performance and Security offerings for networks
      • Products
        • Ethernet equipment for creating Converged Network Environments
    • FCoE Timeline
      [Timeline from what is supported in 2010 to the future: FCoE top-of-rack switches; 2nd generation CNAs; Windows, Linux, VMware; Cisco UCS and Vblock; IBM BladeCenter; native FCoE storage; FCoE blades for FC and Ethernet directors; UNIX support; Open FCoE; expanded multi-hop solutions; more embedded/server solutions; 10Gb DCB LOMs; 10G-BaseT w/ FCoE; TRILL solutions; Direct Connect (FC-BB-6); 40GbE/100GbE]
    • Agenda
      • The Journey to Convergence
      • Protocols & Standards Update
      • Solution Evolution
      • Conclusion and Summary
    • Next Generation Data Center
      • 10 Gigabit Ethernet
      [Diagram: 10 Gigabit Ethernet carrying Fibre Channel/FCoE, NAS, iSCSI and LAN traffic]
      • EMC is working with the standards communities and partners to deliver the same reliability and robustness in the next generation virtual data center that we deliver today
      The Converged Data Center sets the operational and capital efficiency foundations for the virtual data center and private clouds. [Diagram: virtualization, common infrastructure and common management take the virtualized data center toward private cloud and cloud computing]
    • Summary
      • A converged data center environment can be built using 10Gb Ethernet
      • Achieving a converged network requires consideration of technology, processes/best practices and organizational dynamics
      • 10 Gigabit Ethernet solutions are maturing
        • Active industry participation is creating standards that allow solutions that can integrate into existing data centers
        • Continued use of FC and adoption of FCoE can be flexible due to shared management
        • FCoE and iSCSI will follow the Ethernet roadmap to 40 and 100 Gigabit in the future
    • Related References
      • Full collection of FCoE references (blog, video, whitepaper, presentations and other EMC links): http://blogstu.wordpress.com/tag/fcoe
      • Industry site with consolidated information http://www.fcoe.com/
      • T11 FCoE activity http://www.t11.org/fcoe
      • IEEE 802.1 Data Center Bridging task group page http://www.ieee802.org/1/pages/dcbridges.html
      • Other EMC Bloggers covering these technologies:
        • Chad Sakac http://virtualgeek.typepad.com/
        • Chuck Hollis http://chucksblog.typepad.com/
        • David Graham http://flickerdown.com/