#VMUGMTL - Xsigo Breakout
Xsigo - I/O Virtualization Overview

Notes
  • Key points: Dynamic, any-to-any connectivity → connect any server to any network or storage. Simplified infrastructure → 70% less complexity, 100X more agility; changes in SW, not HW. Essential for the cloud computing model → cloud without virtual I/O is expensive and inefficient.
  • Use this to identify your customer’s pain points. Xsigo addresses each in different ways. Key points: Pain: Complexity → virtualization consolidates servers, but demands more from the I/O. Each server now needs more connections for different storage types and networks: more cards, cables, and switch ports to manage. Pain: Utilization → virtualization helps, but utilization is still limited by I/O. Few can afford to connect every server to every network and storage device, which then limits what those servers can do. Pain: Budgets → infrastructure costs money for the boxes themselves and for the ongoing software and hardware maintenance, meaning both capex and opex spending. When you factor in switch ports, cabling, and cards, it is very common that I/O costs far more than the servers themselves. Pain: Space/Power → more I/O means bigger servers and more power.
  • Xsigo supported: the booth demos, the lab cloud, and VMware Express.
  • Just like the other IT milestones of the past 20 years, I/O virtualization represents a dramatic evolution of I/O technology. Consolidation has been the overarching trend: as resources proliferate, we need to consolidate management to manage utilization and workloads. LAN consolidation → the multiprotocol router brought networks together and eliminated “sneaker net”, delivering a huge boost in productivity. Storage networking → consolidated storage in a single pool rather than across discrete servers or multiple arrays; storage utilization went up, complexity went down. Server virtualization → consolidated servers by allowing multiple applications to run in a pool of processing resources rather than on discrete machines. Xsigo virtual I/O → any-to-any connectivity; dynamically connect any server to any network or storage resource. The combination of server virtualization and I/O virtualization lets you run any application on any server → that is the fundamental concept of the cloud.
  • Server virtualization gave us the server cloud → servers became interchangeable resources. Xsigo virtual I/O enables the infrastructure cloud → any network and storage can be connected to any server at any time. Data centers have multiple network and storage resources. Server virtualization lets you run any app on any machine, but that means that every machine potentially needs connectivity to every resource. Xsigo lets you connect whichever network or storage is needed, without the need to massively scale connectivity (cards, cables, and switch ports).
  • Conventional I/O → expensive, complex… and impossible to change once done. Convergence means: consolidating infrastructure → 70% less complexity; any-to-any connectivity → take one wire and make it do the job of many → more efficiency and more flexibility.
  • Why traditional I/O does not work: Conventional I/O was designed to run with one application per server. Because functionality was fixed, you only needed a few connections per server. It was simple. In the early days of virtualization, that was OK → small numbers of VMs, mostly test/dev → less concern about redundancy, bandwidth, and network isolation. Conventional I/O is no longer enough: More VMs per server → 10-20 are common. Production environments → redundancy and network isolation are critical. Faster servers → need more bandwidth. New management features → Vmotion and Fault Tolerance both add network requirements. A survey shows 7 to 16 I/O connections per server, which creates these problems: Unpredictable performance: with all that complexity, how do you make sure VMs have the BW they need? Cost: cards and cables often cost more than the server itself. Congestion: even with a bunch of cables, you can still get congestion. How do you find it and fix it? The I/O you need?: how do you know if a particular server has the I/O you need? This is especially true with blades, where I/O is limited. Cabling: a mess.
  • The objective is to move from silo infrastructure to the cloud. Without Xsigo: Inflexible silos → a myriad of cables, configuration settings, and mappings that are very hard to change. Multiple protocols and transports → if you want to run a Fibre Channel-based app on a server, but that server is not tied into the FC fabric, or lacks the correct card, you’re stuck. End result: you cannot share resources and cannot consolidate management. Some products (such as Cisco UCS and HP Virtual Connect) offer partial solutions, but they only work with that vendor’s gear. With Xsigo: Xsigo eliminates the silos and creates universal connectivity. Any server can connect to any network or storage without ever having to re-wire servers or install new cards. Servers become interchangeable assets that can be deployed as needed.
  • How does this work? Traditional I/O has fixed resources: cards, cables, and mappings that are hard to change. With Xsigo, the I/O Director becomes the consolidation point. For the servers: Fewer cables → just 2 connections to each server deliver fully redundant connectivity. Fewer cards → one or two cards replace all the NICs and HBAs. Virtual resources → virtual NICs and HBAs replace the fixed cards; they look and act just like conventional resources. For the networks and storage: Connect to all storage types → FC, iSCSI, NAS. Down the road, if native FCoE storage catches on, we will offer a module for that as well. Connect to all network types → 1GE, 10GE. Migrate I/O between servers, even on live machines, to accelerate moves/adds/changes. Connect new networks without re-cabling servers. Add resources to live machines: at any time, on live servers, without a server reboot, and without having to enter the data center. (One customer said, “It can’t get much simpler than not having to be there.”)
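To make the vNIC/vHBA idea in the note above concrete, here is a minimal conceptual sketch in Python. It is not Xsigo's actual API; every class, method, and port name below is hypothetical and only illustrates the point that virtual resources map onto physical uplinks and are changed purely in software.

```python
# Hypothetical model of virtual I/O provisioning; names are illustrative only.
from dataclasses import dataclass, field

@dataclass
class VirtualResource:
    kind: str    # "vNIC" or "vHBA"
    uplink: str  # physical I/O module port it maps to, e.g. "10GE-slot3"

@dataclass
class ServerProfile:
    name: str
    resources: list[VirtualResource] = field(default_factory=list)

    def add_resource(self, kind: str, uplink: str) -> None:
        # No reboot, no recabling: the change is purely a software mapping.
        self.resources.append(VirtualResource(kind, uplink))

@dataclass
class IODirector:
    uplinks: set[str]  # FC, 1GE, 10GE module ports
    profiles: dict[str, ServerProfile] = field(default_factory=dict)

    def connect(self, server: str, kind: str, uplink: str) -> None:
        if uplink not in self.uplinks:
            raise ValueError(f"no such uplink: {uplink}")
        self.profiles.setdefault(server, ServerProfile(server)).add_resource(kind, uplink)

# Example: give a live host a new vNIC on a 10GE module and a vHBA on an FC module.
director = IODirector(uplinks={"10GE-slot3", "FC-slot1"})
director.connect("esx-host-01", "vNIC", "10GE-slot3")
director.connect("esx-host-01", "vHBA", "FC-slot1")
```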
  • Virtual I/O helps achieve this in 3 ways.
  • The difference between old and new is dramatic. Without Xsigo you’re forever guessing about what I/O is needed. Guess wrong and you’re de-racking servers to add cards, route cables, search for switch ports. With Xsigo connectivity is on-demand. Need a new network for FT? You can configure it entirely in software.
  • Here we see how vSphere and Nehalem stack up: 3X more BW capability overall, 9X more iSCSI capability.
  • The performance monitor tool shows capacity of 20Gb over a single cable: a combination of Ethernet and FC traffic on one transport. Key points: Nehalem + vSphere can drive 20Gb of traffic from one server (2 x 20Gb cables). Xsigo can support that traffic with a standard configuration (2 cables to each server, active/active). Can you do this with 10Gb + FCoE? No performance data that I’ve seen.
  • The I/O consolidation point: connect networks and storage via I/O modules; connect servers to the server ports. Highly serviceable design: hot-swappable I/O modules, redundant hot-swappable fans and power supplies, passive midplane. Choose the form factor that matches your uplink requirements: the 4U VP780 (up to 15 I/O modules) or the 2U VP560 (up to 4).
  • I/O Director Family Choose the server connection to match your application needs: 10Gb, 20Gb, or 40Gb server connections. 10G: Easiest integration, plugs into existing server ports. 20G: Twice the performance of the 10G, at similar overall cost in many deployments. 40G: The world’s highest performance server interconnect. Each is available in the 2U high or 4U high form factor.
  • Choose the hot swappable modules for the uplinks you need.
  • 1U enclosure with redundant power and cooling: one power supply is active, the other is a standby in case of failure. The fan tray is hot-swappable. The RS232 port is not active; the Ethernet port is not active. When will the management ports be used (if at all)?
  • 70% fewer cards, cables, and switch ports. Saves cost in: acquisition, installation, space (fewer rack units), software and hardware maintenance fees, and downtime (less gear + all connections are redundant).
  • Saves power with: fewer switches and cards, smaller servers, higher utilization. Some of the backup numbers here: a 1G connection is 10–12 watts total; a 10G connection is 40–50 watts. A single server running 10 1G connections will consume 100 watts, which costs about $176 per year in power and cooling (for I/O alone) at 10 cents per kWh. Xsigo consumes about 10 watts per server for the I/O card, plus about 800 watts for dual I/O Directors and 7.5 watts/server for the expansion switch. For 100 servers, that totals to about 35 watts per server, a 65% power savings on I/O. A server is about 300 watts, so that’s 300 + 100 for conventional I/O (400 watts) vs. 300 + 35 (335 watts) for Xsigo, roughly a 16% power savings for this example. FC and/or 10G push power higher.
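A quick back-of-the-envelope check of those figures (a sketch only: the per-server wattages, the $0.10/kWh rate, and the assumption that cooling roughly doubles the electrical cost are all taken from the note above):

```python
# Reproduce the rough power arithmetic above; all inputs are the note's own figures.
HOURS_PER_YEAR = 24 * 365
RATE = 0.10           # $ per kWh
COOLING_FACTOR = 2.0  # assume cooling roughly doubles the electrical cost

def annual_cost(watts: float) -> float:
    return watts / 1000 * HOURS_PER_YEAR * RATE * COOLING_FACTOR

conventional_io = 100  # ten 1G connections at ~10 W each
xsigo_io = 35          # host card + shared I/O Directors + expansion switch, per server
server = 300           # the server itself

print(f"Conventional I/O: ~${annual_cost(conventional_io):.0f}/yr per server")  # ~$175
print(f"Xsigo I/O:        ~${annual_cost(xsigo_io):.0f}/yr per server")         # ~$61

io_saving = 1 - xsigo_io / conventional_io
total_saving = 1 - (server + xsigo_io) / (server + conventional_io)
print(f"I/O power saving:   ~{io_saving:.0%}")     # ~65%
print(f"Total power saving: ~{total_saving:.0%}")  # ~16%
```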
  • Get more out of blades. Without Xsigo: I/O is limited by mezzanine cards and switch ports; you can only get so many ports on a blade. With Xsigo: up to 64 virtual connections per blade; get whatever I/O is supported by your I/O modules (not limited by the mezzanine card); move those connections between blades at any time. HP Virtual Connect note: only virtualizes NICs; FC connections are still limited by installed mezzanine cards; cannot move I/O to non-HP systems or to any rack-mount server. Cisco UCS note: need a full Cisco environment for all features (servers / switches); the host card has specific functionality; very hard to configure (Cisco is shipping demo systems pre-configured only).
  • How Xsigo helps accelerate a server failover (replace one server with another). Without Xsigo → re-map networks and SANs, or move cards / cables. With Xsigo → I/O is moved to another server. I/O identities stay the same → no re-mapping, no re-wiring. The server boots from the same LUN → comes up with the same apps and OS. All I/O appears exactly as before.
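A minimal sketch of that failover flow (conceptual only; the class and function names are hypothetical, not Xsigo's interface): because the WWNs and MACs live in a movable I/O profile rather than in physical cards, failing over is re-pointing the profile at a spare server and booting it from the same LUN.

```python
# Conceptual failover: move the I/O identity, not the hardware. Names are illustrative.
from dataclasses import dataclass

@dataclass
class IOProfile:
    wwns: list[str]   # SAN identities; the boot LUN is masked to these
    macs: list[str]   # network identities
    boot_lun: str

def fail_over(profile: IOProfile, failed_server: str, spare_server: str) -> None:
    # 1. Detach the profile from the failed server. No SAN or network re-mapping is
    #    needed because the WWNs and MACs travel with the profile, not with a card.
    print(f"detaching I/O profile from {failed_server}")
    # 2. Attach the same profile to the spare server.
    print(f"attaching profile (WWNs={profile.wwns}, MACs={profile.macs}) to {spare_server}")
    # 3. Boot the spare from the same LUN: same OS, same apps, same addresses as before.
    print(f"booting {spare_server} from {profile.boot_lun}")

fail_over(IOProfile(["50:01:43:80:aa:bb:cc:01"], ["00:50:56:aa:bb:01"], "LUN 7"),
          failed_server="esx-host-01", spare_server="esx-host-02")
```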
  • Unlike proprietary solutions, Xsigo is open. Deployed with most storage, server, and switch vendors. Supports VMware, Hyper-V, Xen, Windows, Red Hat Linux, and Solaris. Cooperative support agreements are in place with many vendors, including VMware. Member of TSANet.
  • See Salesforce.com case study for complete details!
  • Blade example: the customer connected from blades to FC ports via pass-thru modules. Before: required MANY FC Director ports (at >$2,000 per port). This is common → customers do not want an external FC switch added (it makes management a real pain), so they go straight to the FC Director ($$$!!). After: consolidates I/O onto far fewer FC ports at less cost. Xsigo is NOT a switch; it is I/O, so it does NOT add a management layer. The CapEx savings also deliver >$1M / yr savings in maintenance.
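To put a rough number on the director-port piece alone (a sketch: the >$2,000-per-port figure comes from the note above, and the before/after port counts are the ones quoted in the Disney example later in this deck):

```python
# Rough capex check on the FC Director ports alone; inputs come from this deck.
PORT_COST = 2_000    # ">$2,000 per port" (this note)
PORTS_BEFORE = 384   # pass-thru design (Disney example later in the deck)
PORTS_AFTER = 8      # after consolidating I/O through the I/O Directors

before = PORTS_BEFORE * PORT_COST
after = PORTS_AFTER * PORT_COST
print(f"Director ports before: ${before:,}")                  # $768,000
print(f"Director ports after:  ${after:,}")                   # $16,000
print(f"Saved on director ports alone: ${before - after:,}")  # $752,000
```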
  • Summary: Costs less than 1G in I/O-intensive environments; costs less than 10G just about anywhere. Host cards are half the cost of FCoE. Changes made in minutes, not days. 70% less complexity: less to install, less to go wrong, less space. 30% less power than traditional I/O. Scalable to hundreds of servers in a single environment. Lets you react quickly for application failover. Open: no vendor lock-in → servers / blades of your choice. Future-proof: modular design.
  • Purpose of the cloud → separate the server from the service. The server can be any machine, anywhere. Service = a specific capability (storage, network, etc.) delivered at a specific service level (performance, uptime, etc.). Virtual I/O lets you: run any app on any server (“any to any”); guarantee service levels (isolated connectivity, QoS controls); reconfigure quickly to provision new storage/networks in SW.
  • Cloud = “Apps can reside on any machine.” Without Xsigo: all servers require connectivity to all resources → a connectivity mess; expensive, inflexible, not scalable. With Xsigo: Xsigo provides dynamic, any-to-any connectivity. Connect any server to any resource through a single cable while maintaining the isolation and bandwidth characteristics of traditional I/O.
  • Xsigo connects network and storage resources where needed. A “network cloud” → ports tied to a specific use (e.g., the “Vmotion network”). A “storage cloud” → ports tied to a specific storage type (e.g., the “Production SAN”). Provision connectivity to servers to meet specific application needs.
  • Scaling is not limited. Add I/O Directors to scale both uplink bandwidth and port counts. Manage all I/O under a single management interface.
  • Powerful management: see the entire I/O infrastructure and manage it from a single pane of glass. Quickly see what is connected to what and identify issues at a glance. Manage the entire data center from one console.
  • Manage all resources from a single location. Quickly locate and drill into the data and tasks you need now. User-configurable to meet your operational requirements.
  • Templates let you create standardized connectivity for specific types of servers. E.g., the “web server” template defines the network and storage resources needed by that type of server. The networking team can control all network resources; the storage team the same.
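As an illustration of the template idea (a hypothetical data layout, not the XMS template format), a “web server” template might bundle the vNICs and vHBAs that class of server always needs, so provisioning a new host is just applying the template:

```python
# Hypothetical representation of an I/O template; field and port names are illustrative.
WEB_SERVER_TEMPLATE = {
    "vnics": [
        {"name": "prod-net",    "uplink": "10GE-slot3", "vlan": 100},
        {"name": "vmotion-net", "uplink": "10GE-slot4", "vlan": 200},
        {"name": "mgmt-net",    "uplink": "1GE-slot5",  "vlan": 300},
    ],
    "vhbas": [
        {"name": "prod-san", "uplink": "FC-slot1"},
    ],
}

def provision(server: str, template: dict) -> None:
    """Apply a standardized connectivity template to a server (conceptual only)."""
    for nic in template["vnics"]:
        print(f"{server}: create vNIC {nic['name']} on {nic['uplink']} (VLAN {nic['vlan']})")
    for hba in template["vhbas"]:
        print(f"{server}: create vHBA {hba['name']} on {hba['uplink']}")

provision("esx-host-07", WEB_SERVER_TEMPLATE)
```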
  • Each I/O Director port is an isolated connection to a specific network or storage resource. This screen lets you view and manage those connections.
  • The iPad app lets you view data and perform basic management tasks from anywhere.
  • Transcript

    • 1. I/O Virtualization Overview Kevin O’Hear Sr. Systems Engineer 917-855-4231 kohear@xsigo.com
    • 2. What is Xsigo? Xsi  go  see-go n (2004) What is Xsigo? 3: completes the cloud computing model 2: significantly reduces capex and opex in enterprise data centers 1: connects any server to any network or storage device in seconds
    • 3. What Are Your Pain Points? Data Center Challenges Budget Pressure Cost savings, infrastructure deployment cost avoidance Static Resources Difficult to re-purpose assets Resource Utilization Inefficient use of server, network, and storage assets Power and Cooling Escalating power costs, constrained supply Space Space constraints; costly to expand Complexity Multiple storage protocols, network types. Numerous I/O connections
    • 4. VMworld 2009 VMworld 2010 2 I/O Directors supporting booth demonstrations on 1000 VMs 8 I/O Directors in cloud, supporting labs with up to 5600 VMs running on 112 servers 4 I/O Directors on VMware Express
    • 5. Data Center Evolution Enables next wave of computing. 1985 1990 1995 2005 Shared Storage LAN Consolidation Server Virtualization I/O Virtualization Foundation of the cloud
    • 6. Connect any server to any resource. Any to Any Connectivity HP Sun IBM Dell 10G 1G FC iSCSI NAS FCoE Web CRM Exchange SAP I/O Virtualization Completes the Cloud Server virtualization: Server cloud I/O virtualization: Resource cloud
    • 7. Data Center Convergence Traditional Virtual I/O
    • 8. Data Center Convergence Before After 98 cables 6 cables
    • 9. Proliferation of I/O Devices for VMs
    • 10. The I/O Problem Network I/O Connections Storage I/O Connections VM VM VM VM VM VM VM VM VM VM VM VM Application Server Congestion Unpredictable Performance Cabling Mess Costly The I/O you need??
    • 11. Infrastructure silos Numerous connectivity types
      • Cannot share resources.
      • Cannot consolidate management.
      HP Cisco FC NAS Multiple platforms Web CRM Exchange SAP VMWare HyperV Solaris Moving from Silos to Cloud Connect any server, any vendor, to any resource. 1G Ethernet FCoE iSCSI VMware 10G Ethernet 10G Ethernet 1G Ethernet ESX Xen Hyper -V Open Fabric Manager FlexAddress UCS Virtual Connect IBM Dell 10G 1G FC iSCSI NAS CIFS Web SAP Exchange CRM
    • 12. Virtual I/O Close-up 10G 1G FC iSCSI NAS FCoE Add isolated networks on demand Add resources to live servers Migrate virtual I/O on demand Inflexible system configuration. Low resource utilization. 10G 1G vNIC vNIC vNIC vHBA vHBA vHBA vHBA vNIC vNIC vNIC
    • 13. Virtual I/O Close-up 10G 1G FC iSCSI NAS FCoE Add isolated networks on demand Add resources to live servers Migrate virtual I/O on demand Inflexible system configuration. Low resource utilization. 10G 1G vNIC vNIC vNIC vHBA vHBA vHBA vHBA vNIC vNIC vNIC
    • 14. 3 Ways Virtual I/O Helps
      • Scalable Connectivity
      • More Bandwidth
      • Enables the Cloud
    • 15. #1 Scalable Connectivity and Bandwidth
      • Hardwired connections difficult to add / change
      • Difficult to accommodate new requirements
      • Fixed
      Traditional Virtual I/O
      • On-demand connectivity for:
        • Vmotion
        • FT network
        • Management
      • Meet future requirements without re-cabling
      • Dynamic
      FT network Vmotion network
    • 16. #1 Scalable Connectivity and Bandwidth Virtual I/O
      • Link consolidation may create a need for QoS (see the sketch after this slide)
      • Based on MAC address of VMkernel interface of interest
      FT network Vmotion network vSphere Requirements and Best Practices
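Since the consolidated link carries Vmotion, FT, management, and production traffic together, per-vNIC QoS keyed to the relevant VMkernel interface's MAC address keeps one traffic class from starving the others. A minimal sketch of that mapping (hypothetical MAC addresses and bandwidth values; not vSphere or Xsigo syntax):

```python
# Conceptual QoS mapping: VMkernel MAC address -> committed/peak bandwidth (Mb/s).
# Hypothetical values and names; real limits depend on the environment.
QOS_BY_MAC = {
    "00:50:56:7a:00:01": {"traffic": "Vmotion", "committed": 2000, "peak": 4000},
    "00:50:56:7a:00:02": {"traffic": "FT",      "committed": 1000, "peak": 2000},
    "00:50:56:7a:00:03": {"traffic": "Mgmt",    "committed": 100,  "peak": 500},
}

def rate_limit_for(mac: str) -> dict:
    """Look up the QoS policy for a VMkernel interface by its MAC address."""
    return QOS_BY_MAC.get(mac, {"traffic": "default", "committed": 0, "peak": 10000})

print(rate_limit_for("00:50:56:7a:00:01"))
```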
    • 17. #1 Scalable Connectivity and Bandwidth VM to VM switching VNIC to VNIC switching Ethernet switching
      • How does traffic switch in this system? (a conceptual sketch follows this slide)
      • VM to VM through vSwitch
      • vNIC to vNIC through Xsigo IO cards
      • Out vNICs to external switch
      OS DDR IB HCA Xsigo Drivers vNIC vNIC vHBA vHBA Application Hypervisor DDR IB HCA Xsigo Drivers vNIC vNIC vHBA vHBA VM VM VM VM VM VM VM VM VP780 Server Server FC Storage Array Ethernet Switch
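The three switching paths listed on this slide can be made concrete with a small decision-function sketch (conceptual only; it assumes each endpoint is tagged with its host and with whether it sits behind the I/O Director fabric):

```python
# Conceptual view of where traffic switches in this design (illustrative only).
def switching_path(src_host: str, dst_host: str, dst_on_fabric: bool) -> str:
    if src_host == dst_host:
        return "VM to VM via the hypervisor vSwitch (never leaves the server)"
    if dst_on_fabric:
        return "vNIC to vNIC via the Xsigo I/O cards and I/O Director fabric"
    return "out a vNIC, through an I/O module, to the external Ethernet switch"

print(switching_path("esx-01", "esx-01", True))     # same host
print(switching_path("esx-01", "esx-02", True))     # both behind the I/O Director
print(switching_path("esx-01", "corp-lan", False))  # external destination
```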
    • 18. #2 More I/O Capacity 14% 23% 59% 86% vSphere network I/O Compared with ESX 3.5 vSphere iSCSI Throughput (max Gbs) Nehalem I/O Capacity I/O performance increase of Xeon 5400 vs. Xeon 5500 0.9 Gbs 9.1 Gbs 100% 300% To get the most from vSphere, you need more I/O
      • vSphere + Nehalem offer 3X more I/O capacity
        • Both eliminate limitations of previous gen products
      • I/O becomes a limiting factor
      ESX 3.5 ESX 4.0 Xeon 5400 Xeon 5500
    • 19. #2 More I/O Capacity
      • 20Gbs from one server
        • ESX 4.0 (vSphere)
        • Nehalem processor
      • FC and Ethernet traffic
      Fibre Channel traffic Ethernet traffic Total
    • 20.
      • Virtual I/O Overview
    • 21. Xsigo I/O Director Xsigo I/O Director Server connections I/O Modules 10 Gig E Ports VP780 I/O Director VP560 I/O Director Fibre Channel Ports Gig E Ports I/O Modules Gig E Ports Redundant Hot Swappable Fans and Power Supplies
    • 22. Xsigo I/O Director I/O Director Product Family 10Gb 20Gb 40Gb Easiest Integration Best Price/ Performance Highest Performance
    • 23. I/O Module Options 10x1G Ethernet 1x10G Ethernet 2x8G Fibre Channel
    • 24. Fat Tree Network Topology … up to 20 Host Servers per IS24 … up to 20 Host Servers per IS24 1 link per Host Server 4 uplinks per IS24
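The fat-tree figures on this slide imply the edge oversubscription ratio directly; a quick check (assuming host links and uplinks run at the same speed, which the slide does not state):

```python
# Oversubscription at each IS24 edge switch, per the slide's figures.
hosts_per_is24 = 20   # "up to 20 Host Servers per IS24"
links_per_host = 1    # "1 link per Host Server"
uplinks_per_is24 = 4  # "4 uplinks per IS24"

oversubscription = (hosts_per_is24 * links_per_host) / uplinks_per_is24
print(f"Edge oversubscription: {oversubscription:.0f}:1")  # 5:1 at full host count
```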
    • 25.  
    • 26.
      • Why Virtual I/O
    • 27. Less Complexity 70% less equipment. Virtual I/O Traditional 70% fewer cards, switches, ports, cables. Less cost.
    • 28. Less Power KWh (millions) Power for 1,000 servers, plus I/O resources, over 3 year period. 30% less power
      • 70% fewer I/O cards
      • 70% fewer switch ports
      • More bandwidth to each server
      Power
    • 29. Operational Savings
      • Reduced job completion time from 8 days to 2 days. New blade chassis deployed in 8 hours not 4 weeks.
      • Switched 1,000 VMs from FC storage to iSCSI in hours, not days or weeks.
      • Reduced Vmotion time by 2/3. Reduced backup time from 1.4 hrs to 7 minutes.
      • Reduced backup time from 57 hours to 27 hours.
      • Deployed isolated network for payment card transactions in hours, rather than weeks.
      Up to 95% savings Up to 95% savings Up to 92% savings 53% savings Up to 95% savings
    • 30. Scalable I/O for Blades Without Xsigo With Xsigo Offered by: Interoperable with: No reliance on mezzanine cards Up to 64 I/O connections per blade 6 switch modules 98 connections 2 switch modules 6 connections
    • 31. Flexible Compute Resources Application running on Server 1 Connectivity moved to Server 2 Server 1 powered down X Server 2 boots from same LUN Server 1 boots from external LUN Server 1 powered up Same OS and apps as Server 1 Same WWNs, MAC addresses, IP addresses Fault Application move to Server 2 required - No SAN re-mapping - No network re-mapping I/O profile moved to Server 2 Re-deploy server apps in minutes 2 1
    • 32. Open-Standards Approach
        • Proven with all leading vendors of:
          • - Infrastructure
          • - Servers
          • - Blade Systems
          • - FC Storage
          • - iSCSI Storage
          • - NAS
      Demonstrated Interoperability
    • 33.
      • Customer Examples
    • 34. Salesforce.com
      • Reduced server connectivity by 92%
      • >$1M capital expense savings for each 6-blade chassis
      • Deploy 64 hosts in days rather than weeks
      • Production test cycle went from 8 days to 2 days
      • Build and destroy 8000 VMs per day
      Dell blade chassis connected to backend storage arrays Now a standard for salesforce.com virtualized environments Savings with Xsigo:
    • 35. Salesforce.com Without Xsigo With Xsigo 98 cables 6 cables
    • 36. Disney Internet Media Group Without Xsigo With Xsigo 192 FC Mezz Cards 48 Switch Modules 48 FC Pass Thru Modules 384 FC Director Ports 24 10G ports 408 Cables $2,085K capital cost 192 IB Mezz Cards 48 IB Switch Modules No Pass Thru Modules 8 FC Director Ports 4 10G Ports 60 Cables $668K capital cost Cap Ex: $1.4M Op Ex: Power: $100K TOTAL $1.5M SAVINGS Savings (24 blade chassis over 3 years) 67% capital cost savings 24 blade chassis 48 IB conn. 8 FC 4 10G 24 blade chassis 384 FC conn. 24 10G conn.
    • 37. Field Proven Design “Simplifies and accelerates our server deployments.” - Jay Leone, Lab Manager, Avaya “Xsigo’s remote management simplifies our lives.” - Bill Fife, Dir of IT, Wholesale Electric “A highly flexible scale-out architecture.” - Mornay Van Der Walt, VMware “Xsigo streamlines management.” - David Zacharias, IT manager, HiFX “Up to 5X better performance at less cost.” - Lance King, IT manager, OC Tanner
    • 38. Summary Without Xsigo With Xsigo Immediate ROI Completes the cloud infrastructure 70% fewer cards, switch ports, cables Scalable to data center-wide deployments Reduce power and resource consumption Capital cost ($M) $592K $2.14M
    • 39.
      • How Virtual I/O Enables the Cloud
    • 40. Cloud Services
      • Typical Cloud Services
        • Customer VMs
        • Storage
          • Capacity
          • Performance
        • Network services
          • Bandwidth
          • Firewall
          • Load balancing
        • Back-end Services
          • Backup
          • Replication
          • Disaster Recovery
      • Service Level Agreements
        • Uptime guarantees
        • Provisioning guarantees
          • How long to add services
        • Security Guarantees
        • Recovery guarantees
          • Fail-over time
          • Backup/recovery times
        • Performance guarantees
          • Throughput
          • Latency
          • Storage latency/ response time
      A Service delivered at a specific Service Level Cloud:
    • 41. Virtual I/O Completes the Cloud [diagram] Without Xsigo vs. With Xsigo: each host carries VM1–VM3 plus a datastore, connected through vNICs and vHBAs to the Customer “A” and Customer “B” production networks, Console network, Vmotion network, iSCSI network, Backup network, and Production SAN; with Xsigo those uplinks are pooled into a Network Cloud and a Storage Cloud.
    • 42. Xsigo “Cloud”: Pool of resources that can be dynamically allocated Xsigo “Cloud” resource mapping
        • Network Cloud
          • Xsigo uplink port or group of ports tied to specific network resource
            • Dedicated networks
            • Firewall/VPN
            • Load-balancer
        • Storage Cloud
          • Xsigo uplink port or group of ports tied to specific storage resource
    • 43. Cloud Scaling Firewall/VPN Load Balancer NFS SAN iSCSI SAN FC SAN Map any server to any storage and network resource Pool uplink connectivity across multiple I/O Directors Core Switch
    • 44.
      • I/O Management for Cloud Infrastructure
    • 45. Topology Mapping Xsigo Topology View
    • 46. XMS 3.0 Dashboard
    • 47. Creating IO Templates
    • 48. Map Downlink ports to Clouds
    • 49. Manage from an iPad
    • 50. OV0907