2. What is Xsigo? Xsigo ("see-go"), n. (2004) 1: connects any server to any network or storage device in seconds 2: significantly reduces capex and opex in enterprise data centers 3: completes the cloud computing model
3. What Are Your Pain Points? Data Center Challenges. Budget Pressure: cost savings, infrastructure deployment cost avoidance. Static Resources: difficult to re-purpose assets. Resource Utilization: inefficient use of server, network, and storage assets. Power and Cooling: escalating power costs, constrained supply. Space: space constraints; costly to expand. Complexity: multiple storage protocols and network types; numerous I/O connections.
4. VMworld 2009 VMworld 2010 2 I/O Directors supporting booth demonstrations on 1000 VMs 8 I/O Directors in cloud, supporting labs with up to 5600 VMs running on 112 servers 4 I/O Directors on VMware Express
5. Data Center Evolution Enables next wave of computing. 1985 1990 1995 2005 Shared Storage LAN Consolidation Server Virtualization I/O Virtualization Foundation of the cloud
6. Connect any server to any resource. Any to Any Connectivity HP Sun IBM Dell 10G 1G FC iSCSI NAS FCoE Web CRM Exchange SAP I/O Virtualization Completes the Cloud Server virtualization: Server cloud I/O virtualization: Resource cloud
10. The I/O Problem Network I/O Connections Storage I/O Connections VM VM VM VM VM VM VM VM VM VM VM VM Application Server Congestion Unpredictable Performance Cabling Mess Costly The I/O you need??
12. Virtual I/O Close-up 10G 1G FC iSCSI NAS FCoE Add isolated networks on demand Add resources to live servers Migrate virtual I/O on demand Inflexible system configuration. Low resource utilization. 10G 1G vNIC vNIC vNIC vHBA vHBA vHBA vHBA vNIC vNIC vNIC
21. Xsigo I/O Director. VP780 I/O Director and VP560 I/O Director: server connections; I/O modules (10 GigE ports, Fibre Channel ports, GigE ports); redundant hot-swappable fans and power supplies.
22. Xsigo I/O Director Product Family. 10Gb: easiest integration. 20Gb: best price/performance. 40Gb: highest performance.
24. Fat Tree Network Topology … up to 20 Host Servers per IS24 … up to 20 Host Servers per IS24 1 link per Host Server 4 uplinks per IS24
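The fat-tree figures above imply an edge oversubscription ratio. A quick check, assuming downlinks and uplinks run at the same rate (an assumption; the slide does not state link speeds):

```python
# Edge oversubscription for one IS24 switch, from the slide's figures.
# Assumes all links run at the same rate (not stated in the deck).
hosts_per_is24 = 20   # up to 20 host servers per IS24
links_per_host = 1    # 1 link per host server
uplinks = 4           # 4 uplinks per IS24

oversubscription = (hosts_per_is24 * links_per_host) / uplinks
print(f"Worst-case oversubscription: {oversubscription:.0f}:1")  # 5:1
```

In practice a fat tree is provisioned so that this ratio matches the expected traffic mix; 5:1 at the edge is a common design point.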
27. Less Complexity. 70% less equipment: virtual I/O vs. traditional means 70% fewer cards, switches, ports, and cables. Less cost.
30. Scalable I/O for Blades. Without Xsigo: 6 switch modules, 98 connections. With Xsigo: 2 switch modules, 6 connections; no reliance on mezzanine cards; up to 64 I/O connections per blade. Offered by / interoperable with: (vendor logos).
31. Flexible Compute Resources. Start: application running on Server 1; Server 1 boots from an external LUN. On a fault, the application must move to Server 2: (1) I/O profile moved to Server 2 (same WWNs, MAC addresses, IP addresses; no SAN re-mapping, no network re-mapping); (2) Server 1 powered down; Server 2 boots from the same LUN and comes up with the same OS and apps as Server 1. Re-deploy server apps in minutes.
36. Disney Internet Media Group (24 blade chassis). Without Xsigo: 192 FC mezzanine cards, 48 switch modules, 48 FC pass-through modules, 384 FC Director ports, 24 10G ports, 408 cables; $2,085K capital cost; 384 FC and 24 10G connections. With Xsigo: 192 IB mezzanine cards, 48 IB switch modules, no pass-through modules, 8 FC Director ports, 4 10G ports, 60 cables; $668K capital cost; 48 IB, 8 FC, and 4 10G connections. Savings (24 blade chassis over 3 years): Cap Ex $1.4M; Op Ex (power) $100K; total $1.5M; 67% capital cost savings.
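The Disney savings figures above can be checked with simple arithmetic (all inputs are from the slide; note the slide rounds the ~68% result down to 67%):

```python
# Arithmetic check of the Disney capital-cost figures on this slide.
without_xsigo = 2_085_000   # $2,085K capital cost, without Xsigo
with_xsigo = 668_000        # $668K capital cost, with Xsigo
power_opex = 100_000        # $100K power op ex savings over 3 years

capex_savings = without_xsigo - with_xsigo       # $1,417K, i.e. ~$1.4M
pct_savings = capex_savings / without_xsigo      # ~68% (slide says 67%)
total_savings = capex_savings + power_opex       # ~$1.5M
print(f"${capex_savings:,} capex ({pct_savings:.0%}), ${total_savings:,} total")
```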
37. Field Proven Design “Simplifies and accelerates our server deployments.” - Jay Leone, Lab Manager, Avaya “Xsigo’s remote management simplifies our lives.” - Bill Fife, Dir of IT, Wholesale Electric “A highly flexible scale-out architecture.” - Mornay Van Der Walt, VMware “Xsigo streamlines management.” - David Zacharias, IT Manager, HiFX “Up to 5X better performance at less cost.” - Lance King, IT Manager, OC Tanner
38. Summary. With Xsigo: immediate ROI; completes the cloud infrastructure; 70% fewer cards, switch ports, and cables; scalable to data center-wide deployments; reduced power and resource consumption. Capital cost: $2.14M without Xsigo vs. $592K with Xsigo.
43. Cloud Scaling Firewall/VPN Load Balancer NFS SAN iSCSI SAN FC SAN Map any server to any storage and network resource Pool uplink connectivity across multiple I/O Directors Core Switch
Key points: Dynamic, any-to-any connectivity Connect any server to any network or storage Simplified infrastructure 70% less complexity, 100X more agility Changes in SW, not HW Essential for the cloud computing model Cloud without virtual I/O is expensive and inefficient
Use this to identify your customer’s pain points. Xsigo addresses each in different ways. Key points: Pain: Complexity Virtualization consolidates servers, but demands more from the I/O. Each server now needs more connections for different storage types and networks. More cards, cables, and switch ports to manage. Pain: Utilization Virtualization helps, but utilization is still limited by I/O. Few can afford to connect every server to every network and storage device, which then limits what those servers can do. Pain: Budgets Infrastructure costs money for the boxes themselves and for the ongoing software and hardware maintenance, meaning both cap ex and op ex spending. When you factor in switch ports, cabling, and cards, it is very common that I/O costs far more than the servers themselves. Pain: Space/Power More I/O means bigger servers, more power.
Xsigo supported: Booth demos Cloud VMware Express
Just like the other IT milestones of the past 20 years, I/O virtualization represents a dramatic evolution of I/O technology. Consolidation has been the overarching trend: as resources proliferate, we need to consolidate management to control utilization and workloads. LAN consolidation: the multiprotocol router brought networks together and eliminated “sneaker net”, delivering a huge boost in productivity. Storage networking: consolidated storage in a single pool rather than across discrete servers or multiple arrays. Storage utilization went up; complexity went down. Server virtualization: consolidated servers by allowing multiple applications to run in a pool of processing resources rather than on discrete machines. Xsigo virtual I/O: any-to-any connectivity. Dynamically connect any server to any network or storage resource. Server virtualization and I/O virtualization together let you run any application on any server. That is the fundamental concept of the cloud.
Server virtualization gave us the server cloud Servers became interchangeable resources. Xsigo virtual I/O enables the infrastructure cloud Any network and storage can be connected to any server at any time. Data centers have multiple network and storage resources Server virtualization lets you run any app on any machine. But that means that every machine potentially needs connectivity to every resource. Xsigo lets you connect whichever network or storage is needed, without the need to massively scale connectivity -- cards, cables, and switch ports.
Conventional I/O Expensive, complex… and impossible to change once done. Convergence means: Consolidating infrastructure 70% less complexity Any-to-any connectivity Take one wire and make it do the job of many More efficiency and more flexibility.
Why traditional I/O does not work. Conventional I/O was designed to run one application per server. Because functionality was fixed, you only needed a few connections per server. It was simple. In the early days of virtualization, that was OK: a small number of VMs, mostly test/dev, with less concern about redundancy, bandwidth, and network isolation. Conventional I/O is no longer enough: More VMs per server: 10-20 are common. Production environments: redundancy and network isolation are critical. Faster servers: need more bandwidth. New management features: vMotion and Fault Tolerance both add network requirements. A survey shows 7 to 16 I/O connections per server, which creates these problems: Unpredictable performance: with all that complexity, how do you make sure VMs have the bandwidth they need? Cost: cards and cables often cost more than the server itself. Congestion: even with a bunch of cables, you can still get congestion. How do you find it and fix it? I/O you need?: How do you know if a particular server has the I/O you need? This is especially true with blades, where I/O is limited. Cabling: a mess.
Objective is to move from silo infrastructure to cloud. Without Xsigo: Inflexible silos: a myriad of cables, configuration settings, and mappings that are very hard to change. Multiple protocols and transports: if you want to run a Fibre Channel-based app on a server, but that server is not tied into the FC fabric, or lacks the correct card, you’re stuck. End result: you cannot share resources and cannot consolidate management. Some products (such as Cisco UCS and HP Virtual Connect) offer partial solutions, but they only work with that vendor’s gear. With Xsigo: Xsigo eliminates the silos and creates universal connectivity. Any server can connect to any network or storage without ever having to re-wire servers or install new cards. Servers become interchangeable assets that can be deployed as needed.
How does this work? Traditional I/O has fixed resources: cards, cables, and mappings that are hard to change. With Xsigo, the I/O Director becomes the consolidation point. For the servers: Fewer cables: just 2 connections to each server deliver fully redundant connectivity. Fewer cards: one or two cards replace all the NICs and HBAs. Virtual resources: virtual NICs and HBAs replace the fixed cards. They look and act just like conventional resources. For the networks and storage: Connect to all storage types: FC, iSCSI, NAS. Down the road, if native FCoE storage catches on, we will offer a module for that as well. Connect to all network types: 1GE, 10GE. Migrate I/O between servers, even on live machines, to accelerate moves/adds/changes. Connect new networks without re-cabling servers. Add resources to live machines, at any time, without a server re-boot, and without having to enter the data center. (One customer said, “It can’t get much simpler than not having to be there.”)
Virtual I/O helped achieve this in three ways.
The difference between old and new is dramatic. Without Xsigo you’re forever guessing about what I/O is needed. Guess wrong and you’re de-racking servers to add cards, route cables, search for switch ports. With Xsigo connectivity is on-demand. Need a new network for FT? You can configure it entirely in software.
Here we see how vSphere and Nehalem stack up: 3X more BW capability overall, 9X more iSCSI capability.
Performance monitor tool shows capacity of 20Gb over a single cable. Combo of Ethernet and FC traffic on one transport. Key points: Nehalem + vSphere can drive 20Gb of traffic from one server (2 x 20Gb cables). Xsigo can support that traffic with a standard configuration (2 cables to each server, active/active). Can you do this with 10Gb + FCoE? No performance data that I’ve seen.
The I/O consolidation point Connect networks and storage via I/O modules Connect servers to the server ports Highly serviceable design: Hot swappable I/O modules Redundant hot swappable fans and power supplies Passive midplane Choose the form factor that matches your uplink requirements. 4U VP780: Up to 15 I/O modules 2U VP560: Up to 4
I/O Director Family Choose the server connection to match your application needs: 10Gb, 20Gb, or 40Gb server connections. 10G: Easiest integration, plugs into existing server ports. 20G: Twice the performance of the 10G, at similar overall cost in many deployments. 40G: The world’s highest performance server interconnect. Each is available in the 2U high or 4U high form factor.
Choose the hot swappable modules for the uplinks you need.
1U enclosure. Redundant power and cooling: one power supply is active; the other is a standby in case of failure. The fan tray is hot-swappable. The RS232 port is not active. The Ethernet port is not active. When will the management ports be used (if at all)?
70% fewer cards, cables, and switch ports. Saves cost in: Acquisition. Installation. Space (fewer rack units). Software and hardware maintenance fees. Downtime (less gear + all connections are redundant).
Saves power with: Fewer switches, cards Smaller servers Higher utilization Some of the backup numbers here: A 1G connection is 10 – 12 watts total. A 10G connection is 40 - 50 watts. A single server, running 10 1G connections, will consume 100 watts, which costs about $176 per year in power and cooling (for I/O alone), at 10 cents per KWHR. Xsigo consumes about 10 watts per server total for the I/O card, and about 800 watts for dual I/O Directors. The expansion switch is 7.5 watts/server. For 100 servers, that totals to 35 watts per server, a 65% power savings. A server is about 300 watts, so that’s 300 + 100 for conventional I/O (400 watts), vs. 300 + 35 (335 watts) for Xsigo, a 19% power savings for this example . FC and/or 10G push power higher.
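A worked version of the power arithmetic in these notes, using the per-server wattages stated above and assuming cooling doubles the raw power cost (which reproduces the ~$176/year figure in the notes):

```python
# Power-cost arithmetic from the notes above; $0.10/kWh as stated.
RATE_PER_KWH = 0.10
HOURS_PER_YEAR = 24 * 365            # 8760 hours

conventional_w = 10 * 10             # ten 1G links at ~10 W each = 100 W
power_cost = conventional_w / 1000 * HOURS_PER_YEAR * RATE_PER_KWH  # ~$87.60
with_cooling = power_cost * 2        # cooling assumed to equal power draw
print(f"Conventional I/O: ~${with_cooling:.0f}/yr per server")  # ~$175

xsigo_w = 35                         # per-server total stated in the notes
io_savings = (conventional_w - xsigo_w) / conventional_w
print(f"I/O power savings: {io_savings:.0%}")  # 65%
```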
Get more out of blades. Without Xsigo: I/O limited by mezzanine cards and switch ports. You can only get so many ports on a blade. With Xsigo: Up to 64 virtual connections per blade. Get whatever I/O is supported by your I/O modules (not limited by mezzanine card). Move those connections between blades at any time. HP Virtual Connect note: only virtualizes NICs. FC connections still limited by installed mezzanine cards. Cannot move I/O to non-HP systems or to any rack-mount server. Cisco UCS note: need full Cisco environment for all features (servers/switches). Host card has specific functionality. Very hard to configure (Cisco is shipping demo systems pre-configured only).
How Xsigo helps accelerate a server failover (replace one server with another) Without Xsigo Re-map network and SANs, or move cards / cables. With Xsigo I/O moved to another server. I/O identities stay the same no re-mapping, no re-wiring. Server boots from same LUN comes up with the same apps and OS. All I/O appears exactly as before.
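The failover flow above can be sketched as an I/O identity that travels between servers. This is an illustrative model only — `IOProfile` and `move_to` are invented names for this sketch, not the Xsigo management API:

```python
from dataclasses import dataclass

@dataclass
class IOProfile:
    """A server's I/O identity; it moves with the workload, not the box."""
    wwns: list       # Fibre Channel identities (unchanged on move)
    macs: list       # Ethernet identities (unchanged on move)
    boot_lun: str    # the server boots from this external LUN
    server: str = "server-1"

    def move_to(self, target: str) -> None:
        # Only the attachment point changes: identities stay the same,
        # so no SAN re-mapping and no network re-wiring is needed.
        self.server = target

profile = IOProfile(wwns=["wwn-a", "wwn-b"], macs=["mac-a"], boot_lun="lun-0")
profile.move_to("server-2")   # server 2 now boots from the same LUN
```

The design point the slide makes is exactly this separation: because the WWNs, MACs, and boot LUN belong to the profile rather than the hardware, replacing a failed server is a data change, not a wiring change.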
Unlike proprietary solutions, Xsigo is open. Deployed with most storage, server, and switch vendors. Supports VMware, Hyper-V, Xen, Windows, Red Hat Linux, Solaris. Cooperative support agreements in place with many vendors, including VMware. Member of TSANet.
See Salesforce.com case study for complete details!
Blade example. Customer connected from blades to FC ports via pass-through modules. Before: required MANY FC Director ports (at >$2,000 per port). This is common: customers do not want an external FC switch added (it makes management a real pain), so they go straight to the FC Director ($$$!!). After: consolidated I/O needs far fewer FC ports at less cost. Xsigo is NOT a switch; it is I/O, so it does NOT add a management layer. CapEx savings also deliver >$1M/yr savings in maintenance.
Summary: Costs less than 1G in I/O-intensive environments. Costs less than 10G just about anywhere. Host cards half the cost of FCoE. Changes made in minutes, not days. 70% less complexity: less to install, less to go wrong, less space. 30% less power than traditional I/O. Scalable to hundreds of servers in a single environment. Lets you react quickly for application failover. Open: no vendor lock-in; servers/blades of your choice. Future proof: modular design.
Purpose of the Cloud Separate the server from the service . Server can be any machine, anywhere. Service = A specific capability (storage, network, etc…) delivered at a specific service level (performance, uptime, etc…). Virtual I/O lets you: Run any app on any server (“Any to any”) Guarantee service levels: Isolated connectivity QoS controls Fast reconfiguration to provision new storage/networks in SW
Cloud = “Apps can reside on any machine.” Without Xsigo All servers require connectivity to all resources A connectivity mess Expensive Inflexible Not scalable With Xsigo Xsigo provides dynamic, any-to-any connectivity. Connect any server to any resource through a single cable Maintains the isolation and bandwidth characteristics of traditional I/O
Xsigo connects network and storage resources where needed. A “network cloud”: ports tied to a specific use (e.g., a “vMotion network”). A “storage cloud”: ports tied to a specific storage type (e.g., a “Production SAN”). Provision connectivity to servers to meet specific application needs.
Scaling is not limited. Add I/O Directors to scale both uplink bandwidth and port counts. Manage all I/O under a single management interface.
Powerful management. See the entire I/O infrastructure and manage it from a single pane of glass. Quickly see what is connected to what. Identify issues at a glance. Manage the entire data center from a single location.
Manage all resources from a single location Quickly locate and drill into the data and tasks you need now User configurable to meet your operational requirements
Templates let you create standardized connectivity for specific types of servers. E.g., the “web server” template defines the network and storage resources needed by that type of server. The networking team can control all network resources; the storage team the same.
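A template like the one described might be modeled as plain data. This sketch is illustrative only; the field names are invented for the example and are not Xsigo's actual template schema:

```python
# Hypothetical "web server" connectivity template: the virtual NICs and
# HBAs every server of this type should receive when deployed.
web_server_template = {
    "name": "web-server",
    "vnics": [   # networking team controls these entries
        {"name": "prod-net", "uplink": "10GE-1", "vlan": 100},
        {"name": "mgmt-net", "uplink": "1GE-1", "vlan": 200},
    ],
    "vhbas": [   # storage team controls these entries
        {"name": "prod-san", "uplink": "FC-1", "boot": True},
    ],
}

def apply_template(template: dict, server: str) -> list:
    """List the virtual resources that would be provisioned on a server."""
    names = [v["name"] for v in template["vnics"] + template["vhbas"]]
    return [f"{server}:{n}" for n in names]

print(apply_template(web_server_template, "blade-7"))
# ['blade-7:prod-net', 'blade-7:mgmt-net', 'blade-7:prod-san']
```

The point of the structure is the division of control the notes describe: network and storage entries live in separate sections, so each team owns its half of the template.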
Each I/O Director port is an isolated connection to a specific network or storage resource. This screen lets you view and manage those connections.
The iPad app lets you view data and perform basic management tasks from anywhere.