How Virtual I/O Helps You Do Three Times More with VMware vSphere and Intel Nehalem

VMware vSphere and Intel Nehalem let you get far more from your servers... if your I/O can handle it. Learn how virtual I/O helps you get the most from these technologies, and helps you save cost and time as well.

Speaker notes

  • A revolution in the way servers connect to networks and storage, coming at the exact moment when servers need more capability: vSphere and Nehalem. The result is far greater compute density, more VMs per server, and much lower costs. Here's a real-world example.
  • At VMworld, VMware used a single rack to drive all of their demos. Even then, they used only about 500 virtual machines, roughly half the capacity of this rack. Last year, they used 14 racks to power the demos. See the video for more about what they achieved.
  • Virtual I/O helped achieve this in three ways.
  • This is a dramatic change because I/O is managed as a flexible resource: wire once to the server, then dynamically share capacity where it's needed. The result is far more effective capacity.
  • A performance monitor shows 20Gb of capacity over a single cable: a combination of Ethernet and Fibre Channel traffic on one transport.
  • Today’s application silos are inflexible due to the myriad of cables, configuration settings, and mappings that make change difficult. Multiple protocols and transports make it worse. If you want to run a Fibre Channel-based app on a server, but that server is not tied into the FC fabric, or lacks the correct card, you’re stuck. Furthermore, data centers are usually heterogeneous, with multiple platform types and multiple storage types. System vendors offer partial solutions to the I/O problem, but they only work with that vendor’s gear. The end result is inefficiency: resources cannot be shared, and management cannot be consolidated.
  • Xsigo eliminates the silos by creating universal connectivity. Any server can connect to any network or storage without re-wiring servers or installing new cards, so servers become interchangeable assets that can be deployed as needed. Whether the cloud infrastructure sits within one data center or spans several, Xsigo separates the service from the server so each can be managed for maximum efficiency.
  • So a server running a web app could be connected to the DMZ and to low-end storage now…
  • If requirements change, that same server can be connected to the application network and enterprise storage a minute later.
  • With Xsigo you create a unified fabric and deploy virtual NICs and HBAs inside servers. This does a lot more than eliminate cables: it lets you move I/O between servers and create new I/O within servers on the fly, at any time, without a server reboot, and without having to enter the data center.
  • Less complexity means fewer cards, cables, and switch ports. This is the big cost savings.
  • With discrete I/O, an individual connection can become a bottleneck: a 1G pipe may be overloaded while the other connections on the same server remain idle. With virtual I/O, each connection has access to 20Gb of bandwidth when needed.
  • This creates the fabric of the dynamic data center: the silos described above give way to Xsigo’s universal connectivity, so any server can reach any network or storage.
  • The I/O Director is an enterprise-class hardware/software product. It is modular: add physical connectivity as you need it.
  • View resources all the way from the virtual machine to a physical port at the top-of-rack switch.

Slide transcript

How Virtual I/O Helps You Do Three Times More with VMware vSphere and Intel Nehalem

VMworld 2009
VMware ran all demos from one rack:
  • A data center in a 36U rack
  • 960 virtual machines
  • 160 cores
At VMworld 2008, it took 14 racks.

3 Ways Virtual I/O Helps
  • More Bandwidth
  • More Scalability
  • Enables the Cloud

How Has Server I/O Changed?
[Comparison diagram: Year 2008 vs. Year 2009]

20Gbps From One Server
  • One server
  • One PCI slot
  • One cable
20Gb of Fibre Channel AND Ethernet traffic.
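
To make the "one cable" point concrete, here is a minimal arithmetic sketch that checks whether a mixed Ethernet and Fibre Channel load fits on one 20 Gb/s virtual I/O link. Only the 20Gb capacity comes from the slide; the per-protocol demand figures are hypothetical.

```python
# Illustrative arithmetic only: does an assumed traffic mix fit on
# one 20 Gb/s link (one server, one PCI slot, one cable)?

LINK_CAPACITY_GBPS = 20.0

demands_gbps = {
    "ethernet_vm_traffic": 8.0,    # assumed LAN load
    "fibre_channel_storage": 8.0,  # assumed SAN load
    "vmotion_burst": 3.0,          # assumed migration burst
}

total = sum(demands_gbps.values())
print(f"aggregate demand: {total:.1f} of {LINK_CAPACITY_GBPS:.0f} Gb/s")
print("fits on one cable" if total <= LINK_CAPACITY_GBPS
      else "exceeds the link; traffic would be queued or rate-limited")
```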

What is Virtual I/O?

Today’s Data Center
  • Infrastructure silos in the cloud infrastructure: Web, CRM, Exchange, SAP
  • Multiple platforms: HP, Sun, IBM, Dell running VMware, Hyper-V, Solaris, Xen, each with vendor-specific I/O tools (Virtual Connect, Crossbow, Open Fabric Manager, FlexAddress)
  • Numerous connectivity types: 1G Ethernet, 10G Ethernet, FC, iSCSI, NAS, FCoE
  • Cannot share resources.
  • Cannot consolidate management.

Data Center with Virtual I/O
  • Cloud infrastructure: Web, CRM, Exchange, SAP across HP, Sun, Dell, IBM and VMware, Hyper-V, Solaris, Xen
  • The I/O Director converges the infrastructure: 10G, 1G, FC, iSCSI, NAS, FCoE
  • Resources shared across platforms.
  • Simplified management.

Data Center with Virtual I/O
A server is connected to the DMZ and iSCSI storage now…

Data Center with Virtual I/O
…and connected to the App network and FC storage a minute later.

Before Virtual I/O
  • Fixed resources: each server carries its own NICs and HBAs, hardwired to the LAN and SAN
  • Inflexible system configuration.
  • Low resource utilization.
  • (On the slide, the NICs and HBAs give way to vNICs and vHBAs running over HCAs)

After Virtual I/O
  • Servers connect through HCAs to the Xsigo I/O Director
  • Add connectivity on demand
  • Migrate connectivity between servers in real time
  • Transparent to networks and storage (LAN and SAN)
  • 20Gbps of dynamically shared bandwidth
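
The three claims on this slide (connectivity on demand, real-time migration, one shared link) can be captured in a toy model. This is a sketch under assumed names only; none of the classes or functions below are Xsigo's actual management API.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class VirtualResource:
    name: str     # e.g. "vnic0" or "vhba0"
    kind: str     # "vNIC" (LAN-facing) or "vHBA" (SAN-facing)
    network: str  # the LAN or SAN this resource is mapped to

@dataclass
class Server:
    hostname: str
    resources: List[VirtualResource] = field(default_factory=list)

def add_connectivity(server: Server, res: VirtualResource) -> None:
    """Add a vNIC/vHBA on demand: no new card, no re-cabling."""
    server.resources.append(res)

def migrate(name: str, src: Server, dst: Server) -> None:
    """Move a connection between servers in real time."""
    res = next(r for r in src.resources if r.name == name)
    src.resources.remove(res)
    dst.resources.append(res)

web, crm = Server("web01"), Server("crm01")
add_connectivity(web, VirtualResource("vnic0", "vNIC", "DMZ"))
add_connectivity(web, VirtualResource("vhba0", "vHBA", "iSCSI SAN"))
migrate("vhba0", web, crm)  # storage access reassigned a minute later
print(crm.resources)
```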

Less Complexity
70% fewer cards, switches, ports, and cables. Less cost.
[Diagram: with Xsigo vs. without Xsigo, 70% less equipment]
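
A rough component count shows where a figure like 70% can come from. The per-server counts below are assumptions for a typical discrete-I/O build (two dual-port NICs plus an HBA versus one dual-port HCA); they are not numbers from the deck.

```python
# Rough math behind "70% fewer cards, switches, ports, cables".
# Per-server counts are illustrative assumptions.

servers = 16
without_xsigo = {"cards": 3, "cables": 7, "switch_ports": 7}
with_xsigo    = {"cards": 1, "cables": 2, "switch_ports": 2}

for item, per_server in without_xsigo.items():
    before = per_server * servers
    after = with_xsigo[item] * servers
    print(f"{item}: {before} -> {after} "
          f"({100 * (before - after) / before:.0f}% fewer)")
# With these assumed counts the reduction lands near the 70% the
# slide cites.
```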

What is Driving the Need?
  • ESX 4.0 ("vSphere")
  • Next-generation Xeon ("Nehalem")

What is vSphere?
VMware ESX 4.0:
  • New storage features
  • New network management features
Built for bigger jobs.

What’s new in vSphere?
Many new features, but for I/O…
  • I/O tuning: more efficient use of I/O resources
  • Fault Tolerance: zero-downtime, zero-data-loss availability with instantaneous, stateful failover between the two virtual machines
  • 1TB host memory: supports systems with up to 1TB of RAM, enabling more VMs per server

What’s new in Nehalem?
3X more I/O capacity.

How does virtual I/O help vSphere and Nehalem?

1) Scalable Connectivity
vSphere requirements:
  • Scalable I/O: vSphere requires NICs for Fault Tolerance logging and vMotion (see the counting sketch after this list)
  • Xsigo enables the needed I/O
  • FT virtual machines can move to any device, including blades and 1U servers
  • Reduces costs, cables, switching, and dedicated hardware
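
As referenced in the list above, a quick count shows why these requirements strain servers with fixed NICs. The FT-logging and vMotion traffic classes come from the slide; the management and VM-traffic classes and the redundancy factor are assumptions.

```python
# Counting sketch: physical NICs needed for per-class, redundant links.
traffic_classes = ["vm_traffic", "vmotion", "ft_logging", "management"]
redundancy = 2  # assumed dual links per class for failover

print(f"dedicated physical NICs needed: {len(traffic_classes) * redundancy}")
# Eight NICs rarely fit a blade or 1U server. With virtual I/O the same
# classes become vNICs on one 20 Gb/s link (the deck cites up to 64
# virtual connections per server), so a single slot suffices.
```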

1) Scalable Connectivity
Without Xsigo:
  • Hardwired connections are difficult to add or change
  • Difficult to accommodate new requirements
With Xsigo:
  • On-demand connectivity for vMotion, the FT network, and management
  • Meet future requirements without re-cabling

1) Dynamic Bandwidth Allocation
  • Without Xsigo: VMs contend for fixed 1G links (I/O contention)
  • With Xsigo: 20G of bandwidth is available wherever needed (I/O headroom)
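
A tiny simulation of the contrast this slide draws: three VMs with bursty demand behind fixed 1G links, versus the same VMs drawing from a shared 20G pool. All demand numbers are invented for illustration.

```python
# Per-interval demand (Gb/s) for three VMs; bursts rarely coincide.
bursts_gbps = [
    [0.2, 3.0, 0.1],
    [0.3, 0.2, 4.0],
    [2.5, 0.1, 0.2],
]

FIXED_LINK_GBPS = 1.0    # one dedicated pipe per VM
SHARED_POOL_GBPS = 20.0  # one dynamically shared link

for t, demands in enumerate(bursts_gbps):
    fixed_shortfall = sum(max(0.0, d - FIXED_LINK_GBPS) for d in demands)
    shared_shortfall = max(0.0, sum(demands) - SHARED_POOL_GBPS)
    print(f"t={t}: fixed 1G links short by {fixed_shortfall:.1f} Gb/s, "
          f"shared 20G pool short by {shared_shortfall:.1f} Gb/s")
# The fixed links bottleneck every burst; the shared pool never does.
```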

2) I/O Capacity
  • vSphere + Nehalem offer 3X more I/O capacity
  • Both eliminate limitations of previous-generation products
  • I/O becomes a limiting factor
vSphere network I/O compared with ESX 3.5: maximum iSCSI throughput rises from 0.9 Gb/s to 9.1 Gb/s.
[Chart: Nehalem I/O capacity, showing I/O performance increases of 14% to 300% for Nehalem vs. Xeon 5500 across workloads]
To get the most from vSphere, you need more I/O.
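
A back-of-envelope check on "I/O becomes a limiting factor": if the new platform lets a host run roughly 3X the VMs, aggregate I/O demand grows with VM count. The per-VM bandwidth figure is an assumption; only the 3X ratio comes from the deck.

```python
# If consolidation triples VM count, host I/O demand roughly triples.
vms_before, vms_after = 10, 30   # ~3X consolidation (per the deck)
per_vm_gbps = 0.25               # assumed average per-VM I/O

for label, vms in [("before", vms_before), ("after", vms_after)]:
    print(f"{label}: {vms} VMs -> {vms * per_vm_gbps:.1f} Gb/s of host I/O")
# 2.5 Gb/s fits a handful of 1G NICs plus an HBA; 7.5 Gb/s does not,
# which is where a 20 Gb/s shared link pays off.
```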

3) Enables Cloud Computing
  • vSphere manages the compute resource cloud
  • Xsigo manages the storage and network resource cloud: 10G, 1G, FC, iSCSI, NAS, FCoE
  • Completes the cloud computing ecosystem

Xsigo Enables Cloud Infrastructure
  • Without Xsigo: resource silos (web cloud, app cloud, db cloud) restrict server deployments
  • With Xsigo: one universal cloud delivers flexibility while maintaining resource isolation

What is the product?

Xsigo I/O Director

Management Integrated with vSphere
[Screenshot: one view spanning virtual machine, virtual NIC, and physical port]
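
The integrated view on this slide traces a VM down to a physical port. Here is a minimal sketch of that mapping as a data structure; the field names and example values are illustrative, not the product's schema.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class IOPath:
    virtual_machine: str   # VM as seen in vSphere
    virtual_nic: str       # vNIC/vHBA presented to the ESX host
    io_director_port: str  # physical port on the I/O Director
    network: str           # LAN or SAN it ultimately reaches

paths = [
    IOPath("crm-vm-01", "vnic2", "slot4/port1", "10G Ethernet"),
    IOPath("sap-vm-03", "vhba0", "slot2/port2", "FC SAN"),
]

def trace(vm: str) -> List[IOPath]:
    """Follow a VM's traffic down to the physical port."""
    return [p for p in paths if p.virtual_machine == vm]

print(trace("crm-vm-01"))
```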

Summary
  • vSphere demands more connectivity
  • vSphere + Nehalem have 3X more I/O capacity
  • vSphere enables cloud computing
  • Xsigo dynamically configures up to 64 connections per server
  • Xsigo eliminates I/O bottlenecks: 20Gb bandwidth and dynamic bandwidth utilization
  • Xsigo enables the cloud infrastructure by eliminating connectivity limitations