Get Better I/O Performance in VMware vSphere 5.1 Environments with Emulex 16Gb Fibre Channel HBAs


This webinar covers the improvements in storage I/O throughput and CPU efficiency that VMware vSphere gains when using an Emulex 16Gb Fibre Channel Host Bus Adapter (HBA) versus the previous generation HBA. Applications virtualized on VMware vSphere 5.1 that generate storage I/O of various block sizes can take full advantage of 16Gb Fibre Channel wire speed for better sequential and random I/O performance.


Transcript of "Get Better I/O Performance in VMware vSphere 5.1 Environments with Emulex 16Gb Fibre Channel HBAs"

  1. Get Better I/O Performance in VMware vSphere 5.1 Environments with Emulex 16Gb Fibre Channel HBAs
     Joey Dieckhans, VMware
     Yea-Cheng Wang, VMware
     Alex Amaya, Emulex
  2. Agenda
     • Introduction
     • What's New With VMware vSphere 5.1 for Storage
     • Performance Study
     • Emulex LPe16000 16Gb Fibre Channel (16GFC) PCIe 3.0 HBAs
     • Strategic Management
     • Conclusion
     • Q&A
  3. What's New With VMware vSphere 5.1 for Storage
  4. Space Efficient Sparse Virtual Disks (Joseph Dieckhans)
     A new Space Efficient Sparse Virtual Disk which:
     1. Reclaims wasted / stranded space inside a guest OS
     2. Uses a variable block size to better suit applications / use cases
     [Diagram: Traditional VMDK with wasted blocks vs. Space Efficient Sparse VMDK with no wasted blocks]
  5. Increasing VMFS File Sharing Limits (Joseph Dieckhans)
     vSphere 5.1 supports sharing a file on a VMFS-5 datastore with up to 32 concurrent ESXi hosts (the previous limit was 8).
  6. Storage DRS & vCloud Director (Joseph Dieckhans)
     vCloud Director interoperability / support for linked clones:
     • vCloud Director will use Storage DRS for the initial placement of linked clones during Fast Provisioning.
     • vCloud Director will use Storage DRS for managing space utilization and I/O load balancing.
  7. Storage vMotion – Parallel Migration Enhancement (Joseph Dieckhans)
     In vSphere 5.1, Storage vMotion performs up to 4 parallel disk migrations per Storage vMotion operation.
  8. 16GFC Performance Study by VMware
  9. New 16GFC Support in vSphere 5.1
     • vSphere 5.1 adds support for 16GFC for better storage I/O performance
     • Performance results
       – The newly added 16GFC driver delivers twice the throughput of the 8GFC HBA, at a lower CPU cost per I/O (cpio)
       – Reached 16GFC wire speed for random I/Os at an 8KB block size
     • Whitepaper: Storage I/O Performance on VMware vSphere 5.1 over 16 Gigabit Fibre Channel
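
The study's efficiency metric, CPU cost per I/O (cpio), is simply host CPU time consumed divided by the I/O rate it sustains. A minimal sketch of that calculation follows; the counter values are hypothetical, not taken from the whitepaper:

```python
# Hypothetical esxtop-style samples; the whitepaper does not publish raw counters.
def cpu_cost_per_io(cpu_util_pct: float, num_cores: int, iops: float) -> float:
    """Return CPU microseconds (core time) consumed per completed I/O."""
    core_seconds_per_sec = (cpu_util_pct / 100.0) * num_cores
    return core_seconds_per_sec / iops * 1e6

# Same 50% utilization on 8 cores, but the faster HBA completes more I/Os,
# so each individual I/O costs less CPU.
print(cpu_cost_per_io(50, 8, 100_000))  # 8GFC-like:  40.0 us of CPU per I/O
print(cpu_cost_per_io(50, 8, 200_000))  # 16GFC-like: 20.0 us of CPU per I/O
```
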
  10. Comparison of Throughput and CPU Efficiency
      The 16GFC driver delivers double the throughput at better CPU efficiency per I/O
      • Sequential read I/Os over a 16GFC or an 8GFC port (single Iometer worker in a single VM)
      • Throughput and CPU cost per I/O comparison between the two adapters (see note on server configuration)
      [Charts: sequential read throughput (MBps) and CPU cost per I/O (lower is better), 8Gb vs. 16Gb, across block sizes from 1KB to 256KB]
  11. More Bandwidth and Better IOPS
      A 16GFC adapter can attain far higher IOPS than the 8Gbps wire speed limit of an 8GFC port allows.
      • Random read I/Os from 1 VM to 8 VMs over a 16GFC port (single Iometer worker per VM)
      [Charts: random read throughput (MBps) and IOPS for 1, 2, 4, 6 and 8 VMs across block sizes from 1KB to 16KB; the 8Gbps wire speed limit on the throughput of an 8Gb FC HBA is marked for reference]
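
The wire-speed ceilings behind these charts follow from simple arithmetic: an 8GFC link carries roughly 800 MB/s of payload and a 16GFC link roughly 1,600 MB/s, so the link-limited IOPS at a given block size is bandwidth divided by block size. A quick sketch using nominal line rates and ignoring protocol overhead:

```python
# Approximate usable bandwidth per Fibre Channel generation (MB/s), ignoring framing overhead.
LINK_MBPS = {"8GFC": 800, "16GFC": 1600}

def max_iops(link: str, block_size_kb: float) -> float:
    """Upper bound on IOPS once the link itself is the bottleneck."""
    return LINK_MBPS[link] * 1024 / block_size_kb

for block in (1, 4, 8, 16):
    print(f"{block}KB blocks: 8GFC <= {max_iops('8GFC', block):,.0f} IOPS, "
          f"16GFC <= {max_iops('16GFC', block):,.0f} IOPS")
# At 8KB blocks the 8GFC link caps out near 102,400 IOPS while 16GFC allows ~204,800.
```
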
  12. Server and Workload Configuration
      ESX Host
      • HP ProLiant DL370, dual quad-core Intel Xeon W5580 processors
      • Emulex LPe16002 16GFC HBA initiator
      • Emulex LPe12000 8GFC HBA initiator
      EMC VNX7500 Storage Array
      • 8GFC target ports connected to a 16GFC SAN switch for the LPe16002 initiator
      • 8GFC target ports connected to an 8GFC SAN switch for the LPe12000 initiator
      • 32 SSD-cached LUNs of size 256MB, with mirrored write cache enabled at the VNX array
      Virtual Machine and Workload
      • Windows 2008 R2, 64-bit guest OS; single vCPU and a single PVSCSI virtual controller
      • Single Iometer worker and 4 target LUNs in each VM, at 32 outstanding I/Os per target LUN
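
This configuration fixes the total I/O concurrency presented to the array, which is what drives the VM-count scaling in the earlier charts: outstanding I/Os grow as VMs x LUNs per VM x queue depth per LUN. A small bookkeeping sketch using the values stated above:

```python
# Concurrency implied by the workload configuration on this slide.
LUNS_PER_VM = 4
OIO_PER_LUN = 32  # outstanding I/Os issued by Iometer per target LUN

def total_outstanding_ios(num_vms: int) -> int:
    return num_vms * LUNS_PER_VM * OIO_PER_LUN

for vms in (1, 2, 4, 6, 8):
    print(f"{vms} VM(s): {total_outstanding_ios(vms)} outstanding I/Os in flight")
# 1 VM keeps 128 I/Os in flight; the 8-VM case keeps 1,024.
```
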
  13. Emulex 16GFC PCIe 3.0 HBAs
  14. Single Port Max IOPS
      [Chart: single-port maximum IOPS, LPe16002 vs. LPe12002]
  15. Single Port Max MB/s
      [Chart: single-port maximum MB/s, LPe16002 vs. LPe12002]
  16. Half the I/O Response Time
      Average I/O response time during a single SSD LUN read I/O
      [Chart: average I/O response time, LPe16002 vs. LPe12002]
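
Response time, IOPS and outstanding I/Os are linked by Little's Law (outstanding I/Os = IOPS x average response time), so halving the response time at a fixed queue depth roughly doubles the achievable IOPS. A small illustration with made-up numbers, not measurements from these slides:

```python
# Little's Law: outstanding I/Os = IOPS * average response time (seconds).
def iops_at(queue_depth: int, response_time_ms: float) -> float:
    return queue_depth / (response_time_ms / 1000.0)

# Hypothetical single-LUN example at a queue depth of 32:
print(iops_at(32, 0.50))  # 0.50 ms per I/O ->  64,000 IOPS
print(iops_at(32, 0.25))  # 0.25 ms per I/O -> 128,000 IOPS (half the latency, twice the IOPS)
```
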
  17. Best Practices for 16GFC HBAs
      • Stay up to date with firmware and drivers tested and supported on the VMware HCL
      • Update the firmware, preferably during planned downtime
      • For OEM adapters, visit the partner website for the latest firmware and drivers
      • Update inbox drivers
      • Always check with the storage vendor for the recommended queue depth settings
      • Always check with the storage vendor for the recommended multipathing policy
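
As a companion to the last two bullets, the sketch below shows one way to inspect the current queue depth and path selection policy from an ESXi host. It is a hedged example only: it assumes the esxcli binary is reachable from where the script runs, and any change (such as the commented Round Robin example with a placeholder device name) should follow your storage vendor's guidance, not this sketch.

```python
import subprocess

def esxcli(*args: str) -> str:
    """Run an esxcli command on the ESXi host and return its text output."""
    return subprocess.run(["esxcli", *args], check=True,
                          capture_output=True, text=True).stdout

# 1. Per-device settings, including the "Device Max Queue Depth" field.
print(esxcli("storage", "core", "device", "list"))

# 2. Current path selection (multipathing) policy per device.
print(esxcli("storage", "nmp", "device", "list"))

# 3. Example change, only if the storage vendor recommends it (device name is a placeholder):
# esxcli("storage", "nmp", "device", "set", "--device", "naa.XXXX", "--psp", "VMW_PSP_RR")
```
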
  18. HBA Management in Virtual Environments
  19. OneCommand Manager for VMware vCenter Server
      OneCommand Manager software plug-in for the VMware vCenter Server console
      – Real-time lifecycle management for Emulex adapters from vCenter Server
      – Builds on Emulex CIM providers and OCM features; no new agents
      – Extends the vCenter Server console with an Emulex OneCommand tab
      Display / manage adapters with multiple views and filters:
      – View per VMware host, per VMware cluster, per network fabric
      – Firmware version, hardware type and many other display filters
      Batch update adapter firmware across VMware clusters
      – Deploy firmware across hosts in a cluster
  20. OneCommand Manager for VMware vCenter Server
      Cluster View – Hosts in a VMware Cluster
      [Screenshot: VMware hosts, VMs and clusters; Emulex OneCommand tab; OCM cluster-based management tasks; data window for selected items]
  21. Resources
  22. Implementers Lab
      • One-stop site for IT administrators and system architects (implementers)
      • Technically accurate and straightforward resources
      • Fibre Channel and Ethernet, and ESXi 5.0 deployments
      • OEM how-to guides for solutions from HP, IBM and Dell
      • Please wander around our website – Implementerslab.com
  23. Additional Resources
      VMware
      – Storage I/O Performance on VMware vSphere 5.1 over 16GFC
      – Blog: Storage Protocol Comparison – A vSphere Perspective
      – Technical Resources
      Emulex
      – www.ImplementersLab.com
      – Demartek LPe16000B Evaluation report
      – OneCommand Manager for VMware vCenter
      – OneCommand Manager
      – OneCommand Vision
  24. Final Thoughts…
      Virtualization adoption is spreading
      – More virtualization spreading to cloud, VDI and mission-critical applications
      Virtualization density is increasing
      – Enabled by bigger servers, more memory, faster networks and vSphere
      Fibre Channel is the most popular network for SANs
      – Networking is the #2 factor (after memory) for bigger VM deployments
      16GFC from Emulex is here:
      – Lower latency, better throughput and more IOPS for bigger VM deployments
      – Best management for vSphere
  25. Q&A