Get Better I/O Performance in VMware vSphere 5.1 Environments with Emulex 16Gb Fibre Channel HBAs

Presentation Transcript

  • Get Better I/O Performance in VMware vSphere 5.1 Environments with Emulex 16Gb Fibre Channel HBAs. Speakers: Joey Dieckhans, VMware; Yea-Cheng Wang, VMware; Alex Amaya, Emulex.
  • Agenda: Introduction; What’s New With VMware vSphere 5.1 for Storage; Performance Study; Emulex LPe16000 16Gb Fibre Channel (16GFC) PCIe 3.0 HBAs; Strategic Management; Conclusion; Q&A.
  • What’s New With VMware vSphere 5.1 for Storage
  • Space Efficient Sparse Virtual Disks (Joseph Dieckhans): A new Space Efficient Sparse Virtual Disk format that 1) reclaims wasted/stranded space inside a guest OS and 2) uses a variable block size to better suit applications and use cases. [Diagram: Traditional VMDK with wasted blocks vs. Space Efficient Sparse VMDK with no wasted blocks.]
  • Increasing VMFS File Sharing Limits (Joseph Dieckhans): vSphere 5.1 supports sharing a file on a VMFS-5 datastore with up to 32 concurrent ESXi hosts (the previous limit was 8).
  • Storage DRS & vCloud DirectorJoseph Dieckhans  vCloud Director Interoperability/Support for Linked Clones • vCloud Director will use Storage DRS for the initial placement of linked clones during Fast Provisioning. • vCloud Director will use Storage DRS for managing space utilization and I/O load balancing. Copyright © 2009 VMware Inc. All rights reserved. Confidential and proprietary.
  • Storage vMotion – Parallel Migration Enhancement (Joseph Dieckhans): In vSphere 5.1, Storage vMotion performs up to 4 parallel disk migrations per Storage vMotion operation (an API-level sketch follows this slide).
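The parallel disk migrations happen inside a single Storage vMotion operation, so nothing changes for scripts or tools that start the migration. For orientation only, here is a minimal pyVmomi sketch (not from the presentation) that starts a Storage vMotion by relocating a VM's disks to another datastore; the vCenter address, credentials, VM name, and datastore name are hypothetical placeholders, and certificate checking is disabled purely for lab convenience.

```python
# Minimal, hypothetical sketch: trigger a Storage vMotion by relocating a VM
# to another datastore. All names and credentials below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

def find_by_name(content, vimtype, name):
    """Return the first managed object of the given type whose name matches."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    try:
        return next(obj for obj in view.view if obj.name == name)
    finally:
        view.DestroyView()

ctx = ssl._create_unverified_context()  # lab use only; do not disable checks in production
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    vm = find_by_name(content, vim.VirtualMachine, "app-vm-01")
    target_ds = find_by_name(content, vim.Datastore, "datastore-16gfc")
    spec = vim.vm.RelocateSpec(datastore=target_ds)   # move the VM's disks to the target datastore
    WaitForTask(vm.RelocateVM_Task(spec))             # one relocate task; vSphere parallelizes disks internally
finally:
    Disconnect(si)
```

Because the per-disk parallelism is internal to vSphere 5.1, the caller simply waits on the single relocate task.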
  • 16GFC Performance Study by VMware
  • New 16GFC Support in vSphere 5.1: vSphere 5.1 adds support for 16GFC for better storage I/O performance. Performance results: the newly added 16GFC driver delivers twice the throughput of the 8GFC HBA at a better CPU cost per I/O, and reached 16GFC wire speed for random I/Os at the 8KB block size. Whitepaper: Storage I/O Performance on VMware vSphere 5.1 over 16 Gigabit Fibre Channel. (A verification sketch follows this slide.)
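Before comparing results like these on your own hardware, it helps to confirm which Fibre Channel HBAs and drivers each host is actually using. The sketch below is a minimal, hypothetical pyVmomi example (not part of the study) that lists every FC HBA with its model, driver, and the speed value the API reports; the vCenter address and credentials are placeholders.

```python
# Minimal, hypothetical sketch: list the Fibre Channel HBAs on every ESXi host,
# with model, driver, and the speed value reported by the vSphere API.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab use only
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(content.rootFolder,
                                                   [vim.HostSystem], True)
    for host in view.view:
        for hba in host.config.storageDevice.hostBusAdapter:
            if isinstance(hba, vim.host.FibreChannelHba):
                # device is e.g. "vmhba2"; speed is reported by the API as-is
                print("%s %s model=%s driver=%s speed=%s"
                      % (host.name, hba.device, hba.model, hba.driver, hba.speed))
    view.DestroyView()
finally:
    Disconnect(si)
```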
  • Comparison of Throughput and CPU Efficiency: The 16GFC driver delivers double the throughput at better CPU efficiency per I/O. Test: sequential read I/Os over a 16GFC or an 8GFC port (single Iometer worker in a single VM); throughput and CPU cost per I/O compared between the two adapters (see the note on server configuration). [Charts: sequential read throughput in MBps and CPU cost per I/O (lower is better), 8Gb vs. 16Gb, across block sizes from 1KB to 256KB.]
  • More Bandwidth and Better IOPS: The 16GFC adapter can attain much better IOPS compared to the 8Gbps wire-speed limit of an 8GFC port. Test: random read I/Os from 1 VM to 8 VMs over a 16GFC port (single Iometer worker per VM). [Charts: random read throughput in MBps and random read IOPS for 1, 2, 4, 6, and 8 VMs across block sizes from 1KB to 16KB, with the 8Gbps wire-speed limit of an 8Gb FC HBA marked on the throughput chart.]
  • Server and Workload Configuration: ESX host: HP ProLiant DL370, dual quad-core Intel Xeon W5580 processors; Emulex LPe16002 16GFC HBA initiator; Emulex LPe12000 8GFC HBA initiator. EMC VNX7500 storage array: 8GFC target ports connected to a 16GFC SAN switch for the LPe16002 initiator; 8GFC target ports connected to an 8GFC SAN switch for the LPe12000 initiator; 32 SSD-cached LUNs of 256MB each, with mirrored write cache enabled at the VNX array. Virtual machine and workload: Windows 2008 R2 64-bit guest OS; single vCPU and single PVSCSI virtual controller; single Iometer worker and 4 target LUNs in each VM, at 32 outstanding I/Os per target LUN.
  • Emulex 16GFC PCIe 3.0 HBAs
  • Single Port Max IOPS [Chart: LPe16002 vs. LPe12002]
  • Single Port Max MB/s [Chart: LPe16002 vs. LPe12002]
  • Half the I/O Response Time: Average I/O response time during a single SSD LUN read I/O. [Chart: LPe16002 vs. LPe12002]
  • Best Practices for 16GFC HBAs: Stay up to date with firmware and drivers tested and supported on the VMware HCL. Update firmware preferably during planned downtime. For OEM adapters, visit the partner website for the latest firmware and drivers. Update inbox drivers. Always check with the storage vendor for the recommended queue depth settings and the recommended multipathing policy. (A sketch follows this slide.)
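Queue depth is tuned through the HBA driver's module parameters, and the exact module and parameter names vary by driver release, so follow the vendor documentation there. The path selection policy, by contrast, can be changed per LUN through the vSphere API. Below is a minimal pyVmomi sketch (an illustration, not vendor guidance) that applies the round-robin policy VMW_PSP_RR to one device; the host name, device canonical name, vCenter address, and credentials are hypothetical, and the correct policy is whatever the storage vendor recommends.

```python
# Minimal, hypothetical sketch: set the NMP path selection policy for one LUN
# to round robin (VMW_PSP_RR). Names and credentials below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

HOST_NAME = "esxi-01.example.com"
DEVICE = "naa.60000000000000000000000000000001"   # canonical name of the LUN

ctx = ssl._create_unverified_context()             # lab use only
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(content.rootFolder,
                                                   [vim.HostSystem], True)
    host = next(h for h in view.view if h.name == HOST_NAME)
    view.DestroyView()

    storage = host.configManager.storageSystem
    # Map the canonical name to the ScsiLun key, then to the multipath LUN id.
    lun_key = next(l.key for l in storage.storageDeviceInfo.scsiLun
                   if l.canonicalName == DEVICE)
    mp_lun = next(l for l in storage.storageDeviceInfo.multipathInfo.lun
                  if l.lun == lun_key)

    policy = vim.host.MultipathInfo.LogicalUnitPolicy(policy="VMW_PSP_RR")
    storage.SetMultipathLunPolicy(lunId=mp_lun.id, policy=policy)
finally:
    Disconnect(si)
```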
  • HBA Management in Virtual Environments
  • OneCommand Manager for VMware vCenter Server: OneCommand Manager software plug-in for the VMware vCenter Server console – real-time lifecycle management for Emulex adapters from vCenter Server; builds on Emulex CIM providers and OCM features (no new agents); extends the vCenter Server console with an Emulex OneCommand tab. Display and manage adapters with multiple views and filters: view per VMware host, per VMware cluster, or per network fabric; filter by firmware version, hardware type, and many other attributes. Batch-update adapter firmware across VMware clusters: deploy firmware across all hosts in a cluster.
  • OneCommand Manager for VMware vCenter Server, Cluster View – Hosts in a VMware Cluster. [Screenshot callouts: VMware hosts, VMs and clusters; Emulex OneCommand tab; OCM cluster-based management tasks; data window for selected items.]
  • Resources
  • Implementers Lab: One-stop site for IT administrators and system architects (implementers). Technically accurate and straightforward resources covering Fibre Channel and OEM Ethernet and ESXi 5.0 deployments. How-to guides for solutions from HP, IBM, and Dell. Please wander around our website: Implementerslab.com
  • Additional Resources. VMware: Storage I/O Performance on VMware vSphere 5.1 over 16 Gigabit Fibre Channel (whitepaper); Blog: Storage Protocol Comparison – A vSphere Perspective; Technical Resources. Emulex: www.ImplementersLab.com; Demartek LPe16000B Evaluation report; OneCommand Manager for VMware vCenter; OneCommand Manager; OneCommand Vision.
  • Final Thoughts… Virtualization adoption is spreading – more virtualization in cloud, VDI, and mission-critical applications. Virtualization density is increasing – enabled by bigger servers, more memory, faster networks, and vSphere. Fibre Channel is the most popular network for SANs – networking is the #2 factor (after memory) for bigger VM deployments. 16GFC from Emulex is here: lower latency, better throughput, and more IOPS for bigger VM deployments; the best management for vSphere.
  • Q&A © 2011 Emulex Corporation 25
  • © 2011 Emulex Corporation